The rapid rise of artificial intelligence creates as much risk to the delivery of healthcare as it does opportunity. One of the most significant risks is that if we don’t develop and deploy AI tools carefully, they could widen existing disparities in care rather than enhance the industry’s efforts to improve health equity.
At Mass General Brigham, equity is central to everything we do. It is the driving force behind our United Against Racism initiative, which aims to eliminate the root causes of healthcare disparities. We recently launched an AI Governance Committee comprising clinicians, legal professionals, technology experts, and patient-experience and safety specialists. The committee will develop guidelines for using the technology responsibly and will tackle many questions, including how we can assess AI tools through a health equity framework.
There’s no simple answer to this question, but one fact is clear: Health systems can and should be at the forefront of ensuring such challenges are addressed before AI applications are deployed.
The risk that AI could widen disparities in care is significant. For example, we know that African American adults are less likely to receive interventions for heart attacks than white adults. The last thing we want to do is train AI algorithms in ways that could inadvertently perpetuate that outcome. Instead, we need to develop models using source data that are equitably distributed across demographic groups, and evaluate those models to ensure they drive equity. Once we achieve that goal, providers can use AI to make predictions and personalized therapy recommendations based on clinical context and known risk factors.
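To make the idea concrete: one simple, intentionally simplified way to check whether training data are equitably distributed across demographic groups, and to compensate when they are not, is to compute inverse-frequency sample weights per group. The field names and toy dataset below are hypothetical illustrations, not drawn from any actual clinical system:

```python
from collections import Counter

def group_weights(records, group_key="race"):
    """Compute inverse-frequency sample weights so each demographic
    group contributes equally to model training, regardless of how
    many records it has in the source data."""
    counts = Counter(r[group_key] for r in records)
    n_groups = len(counts)
    total = len(records)
    # Each group's records sum to the same total weight: total / n_groups.
    return {g: total / (n_groups * c) for g, c in counts.items()}

# Hypothetical toy dataset with imbalanced group sizes.
records = (
    [{"race": "white"}] * 80
    + [{"race": "black"}] * 15
    + [{"race": "asian"}] * 5
)
weights = group_weights(records)
```

Reweighting is only one of several techniques (alongside targeted data collection and stratified evaluation), but it illustrates the principle: imbalance in the source data should be measured and corrected before a model is trained, not discovered after deployment.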
Hospitals are especially well-equipped to capitalize on AI to achieve health equity, but they can’t do it alone. At Mass General Brigham, we have access to more than 13 million patient records, containing billions of annotated images and clinical data points. Yet those data come from people who mainly live in Eastern Massachusetts and therefore don’t represent a diverse cross-section of the American patient population. That’s why it’s important for hospitals to partner with other health systems and organizations to pool diverse sets of de-identified patient data.
Then, we can use those pooled data to make inferences on best approaches to achieve high-quality, safe and equitable health outcomes.
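As an illustration of what the pooling step involves, the sketch below drops direct identifiers and replaces the medical record number with a salted one-way hash, so records can be linked consistently across contributing systems without exposing identity. This is a toy example only: real de-identification must satisfy HIPAA's Safe Harbor or expert-determination standards, and the field names here are hypothetical:

```python
import hashlib

# Hypothetical set of fields that must never leave the source system.
DIRECT_IDENTIFIERS = {"name", "mrn", "address", "phone"}

def deidentify(record, salt):
    """Strip direct identifiers and replace the record number with a
    salted one-way hash, so the same patient hashes to the same ID
    across datasets without revealing who they are."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    digest = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()
    out["patient_id"] = digest[:16]
    return out

# Hypothetical source record.
record = {"mrn": "12345", "name": "Jane Doe", "systolic_bp": 142}
clean = deidentify(record, salt="example-salt")
```

The salted hash is a deliberate design choice: it preserves linkability (needed to combine records from multiple systems) while making re-identification from the pooled data alone impractical, provided the salt is kept secret by the contributing institutions.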
Such a capability could be incredibly powerful, because it would produce insights that help personalize treatment choices. For example, in a recent study published in BMC Medical Informatics and Decision Making, data from patients with hypertension were used to train machine-learning models based on individual patient profiles. The models then generated personalized medication prescriptions that were predicted to reduce systolic blood pressure 70% more effectively than the standard of care. The tool was trained on demographic data capturing each patient's age, sex, race, language, marital status and ZIP code, along with medical history and risk factors, so the models could be evaluated and refined to produce recommendations that more equitably influence patient outcomes.
In addition to training AI algorithms using diverse datasets, health systems need to develop governance policies that guarantee full transparency around the technology. They must ensure processes are in place to make algorithms as explainable and traceable as possible. After all, if physicians can’t understand how an AI tool makes recommendations, they will be less likely to embrace the technology. We need to demonstrate that the AI we are deploying has been fully vetted and, importantly, is free of biases.
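Vetting of this kind can be made concrete with simple fairness audits. The sketch below computes one commonly used check, the gap in true positive rates across demographic groups (sometimes called an equal-opportunity gap), on hypothetical model outputs. It is an illustration of the principle, not a description of any deployed governance tooling:

```python
def true_positive_rate(y_true, y_pred):
    """Fraction of actual positive cases the model correctly flags."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return tp / positives if positives else float("nan")

def equal_opportunity_gap(y_true, y_pred, groups):
    """Largest difference in true positive rate between any two groups.
    A large gap means the model misses true cases more often in some
    groups than in others -- a red flag for an equity review."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical predictions for two demographic groups.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
gap, per_group = equal_opportunity_gap(y_true, y_pred, groups)
```

In practice, a governance committee would track several such metrics (calibration, false negative rates, subgroup performance) and set acceptance thresholds before an algorithm is cleared for clinical use.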
Large health systems have the resources to make these evaluations, which we can share with organizations that lack similar capabilities. This is critical for driving equity, because it’s often the smaller community organizations that act as a safety net for underserved populations. We don’t want to create a digital divide by leaving those organizations without the ability to take full advantage of technologies like AI.
Finally, health systems must use their voices to influence the public policy agenda around AI at the local and federal levels. Efforts are already underway, including the Coalition for Health AI, which in April released its Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare. To avoid bias in the use of AI in healthcare, the blueprint includes a recommendation that “there should be multiple checkpoints for every stage” of design and development, as well as continual monitoring throughout implementation.
The relentless pursuit of high-quality, equitable care through innovative technologies is something we should embrace as a healthcare community and as patients. But it can only be achieved through robust evaluation with an equity lens.