Artificial intelligence
Promoting a culture of transparency is the key to a trustworthy AI
November 1, 2022
Recently, I spoke at the 22nd Annual Healthcare Summit in Vancouver, where we explored the importance of promoting a culture of transparency and explainability in healthcare AI. According to research from the Brookings Institution, medicine trails every industry but construction in demand for AI-literate employees, and if providers don’t know what factors contributed to an AI’s decision, they will be more reluctant to rely on it to aid their decision making. We need to promote a culture of transparency and explainability in healthcare AI; otherwise, the promise of new, innovative technologies like these will not be realized.
Interest in AI healthcare applications isn’t waning, though investment in startups has cooled slightly – we’re on pace for AI investments of $6 billion (US) in 2022, down from $10 billion last year, but still more than double the $2.4 billion invested in 2019. And AI applications won’t be as obvious as robots doing examinations. They may be largely hidden from the patient, embedded in scheduling, workflow, business processes and diagnostics.
So why the apparent trust hurdle?
The black box: At the center of AI concerns is the “black box” issue, the lack of transparency about how applications make their algorithmic decisions. While data scientists and technologists strive to create models that reflect real-world conditions as accurately as possible, healthcare providers know only in the most general terms what information was fed to the model or how heavily it was weighted. Sometimes, providers aren’t even informed of the most basic factors that contribute to models. As a result, practitioners will be inclined to lean on their own expertise and judgment.
Human expertise can be subject to bias. But so can modeling. And within the “black box,” how do we find the biases that can creep into our decision-making?
Biased models: Biases aren’t always overt or intentional. Models built on historical data collection are particularly vulnerable to them. Who could be underrepresented in the data collection, and why? Rural populations, for example, could be underrepresented simply because they have had less interaction with hospital data collection systems. Some populations can’t or won’t comply with data collection because of religious, cultural or language barriers, or because of historical mistreatment by the system.
Healthcare consumers are also increasingly aware of the risks of electronic data breaches and might opt out of data sharing. We don’t have to look far for examples: according to the U.S. Department of Health and Human Services, ransomware and data breach attacks doubled in 2022 compared with the year before.
How does this affect the AI models we deploy? Every region has its own distinct population characteristics and healthcare usage patterns, so models must be tailored to local or regional populations.
Establishing an appropriate model for a population isn’t the end. Models must be monitored to ensure they remain fair and accurate over time. Data drift can be a symptom of a model whose usefulness is fading, so metrics must be established to warn when change exceeds acceptable levels; crossing that threshold could mean retraining the model, or retiring it altogether.
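As a minimal sketch of what that monitoring could look like, the snippet below computes a Population Stability Index (PSI) between a model’s training-era score distribution and recent production scores. The function, the synthetic data and the 0.2 alert threshold are illustrative assumptions, not a reference implementation from any particular product.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; larger values indicate more drift.
    (Illustrative sketch -- binning and thresholds are assumptions.)"""
    # Bin edges come from the reference (training-era) distribution
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    eps = 1e-6  # avoid division by zero and log of zero
    exp_frac = np.histogram(expected, edges)[0] / len(expected) + eps
    act_frac = np.histogram(actual, edges)[0] / len(actual) + eps
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

# Hypothetical usage: training-era scores vs. last month's production scores
baseline_scores = np.random.beta(2, 5, 10_000)   # stand-in for training-era scores
recent_scores = np.random.beta(2.5, 4, 10_000)   # stand-in for recent scores
psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:  # a commonly cited, but still judgment-based, alert level
    print(f"PSI={psi:.3f}: investigate, retrain, or retire the model")
```

The exact metric matters less than having one: agree on a threshold before deployment, and treat a breach as a trigger for human review rather than an automatic action.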
Bias can creep in through data that is defined or used inappropriately. Consider the STONE score, used to evaluate patients presenting at an emergency room with flank pain to predict kidney stone risk. If a patient identifies as non-Black, three points are added to the score, increasing the likelihood of further investigation. A Black patient would have to present with much more serious symptoms to be assigned the same level of risk.
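To see how that adjustment plays out numerically, here is a toy calculation. Only the three-point non-Black adjustment comes from the description above; symptom_points is a hypothetical stand-in for the score’s other criteria.

```python
def stone_like_score(symptom_points: int, non_black: bool) -> int:
    """Toy illustration of a race-adjusted clinical score.

    Only the +3 non-Black adjustment reflects the rule described above;
    symptom_points stands in for all other scoring criteria.
    """
    return symptom_points + (3 if non_black else 0)

# Two patients with identical clinical findings:
print(stone_like_score(7, non_black=True))   # 10 -> higher apparent risk
print(stone_like_score(7, non_black=False))  # 7  -> needs worse symptoms to match
```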
Since race is a social and political construct, it shouldn’t be used for clinical decision-making unless there is a strong justification for doing so. But we still need to keep collecting sensitive data so we can test our AI models for bias. This information should be used for descriptive purposes, not prescriptive ones.
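As a hedged sketch of that descriptive use, the example below reports a model’s false-negative rate by self-reported race without ever feeding race to the model. The audit table and column names are hypothetical.

```python
import pandas as pd

# Hypothetical audit table: model outputs joined with sensitive attributes
# collected for testing purposes only (race was NOT a model input).
audit = pd.DataFrame({
    "race":      ["Black", "Black", "Black", "White", "White", "White"],
    "actual":    [1, 1, 0, 1, 1, 0],   # 1 = condition actually present
    "predicted": [0, 1, 0, 1, 1, 0],   # the model's prediction
})

def false_negative_rate(group: pd.DataFrame) -> float:
    positives = group[group["actual"] == 1]
    return float((positives["predicted"] == 0).mean()) if len(positives) else float("nan")

# Descriptive use: surface disparities for humans to investigate,
# rather than letting the model act on race directly.
print(audit.groupby("race").apply(false_negative_rate))
```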
AI is worth it – here’s why: Speaking at the recent Toronto stop of the SAS Innovate tour, Reggie Townsend, director of the SAS Data Ethics Practice, pointed out that there is both risk and reward in the use of AI in healthcare, and that public perception is focused heavily on the risk side. But what are the potential upsides? What is AI good at? How can it contribute to healthcare?
- AI is good at automating repetitive tasks. For example, Pinnacle Solutions has developed a No-Show Predictor that combines patient history, clinical records and demographic information with third-party data such as weather and traffic reports to predict which patients are most likely to miss appointments, allowing clinics and hospitals to supplement appointment rosters, avoid blank slots and accommodate more patients (see the sketch after this list).
- AI is good at making existing processes more efficient. Take an overburdened radiologist. Rather than reading every image in the order it was filed, they can prioritize the studies an application has prescreened and flagged as more likely to show abnormal findings, reducing the time to catch health threats from days to hours, depending on the volume of imagery.
- AI helps with decision-making. AI can surface relationships in big data that providers may not be able to find on their own from the information available in the electronic health record. Providers can then weigh what the AI surfaces when shaping the patient’s care.
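To make the first item above concrete, here is a minimal sketch of the kind of no-show model described: a classifier trained on appointment history plus external signals such as weather and travel time. The feature names, the tiny synthetic dataset and the choice of scikit-learn are illustrative assumptions, not Pinnacle Solutions’ implementation.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical appointments: past behavior plus external context.
history = pd.DataFrame({
    "prior_no_shows":    [0, 3, 1, 0, 5, 2, 0, 4],
    "days_since_booked": [2, 30, 14, 5, 45, 21, 3, 60],
    "rain_forecast_mm":  [0.0, 12.5, 3.0, 0.0, 8.0, 0.5, 1.0, 15.0],
    "travel_minutes":    [10, 55, 25, 15, 70, 40, 5, 90],
    "no_show":           [0, 1, 0, 0, 1, 1, 0, 1],  # target: missed the appointment
})

model = GradientBoostingClassifier(random_state=0)
model.fit(history.drop(columns="no_show"), history["no_show"])

# Upcoming appointments to triage: rank by predicted no-show risk so staff
# can send reminders or overbook the riskiest slots.
upcoming = pd.DataFrame({
    "prior_no_shows":    [0, 4],
    "days_since_booked": [3, 50],
    "rain_forecast_mm":  [0.0, 10.0],
    "travel_minutes":    [12, 80],
})
upcoming["no_show_risk"] = model.predict_proba(upcoming)[:, 1]
print(upcoming.sort_values("no_show_risk", ascending=False))
```

The same ranking pattern fits the radiology example above: score each study, then work the queue from the highest predicted risk down.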
AI will be an advisor, a tool, an instrument for healthcare providers. “There’s too much doom and gloom,” Townsend said, and too much focus on risk, he added, will drive AI underground to “bad actors.”
The primary mission of AI is the same as that of the medical profession: first, do no harm. We must cultivate a culture of transparency and explainability around AI or we won’t be able to realize the promise of innovative technologies. We must be clear and upfront in explaining how AI algorithms work. That includes knowing where the data comes from and acknowledging possible biases in the repository.
Allie DeLonay is a Data Scientist with the SAS Institute Data Ethics Practice.