Artificial intelligence
More hospitals and medical centres are creating AI governance models
March 31, 2025
The Ottawa-based Children’s Hospital of Eastern Ontario (CHEO) saw the potential of AI but also understood the need for great responsibility. Meanwhile, CHEO Research Institute experts were developing AI solutions that demonstrated positive clinical impact but lacked a structured pathway towards implementation.
The two entities decided to collaborate last year and within months delivered Leading Responsibly, a framework of six main principles to guide the use of AI at CHEO, whether the solutions are bought from a vendor or built in-house.
“It was this magical moment where there was appetite and capacity and we jumped on it,” said CHEO project director, AI, Merryn Douglas. She calls the effort a “sprint” that also benefited from knowledge sharing with other groups and healthcare organizations.
The six principles that make up CHEO’s framework are summarized as Intentional, Accountable, Inclusive & Equitable, Trustworthy, Agile and The Right Fit. Each one outlines steps to ensure that any AI solution used within the hospital, whether embedded as a component within a tool or deployed as a standalone solution, is appropriately evaluated and managed throughout its lifecycle.
Part of the goal was to demystify the technology. “Anyone can have AI at their fingertips,” said Douglas. “We want to make sure everyone has the information they need to use it responsibly and to know when to stop to ask a question.”
The framework’s concept of being intentional is about keeping humans in the loop as AI solutions are developed and rolled out. AI tools are good at providing in-depth information to support decision making, but ultimately, it’s the providers who retain control.
“It’s really about strengthening that human connection that’s providing the care and helping providers perform their duties better,” said CHEO Research Institute’s director of research informatics, AI and technology, Ivan Terekhov. “We’re not here to replace everyone with robots. It’s about enhancing our ability to provide care for the patients.”
Accountable refers to the fact that every algorithm is assigned an owner who is responsible for evaluating and monitoring its performance. The framework also addresses the need to have “tough conversations” upfront to guard against bias.
For example, CHEO’s ThinkRare solution, expected to go live this spring, applies AI to help with early identification of rare genetic diseases in patients.
During the tool’s evaluation phase, the framework helped to surface an important consideration: CHEO serves patients from both Ontario and Quebec, but because the genetic sequencing test used in the model wasn’t funded for Quebec patients, those patients were missing out.
Because inclusivity and equity are part of the governance framework, the team was prompted to question what they could do differently so that Quebec patients could also benefit from the technology. “It helped us to think through that problem and address it before the full rollout into a clinical space,” said Terekhov.
As part of the trustworthy principle, CHEO aims to leverage existing privacy and security processes so that patient information remains protected, end-to-end, in any AI solution. That includes knowing what data is being accessed, how AI models are trained and what, if any, data is being shared.
Agile means AI can never be a “set it and forget it” solution but rather one that requires constant attention and revision. Even the framework will be updated, said Terekhov, based on information gleaned as more algorithms come to the table for assessment.
Finally, the framework recognizes that not every problem requires an AI solution. “It’s the ‘in vogue’ thing that’s happening right now in the world, but it doesn’t mean we have to shoehorn AI into absolutely everything,” he said. “It has to be the right problem where an algorithm might be performing something a human can’t do or is enhancing the ability of a person to make a decision or do a particular task.”
Founded in 2022, startup Signal 1, which co-develops healthcare applications with Toronto-based St. Michael’s Hospital, is taking a fully integrated approach to delivering AI, providing both the technical infrastructure and the tools required to put models into production. Early on, the company identified a gap between building and validating models on paper and monitoring them in a live environment.
“I think everybody believes in the potential that AI tools can have for healthcare and at the same time, people are worried about using them in a way that is safe, ethical and effective,” said Signal 1 co-founder and COO Mara Lederman. Users are primarily concerned about accuracy, bias and performance degradation.
“Like any new technology, you want to find the balance between harnessing it for good while maintaining safety, or what we might call responsible and ethical AI, at its core,” she said.
Signal 1 addresses governance by providing automated tools for monitoring and reporting on AI models, enabling health systems to apply rigorous standards without having to dedicate staff and resources to a great deal of manual effort.
For example, the platform includes a live monitor to display the performance of each AI model deployed, alerting users whenever performance degrades beyond a predetermined point. At that point, users can access raw data to try to fix or retrain their model.
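Signal 1 hasn’t published its monitoring internals, but the underlying pattern of comparing live performance against a validation-time baseline and alerting past a fixed margin is simple to sketch. In the hypothetical Python below, all class and field names are illustrative, not Signal 1’s API:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceMonitor:
    """Threshold-based degradation alerting for one deployed model.

    Hypothetical sketch; Signal 1's actual implementation is not public.
    """
    model_name: str
    baseline_accuracy: float        # accuracy measured at validation time
    alert_margin: float = 0.05      # alert if live accuracy falls this far below baseline
    outcomes: list[int] = field(default_factory=list)  # 1 = correct, 0 = miss

    def record(self, correct: bool) -> None:
        """Log whether the latest prediction matched the eventual outcome."""
        self.outcomes.append(1 if correct else 0)

    def live_accuracy(self, window: int = 500) -> float:
        """Accuracy over the most recent `window` predictions."""
        recent = self.outcomes[-window:]
        return sum(recent) / len(recent) if recent else float("nan")

    def degraded(self) -> bool:
        """True if performance has slipped past the predetermined point."""
        acc = self.live_accuracy()
        if acc < self.baseline_accuracy - self.alert_margin:
            print(f"ALERT: {self.model_name} at {acc:.2f}, "
                  f"baseline {self.baseline_accuracy:.2f}")
            return True
        return False
```

In a real deployment the alert would likely key on clinically relevant metrics such as sensitivity or calibration rather than raw accuracy, but the thresholding logic is the same.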
“Hospitals are used to risk management,” said Lederman. “It’s just a matter of understanding how those risks arise with AI and then applying a framework for how likely this is to cause a problem and how severe that problem is.”
Signal 1’s platform also generates detailed model validation reports that can be customized to adhere to any AI governance framework and measure different metrics depending on the nature of the model (operational or clinical), including bias.
For example, when validating a model designed to detect patients at high risk for clinical deterioration, an automated report may come back indicating that 80 percent of men were accurately classified compared to only 68 percent of women. The next step would be to investigate what is causing that particular AI model to miss more women than men.
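A subgroup breakdown like the one Lederman describes is straightforward to compute once predictions and outcomes are logged. A simplified Python illustration, using synthetic records that mirror the article’s 80-percent/68-percent example:

```python
from collections import defaultdict

def subgroup_sensitivity(records):
    """Share of truly deteriorating patients correctly flagged, per group.

    `records` is a list of (group, actual, predicted) tuples, where
    actual/predicted are booleans for clinical deterioration.
    Illustrative only: real validation reports cover many more metrics.
    """
    caught = defaultdict(int)   # true positives the model flagged, per group
    total = defaultdict(int)    # all true positives, per group
    for group, actual, predicted in records:
        if actual:
            total[group] += 1
            if predicted:
                caught[group] += 1
    return {g: caught[g] / total[g] for g in total}

# Synthetic data matching the article's numbers:
records = ([("men", True, True)] * 80 + [("men", True, False)] * 20 +
           [("women", True, True)] * 68 + [("women", True, False)] * 32)
print(subgroup_sensitivity(records))  # {'men': 0.8, 'women': 0.68}
```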
“You might speculate that women have fewer tests taken and as a result have fewer lab results feeding the model … or is it that there weren’t enough women represented in the historical data to pick up those patterns (when the model was built)?” explained Lederman.
Another important aspect of AI governance addressed by Signal 1 is enterprise model management, which can be likened to inventory management. Every model is registered on the platform, including details on when it was built, who built it, when it was approved, what the initial performance was like and how it is performing now.
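The inventory analogy maps naturally onto a simple registry record. A minimal sketch, with hypothetical field names covering the details the article lists:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    """One entry in an enterprise AI model inventory (hypothetical schema).

    Captures when a model was built, who built it, when it was approved,
    its initial performance, and how it is performing now.
    """
    name: str
    built_on: date
    built_by: str
    approved_on: date | None     # None until governance sign-off
    baseline_auc: float          # performance measured at validation time
    current_auc: float | None    # refreshed by live monitoring

registry: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    """Add a model to the inventory so nothing runs untracked."""
    registry[record.name] = record
```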
Signal 1 is currently working with major healthcare organizations like Trillium Health Partners and Unity Health, and it is garnering attention from other provinces, as well. Users either access the platform directly to manage and monitor their AI solutions in-house, or contract with Signal 1 as a service provider.
“For a lot of hospitals, if you ask them what AI models they have running, they say they don’t actually know, or that they have it in a spreadsheet somewhere,” said Lederman. “We believe tools for management, monitoring and reporting are the things that will unlock AI at scale, that will turn AI from a couple of science experiments inside hospitals to a core piece of software that impacts hospital operations.”
INOVAIT, the Canadian image-guided therapy and artificial intelligence network established in 2020 by Sunnybrook Research Institute and supported by Canada’s Strategic Innovation Fund, is working to develop AI technologies that will revolutionize the diagnosis and treatment of disease and improve patient outcomes.
In February 2025, the network released a framework intended to help healthcare institutions to “confidently and responsibly engage in health data licensing with ethical private sector companies.”
Understanding that Canada is uniquely positioned to lead the development of AI-enabled technologies due to the consistent way diverse health data is collected here, the INOVAIT framework lays out eight principles for safe, ethical and trustworthy Canadian health data licensing.
They are: transparency, de-identified or anonymized data, acceptable risk, ethical partners, benefits public good, Indigenous data sovereignty, oversight and governance, and responsible stewardship.
The over-arching goal is to provide a reference point for groups who want to share data across the country, including hospitals, clinics, public health teams or administrators, said Dr. Brian Courtney, an interventional cardiologist, inventor and clinician-scientist at Sunnybrook Research Institute. He is also co-chair of INOVAIT’s Health Data Sharing & Governance Working Group.
“We need to have a framework that allows us to do the good parts of what is possible (with AI) while having boundaries that provide protection to make sure the potential harms of this are significantly minimized,” said Dr. Courtney.
The multidisciplinary process to develop the framework, which can be downloaded at www.inovait.ca/data, included consultation with experts in ethics, privacy, clinical care, industry, Indigenous health, patient advocacy, health technology and data governance, as well as a roundtable with health data leaders from six provinces.
Stakeholders considered what would be practical, implementable and reasonable, and went through a high-level risk-benefit analysis.
“This working group came out of a commitment to the federal government that we made, saying we think we can do great research in the new technologies that are needed to improve healthcare, but we’re not just going to be technology centric; we’re going to provide a framework so this can be done in a way that the public will trust,” said Dr. Courtney.
For projects using anonymized data where the risk of re-identification is very low, applying the framework as a checklist might be reasonable. In cases where there’s a higher specificity of information, meaning it might be easier to identify patients if data anonymization isn’t entirely successful, more rigid processes may be required, such as establishing an independent review board.
“Similar to a research ethics board, it would review the application for the data sharing agreement and apply these principles, and then determine: Are there any other safeguards? Is there any auditing that’s required? Are there any covenants that need to be made? What are the consequences of a data breach?” he explained.
In other circumstances, the framework may require the establishment of federated learning systems, where instead of sending data back and forth, the processing of highly confidential and sensitive data would occur in an extremely secure data trust and only the algorithms would go back to the interested party.
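A rough sketch of the federated pattern Dr. Courtney describes: the model travels, the data does not. The Python below is a toy stand-in (each weight nudged toward a site-local mean in place of real training), assumed structure only, not INOVAIT’s specification:

```python
# Minimal federated-averaging sketch (hypothetical): raw data stays
# inside each secure data trust; only model weights move between parties.

def local_update(weights: list[float], site_data: list[float],
                 lr: float = 0.1) -> list[float]:
    """One training step computed inside the data trust.

    Each weight is nudged toward the site's local mean as a stand-in for
    real gradient training; the point is that site_data never leaves.
    """
    target = sum(site_data) / len(site_data)
    return [w + lr * (target - w) for w in weights]

def federated_round(global_weights: list[float],
                    sites: list[list[float]]) -> list[float]:
    """Ship weights to each site, train locally, average the updates."""
    updates = [local_update(list(global_weights), data) for data in sites]
    return [sum(u[i] for u in updates) / len(updates)
            for i in range(len(global_weights))]

# Two hospitals' private datasets (never pooled); only averaged weights return.
sites = [[0.2, 0.4, 0.3], [0.8, 0.9, 0.7]]
weights = [0.5]
for _ in range(10):
    weights = federated_round(weights, sites)
print(weights)  # converges toward the cross-site average without sharing data
```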
The first step was to draft the principles and framework. The next step is action, using the framework to develop template legal documents to reduce risks associated with health data sharing, for example, or to identify what governance and oversight should look like.
“When we started doing this, it felt a little bit like we were doing this out of an ethereal concern. Is it overly academic? Who are we as a group to get together to tell other people how to do this?” said Dr. Courtney. “But as we moved forward with it, we got a lot of support from people saying this is an issue, they don’t know how to grapple with it, but they want to be able to do the data sharing.”