
Governing generative AI: From permission to competency to measurement

By Will Falk (with AI assistance)

February 26, 2026


This article is condensed from a longer paper written by Will Falk, to be published by the CSA Public Policy Centre and available on their website. Will is a fellow at four Canadian think tanks and universities and a contributing editor to CHT.

Policy and regulation for generative AI (GenAI) in healthcare remain unsettled in Canada and internationally. What policies and practices have emerged are the product of a period of managed experimentation, shaped less by formal approval pathways than by professional accountability, institutional oversight, and pragmatic restraint. In Canada, this has produced a system that is permissive in practice but cautious in posture: clinicians are expected to supervise GenAI within scope of practice rules, organizations remain accountable for deployment, and patients experimenting with AI are expected to act responsibly.

Some have criticized Canadian governments for moving slowly. In early 2026, that criticism looks increasingly unfair: the UK, US, and Europe have each retrenched over the past year, while Canada has used safe-harbour rules and vendor approvals to allow adoption within physicians’ scope of practice.

Rather than forcing the use of legacy regulatory categories, Health Canada has taken a deliberately incremental approach, issuing principles and guidance while observing how GenAI is actually used in clinical settings. The change of government and the appointment of Evan Solomon as the first AI minister also contributed to a policy pause.

Health Canada’s AI4Health document in 2024 signalled that AI adoption could proceed provided safety, accountability, and trust were maintained. In February 2025, updated pre-market guidance for machine-learning-enabled medical devices clarified expectations around evidence, lifecycle management, and predetermined change control plans.

These steps were modest by design and remain anchored in Canada’s existing Software as a Medical Device (SaMD) framework, introduced in 2018 for a different generation of deterministic and narrowly scoped software. That framework was never designed for large, probabilistic, continuously evolving systems embedded directly into clinical workflows. Health Canada has not attempted to retrofit it aggressively. Instead, it has left room to learn.

That restraint now appears justified. Regulators in the US, UK, and Europe each initially signalled that GenAI would be regulated through existing device pathways or even more aggressively, with some European jurisdictions forcing registration as at least Class I medical devices. Each has since narrowed or softened those ambitions. Many have moved to self-certification, which is largely performative. None has produced a stable, widely accepted framework. There are few global best practices to import.

Complement or substitute: The most important regulatory distinction is not whether a system uses generative AI, but whether it substitutes for clinical judgment. Is it a complement or a substitute?

Complementary GenAI systems support licensed professionals and operate within existing scopes of practice. Ambient scribes that draft clinical notes, second-screen clinical decision support (CDS) tools, intake and referral systems, administrative automation, and patient-facing information tools all fall into this category. Accountability remains human. These systems have scaled rapidly because they fit within established licensure, liability, privacy and insurance structures. They reduce documentation burden, lower cognitive load, and improve workflow efficiency without displacing professional responsibility.

Substitutive GenAI systems are fundamentally different. They aim to initiate diagnoses or treatments, replace professional judgment, or act independently. These systems carry materially higher risk and require materially higher evidence and governance thresholds. Outside of medical imaging and a small number of tightly supervised pilots, substitutive generative AI remains largely in research environments in Canada.

Regulatory triggers should attach to substitution of clinical judgment, not to the mere presence of AI. Treating supervised ambient clinical copilot systems as if they were autonomous actors would unnecessarily slow the deployment of tools already delivering benefit. A broad class of complementary systems can continue to operate without formal pre-market approval, provided accountability is clear, supervision is explicit, and harm thresholds remain low. Ambient scribes had already reached 28 percent penetration by August 2025, according to a recent CMA-CFIB study. Safe-harbour approaches developed for ambient scribes should be expanded and replicated, not treated as exceptional.
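The complement/substitute distinction above can be sketched as a simple classification rule. This is purely illustrative: the class names, attributes, and decision logic here are hypothetical, not an official taxonomy from Health Canada or any regulator.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    COMPLEMENT = "complement"    # supports a licensed professional within scope of practice
    SUBSTITUTE = "substitute"    # replaces clinical judgment or acts independently

@dataclass
class AISystem:
    name: str
    initiates_diagnosis_or_treatment: bool  # acts without a clinician's order
    acts_autonomously: bool                 # output reaches care without human review
    supervised_by_clinician: bool           # a licensed professional signs off

def classify(system: AISystem) -> Category:
    """Attach the regulatory trigger to substitution of clinical
    judgment, not to the mere presence of AI (illustrative rule)."""
    if system.initiates_diagnosis_or_treatment or system.acts_autonomously:
        return Category.SUBSTITUTE
    return Category.COMPLEMENT

scribe = AISystem("ambient scribe", False, False, True)
print(classify(scribe).value)  # complement
```

Under a rule like this, an ambient scribe or second-screen CDS tool stays in the complementary class and scales under existing accountability structures, while anything that initiates diagnosis or acts autonomously trips the higher evidence and governance bar.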

Why legacy approval frameworks fail in practice: Attempts to force generative AI into drug or medical device frameworks show that a new approach is needed.

  • Drug approval pathways tolerate black boxes when safety and efficacy can be demonstrated, but drugs are approved as fixed interventions. Generative AI is adopted through learning, iteration, and real-world refinement. Requiring randomized controlled trials as a gating mechanism for deployment would delay adoption for years and suppress learning. Randomized trials will and should play a role as systems mature, particularly through academic centres, but they are learning studies, not approval studies.
  • Medical device regulation poses a different mismatch. Even SaMD approvals assume fixed functionality, bounded performance, and predictable change cycles. Predetermined change control plans were a reasonable adaptation for earlier machine-learning tools. They are a poor fit for generative AI systems that are probabilistic, adaptive, and deeply embedded in workflows. Full interpretability may not be achievable for some time. Transparency matters, but insisting on device-level approval as the default either blocks useful tools or drives clinicians toward ungoverned alternatives.

Jurisdictions that tightly constrained complementary tools have not eliminated AI use. They have displaced it into informal and unmeasured channels. The UK Royal College of Physicians report (late 2025) makes this clear: the lack of second-screen CDS tools (like Doximity or OpenEvidence) has led to more grey usage of ChatGPT and other foundation models. This is undesirable because those tools are less safe, less useful, and not limited to clinicians through a registration process.

The Competency-based Option: A competency-based framework offers a more coherent alternative. Healthcare already regulates human intelligence through scopes of practice, supervision, and progressive responsibility. Medical students, residents, and fellows are not approved once and left alone. They are hired, supervised, evaluated, audited, retrained, re-credentialed and fired or retired. Competence is maintained across the life cycle of the intelligence’s employment.

A competency-based model would apply this logic to AI systems. What is regulated is the class of capability exercised in a defined clinical context. Complementary systems operate under supervision. Near substitutive systems face progressively higher bars as autonomy increases. Interpretability is not required at every level, just as it is not required for trainees practicing under supervision. What matters is demonstrated performance, safety, and accountability.

This shifts the regulatory centre of gravity away from pre-market approval and toward continuous measurement.

Early debates about generative AI focused on risks such as hallucinations, bias, and instability. Early measurement relied heavily on standardized exams, most notably US medical licensing tests. Those benchmarks were quickly exhausted: by mid-2025, frontier models were scoring above 90 percent, and the metric ceased to be informative. Scoring 100 percent on the USMLE no more makes an AI a doctor than owning a calculator makes someone a math teacher.

More recent evaluation frameworks represent a meaningful shift. Studies such as MAI-Dx by Microsoft’s Cambridge lab and the NOHARM work from a Stanford-Harvard group moved beyond factual recall to test diagnostic reasoning, error patterns, and comparative performance against human clinicians in controlled settings. These studies consistently show that single-agent foundation models now perform roughly at general practitioner (GP) level on diagnostic tasks constructed by specialist panels. The same group also published the excellent ARISE report on AI in clinical practice at the start of 2026. New frameworks that go beyond simple recall, including MAST and MedHELM, are being proposed.

A practical framework should track:

  • Transparency around model provenance and change history
  • Error rates, override frequency, and escalation behaviour
  • Bias and equity performance across populations
  • Guardrails that constrain unsafe outputs
  • Orchestration effects from multi-model and human-AI teaming

These are measurable properties. They can be audited. They align naturally with a competency-based approach. Under this model, regulation resembles continuing medical education as much as licensure exams. These practices may not yet be familiar to digital health experts, but they will be: model oversight, managing agentic resources, and local retraining will make up much of the work of the future. This is exciting territory and worth spending time understanding.
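To make the auditability claim concrete, the properties listed above could be captured as a structured audit record. This is a minimal sketch under stated assumptions: the field names, units, and thresholds are hypothetical illustrations, not drawn from any existing standard or regulatory instrument.

```python
from dataclasses import dataclass, field

@dataclass
class CompetencyAudit:
    """Hypothetical audit record for a deployed GenAI system,
    tracking the measurable properties listed above."""
    model_version: str                      # provenance and change history
    error_rate: float                       # share of outputs judged erroneous
    override_frequency: float               # share of outputs overridden by clinicians
    escalation_rate: float                  # share of cases escalated to a human
    equity_gaps: dict = field(default_factory=dict)  # per-population performance deltas
    guardrail_blocks: int = 0               # count of unsafe outputs constrained

    def flags(self, max_error_rate: float = 0.05) -> list:
        """Return audit flags; both thresholds are illustrative."""
        issues = []
        if self.error_rate > max_error_rate:
            issues.append("error rate above threshold")
        if self.override_frequency > 0.25:
            issues.append("high clinician override frequency")
        return issues

audit = CompetencyAudit("scribe-v1.2", error_rate=0.01,
                        override_frequency=0.10, escalation_rate=0.05)
print(audit.flags())  # []
```

A record like this is the software analogue of re-credentialing: it is reviewed periodically over the system's life cycle rather than approved once and left alone.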

Canada’s current policies have allowed this logic to emerge. Complementary tools have scaled within existing accountability structures that addressed privacy, liability, and practice scope. Early safe-harbour work by OntarioMD and Canada Health Infoway created space for responsible deployment. Health Canada’s restraint avoided premature rigidity. Take the win, even if it wasn’t entirely planned.

Federal–provincial roles, trade, and sovereignty: These regulatory questions do not sit neatly within existing constitutional lines. Healthcare delivery and professional regulation are provincial responsibilities. AI governance, competition policy, privacy, and trade are federal domains. Generative AI cuts across them.

Provinces control procurement, deployment, and day-to-day clinical governance for digital health systems (if anyone does). The federal government sets the legal, economic, trade, and competitive conditions under which these systems operate.

Generative AI amplifies scale and reshapes federal-provincial-territorial (FPT) dynamics. Provinces cannot counter global platform power on their own. Global vendors will not navigate a fragmented patchwork of provincial rules. Only federal leadership will prevent fragmentation, regulatory arbitrage, and uneven standards.

A lean, AI-ready national capability could support measurement frameworks, oversee AI capability classification, manage shared datasets and synthetic testbeds, and coordinate competition policy with the Competition Bureau. Health Canada has renewed Bill C-72 by introducing Bill S-5 on patient data, interoperability, and anti-blocking. Infrastructure for models, training data, and synthetic data needs to be built in Canada, by Canadians, for all data and specifically for health data.

© 2026 Canadian Healthcare Technology
