Education & Training
The AI-powered medical school: a revolution in how we train physicians
March 31, 2026
Modern medical education was built through imported infrastructure and retooled institutions. In the early nineteenth century, Parisian physicians redefined medicine by anchoring learning in observation, pathology, and bedside correlation.
That model travelled to Berlin, where laboratory science and systematic clinical reasoning were integrated into training, and then to North America.
The transformation culminated in reforms associated with William Osler, a Canadian by birth, who helped move students out of lecture halls and into hospital wards, establishing clinical immersion as the core of physician training for the next century. Abraham Flexner took that model from Johns Hopkins and expanded it across North America and around the world.
We are now on the cusp of a comparable shift, driven by a change in available infrastructure rather than a change in the underlying science-based philosophy. Artificial intelligence is altering how knowledge is accessed, synthesized, tested, and applied. While much attention has focused on teaching students about AI, the more consequential change lies in using AI to reorganize how physicians are trained in the first place.
Medical education today still relies on systems designed for scarcity. Knowledge is delivered in fixed sequences. Feedback is episodic and delayed. Faculty supervision, though central, is constrained by time and scale.
Learning management systems largely function as repositories, placing the burden of search, synthesis, and translation on the learner. These constraints were tolerable when the volume of medical knowledge was smaller and the pace of change slower. They are increasingly mismatched to contemporary practice.
Retrieval-augmented generation changes this foundation. In a RAG-based system, a language model is coupled to a curated corpus of scientifically rigorous and approved educational materials. When a learner asks a question, the system retrieves relevant content from that knowledge base and generates an answer grounded explicitly in the curriculum, with traceable sources.
The result is not generic explanation, but context-specific guidance anchored in what the institution has decided is authoritative. The knowledge base can be continuously updated with best evidence.
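The retrieve-then-ground loop described above can be illustrated with a minimal sketch. This toy uses a hypothetical in-memory corpus and simple word-overlap scoring purely for illustration; a production RAG system would use dense embeddings for retrieval and pass the retrieved passages to a language model as grounding context. All corpus entries and source names below are invented.

```python
import math
from collections import Counter

# Hypothetical curated corpus: each entry is an institution-approved
# curriculum excerpt tagged with a traceable source.
CORPUS = [
    {"source": "Cardiology week 3 lecture",
     "text": "Beta blockers reduce heart rate and myocardial oxygen demand."},
    {"source": "Pharmacology syllabus, unit 2",
     "text": "ACE inhibitors lower blood pressure by blocking angiotensin conversion."},
    {"source": "Clinical skills handbook",
     "text": "Auscultation of the heart begins at the aortic area."},
]

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def score(query, doc_text):
    # Bag-of-words overlap, length-normalized. Real systems rank with
    # vector similarity over embeddings instead.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc_text))
    return sum((q & d).values()) / math.sqrt(len(d) or 1)

def retrieve(query, k=2):
    ranked = sorted(CORPUS, key=lambda doc: score(query, doc["text"]), reverse=True)
    return ranked[:k]

def answer(query):
    # In a full RAG pipeline the retrieved passages would be fed to an LLM
    # to generate a grounded answer; here we simply return them with sources,
    # which is the part that makes the answer auditable.
    return [(hit["source"], hit["text"]) for hit in retrieve(query)]

for src, text in answer("How do beta blockers affect heart rate?"):
    print(f"[{src}] {text}")
```

The key design point is that generation is constrained to what retrieval returns from the approved corpus, and every answer carries its source, which is what distinguishes curriculum-grounded guidance from a generic chatbot reply.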
Yale School of Medicine’s Curriculum Search illustrates how this works in practice. Developed by its educational technology and medical library teams in under one year, the tool allows students and faculty to query thousands of lectures, slides, and readings in seconds.
Questions such as where a topic is taught, how a concept evolves across courses, or what limits a specific intervention can be answered directly from the curriculum itself. Faculty use the same system to identify gaps, reduce redundancy, and understand how their teaching fits within the whole. The infrastructure serves learners and teachers simultaneously using the same underlying content.
The learning science behind this approach is well established. Active learning, appropriate challenge, and timely feedback consistently outperform passive instruction.
What has been missing is the ability to deliver those conditions at scale. Faculty cannot provide individualized coaching to every learner in real time. AI systems grounded in approved materials can. Novices can clarify foundational concepts as questions arise. Advanced learners can test their reasoning against guidelines using realistic (but synthetic) vignettes and receive immediate, structured critique from trained agents. The supervision model remains human, but the reach of that supervision expands.
Evidence supporting this shift is accumulating. Randomized studies and meta-analyses show improvements in practical skills when generative tools are used as learning supports. Studies comparing AI-generated feedback with expert faculty feedback on clinical reasoning tasks have found no meaningful differences in outcomes when the systems are properly constrained.
Simulation work using AI-based patients suggests learners benefit from low-stakes repetition and pacing that traditional settings cannot offer. These tools do not replace mentorship. They change how often and how effectively it can occur.
This matters beyond pedagogy. Training environments shape professional expectations. Physicians educated in AI-native settings will expect clinical systems that behave similarly: searchable by default, responsive to context, and capable of supporting reasoning rather than merely recording it.
They will not accept health IT that requires manual navigation through static screens to retrieve basic information. Just as “digital phones” became phones and “digital computers” became computers once the infrastructure matured, “digital health” will fade as a category and simply be healthcare.
At Scarborough Health Network, we are building toward this future deliberately. Our student programs have brought together learners from medicine, engineering, design, and business to prototype AI-enabled educational tools that address clinical and teaching problems.
Using AI-assisted development, teams moved from concept to tested prototype within weeks, producing patient-education tools for post-admission care and assessment systems suitable for busy clinical environments. Each project shipped with validation data and a pathway to pilot use. The lesson was not that AI accelerates coding, though it does. It was that when AI is treated as infrastructure rather than content, learners engage by building, testing, and iterating. Learning accelerates as a consequence.
This reorientation has direct implications for health IT leaders. Education and care delivery are not separate systems. The tools used to teach reasoning, documentation, and communication become the tools clinicians expect to use in practice.
AI-native graduates will route around systems that cannot support their workflows, just as earlier generations bypassed paper when electronic records became unavoidable. Health IT strategies that ignore how clinicians are trained will struggle to retain relevance, no matter how compliant or secure they may be.
We have already seen this over the past two years, as nimble clinical decision support running on iPhones has displaced online textbooks.
Medical education has always evolved by absorbing new infrastructure and reshaping institutions around it. Artificial intelligence is the next substrate. It will alter how knowledge is organized, how reasoning is practiced, and how judgment is formed well before it changes licensure or regulation.
Physicians trained in AI-native environments will expect systems that are adaptive and responsive by design, and they will carry those expectations into every setting they work in.
The question facing medical schools, health systems, and health IT leaders today is not whether this transformation will occur. It is whether they will shape it intentionally or accept the consequences of arriving late.
Samir C. Grover is a gastroenterologist, education researcher, and executive vice-president, Academics at Scarborough Health Network. Will Falk is a retired management consulting partner and public policy fellow focused on healthcare and technology.