
A patient arrives at the emergency room with sudden weakness on one side of their body. A brain MRI is ordered. Normally, the scan would sit in a queue — waiting hours, sometimes days, for a radiologist to review it. But a stroke doesn't wait.
This is the exact problem that researchers at the University of Michigan set out to solve. Their answer is Prima, a vision language model that can read a brain MRI and generate a diagnostic report in seconds. Published in Nature Biomedical Engineering in February 2026, the study behind Prima is one of the most significant advances in medical AI this year.
What Prima Does and Why It Matters
Prima isn't another narrow AI tool trained to detect one condition. It's a foundation model — a large-scale system that can diagnose over 50 different neurological conditions from a single brain MRI scan.
The model processes imaging data and clinical text simultaneously. Feed it MRI sequences along with the patient's medical history and the ordering physician's notes, and it returns a differential diagnosis, a recommended subspecialty referral (such as a stroke neurologist or neurosurgeon), and a priority level indicating how urgently the case needs attention.
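The input/output contract described above can be pictured as a small data structure. This is an illustrative sketch only; the class and field names below are invented for this article and are not Prima's actual API:

```python
from dataclasses import dataclass

@dataclass
class TriageReport:
    """Hypothetical shape of a Prima-style preliminary report."""
    differential_diagnosis: list[str]  # ranked candidate conditions
    referral: str                      # subspecialty to involve
    priority: str                      # how urgently the case needs review

# A report like the one the article describes for a suspected stroke:
report = TriageReport(
    differential_diagnosis=["acute ischemic stroke", "transient ischemic attack"],
    referral="stroke neurology",
    priority="emergent",
)
```

The point of the structure is that all three outputs travel together: the diagnosis list for the radiologist, the referral for routing, and the priority for queue ordering.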
In a year-long evaluation across 29,431 MRI studies at University of Michigan Health, Prima achieved a mean area under the curve (AUC) of 92.0% across 52 neurological diagnoses, reaching 97.5% in some diagnostic categories. It consistently outperformed other state-of-the-art medical AI systems.
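The headline metric can be made concrete with a toy computation. A per-condition AUC is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the Mann-Whitney U formulation); averaging those values over every condition gives the mean AUC the study reports. The condition names and scores below are invented for illustration:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    random positive case scores above a random negative case."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy per-condition scores (label 1 = condition present):
conditions = {
    "stroke":     ([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]),
    "hemorrhage": ([0.7, 0.4, 0.6, 0.5], [1, 1, 0, 0]),
}
per_condition = {name: auc(s, y) for name, (s, y) in conditions.items()}
mean_auc = sum(per_condition.values()) / len(per_condition)
# stroke separates perfectly (AUC 1.0); hemorrhage is at chance (0.5),
# so the mean AUC here is 0.75
```

In the study, the same kind of average is taken over 52 diagnoses instead of two, yielding the reported 92.0%.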
"Prima works like a radiologist by integrating information regarding the patient's medical history and imaging data," said Samir Harake, co-first author and data scientist at Michigan Medicine.
How Prima Actually Works
Earlier medical AI models were typically trained on manually curated datasets to detect a single condition — one model for strokes, another for tumors, another for hemorrhages. Prima takes a fundamentally different approach.
The model was trained on over 220,000 MRI studies, representing 5.6 million individual imaging sequences. This data was drawn from decades of routine clinical care at University of Michigan Health and includes real patient histories, physician imaging indications, and radiology reports. Rather than relying on hand-selected examples, Prima learned from the full spectrum of what a health system actually encounters.
Architecturally, Prima is a vision language model (VLM) — the same category of AI system behind multimodal tools like GPT-4V and Gemini. It simultaneously processes video, images, and text in real time, using a hierarchical vision architecture that extracts both general and specialized MRI features.
Dr. Todd Hollon, the study's senior author and a neurosurgeon at Michigan Medicine, describes Prima as "ChatGPT for medical imaging." The comparison is apt: just as large language models generate contextual text responses, Prima generates contextual diagnostic reports by understanding both visual patterns and clinical context.
The Radiology Bottleneck
The timing of Prima's development isn't coincidental. Global demand for MRI imaging has been rising steadily, but the supply of radiologists hasn't kept pace.
In some regions, there are fewer than one radiologist per 100,000 people. Emergency departments face overcrowding, and imaging backlogs can mean patients wait days for results on scans that should be reviewed within hours. For time-sensitive conditions like ischemic stroke or intracranial hemorrhage, delays of even minutes can lead to permanent disability or death.
"Accuracy is paramount when reading a brain MRI, but quick turnaround times are critical for timely diagnosis," said Yiwei Lyu, co-first author and postdoctoral fellow in Computer Science and Engineering at the University of Michigan.
Prima addresses this bottleneck by providing immediate preliminary assessments. The moment a patient completes their MRI, the model can analyze the scan and alert the appropriate specialist. A stroke neurologist gets notified for suspected strokes. A neurosurgeon gets flagged for hemorrhages. The triage happens in seconds, not hours.
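The triage step described above amounts to a lookup from suspected finding to on-call service and priority level. The routing table below is a hypothetical sketch of that logic, not Prima's actual configuration:

```python
# Hypothetical routing table: which service is paged for which finding,
# and at what priority. Entries are illustrative.
ROUTES = {
    "ischemic stroke":         ("stroke neurology", "emergent"),
    "intracranial hemorrhage": ("neurosurgery",     "emergent"),
    "glioma":                  ("neuro-oncology",   "urgent"),
}

def route(finding: str) -> tuple[str, str]:
    """Return (service to notify, priority); unrecognized findings
    fall back to the routine radiology queue."""
    return ROUTES.get(finding, ("general radiology", "routine"))
```

The fallback matters: a triage system must fail safe, so anything the model cannot confidently categorize still lands in the standard review queue rather than being dropped.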
Over 1,000 FDA-Cleared AI Devices in Radiology
Prima is part of a much larger transformation. By the end of 2025, the FDA had granted marketing authorization to 1,039 AI-enabled radiology devices, accounting for 77% of all AI medical device authorizations. In January 2026, Aidoc received clearance for a multi-condition triage platform that can flag 14 acute conditions from CT scans, marking a shift from single-condition tools to foundation model-driven approaches.
The regulatory landscape is evolving alongside the technology. As of 2026, the EU AI Act requires radiology AI systems classified as "high-risk" to document their training data, conduct bias audits, and implement human oversight protocols.
Yet a gap remains between regulatory clearance and real-world adoption. Despite hundreds of FDA-cleared devices, only two CPT Category I reimbursement codes exist for AI in radiology as of early 2026, both limited to cardiac imaging. Without clear payment pathways, hospitals have limited financial incentive to deploy AI tools even when the clinical evidence supports them.
What Prima Can't Do Yet
Prima's researchers are transparent about its current limitations. The model remains in an early evaluation phase. It has not been tested in independent clinical deployment without physician oversight, and it isn't designed to replace radiologists.
"As the global demand for MRI rises and places significant strain on our physicians and health systems, our AI model has the potential to reduce burden," said Dr. Hollon. The operative word is "potential." Future research will focus on incorporating more detailed electronic medical records, expanding to other imaging modalities like mammograms, chest X-rays, and ultrasounds, and validating the system across multiple institutions.
There's also the question of equity. AI models trained predominantly on data from a single institution may not perform equally well across diverse patient populations. The research team evaluated Prima's performance across demographic groups and found equitable results, but broader multi-site validation will be essential before widespread adoption.
Why This Matters Beyond Radiology
Prima represents more than a faster way to read brain scans. It's a proof of concept for how vision language models — the same technology driving advances in AI agents and intelligent automation — can be applied to high-stakes professional domains.
The pattern is repeating across industries: train a large foundation model on domain-specific data, give it multimodal capabilities, and let it augment human experts rather than replace them. Healthcare happens to be one of the most impactful proving grounds for this approach, but the principles transfer to legal document analysis, financial risk assessment, engineering inspection, and any field where experts need to process complex information quickly and accurately.
The gap between what AI can do in a research setting and what it does in clinical practice is closing fast. With regulatory frameworks adapting, reimbursement models evolving, and models like Prima demonstrating real-world performance at scale, the question isn't whether AI will transform medical imaging — it's how quickly.
Key Takeaways
- Prima is a vision language model from the University of Michigan that reads brain MRIs and diagnoses 50+ neurological conditions in seconds
- Trained on 220,000+ MRI studies (5.6 million imaging sequences) from decades of clinical data
- Achieved a 92% mean AUC across 52 diagnoses in a year-long evaluation of 29,431 scans, reaching 97.5% in some categories
- Automatically triages urgent cases and alerts the appropriate specialist (stroke neurologist, neurosurgeon)
- 1,039 AI-enabled radiology devices are now FDA-cleared, but reimbursement pathways remain limited
- Prima is still in early evaluation — it's designed to augment radiologists, not replace them
- Published in Nature Biomedical Engineering, February 2026


