How Can Medical Trainees Use AI Without Losing Critical Thinking Skills?


Can tomorrow’s doctors study with AI without losing their critical thinking? A NEJM review offers DEFT-AI and new collaboration models to help educators harness AI while protecting clinical skills.


Review: Educational Strategies for Clinical Supervision of Artificial Intelligence Use. Image Credit: Antonio Marca / Shutterstock

In a recent review published in The New England Journal of Medicine, researchers elucidate the challenges of supervising early-career medical learners who use powerful large language models (LLMs) as educational aids. The review highlights the dangers of "deskilling," where overreliance on AI erodes essential clinical reasoning skills, and "mis-skilling," where trainees adopt AI-generated errors, alongside "never-skilling" – the failure to develop essential competencies in the first place.

The review proposes a structured educational framework called "diagnosis, evidence, feedback, teaching, and recommendations for AI engagement" (DEFT-AI) to counter these potential AI harms by scaffolding critical thinking. The review also introduces the "cyborg" and "centaur" models of human-AI collaboration, urging clinicians to adopt an adaptive practice in which they learn to engage critically with AI-generated outputs rather than trusting them unquestioningly.

Background

Recent advances in artificial intelligence (AI), particularly in computation and large language models (LLMs), are progressing at an astonishing rate. LLMs such as OpenAI's ChatGPT and Google's Gemini are increasingly used in medical learning, raising both opportunities and risks for clinical reasoning. A growing body of literature suggests that AI tools are fundamentally reshaping medical learning and practice.

However, integrating AI into clinical practice presents unprecedented opportunities and significant risks for medical education. While rapid access to information and the ability to consolidate vast swaths of data into easily accessible summaries may become integral to future medical education and practice, LLMs are known to simulate human-like reasoning, creating an "appearance of agency" – an effect in which systems simulate reasoning and their outputs appear to show agency when none actually exists. This can be especially dangerous for inexperienced medical trainees.

Medical educators thus face a new and urgent challenge – guiding and supervising trainees who might be more proficient at leveraging AI than the educators themselves, creating an 'inversion of expertise' in which teachers become learners too. The present review highlights three specific hurdles ("deskilling," "never-skilling," and "mis-skilling") that must be overcome before AI can cement its role in ensuring a safer and healthier future.

About the Review

This review aims to address this urgent and critical need by conducting a comprehensive examination of the scientific literature on the challenges and opportunities presented by AI in medical education. It collates and synthesizes the findings of more than 70 prior publications across established educational theory, cognitive science, and emerging research on human-AI interaction, and uses these insights to develop new conceptual frameworks for the clinical supervision of AI:

Diagnosis, evidence, feedback, teaching (DEFT), and recommendations for AI engagement (DEFT-AI) – an adapted framework for promoting critical reasoning during educational conversations about AI use.

Cyborg vs. Centaur Models: a new typology describing two distinct modes of human-AI collaboration. These models are designed to help educators and learners adapt their use of AI to the specific clinical task and its associated risk.

Review Findings

The review identifies and addresses several cognitive traps that today’s AI age imposes on medical education. "Cognitive offloading," the process of over-relying on AI for complex tasks such as clinical reasoning, is highlighted for its link to "automation bias" – a consequent over-reliance on the AI’s output and a failure to catch its mistakes.

Alarmingly, cognitive offloading and automation bias are not just theoretical concerns. One study found that more than a third of advanced medical students failed to identify erroneous LLM answers to clinical scenarios. Another reported a significant negative correlation between frequent use of AI tools and critical thinking abilities, mediated by increased offloading; this effect was especially pronounced among younger participants.

The review recommends addressing these concerns by developing and adopting the DEFT-AI framework, a structured approach educators can use in response to a trainee’s reliance on AI. It proposes a critical dialogue that moves beyond the AI’s answer to probe the learner’s reasoning. Key questions include: "What prompts did you use?", "How did you verify the AI-generated output?", and "How did the AI’s advice influence or change your diagnostic approach?" Educators are also encouraged to teach evidence-based appraisal of AI outputs using Sackett’s framework (ask, acquire, appraise, apply, assess) and effective prompt engineering techniques, such as chain-of-thought reasoning.
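To make the prompt-engineering point concrete, the sketch below contrasts a bare question with a chain-of-thought prompt for the same case. This is an illustrative example only, not taken from the review: the clinical vignette and template wording are hypothetical, and the point is simply that a stepwise prompt exposes intermediate reasoning a trainee (and supervisor) can audit.

```python
# Hypothetical example: contrasting prompt styles for the same clinical question.
# Neither the case nor the wording comes from the NEJM review.

CASE = ("A 58-year-old presents with pleuritic chest pain, tachycardia, "
        "and recent long-haul travel.")

def direct_prompt(case: str) -> str:
    """A bare question that invites a single, unexplained answer."""
    return f"{case}\nWhat is the most likely diagnosis?"

def chain_of_thought_prompt(case: str) -> str:
    """Asks the model to show intermediate reasoning the trainee can audit."""
    return (
        f"{case}\n"
        "Reason step by step before answering:\n"
        "1. List the key findings.\n"
        "2. Propose a ranked differential diagnosis.\n"
        "3. State which findings support or argue against each candidate.\n"
        "4. Only then name the most likely diagnosis and how to verify it."
    )

if __name__ == "__main__":
    print(direct_prompt(CASE))
    print("---")
    print(chain_of_thought_prompt(CASE))
```

A prompt structured this way gives the supervisor material for the DEFT-AI dialogue: the trainee can be asked to defend or challenge each intermediate step rather than just the final answer.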

The review further stresses that supervision must distinguish between evaluating the AI tool itself and evaluating its specific outputs. For example, community scorecards or model leaderboards may be used to assess tools, while evidence-based medicine appraisal steps should be applied to each individual output.

Finally, the review presents the "cyborg" and "centaur" modes of clinician-AI interaction. In centaur mode, tasks are strategically divided so that the clinician delegates low-risk, well-defined tasks (such as summarizing information or drafting communications) to the AI while retaining complete control over high-stakes clinical judgment and decision-making. This mode is recommended when addressing complex or uncertain cases.

In contrast, the cyborg mode assumes that the clinician and AI co-construct a solution to the task at hand. This mode is efficient for low-risk, routine tasks but carries a higher risk of automation bias if not used with ongoing reflective oversight and justification.

The review also warns that performance heterogeneity and bias in LLMs can exacerbate health inequities. AI systems may underperform for certain populations, and uncritical adoption could widen disparities rather than narrow them.

Conclusions

The review concludes that while the integration of AI into medicine and medical education is inevitable (and largely beneficial), its successful and safe adoption is not. It highlights that medical education must proactively address the risks of deskilling, never-skilling, and mis-skilling by fundamentally changing how clinical reasoning is taught, particularly against the backdrop of AI. Critical thinking remains foundational for "adaptive practice" – the ability to shift between efficient routines and innovative problem-solving when faced with the unpredictability of AI.

In summary, this review demonstrates that the ultimate goal is not to create doctors who are dependent on AI, but to cultivate clinicians who can skilfully and safely leverage it as a powerful tool to augment, but not replace, their own expertise through a "verify and trust" paradigm.

Journal reference:

  • Abdulnour, R.-E. E., Gin, B., & Boscardin, C. K. (2025). Educational Strategies for Clinical Supervision of Artificial Intelligence Use. New England Journal of Medicine, 393(8), 786–797. DOI: 10.1056/NEJMra2503232. https://www.nejm.org/doi/full/10.1056/NEJMra2503232