Thursday, January 9, 2025

Recommendations to Ensure Safety of AI in Real-World Clinical Care (JAMA)

Global Healthcare Leaders Set Framework for AI Safety in Clinical Practice

Leading medical institutions worldwide have established key guidelines for implementing artificial intelligence in patient care, emphasizing human oversight and shared decision-making.

Writing in JAMA (November 27, 2024), University of Texas researchers Dean Sittig, PhD, and Hardeep Singh, MD, MPH, of the DeBakey VA Medical Center outline specific requirements for healthcare organizations implementing AI tools, including mandatory safety committees, real-world testing, and clear protocols for disabling problematic systems[1].

A complementary framework published in Current Oncology by University of Toronto bioethicists Rosanna Macri and Shannon Roberts (2023) provides guidance for incorporating patient values into AI clinical decisions[2]. Their values-based approach emphasizes trust, privacy protection, and equitable access.

These frameworks align with UC Davis Health's implementation strategy, where CEO David Lubarsky and AI Advisor Dennis Chornenky are leading a 40-institution collaborative focused on responsible AI adoption[4]. "Doctors and nurses will always remain in charge of decision-making," emphasizes Lubarsky.

International consensus centers on:

  • Maintaining human oversight of clinical decisions
  • Requiring validation before deployment
  • Protecting patient privacy through encrypted analysis
  • Monitoring for bias and inequities
  • Reducing administrative burden while preserving patient-provider relationships


The European Medicines Agency and US FDA are developing parallel regulatory frameworks to ensure consistent safety standards across jurisdictions. These guidelines aim to harness AI's benefits while maintaining healthcare quality and human judgment at the core of patient care.

Recent BMC Medical Education research by Alowais et al. (2023) reinforces these principles, highlighting AI's potential to revolutionize healthcare while maintaining human oversight[3].

Citations:

  1. Sittig DF, Singh H. Recommendations to ensure safety of AI in real-world clinical care. JAMA. Published online November 27, 2024:E1-E2.
  2. Macri R, Roberts SL. The use of artificial intelligence in clinical care: a values-based guide for shared decision making. Curr Oncol. 2023;30:2178-2186.
  3. Alowais SA, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689.
  4. UC Davis Health. AI in Healthcare: A Conversation with UC Davis Health Leaders. 2024.

How AI May Support Your Cancer Care: A Patient Guide

AI tools are increasingly being used to assist your healthcare team, but doctors and nurses will always remain in charge of your care. Here's what you can expect:

  • Diagnosis and Screening
    • AI may help analyze medical images like X-rays, mammograms, and pathology slides
    • Your doctor will always review and confirm any AI findings
    • AI can help detect issues earlier and more consistently
  • Treatment Planning
    • AI can analyze your medical data to suggest treatment options
    • Your doctor will discuss all options with you, and you will make decisions together
    • AI recommendations are based on data from many other patient cases
  • Monitoring Your Care
    • AI may help track your symptoms and side effects
    • AI can alert your care team early if there are concerns
    • AI helps coordinate care between different providers
  • Important Things to Know
    • You have the right to know when AI is being used in your care
    • Your privacy and data security will be protected
    • You can always ask questions about how AI is being used
    • Your doctor remains your main point of contact
    • Treatment decisions will be made through discussions with your healthcare team, not by AI alone

