Description

The article examines the growing trend of patients using large language model chatbots for health advice and situates this behaviour within broader concerns about explainable AI and trust in medical decision-making. It highlights that while systems like ChatGPT can provide rapid, accessible information and emotional support, they operate as opaque, probabilistic text generators rather than transparent clinical tools, making it difficult for users or clinicians to understand how outputs are produced or to verify their reliability. The piece argues that this lack of explainability, combined with the risk of hallucinated content and hidden bias, undermines appropriate trust calibration and may encourage over-reliance on AI in place of qualified medical professionals. It concludes that AI in healthcare must be explicitly designed and governed as a support to human clinicians, with robust safeguards, interpretability, and accountability mechanisms, if it is to be safely integrated into high-stakes clinical practice.

Subject

Explainable AI, Trust, and Digital Health / Medical AI Ethics

Period: 24 Nov 2025

Media contributions: 1

  • Title: Dr ChatGPT will see you now: how AI could be bad for your health
    Media name/outlet: RTE Brainstorm
    Media type: Web
    Country/Territory: Ireland
    Date: 24/11/25
    Producer/Author: Celina Caroto, Anthony Kelly
    URL: https://www.rte.ie/brainstorm/2025/1124/1545495-health-medical-advice-ai-chatbot-chatgpt/
    Persons: Celina Caroto, Anthony Kelly

Keywords

  • Explainable AI
  • Large Language Models
  • Model interpretability and transparency
  • Trust calibration in socio-technical systems
  • Clinical decision support systems
  • Algorithmic bias
  • Artificial Intelligence
  • Trust in medical AI