TY - JOUR
T1 - Clinical Negligence in an Age of Machine Learning
T2 - Res ipsa loquitur to the Rescue?
AU - Bartlett, Benjamin
N1 - Publisher Copyright:
© 2024 Walter de Gruyter GmbH, Berlin/Boston.
PY - 2024/12/1
Y1 - 2024/12/1
N2 - Advanced artificial intelligence (AI) techniques such as 'deep learning' hold promise in healthcare but introduce novel legal problems. Complex machine learning algorithms are intrinsically opaque, and the autonomous nature of these systems can produce unexpected harms, which leaves open questions around responsibility for error at the clinician/AI interface. This raises concerns for compensation systems based on negligence because claimants must establish that a duty exists and demonstrate the specific fault that caused harm. This paper argues that clinicians should not ordinarily be negligent for following AI recommendations, and developers are unlikely to hold a duty of care to patients. The healthcare provider is likely to be the duty holder for AI systems. There are practical and conceptual problems with comparing AI errors to human performance or to other AI systems to determine negligence. This could leave claimants with insurmountable technical and legal challenges to obtaining compensation. Res ipsa loquitur could solve these problems by allowing the courts to draw an inference of negligence when unexpected harm occurs that would not ordinarily happen without negligence. This legal framework is potentially well-suited to addressing the challenges of AI systems. However, I argue res ipsa loquitur is primarily an instrument of discretion, which may perpetuate legal uncertainty and still leave some claimants without a remedy.
AB - Advanced artificial intelligence (AI) techniques such as 'deep learning' hold promise in healthcare but introduce novel legal problems. Complex machine learning algorithms are intrinsically opaque, and the autonomous nature of these systems can produce unexpected harms, which leaves open questions around responsibility for error at the clinician/AI interface. This raises concerns for compensation systems based on negligence because claimants must establish that a duty exists and demonstrate the specific fault that caused harm. This paper argues that clinicians should not ordinarily be negligent for following AI recommendations, and developers are unlikely to hold a duty of care to patients. The healthcare provider is likely to be the duty holder for AI systems. There are practical and conceptual problems with comparing AI errors to human performance or to other AI systems to determine negligence. This could leave claimants with insurmountable technical and legal challenges to obtaining compensation. Res ipsa loquitur could solve these problems by allowing the courts to draw an inference of negligence when unexpected harm occurs that would not ordinarily happen without negligence. This legal framework is potentially well-suited to addressing the challenges of AI systems. However, I argue res ipsa loquitur is primarily an instrument of discretion, which may perpetuate legal uncertainty and still leave some claimants without a remedy.
UR - http://www.scopus.com/inward/record.url?scp=85213411141&partnerID=8YFLogxK
U2 - 10.1515/jetl-2024-0015
DO - 10.1515/jetl-2024-0015
M3 - Article
AN - SCOPUS:85213411141
SN - 1868-9612
VL - 15
SP - 295
EP - 326
JO - Journal of European Tort Law
JF - Journal of European Tort Law
IS - 3
ER -