Arq. Bras. Cardiol. 2025; 122(5): e20250300
Will You Tell Them When They Are Wrong? Automation Bias and Artificial Intelligence in Medicine
The use of decision support tools and automation in medical practice is not new. Early tools employed simple regression models to reduce heterogeneity and offer more structured care. In cardiology, initial examples of decision support tools included the Framingham risk scores for primary prevention risk stratification and automated sliding scale dose adjustments for heparin or warfarin. Since then, automation processes have become more sophisticated and are now widespread in many routine aspects of medical practice. In cardiology, these include automated preliminary interpretations of electrocardiography (ECG) tracings as well as automated measurements and image analysis in echocardiography, nuclear medicine, and cardiac magnetic resonance imaging. Other automated clinical decision support systems help reduce prescribing errors by alerting clinicians to drug-drug interactions or incorrect dosages, for example. With recent advances in artificial intelligence (AI), expectations for the future of such tools have grown dramatically. Yet, the potential negative implications of these technologies have received considerably less attention.
Clinical decision support systems can improve medical decision-making and patient outcomes. However, these systems are not flawless and may produce incorrect outputs. Most studies assessing their performance rely on conventional medical metrics such as sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve. Yet, it is far more important to evaluate the real-world impact of these tools, particularly in light of how clinicians interact with them. This is especially relevant given that the European Union Artificial Intelligence Act mandates human oversight for high-risk AI systems. The issue becomes even more pressing with the widespread use of generative AI chatbots, as this generation of complex AI tools has fostered a growing tendency toward overreliance on imperfect automation.
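For context, the conventional metrics cited above are derived from the standard confusion-matrix counts of true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP); a brief recap of their usual definitions is:

\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \qquad
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
\]

The area under the receiver operating characteristic curve summarizes sensitivity against 1 − specificity across all decision thresholds. By construction, all of these quantify the model's output in isolation; none of them capture how clinicians respond to that output, which is precisely the dimension of real-world impact discussed here.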
[…]
Keywords: Artificial Intelligence; Bias; Cardiology