Who's Accountable When Healthcare AI Makes a Mistake?

Consider this scenario: a radiologist reviews 200 chest X-rays daily with AI assistance. On scan #147, the AI misses a small tumour. The doctor, trusting the AI's recommendation, moves on. Three months later, the patient returns with stage 3 cancer.
It's hypothetical, but the question it raises is not. Who's liable?
According to Ireland's Medical Council, the answer is clear: the doctor is [1].
The hospital's quarterly audit won't catch it for months. The AI vendor's logs show "functioning normally." Yet the Medical Council's October 2025 position paper states doctors "ultimately remain responsible for their clinical decisions" [1].
The problem: How can doctors be confident in AI tools they don't fully understand? How can they maintain accountability when AI systems fail invisibly?
The Accountability Gap
AI is increasingly deployed across Irish healthcare. The Mater Hospital established a Centre for AI and Digital Health in 2025 [2]. Radiology departments use AI to flag abnormalities. Pathology labs employ AI for tissue analysis. Emergency departments use AI triage.
The Medical Council says AI should "augment, rather than replace, clinical decision-making" [1]. But this creates a gap:
• Doctors are responsible for AI-assisted decisions
• But AI systems are black boxes with opaque decision-making processes
• And traditional monitoring only catches failures after harm occurs
As Jantze Cotter, Executive Director of Regulatory Policy at the Medical Council, noted: "AI advancements hold great potential for the medical field but also introduce significant ethical, legal, regulatory and professional challenges" [1].
Why Traditional Monitoring Fails
Healthcare organisations monitor AI through periodic audits and retrospective reviews. These approaches miss critical failure modes:
| Failure Mode | Clinical Impact | Why Traditional Monitoring Misses It | How Aqta Catches It |
|---|---|---|---|
| Loop drift | AI gets stuck in repetitive diagnostic patterns | Only visible in aggregate trends over time | Real-time hash detection + circuit breaker |
| Bias amplification | Underdiagnosis in underrepresented populations | Requires demographic analysis across cases | Continuous demographic outcome monitoring |
| Model drift | Accuracy degrades over time without warning | Performance changes are gradual and subtle | Statistical validation on every prediction |
| Integration failures | AI receives incomplete or corrupted patient data | Data quality issues aren't logged systematically | Input validation with data completeness checks |
The Medical Council acknowledges: "Doctors must have confidence in the standard of the tool they are utilising" [1]. Confidence requires visibility, yet most healthcare AI systems operate as black boxes.
EU AI Act Requirements
Healthcare AI falls under the EU AI Act's "high-risk" category. Requirements take effect in August 2026, only months away [3]:
• Risk management systems with continuous monitoring
• Data governance ensuring training data quality and representativeness
• Technical documentation proving system reliability
• Human oversight with meaningful intervention capabilities
• Transparency enabling users to interpret outputs
• Accuracy, robustness and cybersecurity throughout the lifecycle
Penalties for non-compliance: fines of up to €35 million or 7% of global annual turnover [3].
Beyond compliance: How do you give doctors confidence while maintaining efficiency?
Healthcare AI Governance Requirements
The Medical Council's principles map to technical requirements:
1. Transparency and Auditability
"Patients must be informed when AI tools are used" [1]. Healthcare organisations need:
• Complete audit trails showing when AI was used, what data it processed, and what recommendations it made
• Explainability that clinicians can understand and communicate to patients
• Version tracking to identify which model version was used for each decision
2. Human-in-the-Loop
"AI should augment, not replace, clinical decision-making" [1]:
• Mandatory review points where clinicians must actively confirm AI recommendations
• Override capabilities that preserve clinical judgment
• Escalation protocols when AI confidence is low or results are ambiguous
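These three requirements can be combined into a single routing rule. The sketch below is illustrative, not Aqta's implementation; the 0.70 confidence threshold and the action names are assumptions for the example.

```python
from enum import Enum


class Action(Enum):
    CONFIRM = "confirmed"    # clinician actively accepts the recommendation
    OVERRIDE = "overridden"  # clinician rejects it, preserving their judgment
    ESCALATE = "escalated"   # ambiguous or low-confidence: senior review


def route(confidence: float, threshold: float = 0.70) -> set:
    """Return the actions available to the clinician for one AI recommendation.

    Assumed rule (illustrative threshold): low-confidence outputs must be
    escalated; everything else requires an explicit confirm or override,
    so no recommendation passes through silently.
    """
    if confidence < threshold:
        return {Action.ESCALATE}
    return {Action.CONFIRM, Action.OVERRIDE}
```

The key property is that there is no "accept by default" path: every recommendation either forces an active clinical decision or is escalated.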
3. Bias Detection
AI "could reinforce bias, particularly affecting vulnerable groups" [1]. Studies show AI diagnostic tools trained predominantly on data from white patients underperform on patients of colour [4]. Requirements:
• Demographic monitoring to detect performance disparities
• Regular bias audits across patient populations
• Diverse training data requirements
4. Real-Time Safeguards
Healthcare AI needs real-time protection, not just retrospective review:
• Loop detection to catch AI systems stuck in repetitive patterns
• Anomaly detection flagging unusual behaviour before harm occurs
• Automatic circuit breakers that pause AI systems when safety thresholds are breached
Ireland's Position
Ireland is uniquely positioned: an EU member state with a thriving tech sector and an English-speaking healthcare system. Irish hospitals are early AI adopters.
Opportunity
Irish healthcare organisations that implement robust AI governance now can become exemplars for EU AI Act compliance, attracting partnerships and research funding.
Risk
Organisations that deploy AI without visibility into how it behaves face accountability gaps that are hard to close after the fact.
Closing the accountability gap
If doctors are accountable for AI-assisted decisions, they need real visibility into what those systems are doing, not a quarterly audit report. The requirements that follow from the Medical Council's principles and the EU AI Act are concrete, and they resolve to a set of technical capabilities the regulation already names.
Real-Time Loop Detection
Catches AI systems stuck in repetitive diagnostic patterns before they impact patient care. Automatic circuit breakers pause systems when safety thresholds are breached.
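The table earlier summarises this as "real-time hash detection + circuit breaker". A minimal sketch of that idea, assuming a rolling window and a repeat threshold (illustrative values, not Aqta's actual parameters):

```python
import hashlib
from collections import deque


class LoopCircuitBreaker:
    """Trip when the same AI output repeats too often within a short window.

    Illustrative sketch: hash each output, keep a rolling window of recent
    hashes, and trip the breaker when one hash recurs `threshold` times.
    """

    def __init__(self, window: int = 10, threshold: int = 3):
        self.recent = deque(maxlen=window)  # rolling window of output hashes
        self.threshold = threshold
        self.tripped = False

    def record(self, output: str) -> bool:
        """Record one AI output; return True if the breaker has tripped."""
        h = hashlib.sha256(output.encode("utf-8")).hexdigest()
        self.recent.append(h)
        if self.recent.count(h) >= self.threshold:
            # Pause the AI system and escalate to a clinician.
            self.tripped = True
        return self.tripped
```

Hashing makes repeat detection cheap on every prediction, which is what allows the check to run in real time rather than in a retrospective audit.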
Complete Audit Trails
Every AI interaction is logged with timestamps, model versions, input data, and recommendations, meeting both Medical Council transparency requirements and EU AI Act documentation standards.
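One way to capture those fields is a fixed per-decision record. The schema below is an illustrative assumption, not a mandated format; field names and the example model identifier are invented for the sketch.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class AuditRecord:
    """One AI-assisted decision, captured at the moment it is made.

    Field names are illustrative assumptions, not a regulatory schema.
    """
    timestamp: str          # ISO 8601, UTC
    model_version: str      # exact model identifier used for this decision
    input_reference: str    # pointer to the input data (e.g. a study ID)
    recommendation: str     # what the AI suggested
    clinician_action: str   # confirmed / overridden / escalated


def log_decision(record: AuditRecord) -> str:
    """Serialise a record for an append-only audit store."""
    return json.dumps(asdict(record), sort_keys=True)


record = AuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="chest-xray-model-2.4.1",   # hypothetical identifier
    input_reference="study-147",
    recommendation="no abnormality detected",
    clinician_action="confirmed",
)
```

Because the record is frozen and written append-only, the trail answers the retrospective question the opening scenario raises: which model version saw which input, and what the clinician did with its output.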
Human-in-the-Loop Workflows
Configurable review points ensure clinicians maintain authority over AI-assisted decisions. Override capabilities preserve clinical judgment while maintaining audit trails.
Bias and Fairness Monitoring
Track AI performance across demographic groups to detect and address disparities. Automated alerts when performance varies significantly by population.
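The core of such an alert is a simple disparity check over per-group metrics. A minimal sketch, assuming sensitivity as the metric and a 5-point gap as the alert threshold (both illustrative choices, not clinical standards):

```python
def disparity_alert(group_metrics: dict, max_gap: float = 0.05) -> bool:
    """Return True if a performance metric (e.g. sensitivity) varies across
    demographic groups by more than `max_gap`.

    `max_gap` is an illustrative threshold, not a clinical standard.
    """
    values = list(group_metrics.values())
    return (max(values) - min(values)) > max_gap


# Example: per-group sensitivity over a recent evaluation window
metrics = {"group_a": 0.93, "group_b": 0.84}
```

Here `disparity_alert(metrics)` fires because the 9-point gap exceeds the threshold; run continuously, the same check surfaces the underdiagnosis pattern that per-case review cannot see.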
Compliance Reporting
Pre-built reports for Medical Council audits, EU AI Act documentation, and HIPAA/GDPR compliance. Export audit trails for regulatory review.
Data Sovereignty
Self-hosted deployment option keeps patient data within your infrastructure. EU-based hosting available for organisations requiring data residency.
The accountability gap in healthcare AI is real and it is not going to close by itself. The question is whether you build visibility in from the start or try to reconstruct it after something goes wrong.
Learn more about Aqta's healthcare AI governance solutions or schedule a demo to see how we help healthcare organisations meet Medical Council guidance and EU AI Act requirements while maintaining clinical efficiency.
References
1. Medical Council of Ireland. "Position Paper: Use of Artificial Intelligence in Medical Practice". 21 October 2025.
2. Mater Misericordiae University Hospital. "Centre for AI and Digital Health Launch". Dublin, Ireland, 2025.
3. European Parliament and Council. "Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act)". Official Journal of the European Union, 12 July 2024.
4. Obermeyer, Z., et al. "Dissecting racial bias in an algorithm used to manage the health of populations". Science, Vol. 366, Issue 6464, pp. 447-453, 2019.

Anya Chueayen
Founder of Aqta. Before this, I worked on integrity at social media platforms, the unglamorous side of AI where human behaviour, edge cases, and ethics collide at scale. That work convinced me that responsible AI needs infrastructure, not just good intentions. Based in Dublin, closely watching how regulation is reshaping what we build and how.