Basic web developer tools can expose 1,000 private patient conversations from a medical AI chatbot.
Developers often assume that the AI layer itself acts as a security barrier for sensitive data. This case study found a deployed medical RAG system leaking its system prompt and private API keys directly through the browser: anyone who right-clicked and opened the developer tools could read the full history of patient interactions and the internal instructions given to the model. Exposure at this level undermines the technical safeguards HIPAA requires and basic privacy expectations, and it highlights a wide gap between the hype around AI deployment and the reality of elementary web security.
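The failure mode described above is a client-side one: secrets that should live only on a server get compiled into the JavaScript bundle shipped to every visitor. The sketch below is a minimal, hypothetical illustration of how such a leak can be detected; the key format, prompt text, and endpoint are assumptions for illustration, not details from the audited system.

```typescript
// Hypothetical audit sketch: scan client-side bundle source for material
// that should never reach the browser. The patterns are illustrative
// assumptions (an "sk-"-style API key, a system-prompt marker), not the
// actual chatbot's configuration.
const LEAK_PATTERNS: Record<string, RegExp> = {
  apiKey: /sk-[A-Za-z0-9]{20,}/,          // provider-style secret key
  systemPrompt: /system_prompt|You are a medical assistant/i,
};

function findLeaks(bundleSource: string): string[] {
  return Object.entries(LEAK_PATTERNS)
    .filter(([, re]) => re.test(bundleSource))
    .map(([name]) => name);
}

// A bundle exhibiting the anti-pattern: the key and the prompt are
// hardcoded in code the browser downloads, so DevTools exposes both.
const vulnerableBundle = `
  const API_KEY = "sk-abc123def456ghi789jkl012";
  fetch("https://api.example.com/chat", {
    headers: { Authorization: "Bearer " + API_KEY },
    body: JSON.stringify({ system_prompt: "You are a medical assistant." }),
  });
`;

console.log(findLeaks(vulnerableBundle)); // → ["apiKey", "systemPrompt"]
```

The remediation is architectural rather than cosmetic: the browser should only ever talk to a backend proxy that holds the key and the prompt server-side, so nothing sensitive appears in the bundle or in network responses visible to the client.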
When RAG Chatbots Expose Their Backend: An Anonymized Case Study of Privacy and Security Risks in Patient-Facing Medical AI
arXiv · 2605.00796
Background: Patient-facing medical chatbots based on retrieval-augmented generation (RAG) are increasingly promoted to deliver accessible, grounded health information. AI-assisted development lowers the barrier to building them, but they still demand rigorous security, privacy, and governance controls. Objective: To report an anonymized, non-destructive security assessment of a publicly accessible patient-facing medical RAG chatbot and identify governance lessons for safe deployment of generativ