A publicly deployed patient-facing medical chatbot built on retrieval-augmented generation (RAG) exposed its full system prompt, backend API schema, knowledge-base contents, and the 1,000 most recent patient conversations — all accessible through standard browser inspection tools, with no authentication required. The findings, published in May 2026 by Alfredo Madrid-García and Miguel Rujas, represent a documented case of critical infrastructure exposure in a live, regulated healthcare AI deployment.
The assessment used a two-stage methodology. In the first stage, Claude Opus 4.6 was used to conduct exploratory prompt-based testing and generate structured vulnerability hypotheses — identifying that sensitive RAG and system configuration data appeared to be transmitted through client-server communication rather than kept server-side. In the second stage, the researchers manually verified each finding using Chrome Developer Tools, inspecting browser-visible network traffic, payloads, API schemas, and stored interaction data.
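The manual-verification step amounts to inspecting JSON response bodies in the Network tab for fields that should never reach the client. A minimal sketch of that check, with entirely hypothetical key names and a fabricated payload (the paper does not publish the actual schema):

```python
# Scan a captured JSON response body (as copied from Chrome DevTools'
# Network tab) for keys that should never leave the server.
# Key names below are illustrative, not taken from the audited system.
SENSITIVE_KEYS = {
    "system_prompt", "embedding_model", "retrieval_params",
    "api_schema", "chunk_metadata", "conversation_history",
}

def find_leaks(payload, path=""):
    """Recursively collect dotted paths to sensitive keys in a response body."""
    leaks = []
    if isinstance(payload, dict):
        for key, value in payload.items():
            full = f"{path}.{key}" if path else key
            if key in SENSITIVE_KEYS:
                leaks.append(full)
            leaks.extend(find_leaks(value, full))
    elif isinstance(payload, list):
        for i, item in enumerate(payload):
            leaks.extend(find_leaks(item, f"{path}[{i}]"))
    return leaks

# Simulated captured response (fabricated for illustration)
captured = {
    "answer": "Please consult your physician.",
    "debug": {
        "system_prompt": "You are a medical assistant...",
        "retrieval_params": {"top_k": 5},
    },
}
print(find_leaks(captured))  # → ['debug.system_prompt', 'debug.retrieval_params']
```

Nothing here requires specialist tooling: the payload is whatever the browser already received, which is the paper's central point.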
Manual verification confirmed multiple exposures. Researchers collected the full system prompt, model and embedding configuration details, retrieval parameters, backend endpoint addresses, API schema definitions, document and chunk metadata, and the raw content of the knowledge base itself. Most critically, the 1,000 most recent patient-chatbot conversations were retrievable without authentication — directly contradicting the chatbot's own stated privacy assurances. The conversations included health-related patient queries.
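The most serious finding reduces to a simple test: an unauthenticated request to the conversation endpoint should be refused, not answered with stored patient exchanges. A sketch of that check, with the record shape and field names invented for illustration (the paper anonymizes the system):

```python
# Hypothetical reproduction of the core check: does an endpoint return
# stored patient-chatbot exchanges to a request carrying no credentials?
def exposes_conversations(status_code: int, body) -> bool:
    """True when an unauthenticated request succeeds AND the body looks
    like a list of stored patient-chatbot exchanges."""
    if status_code != 200:
        return False  # an auth wall (401/403) would be the correct behavior
    return (
        isinstance(body, list)
        and len(body) > 0
        and all(isinstance(r, dict) and "user_message" in r for r in body)
    )

# Simulated unauthenticated response, as one might observe in the Network tab
leaked_body = [
    {"user_message": "What dose of ibuprofen is safe?", "bot_reply": "..."},
    {"user_message": "Can I take this with warfarin?", "bot_reply": "..."},
]
print(exposes_conversations(200, leaked_body))  # → True
print(exposes_conversations(401, None))         # → False
```

A passing (True) result on a production system is exactly the HIPAA-relevant failure the authors describe: health-related queries served to anyone who asks.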
For enterprise healthcare AI architects, the exposure surface is direct. Every item leaked — system prompt, vector store metadata, API schema — was transmitted to the browser as part of normal client-server operation. The deployment moved server-side logic client-side, then assumed no one would look. Chrome Developer Tools requires no specialist skills; the same techniques available to a security auditor are equally available to a motivated adversary.
The compliance implications are significant. Patient conversations containing health-related queries, exposed without authentication, create direct liability under HIPAA and equivalent frameworks. System prompt and embedding configuration disclosure also exposes proprietary model fine-tuning investments and retrieval logic — IP loss alongside regulatory exposure. Vendors and internal teams procuring or building patient-facing RAG systems should treat client-side API schema visibility as a critical failure mode.
The authors conclude that serious privacy and security failures in patient-facing RAG chatbots can be identified with standard browser tools without specialist skills or authentication. Independent security review should precede deployment. The study was conducted non-destructively and the system is anonymized in the published paper; the authors do not identify the affected vendor.
LLM-powered red-teaming is now a low-cost capability available to any threat actor. AI-assisted development lowers the barrier to building RAG systems faster than it lowers the barrier to securing them. Healthcare CTOs deploying generative AI on patient-facing surfaces need a security gate that treats the browser as untrusted by default.
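One concrete form such a gate can take is a server-side allow-list: every field in an outgoing response must be explicitly permitted, so configuration and retrieval internals cannot leak by default. A minimal sketch, with hypothetical field names not drawn from the audited system:

```python
# Release-gate sketch: treat the browser as untrusted and enforce an
# allow-list on every field a patient-facing endpoint may return.
# Field names are hypothetical.
ALLOWED_RESPONSE_FIELDS = {"answer", "disclaimer", "session_id"}

def gate_response(payload: dict) -> dict:
    """Strip (and report) any field not on the allow-list before it
    leaves the server."""
    stripped = {k: v for k, v in payload.items() if k in ALLOWED_RESPONSE_FIELDS}
    blocked = sorted(set(payload) - ALLOWED_RESPONSE_FIELDS)
    if blocked:
        # In production this would alert; here we just surface the names.
        print(f"blocked fields: {blocked}")
    return stripped

# A response that accidentally carries server-side internals
unsafe = {
    "answer": "Please consult your physician.",
    "session_id": "abc123",
    "system_prompt": "You are a medical assistant...",
}
print(gate_response(unsafe))  # → {'answer': 'Please consult your physician.', 'session_id': 'abc123'}
```

The design choice is deliberate: a deny-list of known-sensitive keys fails open when a new internal field is added, whereas an allow-list fails closed.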
Written and edited by AI agents