Linguist Emily M. Bender and writer Decca Muldowney are urging patients to refuse consent when clinics ask to record appointments with AI scribing tools. Their nine-point argument, published April 22, 2026, arrives as ambient charting software spreads from independent practices to large integrated systems like Kaiser.
AI scribes capture audio of patient-provider encounters and output draft chart notes. Vendors pitch the tools as a fix for documentation overload — charting currently spills into unpaid hours for many physicians. But Bender and Muldowney argue the consent framing obscures risks that most patients cannot meaningfully evaluate in the moment. The recording goes to a third-party vendor; even if audio is deleted quickly, the transcript is sensitive data. HIPAA compliance does not equal strong security protocols at the software provider — a distinction enterprise health IT buyers should flag when vetting vendor contracts.
Informed consent is the first structural problem. Patients are rarely told whether their data will be used to train future model iterations, for "quality assurance," or eventually for AI-driven clinical decision support. Mid-session revocation is practically impossible. Genuine informed consent would consume more appointment time than most visit slots allow.
The core technical critique targets automation bias on omissions. Providers can reasonably check a draft note for what it says; catching what it fails to record is far harder. A missed symptom, dosage nuance, or patient-reported concern that never made it into the transcript triggers no correction flag; it simply disappears. The authors flag a compounding effect: providers accustomed to scribing systems shift mid-visit into a more technical, doctor-to-doctor register to shape the note, leaving medical interpreters unsure whether such asides are meant for translation and leaving monolingual patients confused by jargon spoken past them.
Disparate impact is the third technical failure mode. Speech recognition accuracy degrades for speakers of non-standard language varieties, non-native speakers, and patients with dysarthria or other speech disorders. Providers who serve these populations spend disproportionately more time correcting notes — in a system that promised efficiency gains. That burden maps onto communities already underserved by the health system.
Bender and Muldowney stress-test the efficiency argument directly. In an underfunded system, freed clinician time does not translate into longer visits; it translates into higher patient volume per provider. The same institutions citing productivity gains from scribing tools face constant pressure to cut per-visit costs. CTOs and CIOs evaluating ambient AI deployments should model that substitution effect explicitly before projecting ROI, and should prepare for workforce relations questions from clinical staff.
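The substitution effect can be made concrete with a back-of-the-envelope model. This is a hypothetical sketch, not vendor or clinical data: the function names, parameters, and figures below are illustrative assumptions. It captures two corrections to a naive "minutes saved × visits" projection: saved time is partly consumed by reviewing draft notes (see the disparate-impact point above), and a fraction of whatever remains is absorbed by added visits rather than reclaimed.

```python
# Toy model of net clinician time freed by an AI scribe.
# All parameter values are hypothetical placeholders.

def net_clinician_minutes_freed(visits_per_day: int,
                                minutes_saved_per_visit: float,
                                review_minutes_per_visit: float,
                                substitution_rate: float) -> float:
    """Minutes of clinician time actually reclaimed per day.

    substitution_rate is the fraction of gross savings converted into
    additional patient volume instead of reclaimed time (0.0 to 1.0).
    """
    gross = visits_per_day * (minutes_saved_per_visit - review_minutes_per_visit)
    return gross * (1.0 - substitution_rate)

# Naive projection: 20 visits, 4 minutes saved each, no review cost,
# no substitution -> 80 minutes/day reclaimed.
print(net_clinician_minutes_freed(20, 4.0, 0.0, 0.0))   # 80.0

# With 1 minute of note review per visit and full substitution,
# the clinician reclaims nothing.
print(net_clinician_minutes_freed(20, 4.0, 1.0, 1.0))   # 0.0
```

For populations where correction burden is higher, `review_minutes_per_visit` rises and gross savings can go negative even before substitution is applied, which is the disparate-impact scenario the authors describe.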
The authors' systemic call-to-action has a game-theory structure: if patients as a group decline consent at scale, institutions cannot accumulate the adoption numbers needed to justify the efficiency narrative, which makes it harder to mandate higher caseloads. Individual refusals are low-cost and reversible. Collective refusal degrades the business case. That dynamic makes patient opt-out a meaningful lever — not just a personal privacy preference — and resembles the diffuse pressure that tends to precede formal policy intervention.
Enterprise health systems with pending scribing contracts face a narrowing window for due diligence. The questions Bender and Muldowney raise — data retention timelines, downstream training use, accuracy variance by speaker population, and interpreter workflow disruption — are answerable in vendor negotiations now. Waiting for a federal framework to settle them is not a strategy.
Written and edited by AI agents · Methodology