What ambient AI reveals about healthcare's governance gap.

They did not ask for permission, because no one ever told them they needed to. That is how ambient artificial intelligence quietly entered healthcare spaces. Not just exam rooms, but waiting rooms, hallways, and shared clinical environments. Microphones always on. Data always collecting. Patients and visitors unaware. The AI did not need to deceive to become dangerous. It only needed to remain invisible.

This is not a technology story. It is a failure of governance. And it reveals why accountability structures must be built before the next wave of AI tools arrives.

WHY THE SHARP LAWSUIT CHANGES EVERYTHING

On November 22, 2025, a patient named Jose Saucedo filed a lawsuit against Sharp HealthCare in California Superior Court. The complaint alleges that Sharp captured conversations during clinical encounters using an ambient AI tool called Abridge without appropriate consent.

According to the filing, Sharp violated California's all-party consent law by deploying an ambient listening tool to capture sensitive clinical discussions. The tool recorded conversations between patients and clinicians, then generated draft medical notes. Audio was transmitted to the vendor's cloud system, where vendor employees could allegedly access the data.

Even more troubling: the medical record allegedly documented that consent had been obtained when it had not. The AI system reportedly inserted statements into patient charts indicating that patients had been advised of and consented to the recording, even when, according to the patient, that never happened.

This detail matters. The moment a medical record asserts consent that did not exist, the issue is no longer innovation. It is record integrity. And once trust in the record collapses, the entire digital care ecosystem follows.

Attorneys estimate that more than 100,000 patient encounters may have been recorded during the rollout. Saucedo is seeking class-action status.

Sharp is not alone. Across the country, health systems are deploying ambient AI tools with varying degrees of patient notification. Some post signs. Some mention it during intake. Many do nothing at all. The Sharp lawsuit represents the first major legal challenge to ambient AI in healthcare. It will not be the last.

THE DOCUMENTATION INTEGRITY CRISIS

Consent is only the first layer of exposure. A second crisis is emerging that most health systems have not yet recognized: the AI generating clinical documentation is not reliable.

A UCSF study examined 100 emergency department encounters summarized by large language models. The results should alarm every health system deploying these tools:

- Only 33% of AI-generated summaries were entirely error-free.
- 42% contained hallucinations: information fabricated by the model.
- 47% omitted clinically relevant information.

Most troubling, errors were concentrated in the Plan section, the part guiding follow-up care, prescriptions, and return precautions. When AI hallucinates a diagnosis or omits a critical finding, and a clinician signs off without catching it, the medical record becomes a legal liability waiting to be discovered.

Consider a scenario: an ambient AI tool captures a patient mentioning chest pain. The AI summary omits this detail. The clinician, pressed for time, signs the note without catching the omission. The patient later suffers a cardiac event. In the subsequent litigation, the raw audio will reveal that the patient mentioned chest pain. The AI summary will show it was omitted.
The clinician's signature will demonstrate that they attested to the accuracy of an incomplete record. This is not hypothetical. It is the environment we are now operating in.

THE INVISIBLE DATA LAYER

Ambient AI fundamentally alters what a health record is. It transforms care documentation from a deliberate act into continuous capture. Conversations become data. Tone becomes metadata. Silence becomes inference. Most of this data exists in layers that neither patients nor clinicians can see.

When ambient AI captures a clinical encounter, it generates multiple outputs:

- Raw audio files of the entire interaction.
- Machine-generated transcripts.
- AI-generated summaries.
- Metadata about the interaction.
- Confidence scores and processing logs.
- Training data derivatives.

Some of it enters the medical record. Much of it does not. But all of it is discoverable.

Most health systems cannot answer basic questions about their ambient AI data: Where is the raw audio stored? How long is it retained? Who has access? Can it be subpoenaed? Is it used to train models? Can patients request deletion?

THE REGULATORY PATCHWORK

There is no comprehensive federal law governing AI use in clinical documentation. What exists is a patchwork of regulations, none designed for ambient AI, all leaving significant gaps.

HIPAA governs data security, not AI accuracy. FDA regulation does not apply to most AI scribes. State recording laws vary widely: California requires all-party consent before recording confidential conversations, and health systems operating across state lines must navigate this patchwork, which many have not. The 21st Century Cures Act narrowed FDA authority over clinical decision support software. State medical practice acts still hold physicians responsible for the accuracy of documentation they sign. AI-generated notes do not change the attestation standard. The clinician's signature remains their certification of accuracy.

What is not regulated:

- AI accuracy or hallucination rates.
- Mandatory patient disclosure that AI is being used.
- Validation requirements before deployment.
- Vendor liability for AI errors.
- Training data quality or bias.

THE CONSENT REALITY

Even when consent is obtained, transparency changes patient decisions. A July 2025 study published in JAMA Network Open examined how informed consent affects patient willingness to allow AI-assisted documentation. The findings are significant: when given basic information about AI scribes, 81.6% of patients consented. When given full details about AI features, data storage, and corporate involvement, only 55.3% consented. That is a 26-point drop when patients understand what they are agreeing to.

This is not resistance to technology. It is a rational response to information asymmetry. Patients want to know what is happening with their data. When they find out, many say no. Health systems relying on vague disclosures or implied consent are building on a foundation that will not hold.

THE CLINICIAN LIABILITY TRAP

Physicians signing AI-generated notes face a unique liability challenge. They are attesting to the accuracy of documentation they did not create, based on algorithms they do not understand, using data processing they cannot verify.

The legal standard remains unchanged: the physician's signature represents their attestation that the documentation is accurate and complete. But the practical reality has shifted dramatically. A physician who signs an AI-generated note containing hallucinations or omissions may face:

- Malpractice liability for clinical errors.
- Fraud charges for false documentation.
- Board sanctions for professional misconduct.
- Criminal prosecution in extreme cases.

A defense of "the AI made an error" will not absolve them.

Health systems must recognize this liability exposure and respond accordingly:

- Education that signing an AI-generated note carries the same legal weight as signing their own documentation.
- Time and tools for proper validation of AI output.
- Safe reporting channels for clinicians to flag AI errors without fear of retaliation.
- Clear documentation of what validation was performed, as the sketch after this list illustrates.
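To make that last point concrete, here is a minimal sketch of what a per-encounter validation record might look like. It is illustrative only: the AmbientNoteValidation structure, its field names, and the example values are assumptions for this article, not drawn from Abridge, Sharp HealthCare, or any specific EHR vendor, and a real implementation would need to align with an organization's EHR, retention policies, and counsel.

```python
# Hypothetical illustration only: structure and field names are assumptions,
# not taken from any vendor's product or API.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AmbientNoteValidation:
    """Per-encounter record of what validation a clinician performed on an
    AI-generated note, plus where the underlying artifacts live."""
    encounter_id: str
    clinician_id: str
    ai_tool: str                          # name and version of the ambient AI tool
    audio_storage_location: str           # e.g., vendor cloud region vs. on-premises archive
    audio_retention_days: Optional[int]   # None means the organization does not know
    used_for_model_training: Optional[bool]
    transcript_reviewed: bool             # did the clinician compare the note against the transcript?
    sections_reviewed: list[str] = field(default_factory=list)   # e.g., ["HPI", "Plan"]
    edits_made: int = 0                   # corrections made before signing
    discrepancies_flagged: list[str] = field(default_factory=list)
    signed_at: Optional[datetime] = None

    def ready_to_sign(self) -> bool:
        # Minimal gate: the Plan section, where errors concentrate,
        # must have been explicitly reviewed before attestation.
        return self.transcript_reviewed and "Plan" in self.sections_reviewed

# Example usage with made-up identifiers.
record = AmbientNoteValidation(
    encounter_id="enc-001",
    clinician_id="dr-smith",
    ai_tool="ambient-scribe-v2",
    audio_storage_location="vendor-cloud/us-west",
    audio_retention_days=90,
    used_for_model_training=False,
    transcript_reviewed=True,
    sections_reviewed=["HPI", "Plan"],
    edits_made=2,
)
if record.ready_to_sign():
    record.signed_at = datetime.now()
```

Even a simple structure like this would let a health system answer the basic questions raised above, where the audio lives, how long it is kept, whether it feeds model training, and document what review the signing clinician actually performed.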
