Governance that scales is not about adding more process. It is about building structures that can grow with your AI portfolio without creating drag that slows down the work. Based on the emerging legal and operational landscape, several elements are becoming essential.

Consent that is explicit, encounter-specific, and reversible.

Silent opt-in is not consent. Patients must be clearly informed before any recording begins. They must affirmatively agree. And they must be able to withdraw that consent at any time. The notification must explain what is being captured, how it will be used, and who will have access. Vague language or buried disclosures will not hold up under scrutiny.

Consent must also be separated from documentation. If your AI system auto-populates language like 'patient consented' into the medical record, that feature should be disabled immediately. Consent capture must be a distinct, auditable process with clear separation from clinical documentation fields.

Data lifecycle mapping before deployment.

Before any ambient AI tool goes live, the full data lifecycle must be documented:

- What data is captured (audio, text, metadata)?
- Where is it stored (on-premise, vendor cloud, third-party)?
- How long is it retained?
- Who can access it (internal staff, vendor personnel, subcontractors)?
- Is it used for model training or improvement?
- How can it be deleted on request?

If these questions cannot be answered clearly, the tool is not ready for deployment. This is not a one-time exercise. Data lifecycle mapping must be revisited whenever vendor terms change, new features are enabled, or the tool expands to new departments.

Validation workflows that match AI output volume.

AI-generated documentation is not automatically accurate. Studies have shown significant rates of hallucination and omission in AI-generated clinical summaries. Errors tend to concentrate in the sections that guide follow-up care. Clinicians signing AI-generated notes are attesting to their accuracy.
That legal standard has not changed. But the practical reality has shifted dramatically. Governance structures must account for this. If clinicians need time to validate AI output against their memory of the encounter, that time must be built into workflows. If validation is not happening consistently, the tool is creating liability rather than efficiency.

Vendor contracts that allocate risk appropriately.

Most ambient AI tools are provided by third-party vendors. These vendors make claims about accuracy, security, and compliance. But the contracts often include provisions that shift risk back to the health system:

- Broad rights to use customer data for improvement purposes.
- Disclaimers of liability for AI errors.
- Vague data retention and deletion policies.
- No commitment to transparency about model performance.

Governance that scales requires contracts that include:

- Customer-controlled retention and deletion.
- No secondary use of data without explicit consent.
- Immutable logging of access and deletion.
- Prohibition on vendor personnel accessing identifiable recordings unless specifically authorized.
- Audit rights and clear performance transparency.

Vendor relationships are the beginning of governance, not the end. Regular review of vendor practices and model updates is essential.

A single registry of all AI tools touching patient data.

You cannot govern what you cannot see. Every AI tool in clinical or operational use needs to be inventoried, with clear documentation of what it does, what data it captures, where data flows, who owns it, what consent process applies, and what validation process exists. This registry must be maintained actively. It is not a document created once and forgotten.

A front door for AI deployment.

No AI tool should reach patients without passing through a defined governance pathway. This does not mean bureaucratic delay.
It means a clear process that ensures consent, data governance, validation, and ownership questions are answered before deployment rather than after.

Cross-functional oversight.

AI governance cannot be delegated to any single function. Clinical leaders understand workflow impact. Legal understands liability exposure. Compliance understands regulatory requirements. IT understands technical architecture. Effective governance requires all of these perspectives at the table, with clear escalation pathways when concerns arise.

Keeping Governance Grounded

In the previous edition, I wrote about signal integrity, the work of keeping AI connected to the real world it is meant to serve. Governance is part of that same work. The teams that maintain better alignment tend to share a few consistent habits. They notice weak signals early. They make it easy to raise questions. They revisit assumptions on a regular cadence. They treat lived experience as primary evidence.

Those same habits apply to governance. The earliest signs of a governance gap are almost always small. A brief comment from a clinician. A question from a patient. A detail in a vendor update that does not quite match what was originally agreed. These weak signals rarely announce themselves. They need to be noticed, supported, and taken seriously. Governance works when it becomes a predictable part of how teams stay aligned, not an event triggered only when something goes wrong.

The Choice Ahead

The Sharp lawsuit is not an isolated incident. It is the beginning of a broader reckoning with how healthcare deploys AI tools that capture, process, and document sensitive information at scale. The question for every health system is whether governance will be built proactively or explained reactively. The tools are already here. The regulatory framework has not caught up. The burden of accountability falls on the organizations deploying these systems.
Building governance that scales is not about slowing down innovation. It is about creating the infrastructure that allows innovation to be trusted. It is about ensuring that the tools designed to reduce burden do not create new forms of risk. It is about protecting the patients whose voices are being captured and the clinicians whose signatures attest to the accuracy of what AI produces. Governance is not a constraint on progress. It is the foundation that makes progress sustainable.
