A patient told his doctor of 25 years: "I won't consent to AI recording our conversations." He wasn't being difficult. He was being rational.

Christopher Pukit has seen the same primary care physician for a quarter century. They talk about his health. They also talk about golf. About family. About life. That relationship was built on something AI cannot replicate: time, consistency, and earned trust.

When his doctor mentioned the organization was implementing ambient AI to record visits, Chris said no. Not because he opposes technology. Because the system had not earned the right to be in that room. And his doctor didn't seem thrilled about it either.

THE TENSION NO ONE WANTS TO NAME

We are asking patients to consent to systems they do not understand, operated by vendors they have never heard of, governed by policies they will never read. We are asking them to trust institutions that have not always been trustworthy.

Consider what consent actually looks like today in most health systems:

- A tablet shoved toward a patient in the waiting room.
- Legal language written for lawyers, not people.
- No explanation of what "AI-assisted documentation" actually means.
- No clarity on where the data goes, who sees it, or how long it lives.
- No real option to say no without feeling like a problem patient.

This is not informed consent. It is compliance theater. And patients are starting to notice.

REFUSAL IS NOT RESISTANCE

The instinct in healthcare leadership is to frame patient hesitation as a barrier to overcome. A change management problem. A communication gap to close. But what if refusal is the appropriate response?

The Sharp HealthCare lawsuit alleged that more than 100,000 patient encounters were recorded without proper consent using an ambient AI tool called Abridge. According to the complaint, the AI auto-populated consent language into the medical record, documenting that patients "were advised" and "consented" when neither had happened.
When a system asserts consent that did not exist, this is not an innovation story. It is a record integrity crisis.

Patients who say no are not resisting progress. They are responding rationally to a system that has not yet earned trust.

THE TRUST DEFICIT IS OLDER THAN AI

I spoke recently with Larry Kuhn, Psy.D., who has written extensively about the erosion of trust in human relationships. His point stayed with me: before we can figure out how to trust AI, we have to address the trust deficit we already have.

The data confirms what many of us sense. According to Gallup, Americans' average confidence across major institutions has fallen to just 27%, the lowest point in nearly five decades of measurement. Trust in the federal government to handle domestic problems has collapsed from 54% among opposition-party supporters in the 1970s to just 18% today.

This is the context AI is entering. The technology inherits the credibility of the organization behind it. If your patients do not trust the health system, they will not trust the AI the health system deploys.

THE VALIDATION GAP

Even when consent is obtained, a deeper problem remains: clinicians are being asked to validate AI output they cannot fully verify.

A recent study in npj Digital Medicine found that ambient AI clinical documentation tools produced hallucinations in 1.47% of sentences and omissions in 3.45% of sentences. While those percentages may sound small, 44% of the hallucinations were classified as "major," meaning they could affect patient diagnosis or management if left uncorrected. The most concerning hallucinations appeared in the Plan section of clinical notes, the section that often contains direct instructions to colleagues or patients.

Physicians are catching errors in summaries generated from conversations they may not fully remember. They are signing notes produced by algorithms they did not select and cannot audit. "The AI made an error" is not a legal defense. The signature on the record is.
WHAT MEANINGFUL CONSENT LOOKS LIKE

Before AI can scale in healthcare, trust has to be rebuilt. Not in the technology. In the relationships the technology is being inserted into. That means:

1. Transparency about what is being recorded and why. Patients deserve to know, in plain language, what the AI is capturing, what it is generating, and how it will be used. Not buried in a privacy policy. Spoken out loud.

2. Clarity about who has access and for how long. Where does the recording go? Who can see the transcript? Is it stored with a third-party vendor? For how long? These are not edge-case questions. They are baseline expectations.

3. Honoring refusal without penalty or friction. If a patient says no, the visit should proceed without awkwardness, delay, or subtle pressure. Refusal should be as easy as consent, and just as respected.

4. Designing consent as a conversation, not a checkbox. Consent is not a form. It is a relationship. The organizations that treat it as a one-time transaction will lose the patients who understand the difference.

5. Separating consent capture from clinical documentation. The Sharp lawsuit revealed a critical design flaw: the AI itself was populating consent language in the medical record. Consent capture must be independent of the system being consented to.

The organizations that get this right will not be the ones that move fastest. They will be the ones that move with the trust of the people they serve.

So the question is not: How do we get patients to consent?

The question is: Have we built a system worthy of their trust?

If the answer is not yet, that is not a failure. That is a starting point.
