I previously wrote about shadow AI: the director using Claude to draft workflow proposals, the case manager texting AI to generate templates, the resident practicing her presentations with an AI voice tool while driving. The feedback made clear that this troubles a lot of people, and I want to explain why. That means going beyond describing shadow AI to examining why the phenomenon is accelerating, what unmitigated liability exposure actually means, and what a more sophisticated state of governance has to look like.

THE NUMBERS SHOW HOW BAD IT IS

A Wolters Kluwer survey released earlier this month found that nearly 40% of healthcare workers know of peers using AI tools their organization has not approved. Almost 20% admitted to using unapproved AI themselves. An UpGuard report from November found that over 80% of workers, across industries, use unapproved AI tools in their work. Security professionals were the worst offenders: almost 90% acknowledged using unapproved AI. Most surprising, executives reported the highest frequency of unapproved AI use. The people who write the policies are the most likely to ignore them.

IBM's 2025 Cost of a Data Breach report found that 20% of companies experienced a breach attributable to shadow AI, and that those breaches cost roughly $200,000 more than the average breach. Security skills shortage and shadow AI rank as the first and third most costly breach factors, respectively. This is the new normal, not an edge case.

WHY PROHIBITION FAILS

The impulse is to ban and surveil: write more explicit policies, impose consequences for breaking them. There are three problems with that.

1. The tools are ubiquitous. You may be able to block ChatGPT on your network, but you cannot block it on a personal phone, a home computer, or a public machine at a coffee shop.

2. The need is real. In the Wolters Kluwer survey, more than half of administrators and 45% of providers cited workflow speed as their reason for using unapproved AI. Another 40% of administrators said they used unapproved AI because no approved option existed.

3. The workarounds are invisible. Unauthorized software, classic shadow IT, leaves traces you can detect. Shadow AI leaves no evidence inside your systems: the processing happens externally, and only the results come back in.

Organizations that rely on prohibition are not reducing shadow AI. They are reducing their insight into it.

THE UNSEEN RISK

When AI-assisted work enters your systems without documentation, you lose the context in which decisions were made. Consider a quality improvement director who runs a workflow analysis through an external AI tool and uses it to draft the proposal. The AI identifies patterns and structures the recommendations. Leadership adopts them. A year later, an adverse outcome is traced back to that initiative. The review asks: how were the recommendations and revisions prepared? What reasoning and assumptions shaped them? The honest answer is, "I used AI to help me think through the analysis." AI has now influenced a consequential decision, and there is no audit trail, no supporting documentation, and no way to establish whether the AI introduced an error that shaped the final result.
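What would closing that gap look like? One plausible option, sketched below, is a lightweight provenance record captured whenever AI-assisted work enters your systems. This is a minimal illustration under my own assumptions, not a standard: every field name here is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: the field names and structure are assumptions,
# not an established standard or any vendor's schema.
@dataclass
class AIAssistanceRecord:
    """Minimal provenance for a deliverable produced with AI help."""
    deliverable: str    # e.g., "Intake workflow redesign proposal"
    author: str         # the accountable human, not the tool
    tool: str           # which AI tool was used
    used_for: str       # drafting, analysis, summarization, ...
    data_shared: str    # what information left the organization's systems
    human_review: str   # what the author verified before adoption
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIAssistanceRecord(
    deliverable="Intake workflow redesign proposal",
    author="Quality improvement director",
    tool="External chat assistant",
    used_for="Pattern analysis and first-draft structure",
    data_shared="De-identified workflow steps; no PHI",
    human_review="Verified recommendations against current policy",
)
```

Even a record this thin answers the reviewer's questions a year later: which tool was used, for what purpose, with what data, and what a human actually verified before adoption.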
IBM found that across shadow AI incidents, the most commonly compromised data type was customers' personally identifiable information, and over 40% of tracked cases involved compromised intellectual property. Without PHI there is no HIPAA issue, but there is certainly a liability issue, and it is one your incident reporting systems will never capture.

WHAT AI GOVERNANCE MUST BECOME

Today's AI governance frameworks assume AI arrives through procurement. They anticipate a vendor, a contract, a deployment, and they focus on what happens inside your systems. Shadow AI inverts every one of those assumptions. It arrives through your employees, not your vendors. It operates outside your systems, not within them. And it is used because sanctioned alternatives either do not exist or are not good enough.

Four shifts:

1. Move toward reality. Your employees are using external AI right now. That is not a failure to be corrected; it is a reality to be governed. Policies that ignore that fact are not governance.

2. Capture the need, not just the behavior. Shadow AI shows you where cognitive load is heaviest and where workers are least supported. Those are your governance priorities.

3. Compete with governed alternatives. If your sanctioned AI tool requires three clicks, two-factor authentication, and a load time while ChatGPT is one open tab, you have already lost. Governed alternatives must be better, not merely sanctioned.

4. Set norms for disclosure. Instead of pretending shadow AI does not exist, create a framework for describing its use (a sketch of what that might look like follows at the end of this piece). "This analysis was developed with AI assistance" isn't a confession. It's evidence.

Shadow AI operating without governance is embedding itself in how work gets done. It is building workflows, refining templates, and shaping institutional knowledge, all of it undocumented, unvalidated, and unaudited. When organizations finally surface this exposure, it will not be an edge case to address. It will be infrastructure to untangle. Every unauthorized query, every external tool, every workaround is telling you what your workforce needs. The question isn't whether you will listen. It's whether you will listen in time to respond.
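To make shift 4 concrete: a disclosure norm is easier to adopt when the sentence itself is standardized. Here is a minimal sketch that reuses the hypothetical AIAssistanceRecord from earlier; the wording and fields are assumptions, not an established standard.

```python
# Assumes the hypothetical AIAssistanceRecord and `record` sketched earlier.
def disclosure_statement(record: AIAssistanceRecord) -> str:
    """Render a standard, non-confessional AI-use disclosure line."""
    return (
        f"This deliverable ({record.deliverable}) was developed with AI "
        f"assistance: {record.tool}, used for {record.used_for.lower()}. "
        f"Final content was reviewed and approved by {record.author}."
    )

print(disclosure_statement(record))
# This deliverable (Intake workflow redesign proposal) was developed with
# AI assistance: External chat assistant, used for pattern analysis and
# first-draft structure. Final content was reviewed and approved by
# Quality improvement director.
```

A fixed, boring sentence like this is the point: it turns disclosure into routine metadata rather than an admission.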
