
Article · April 9, 2026

The AI Risk You Probably Are Not Managing

By John Verry, Managing Director

Shadow AI is already running inside your organization. Here is how to find it, understand the risk, and build a strategy to govern it.

Picture this. Your VP of Marketing is preparing a competitive analysis for the board. She pastes the draft into ChatGPT, asks it to sharpen the narrative, and gets back a polished version in seconds. Down the hall, your controller is using an AI-enabled add-in inside Microsoft Excel to help forecast next quarter’s cash position. In the CXO suite, an AI-enabled transcription bot joins the meeting at a board member’s request.

None of this was formally sanctioned by IT. This Shadow AI is not in your software inventory. And none of it has been reviewed to understand the use case, the data being shared, who can access that data, whether it is secure, or the relevant regulatory, risk, and compliance implications.

When we asked security and risk professionals whether they had a Shadow AI inventory and a way to identify and monitor unsanctioned AI use, 91% said no.

That should worry all of us. Not because AI tools are inherently dangerous, but because risk you cannot see is risk you cannot manage.

The Scale of the Problem Is Bigger Than You Think

Here is where the math gets uncomfortable. According to BetterCloud’s 2024 SaaS research, the average mid-market organization runs approximately 100-130 SaaS applications. It is also widely estimated that 64% of SaaS applications now include some form of AI-enabled functionality.

Do the math. A mid-market company running 100 SaaS applications is likely operating with 60-65 AI-enabled tools in its environment. Most of those were not purchased as AI tools. They are project management platforms, CRM systems, HR software, communication tools, and productivity suites that have quietly added AI capabilities through product updates, often without any formal notification to the IT or security team.
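The arithmetic above can be sketched in a few lines. The figures are the article's own estimates, not measurements:

```python
# Back-of-the-envelope estimate using the figures cited above.
saas_apps = 100          # low end of the typical mid-market range (100-130)
ai_enabled_share = 0.64  # widely cited share of SaaS apps with AI features

ai_enabled_apps = round(saas_apps * ai_enabled_share)
print(ai_enabled_apps)  # -> 64, i.e. roughly 60-65 AI-enabled tools
```

Run the same calculation at the top of the range (130 applications) and the count climbs past 80, which is why most inventories understate the exposure.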

The challenge grows larger when you consider direct LLM usage: employees accessing ChatGPT, Claude, Google Gemini, Microsoft Copilot, and a growing list of purpose-built AI tools for tasks ranging from writing and research to code generation and legal drafting.

The question is no longer whether AI is in your environment. It is whether you know where it is, what data it is touching, and what the vendor is doing with the data you share.

Why Shadow AI Is a Materially Different Risk

Shadow IT has existed for decades. Shadow AI is a newer problem, and its risk characteristics make it more consequential than a rogue SaaS subscription. In our work with clients, the following risks come up most often.

Data input risk

Unlike a traditional SaaS application that stores and processes data in a defined way, LLMs and AI-enabled features may use your inputs to improve their models. Consumer-grade ChatGPT, for example, has historically used conversation data for training unless explicitly opted out. An employee adding a client proposal, a financial projection, or a personnel record into a consumer AI tool may be feeding that data into a system with no data processing agreement, no retention limits, and no audit trail.

Accuracy and over-reliance risk

AI is probabilistic, not deterministic. AI outputs are not answers; they are predictions. Employees who do not understand this distinction will act on AI-generated content without appropriate validation. In legal and financial contexts, this creates real liability exposure.

Vendor contract risk

Most organizations have never reviewed the AI-specific terms in their SaaS vendor agreements. What does Salesforce do with the data you feed into Einstein? What are the data handling terms for the AI transcription feature inside your video conferencing platform? These questions are rarely asked and rarely answered before the tool is in production.

Regulatory and compliance exposure

For organizations subject to HIPAA, PCI DSS, SOC 2, or state privacy laws, the unreviewed use of AI tools that process regulated data is not a theoretical risk. It is a gap that auditors and regulators are increasingly prepared to ask about.

As AI rapidly evolves, a new class of risks has emerged around decision-making and autonomy:

  • Beyond data exposure, arguably the most underappreciated risk is that AI outputs are increasingly being used to drive real business decisions, often without adequate human validation. Pricing strategies, hiring recommendations, credit assessments, and operational calls are being informed or dictated by tools whose reasoning is opaque, whose training data is unknown, and whose error modes are not well understood by the people acting on them.
  • The risk profile escalates further as AI becomes agentic. Agentic AI does not just generate content for a human to review. It takes actions: sending communications, executing workflows, modifying records, and interacting with external systems, frequently in chains of automated steps with no human in the loop. If an agentic tool makes a bad decision, the damage is done before you even know it.

What a Shadow AI Strategy Actually Looks Like

The goal is not to ban AI; it is to enable AI to advance organizational objectives without increasing risk. That requires visibility, governance, and intentional adoption. Here is how to approach it.

Inventory before policy.

You cannot govern what you cannot see. This means identifying every SaaS application in the environment and determining which have AI features, what those features do, what data they touch, and what due diligence you need to perform on the vendor’s cybersecurity, privacy, and AI practices. Unfortunately, it is not a one-time exercise. It needs to be an ongoing operational practice, because new SaaS products are acquired, new use cases for existing tools evolve, and vendors continually add capabilities to their products.

Classify your data and map it to AI touchpoints.

Not all data carries the same risk. A tiered data classification model is foundational for AI governance. Once you classify data based on sensitivity/risk, you can define acceptable use cases and tools. This gives employees the practical guidance they need to move forward confidently and securely.
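A classification-to-tool mapping like the one described above can be expressed as a simple lookup. This is a minimal sketch; the tier names and tool lists are hypothetical examples for illustration, not recommendations:

```python
# Minimal sketch of a policy lookup mapping data classifications to
# approved AI tools. Tiers and tool names are hypothetical examples.
POLICY = {
    "public":       {"ChatGPT", "Claude", "Microsoft Copilot"},
    "internal":     {"Microsoft Copilot"},  # e.g. enterprise tenant only
    "confidential": set(),                  # no AI tools approved
    "regulated":    set(),                  # e.g. HIPAA or PCI data
}

def is_permitted(data_class: str, tool: str) -> bool:
    """Return True only if the tool is approved for this data classification."""
    # Unknown classifications default to deny.
    return tool in POLICY.get(data_class, set())

print(is_permitted("public", "Claude"))         # True
print(is_permitted("confidential", "ChatGPT"))  # False
```

The useful property of encoding the policy this way is the default: anything not explicitly approved is denied, which is the posture an AI Acceptable Use Policy should take.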

Establish an Acceptable Use Policy that employees will actually read.

An AI Acceptable Use Policy needs to speak plainly: what tools are approved, what data classifications can be used with which tools, what outputs require human review before being acted on, and what to do if an employee is unsure. It should be as short as possible, but no shorter.

Build an approval workflow for new AI tools.

The pace of AI adoption is not slowing. Without a clear, lightweight process for employees to request and receive approval for new AI tools, shadow adoption will outpace governance. The approval workflow needs to be faster and easier than going around it, which means quick turnaround, clear criteria, and a designated owner.

Assign ownership and make it visible.

Shadow AI governance requires tone at the top. Senior management needs to support it and ensure that ownership and responsibility are clearly defined.

Tools That Can Help

This is a rapidly emerging space, and helpful tools often provide one or more capabilities such as inventorying, monitoring, enforcement, safelisting, usage metrics, and compliance reporting. Having a clear understanding of which capabilities are most valuable to you before going to market is essential.

  • For organizations already in the Microsoft ecosystem, Microsoft Defender for Cloud Apps provides Shadow AI discovery. It catalogs cloud application usage across the environment, assigns risk scores, and has been updated to flag generative AI tools and AI-enabled SaaS specifically. For most mid-market organizations, Defender for Cloud Apps is often the fastest and most cost-effective way to gain initial Shadow AI visibility.
  • SaaS Management Platforms (SMPs) such as Zylo, BetterCloud, and Productiv provide automated SaaS discovery, usage analytics, and contract visibility. They identify AI-enabled applications and can be used to monitor for new AI tool adoption.
  • Cloud Access Security Brokers (CASBs) sit between users and cloud services and provide visibility into the applications accessed, the users, and what data is being transmitted. If you already have a CASB capability in place, this is a great solution.
  • Browser-based security platforms such as LayerX, Island, and Talon operate at the browser layer. They can identify when employees access AI sites, submit sensitive data to AI tools, and use browser extensions with AI functionality. This approach is particularly effective for catching direct LLM usage that does not flow through traditional network monitoring.
  • AI-specific governance platforms such as Tenable One, Evoke Security, and Mindgard are emerging specifically to address AI risk, including model inventory, data lineage, and policy enforcement. These are more relevant to organizations that are beginning to develop or deploy their own AI models, but the vendor risk management capabilities also apply to AI consumers.

As no single tool will likely identify all AI utilization, a blended approach may be necessary.

The Governance Framework Connection

For organizations that are already operating within a recognized cybersecurity framework, Shadow AI governance does not require a parallel structure. It extends what should already exist.

ISO 27001 organizations already have asset management, supplier relationship management, and access control domains that directly apply to AI tool governance.

NIST Cybersecurity Framework organizations have an Identify function that is the natural home for Shadow AI discovery and inventory. The Protect and Detect functions cover policy enforcement and monitoring.

Addressing Shadow AI is an element of a broader AI Governance/Risk Management program that you may align with the NIST AI Risk Management Framework and ISO 42001.

The Bottom Line

91% of the security and risk professionals we polled reported that their organization does not have a Shadow AI inventory and monitoring capability. Unfortunately, the challenge has emerged faster than most organizations could respond.

But the window for treating this as a future concern is closing. Employees are using AI tools today, on real data, with real clients, in real business processes. The risk is not hypothetical.

The answer is not to ban or limit AI; it is to build visibility, establish sensible guardrails, and create a culture where employees understand both the power of these tools and their responsibility to use them securely.

If your organization does not know what AI is running in your environment today, that is the right place to start.

Contact a CBIZ AI Risk & Governance Specialist for help controlling the associated AI risks.

© Copyright CBIZ, Inc. All rights reserved. Use of the material contained herein without the express written consent of the firms is prohibited by law. This publication is distributed with the understanding that CBIZ is not rendering legal, accounting or other professional advice. The reader is advised to contact a tax professional prior to taking any action based upon this information. CBIZ assumes no liability whatsoever in connection with the use of this information and assumes no obligation to inform the reader of any changes in tax laws or other factors that could affect the information contained herein. Material contained in this publication is informational and promotional in nature and not intended to be specific financial, tax or consulting advice. Readers are advised to seek professional consultation regarding circumstances affecting their organization.

“CBIZ” is the brand name under which CBIZ CPAs P.C. and CBIZ, Inc. and its subsidiaries, including CBIZ Advisors, LLC, provide professional services. CBIZ CPAs P.C. and CBIZ, Inc. (and its subsidiaries) practice as an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable law, regulations, and professional standards. CBIZ CPAs P.C. is a licensed independent CPA firm that provides attest services to its clients. CBIZ, Inc. and its subsidiary entities provide tax, advisory, and consulting services to their clients. CBIZ, Inc. and its subsidiary entities are not licensed CPA firms and, therefore, cannot provide attest services.
