Financial Risk Modeling Turns Shadow AI Exposure Into Strategic Insight

The Security Digest - News Team
Published
February 20, 2026

Is the "Shadow AI" used by your employees a threat or an opportunity? Boris Khazin, a Strategic GRC & Digital Risk Executive at ClearView MRI, argues it's a source of vital business intelligence.

Key Points

  • While the rise of "Shadow AI" presents significant new risks, one expert argues that it also offers a valuable opportunity for business intelligence.

  • Boris Khazin, AI Compliance Advisor for ClearView MRI, explains that when employees sidestep official tools, they are signaling that a better alternative exists.

  • He recommends that leaders treat this trend not as a rebellion to be punished, but as a chance to identify tooling gaps, optimize workflows, and build a more effective organization.

Shadow AI isn’t a new rebellion. It’s the same old pattern of people trying to do their jobs better. The difference now is that the risk isn’t just security. It’s also regulatory, financial, and existential.

Boris Khazin

AI Compliance Advisor
ClearView MRI

The unmanaged flood of generative AI tools in 2026 has changed the shape of shadow IT forever, introducing a new class of regulatory, financial, and operational risks that many existing security playbooks are unprepared to manage. With roughly half of all employees now using unmanaged AI tools, the problem affects organizations ranging from small businesses to global enterprises.

Boris Khazin, AI Compliance Advisor for ClearView MRI, confronts this reality every day. A Strategic GRC and Digital Risk executive, he is building a modern compliance program from the ground up, bringing experience from EPAM where he launched and scaled the firm’s GRC services division. For Khazin, getting shadow AI under control starts with recognizing it for what it is. “Shadow AI isn’t a new rebellion. It’s the same old pattern of people trying to do their jobs better. The difference now is that the risk isn’t just security. It’s also regulatory, financial, and existential,” says Khazin.

  • A new level of risk: He contrasts the low-stakes problem of a small company using pirated software with the potential of a modern data leak through an unmanaged tool: "If that same company has its data, PII, or a trade secret leaked, that could be the destruction of the whole company." As organizations navigate a growing web of AI regulations, the danger is often a well-intentioned employee looking for a better way to work.

  • The compliance crunch: "The cybersecurity aspect is still important, but now you have new regulatory risks. When you use AI, data is key. An unauthorized tool that needs your data to make a decision is, by definition, not secured. You don't have controls in place, like the data loss prevention required under GDPR, and you aren't allowing your internal team to validate for issues like data bias or potential errors."

  • The 101st time: "Humans tend to go the easy path. If they see something give you the right answer 100 times, they might stop checking it. Unfortunately, the 101st time could be the one that blows up your whole system because of a hallucination. The problem isn't new, but the risks and the controls needed to manage them are."

Before leaders can address the cultural drivers behind this behavior, the first step is to grasp its scope. Khazin suggests building and maintaining an up-to-date inventory of every tool in use that provides AI capabilities. He also advises leaders to resist the initial impulse for a punitive crackdown, calling that approach counterproductive. Leaders who reorient their security playbook can turn Shadow AI into a source of invaluable business intelligence on tooling gaps and workflow inefficiencies.

  • Partner, don't punish: "As we learned from years of experience with Shadow IT, the right approach is not to be punitive. Instead, you must work with employees to better understand why they are using an external Shadow AI tool. In over 90% of cases, it's because the tool lets them do their job in a better, more meaningful way, at least in their view."

  • Rebels and rule-followers: "You have to understand the psychology of your team. You're going to have 'rebels,' especially in creative fields, who want to use the most efficient tools. Then you're going to have 'followers' who will just follow orders, not even examining if a better tool exists. If a large part of your team is using an outside tool and you don't see that as an issue, you don't understand how to manage people or optimize your organization." When you see 40% of your data team gravitating toward an unsanctioned tool like Claude, Khazin says, you have to investigate. You might find it offers a key feature your official tool lacks.

Ignoring this trend isn't a pragmatic, budget-conscious decision, according to Khazin. He reframes it as a miscalculation of operational risk. When a large part of a team sidesteps approved tools, it’s a clear signal that the current setup is inefficient, exposing the organization to unmanaged risks while also depressing productivity.

Understanding that risk is often the key to securing executive buy-in. To win budget and drive change, leaders need to speak the C-suite's language, moving the conversation into the concrete world of financial modeling. Here, the costs of unmanaged AI are weighed against the ROI of investing in better processes, often supported by modern security tooling.

  • Show me the money: "A common issue with many traditional risk methodologies is their reliance on a simple score system, like red, yellow, green. But what does that score really mean? It doesn't translate into a tangible business impact. You can't go to your board and report an abstract risk score of 87 out of 100." In his view, that number often fails to provide the analytical basis to decide which path to take.

  • Know your number: "It's much easier to convince the C-suite with dollars than with abstract warnings about a breach, as their first question will always be about the financial cost. Sometimes C-suites will have to accept certain risks and are willing to pay, say, 3 or 4 million dollars in potential penalties or loss of effort or loss of work, because they feel that in the long run, it's still going to cost them less than other options. But at least using the dollar aspect presents it in a manner that everybody understands."
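The dollar-based framing Khazin describes can be illustrated with a standard quantitative risk technique: annualized loss expectancy (ALE), where each scenario's expected yearly loss is its cost per incident times its estimated frequency. The sketch below is illustrative only; the scenario names, dollar figures, and frequencies are hypothetical assumptions, not figures from Khazin or ClearView MRI.

```python
# Minimal annualized-loss-expectancy (ALE) sketch with hypothetical figures.
# ALE = single-loss expectancy (SLE) x annualized rate of occurrence (ARO).

def annualized_loss(sle: float, aro: float) -> float:
    """Expected yearly loss in dollars from one risk scenario."""
    return sle * aro

# Hypothetical shadow-AI scenarios: (name, cost per incident, incidents/year)
scenarios = [
    ("PII leak via unmanaged chatbot", 2_000_000, 0.5),
    ("Regulatory penalty (e.g. GDPR)", 4_000_000, 0.25),
    ("Bad decision from unvalidated model output", 500_000, 2.0),
]

total_ale = sum(annualized_loss(sle, aro) for _, sle, aro in scenarios)
mitigation_cost = 750_000  # hypothetical yearly cost of sanctioned tooling

for name, sle, aro in scenarios:
    print(f"{name}: ${annualized_loss(sle, aro):,.0f}/year")
print(f"Total expected loss: ${total_ale:,.0f}/year")
print(f"Net case for controls: ${total_ale - mitigation_cost:,.0f}/year")
```

A board can compare a single expected-loss number like this against the cost of controls, which is exactly the "accept the risk or pay to mitigate it" decision Khazin describes.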

Ultimately, Khazin's strategy rests on two core recommendations. First, engage the entire organization—not just key decision-makers—to understand real-world business needs from the ground up. Second, translate all risks, controls, and efficiency gains into a clear dollar value. Adopting this approach helps redefine the entire challenge. The primary goal becomes not just mitigating risk, but building a more effective company by listening to its employees.

"As AI develops, you're going to see a lot more Shadow AI happening, because the better the tool, the more efficient an individual feels they can be. So, don't make this punitive. Understand that the reason your employees are doing this is because they want to be better, not because they want to hurt the organization."