
Disciplined AI Adoption Restores Executive Control Over Data and Risk

The Security Digest - News Team
Published February 17, 2026

Andy James of Custodian360 explores how top-down AI adoption often bypasses proper processes and security, creating hidden risks for businesses.


Key Points

  • Many organizations are rushing to adopt AI to save money or avoid being left behind without fully understanding the risks.

  • Andy James, Founder and CEO of Custodian360, warns that top-down decisions often deploy AI before processes, data ownership, or security are clearly defined.

  • Responsible AI adoption requires discipline, careful evaluation, and expertise to manage risk, protect data, and maintain trust.

CEOs are told they can save money when they decide to use AI, so the technology is just deployed without being securely engineered from the ground up and without thought to the associated risks.

Andy James
Founder & CEO
Custodian360

Many organizations are adopting AI for a single reason: to save money. That belief drives top-down mandates to deploy AI tools quickly, often before anyone has defined how those systems work, where data goes, or who is responsible for the risk they introduce. As a result, AI is being integrated into businesses without the same security, governance, or engineering discipline applied to other critical technologies.

Andy James sees this pattern play out repeatedly in his work with organizations navigating AI adoption. As Founder and CEO of security provider Custodian360, he helps companies manage the consequences of rushing AI into production without understanding how it handles data, where it is stored, or how risk is being accepted across the organization. In his view, the core problem is not the technology itself, but how the decision to deploy it is made. Too often, that decision comes from the top down.

"CEOs are told they can save money when they decide to use AI, so the technology is just deployed without being securely engineered from the ground up and without thought to the associated risks," he says. That top-down mandate is especially risky because AI does not behave like traditional enterprise technology. Legacy systems operate on defined and understandable protocols, while many AI systems function as opaque engines that even their operators struggle to explain.

  • A dark art: "With AI, how applications actually work is a dark art. Nobody truly understands what’s going on under the hood." That opacity creates real, C-suite-level risk. When leaders rush adoption without asking basic questions about data control and ownership, compliance gaps emerge first, followed by deeper erosion of business and customer trust.

  • Data ownership: One of the most overlooked risks in AI adoption is what happens to data once it enters an AI system. Where is it processed, where is it stored, and who ultimately owns it? As James puts it, "the moment you put your information in, they become the owner, not you," a reality that many organizations fail to consider until compliance or trust issues surface.

Many of the biggest AI security risks stem less from the technology itself and more from how adoption decisions are made. AI purchases are often driven by executives who are accustomed to accepting risk as part of building a business, while security teams are left to manage the fallout after deployment. Because security has historically been built from the ground up, this top-down pattern creates a structural imbalance: AI-related risk is accepted at the highest level without a clear understanding of how much exposure is actually being introduced.

  • The FOMO factor: Much of the rush to adopt AI is driven by fear of being left behind. That executive FOMO can push leaders to abandon the discipline and caution that helped them succeed, taking on risk they have never calculated. As James warns, "A lot of people are rushing into areas they don’t fully understand, and that is always a dangerous path. For most organizations, the enthusiasm needs to be dialed back to allow the technology to develop further, until we start to get correctly defined protocols for AI that make it easier to understand what these applications are actually doing." That restraint is essential for organizations that want to manage risk while integrating AI responsibly.

For James, the solution to reckless AI adoption is a return to fundamental business discipline. Existing rules for data management still apply, but they are often overlooked in the rush to implement new tools. AI should be treated like any other data-driven process, with clear ownership, defined responsibilities, and consistent safeguards at every stage. Without this structure, organizations risk losing control of their data and the trust that depends on it.

  • Discipline over hype: James practices the caution he recommends to others, and his company offers a clear example of how to adopt AI without rushing. Rather than reacting to every new tool, the firm runs a structured evaluation process. For now, AI is used only in limited marketing functions. Tools are tested against clear business outcomes, and anything that does not meet those standards is set aside. "If someone finds a tool they want to use, we will build a business case on it, evaluate it, and then decide if that's a path we want to follow."

  • Find a translator: Many organizations struggle to understand AI in practical, business-relevant terms, and bringing in an external expert is often a crucial step toward responsible adoption. As James notes, "The most important thing is to find people who truly understand the implications of these solutions, what they can actually do, and how they can be integrated into existing environments." With that guidance, leaders can have clear, end-to-end conversations about AI tools and make informed decisions that balance opportunity with security.

Organizations are rushing to adopt AI to save money or to avoid falling behind, but the rush can mask a bigger problem: AI can create risks that no one fully understands. When decisions come from the top without clear processes, defined data ownership, or careful evaluation, businesses may trade human limitations they know for technology weaknesses they do not. The effects on safety, compliance, and trust are still uncertain, which is why a disciplined approach and outside expertise matter so much. "With AI, we are dealing with a weakness we do not fully understand because we cannot predict how it will interpret small changes in a task," James concludes. "Whether that trade-off is beneficial is something we will start to see over the course of this year and next."