
Machine-speed attacks overwhelm human-driven SOCs; the real failure is not missing signals but failing to connect them fast enough to act before damage occurs.
John Sapp, VP of Information Security and CISO at Texas Mutual Insurance Company, frames AI as a governed partner that adds speed and context while keeping humans accountable for final decisions.
He defines the solution as a hybrid SOC where AI handles early triage, signal correlation, and documentation, while guardrails, asset visibility, and human judgment prevent costly automation mistakes.
Modern security operations are being outrun. As attackers automate and compress timelines, the Security Operations Center now lives or dies by two things: speed and context. Human-driven workflows struggle to connect signals fast enough to separate noise from an attack in motion. The result isn’t blind spots, but hesitation at the exact moment decisiveness matters most.
We spoke with John Sapp, Vice President of Information Security and CISO at Texas Mutual Insurance Company. A recognized thought leader with over 35 years of experience, Sapp brings a perspective forged from decades on the front lines, with a work history that includes senior roles at Accenture, Orthofix, and Oracle. He sees a new chapter in how security leaders approach defense, trust, and automation.
"The first place a human-driven SOC breaks down is speed. It’s the ability to recognize threat indicators quickly and decide whether something is just an event of interest or an attack in progress. Ultimately, it comes down to how well, how consistently, and how accurately you connect signals to understand the situation and decide what action to take," says Sapp. But his solution isn’t just more automation. Instead, he calls for governing AI with rigorous discipline, explaining that tools without constraints can introduce their own set of risks.
Normalizing the norm: For Sapp, the solution lies in creating a truly connected security operation where AI is trained on the unique context of the business it is meant to protect. "AI has to help build the guardrails by understanding what is normal operation. For example, if a server goes down during a maintenance window at 2 a.m. every night, that's the norm. But if it goes down at another time, that's an anomaly," he explains. "You have to use your normal operations to create those guardrails."
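Sapp's maintenance-window example can be sketched as a simple baseline check: the "guardrail" is a record of known-normal operating windows, and anything outside it is flagged for review. This is a minimal illustration, not his team's implementation; the asset name and window are hypothetical.

```python
from datetime import datetime, time

# Hypothetical baseline of approved maintenance windows per asset,
# expressed as (start, end) local times. Names are illustrative only.
MAINTENANCE_WINDOWS = {
    "db-server-01": [(time(2, 0), time(3, 0))],  # nightly 2-3 a.m. window
}

def is_anomalous_outage(asset: str, event_time: datetime) -> bool:
    """Return True if an outage falls outside the asset's known-normal windows."""
    windows = MAINTENANCE_WINDOWS.get(asset, [])
    t = event_time.time()
    # Inside any approved window -> normal operation; otherwise -> anomaly.
    return not any(start <= t <= end for start, end in windows)

# An outage at 2:15 a.m. matches the baseline; one at 2:15 p.m. does not.
print(is_anomalous_outage("db-server-01", datetime(2024, 5, 1, 2, 15)))   # False
print(is_anomalous_outage("db-server-01", datetime(2024, 5, 1, 14, 15)))  # True
```

In practice the baseline would be learned from operational history rather than hand-coded, but the principle is the same: normal operations define the guardrail.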
Know your assets: Sapp’s playbook for CISOs starts with a basic rule: "Know your assets." That means understanding not only what systems exist, but how they connect and depend on one another, especially once AI is introduced. He stresses the difference between IT Asset Management, which inventories what an organization owns, and a CMDB, which shows how those assets interact, and urges leaders to use frameworks like the EU AI Act to build a true inventory of AI systems and their dependencies. The risk of getting it wrong is already visible in the software supply chain: "A service like Perplexity seems like a standard AI tool, but it uses a sub-processor, DeepSeek, which stores data in the People’s Republic of China. Not knowing those sub-processors is a huge blind spot. In this case, fourth-party risk is more important than ever," says Sapp.
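The CMDB-style view Sapp describes, assets plus their dependencies, can be modeled as a small graph that is walked transitively to surface fourth-party exposure. The sketch below uses the article's Perplexity/DeepSeek example as illustrative data; the entries, jurisdictions, and function names are assumptions for demonstration, not an authoritative inventory.

```python
# Illustrative dependency graph: each service maps to its sub-processors.
SUB_PROCESSORS = {
    "perplexity": ["deepseek"],
    "deepseek": [],
}

# Illustrative data-storage jurisdiction per processor.
JURISDICTION = {
    "perplexity": "US",
    "deepseek": "CN",
}

def transitive_processors(service: str) -> set[str]:
    """Walk the graph to surface every downstream (third- and fourth-party) processor."""
    seen, stack = set(), [service]
    while stack:
        node = stack.pop()
        for dep in SUB_PROCESSORS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

def flagged_jurisdictions(service: str, watchlist: set[str]) -> set[str]:
    """Return downstream processors whose data jurisdiction is on a watchlist."""
    return {p for p in transitive_processors(service)
            if JURISDICTION.get(p) in watchlist}

print(flagged_jurisdictions("perplexity", {"CN"}))  # {'deepseek'}
```

The point of the walk is exactly Sapp's blind spot: a flat IT asset inventory would list "perplexity" and stop, while the dependency graph exposes the sub-processor behind it.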
Sapp’s vision for the future SOC is a hybrid model where AI and human analysts work in tandem, with clear boundaries around decision-making. AI handles early-stage triage at Level One and Two, reducing noise and generating consistent incident documentation that strengthens the organization’s knowledge base over time. As investigations escalate, AI’s role shifts from filtering to contextualizing.
The analyst's assistant: At that point, it becomes the analyst’s assistant, sharpening judgment rather than replacing it. "At Level Three, it can help you fine-tune whether this is really a needle in the haystack. It can see that all the signals point to an absolute attack because it’s tied to this particular attacker’s M.O. or coming from a known malicious IP address," he explains. "It can present that case to the analyst so they can make a quick decision based on all the information gathered."
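The Level Three role Sapp describes, correlating indicators like a known-malicious IP or an attacker's M.O. into a case the analyst can act on, can be sketched as simple evidence scoring. The indicators, weights, and threshold below are hypothetical; the key design point, drawn from Sapp's framing, is that the output is a recommendation presented to a human, not an automated action.

```python
# Hypothetical threat-intel inputs; in a real SOC these would come from feeds.
KNOWN_BAD_IPS = {"203.0.113.7"}
ATTACKER_TTPS = {"credential_stuffing"}

def build_case(signals: dict) -> dict:
    """Correlate signals into a scored case for the analyst to review."""
    score, evidence = 0, []
    if signals.get("src_ip") in KNOWN_BAD_IPS:
        score += 50
        evidence.append("source IP on known-malicious list")
    if signals.get("ttp") in ATTACKER_TTPS:
        score += 40
        evidence.append("matches known attacker M.O.")
    if signals.get("off_hours"):
        score += 10
        evidence.append("activity outside normal operating window")
    # The AI recommends; the analyst decides.
    return {"score": score, "evidence": evidence,
            "recommendation": "escalate" if score >= 60 else "monitor"}

case = build_case({"src_ip": "203.0.113.7",
                   "ttp": "credential_stuffing",
                   "off_hours": True})
print(case["recommendation"])  # escalate
```

Keeping the final decision with the analyst is the boundary Sapp draws: the system assembles and presents the case, and the human makes the call.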
Consistency is key: For Sapp, AI is both a defensive capability and a force multiplier for his team. Its value lies in consistency, following the same process every time, which creates a built-in training effect that helps analysts sharpen their own judgment. That human-centric view is paired with a practical business lens, using AI to maintain continuity and absorb workload as roles naturally turn over, without weakening defenses. "With natural attrition, we can evaluate whether it’s an opportunity to use more AI instead of backfilling a role. AI doesn’t get sick. It doesn’t go on vacation. It may hallucinate, but it’s always going to show up when I need it," he explains. "You balance the risk of a hallucination against the consistency. And I’ll take the consistency with the occasional hallucination that may come along with it."
Looking ahead, Sapp sees the weaponization of AI as a primary challenge, pointing to real-world attacks as evidence that the concept is no longer theoretical. To combat this, he proposes a new, principled industry response for the responsible and secure adoption of AI, much like the secure software manifestos of the past. Such a doctrine is important, he explains, because these weaponized AIs will target the supply chain. "AI is an application in itself, but it is a dynamically generated application. We've talked about low code, no code, and everything as code. Now, AI as code is the latest threat," Sapp concludes.