The Path To AI-Native Security Depends On Codifying Analyst Expertise
FinTech Deputy CISO Aby Rao outlines the path to AI-native security, starting with codifying analyst expertise and ending with well-governed AI working hand in hand with SOC teams.

AI can’t replace context. It amplifies it. Your SOC’s advantage comes from pairing machine speed with human judgment.
For many security teams, artificial intelligence is a tool awkwardly jammed into existing workflows rather than the engine running the show. Making the leap from scattered experimentation to AI-native operations requires a practical maturity ladder, one that addresses core problems in security automation such as alert fatigue.
Cybersecurity leaders are working with SOCs to do the hard work of translating an analyst's undocumented intuition into rules a machine can read, eventually letting automation handle the noise so humans can focus on the real threats. As a Deputy Chief Information Security Officer in the FinTech industry with nearly two decades in the trenches as a consultant and advisor for organizations such as Expert, Greenfield Partners, and Duke University, Aby Rao understands that an AI-native security apparatus cannot exist outside human expertise. "AI can’t replace context," Rao says. "It amplifies it. Your SOC’s advantage comes from pairing machine speed with human judgment."
Spreading the spark: The push toward an AI-native setup rarely starts in the boardroom, according to Rao. It usually sparks at the practitioner level. Individual contributors and engineers begin experimenting on their own, then share what works with peers. "There is a lot of energy coming from the bottom up. Individual contributors, analysts, and engineers are exploring things. What that is doing is putting AI in practice rather than just AI in principle."
Paying to play: Leaders like Rao eventually find they need to budget for real usage rather than rely on capped free accounts to see measurable organizational value. "How do you buy licenses that enable your talent to really use AI to its full potential and move toward an AI-native approach? AI is expensive. You have licensing costs, which depend on variable, real-time usage. So there has to be allocation."
Scaling organic momentum requires executives to take a structured approach: granting basic access, funding enterprise licenses, establishing communication channels, and eventually turning successful experiments into production-ready patterns. This, however, surfaces a new problem: budget and community aside, off-the-shelf tools rarely arrive with built-in knowledge of the business they are supposed to protect.
The solution? Codifying the team's knowledge. Teams must extract the gut reactions of experienced analysts and feed them into their models in a structured way. As that internal knowledge base matures, organizations can layer in external signals from peer groups to support advanced forecasting and planning. That knowledge base then becomes the foundation for gradually reducing human oversight and trusting the machine.
Mind to machine: The real differentiator, Rao argues, is the tacit knowledge that veteran analysts carry. "Really good SOC analysts are the ones who have this complex algorithm running in their head. Codification takes tremendous effort and a conscious investment, with the extended value of having that knowledge fed into your AI platform."
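What does that codification look like on the page? Here is a minimal sketch in Python, assuming a hypothetical platform that evaluates structured triage rules; the class, field names, and thresholds are all illustrative, not a description of any particular product.

```python
from dataclasses import dataclass

# A hypothetical codification of a veteran analyst's triage heuristic:
# "an off-hours login from an unfamiliar country, followed within minutes
# by a bulk download, is almost always worth escalating."
@dataclass
class TriageRule:
    name: str
    description: str
    conditions: list[str]   # signals that must be observed together
    window_minutes: int     # how close together the signals must occur
    escalate_to: str        # analyst tier that reviews a match

impossible_travel_exfil = TriageRule(
    name="offhours-newgeo-bulk-download",
    description="Off-hours login from an unseen country, then a bulk download",
    conditions=[
        "auth.success == true",
        "auth.hour not in business_hours",
        "auth.geo not in user.known_countries",
        "file.download_count >= 50",
    ],
    window_minutes=15,
    escalate_to="L3",
)
```

The value is less in the syntax than in the discipline: once a heuristic exists as data, it can be versioned, reviewed, and fed to an AI platform rather than living only in one analyst's head.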
Loosening the leash: That codification process is, for Rao, what allows teams to calibrate their human-in-the-loop posture over time. "Early on in the process, you would start with fairly substantial human involvement. But as you train and educate your AI platform on a continuous basis, the human-in-the-loop work will drop, and AI decision-making will gain more confidence."
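One plausible way to wire up that loosening leash, sketched below with assumed thresholds and bookkeeping: gate each AI verdict behind a confidence bar, and lower the bar only after humans have audited a substantial track record of agreement.

```python
# Minimal sketch of a human-in-the-loop gate that loosens over time.
# The thresholds and the record-keeping here are illustrative assumptions.

AUTONOMY_THRESHOLD = 0.95   # start strict: almost everything goes to a human
MIN_TRACK_RECORD = 500      # verdicts humans must audit before loosening

def route_verdict(confidence: float, reviewed: int, agreement_rate: float) -> str:
    """Decide whether an AI verdict can act alone or needs a human.

    confidence      -- the model's score for this single verdict (0..1)
    reviewed        -- how many past verdicts humans have checked
    agreement_rate  -- fraction of those where humans agreed with the AI
    """
    # Early on, the gate stays high regardless of per-verdict confidence.
    threshold = AUTONOMY_THRESHOLD
    # As the audited track record grows and agreement stays strong,
    # lower the bar so routine verdicts can close without review.
    if reviewed >= MIN_TRACK_RECORD and agreement_rate >= 0.98:
        threshold = 0.85
    return "auto-close" if confidence >= threshold else "human-review"

# Example: a mature deployment with a strong audit history.
print(route_verdict(confidence=0.90, reviewed=1200, agreement_rate=0.99))
# -> auto-close
```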
Because codification relies entirely on human expertise, many security leaders are beginning to ask what automation means for staffing. For Rao, a more realistic path is for automation to absorb the repetitive L1 and L2 workload, allowing people to step into more strategic L3 and L4 responsibilities. Retaining those analysts is a strategic advantage for anticipating future threats.
Trading noise for nuance: Rather than treating AI as a path to smaller teams, Rao frames it as a lever for shifting the SOC's center of gravity from reactivity to forward-looking strategy. "I don't think there is an approach where investing in AI will very quickly reduce headcount. It's more of a repositioning play. Teams should focus on moving from operations to SOC strategy, upskilling analysts, and moving team members to get the most from them."
Repositioning the roster: As analysts transition into those higher-level roles, Rao argues that they naturally take the lead on governing the technology. "The way I would urge leaders to think about this is, you were doing this noise-to-signal ratio dance for a very long time using L1 and L2 analysts. How do you transition that to AI so that your L1 and L2 analysts can become L3 and L4 analysts and really strategize around future threats and improve your effectiveness?"
Rao operates on a four-pillar framework for responsible AI in cybersecurity: establishing clear operational trust, checking for demographic bias, protecting data privacy, and monitoring compliance risk. By building trust through transparent engagement models, security teams can create a useful ecosystem without overexposing the network. At the same time, they must stay alert to bias risks in sensitive areas like insider threat monitoring and recognize that sensitive data might occasionally slip into logs.
Dodging the data: One of Rao’s most practical workflows is to narrow the data that an AI model can use, or outright ban certain kinds of data access. "Logs, whether it's access logs, activity logs, or system logs, will often touch sensitive data. Trust that you're building there is that we don't deal with that data."
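In code, that boundary might look like the following sketch. The denylisted fields and patterns are illustrative stand-ins; a real deployment would mirror its own log schema and privacy rules.

```python
import re

# Sketch of the "we don't deal with that data" boundary: scrub log
# records before any AI component ever sees them.

DENYLISTED_FIELDS = {"ssn", "card_number", "password", "session_token"}
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(record: dict) -> dict:
    """Drop denylisted fields and mask emails in free-text values."""
    clean = {}
    for key, value in record.items():
        if key in DENYLISTED_FIELDS:
            continue  # never forwarded to the model, not even masked
        if isinstance(value, str):
            value = EMAIL_PATTERN.sub("[REDACTED_EMAIL]", value)
        clean[key] = value
    return clean

raw = {"user": "a.rao", "password": "hunter2",
       "message": "reset sent to a.rao@example.com"}
print(scrub(raw))
# {'user': 'a.rao', 'message': 'reset sent to [REDACTED_EMAIL]'}
```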
Within that boundary, Rao still insists on privacy protections and regulatory awareness under regimes such as the GDPR and the LGPD. That methodology aligns closely with emerging responsible AI practices across the market. Looking ahead, he sees a fundamental transition from the classic "build versus buy" debate to the question of autonomous DIY defense. "Innovation where you can really build things that meet your needs has only amplified with the emergence of AI-assisted development," Rao says. "I see this as a very pragmatic way of approaching risk within the cybersecurity space, where you take control of your own risk rather than relying on vendors."
The views and opinions expressed are those of Aby Rao and do not represent the official policy or position of any organization.