
Enterprises move fast with AI, but new security gaps, insider risks, and fragile trust slow progress when protection and governance lag behind innovation.
Palanivel Rajan Mylsamy, Director of Engineering Program Management at Cisco, explains why cybersecurity must function as a foundational pillar of AI, not a downstream control.
He outlines a disciplined, cross-functional approach that uses proven development processes and phased rollouts to scale AI securely while keeping humans in control.
AI adoption is reshaping how enterprises think about cybersecurity. What once sat in a supporting role now determines how fast organizations can innovate without exposing themselves to new forms of risk. From AI-enabled insider threats to systems that can slip past long-standing web defenses, the stakes have changed. Innovation now depends on trust, and security has become a strategic requirement for sustaining both speed and confidence at scale.
Palanivel Rajan Mylsamy, Director of Engineering Program Management at Cisco, shares his perspective on managing these changes. With over two decades of leadership in technology and security, Mylsamy is known for spearheading large-scale digital transformations and pioneering the hybrid Agile/Waterfall software development lifecycles needed to manage today’s AI-driven systems. He frames the AI challenge around three core pillars: architecture, innovation, and cybersecurity.
"Cybersecurity is one of the pillars of AI, because the moment we start putting confidential and sensitive data into these systems, protecting that data and building trust becomes non-negotiable," says Mylsamy. But calling security a "pillar" is one thing; making it act like one is another. For Mylsamy, that means defining its core functions as an active enabler of trust. Executing this mandate can be a difficult undertaking with real operational and economic challenges, and often requires leaders to translate technical risk into business value.
A trinity of trust: "Cybersecurity's mandate in the AI era is threefold: first, to protect the sensitive and confidential data being fed to these agents; second, to prevent the manipulation of that data or the model itself; and above all, to build foundational trust in the entire system," he explains. "The challenges are real. You have the high implementation cost for infrastructure, resources, and strong security experts, and then you have the ongoing complexity and maintenance. These automated systems must be maintained; otherwise, you are giving intruders creative opportunities to exploit your security."
Once the foundation is solid, the focus can turn to governing the autonomous systems themselves. His answer? Go back to basics. For Mylsamy, a key part of the solution lies in returning to the time-tested disciplines of Agile development, an approach that aligns with calls for formal AI "stress tests" and the use of frameworks to manage AI-specific risks. This discipline supports a hybrid model in which AI provides speed and humans provide judgment. He justifies the intense focus on process with a simple, unchangeable truth of the internet age.
Old rules for new tools: "To govern AI, we have to go back to basics. The same process discipline we learned when the industry evolved from Waterfall to Agile to manage complexity can be applied to the challenges of AI today," says Mylsamy. "There are core criteria that must be met, like a 'definition of done' and 'definition of readiness' that prevent code from even entering the deployment pipeline until it's ready. These playbooks must be applied before any solution is released to the broader world."
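To make the idea concrete, here is a minimal sketch, in Python, of how "definition of readiness" and "definition of done" criteria could gate a release candidate before it enters or clears a deployment pipeline. The field names, checks, and thresholds are illustrative assumptions for this article, not Cisco's actual process or tooling.

```python
# Hypothetical sketch of "definition of readiness" / "definition of done" gates.
# All fields and thresholds are illustrative assumptions, not a real pipeline.

from dataclasses import dataclass


@dataclass
class ReleaseCandidate:
    name: str
    tests_passed: bool = False
    security_review_signed_off: bool = False
    model_eval_score: float = 0.0          # hypothetical offline evaluation metric
    rollback_plan_documented: bool = False


def is_ready(rc: ReleaseCandidate) -> bool:
    """'Definition of readiness': is the work even fit to enter the pipeline?"""
    return rc.tests_passed and rc.rollback_plan_documented


def is_done(rc: ReleaseCandidate) -> bool:
    """'Definition of done': has it met the bar required to ship?"""
    return is_ready(rc) and rc.security_review_signed_off and rc.model_eval_score >= 0.90


def gate(rc: ReleaseCandidate) -> str:
    if not is_ready(rc):
        return f"{rc.name}: blocked at readiness gate"
    if not is_done(rc):
        return f"{rc.name}: in pipeline, not releasable yet"
    return f"{rc.name}: cleared for deployment"


if __name__ == "__main__":
    candidate = ReleaseCandidate(
        "ai-assistant-v2",
        tests_passed=True,
        rollback_plan_documented=True,
        security_review_signed_off=False,
        model_eval_score=0.93,
    )
    print(gate(candidate))  # -> "ai-assistant-v2: in pipeline, not releasable yet"
```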
The network's nature: "The moment data is on a network, it is available. Though we can call it secure, it is on the internet in some form and is potentially accessible to anyone," he emphasizes. But what about the people? The promise of powerful AI-driven security offerings always comes with questions about the human workforce. Instead of a simple story of job replacement, Mylsamy suggests a change in skill requirements, positing that AI has pressed a "reset button" on the entire industry and created both new risks and new opportunities.
Even the best technology and processes are useless without the right culture. Safely scaling AI from a proof of concept to an enterprise-wide capability often presents cultural challenges as real as the technical and procedural ones. Mylsamy explains that the entire process must be managed through a "phased rollout" model, with the governance playbooks applied at each stage, and that this phased approach works best in a cross-functional culture. This collective effort, moving from internal sandbox development to lab testing with trusted partners and then into phased customer betas, all while gathering metrics, creates the resilience needed to move fast without breaking things.
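As a rough illustration of that phased-rollout idea, the sketch below shows how a rollout might advance from sandbox to lab to beta only after each stage's governance criteria are met. The phase gates and metric names are entirely hypothetical, invented here for illustration rather than drawn from any real Cisco process.

```python
# Hypothetical phased-rollout gates: a rollout advances only when the current
# phase's exit criteria are satisfied. Phase names follow the article; the
# metrics and thresholds are illustrative assumptions.

from typing import Callable

PHASES = ["internal_sandbox", "partner_lab", "customer_beta", "general_availability"]

# Governance checks per phase; each returns True when collected metrics
# meet that phase's exit criteria.
GATES: dict[str, Callable[[dict], bool]] = {
    "internal_sandbox": lambda m: m.get("critical_bugs", 1) == 0,
    "partner_lab":      lambda m: m.get("security_findings_open", 1) == 0,
    "customer_beta":    lambda m: m.get("beta_satisfaction", 0.0) >= 0.8,
}


def next_phase(current: str, metrics: dict) -> str:
    """Advance the rollout only if the current phase's gate passes."""
    gate = GATES.get(current)
    if gate is None or not gate(metrics):
        return current  # stay put: criteria not met, or no further stage defined
    idx = PHASES.index(current)
    return PHASES[min(idx + 1, len(PHASES) - 1)]


# Example: one unresolved security finding keeps the feature in partner lab testing
print(next_phase("partner_lab", {"security_findings_open": 1}))  # -> partner_lab
print(next_phase("partner_lab", {"security_findings_open": 0}))  # -> customer_beta
```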
"The old model of siloed development, where one team hands off code to the next, is over. Today, it must be a collective and collaborative effort across all people and skill sets, including early engagement with customers. That is how we can scale and deliver things faster." Mylsamy concludes that this integrated, cross-functional model, combining process discipline with a collaborative culture, is designed to put organizations in a position to build faster while making sure their solutions are highly secure, highly productive, and deliver real business value.