
Many organizations rely on AI generated code and automated tools without redesigning their security foundations, which allows broken access control and business logic flaws to scale faster than teams can catch them.
Muhammad Khizer Javed, Lead Penetration Tester at SecurityWall, says modern security starts with adversarial thinking, human accountability, and clear communication with AI agents from the design stage forward.
Teams reduce risk by embedding security aware leaders in early design reviews, manually reviewing AI generated code, and integrating continuous security scanning into development pipelines.
In cybersecurity, the call to "get back to the basics" is a comforting refrain. But this advice is incomplete when the basics themselves have changed. Foundational security is no longer about mastering syntax or following checklists; it's about security-first thinking, adversarial design before a single line of code is written, and clear human-AI communication about intent and boundaries. This misplaced trust in automated frameworks and AI-generated code helps explain why foundational vulnerabilities persist—making the old playbook increasingly irrelevant.
We spoke with Muhammad Khizer Javed, a cybersecurity professional with over nine years of experience actively involved in penetration testing and bug bounty hunting. As the current Lead Penetration Tester at SecurityWall and a two-time BlackHat speaker, Javed has been recognized by over 500 organizations, including Apple, Microsoft, and the US Department of Defense, for his contributions to the field. His position is that to navigate the risks of the AI era, we must first accept that the fundamentals themselves have changed.
"Basics have changed a lot. The basics now are how a person can communicate better not just with humans, but with computers and AI agents," says Javed. This new perspective directly challenges the long-held idea of a “balanced” approach to security. The conventional wisdom of trading security for speed becomes an increasingly risky trade-off, especially as AI agents introduce new and unpredictable risks. Javed makes the case for a proactive philosophy grounded in a simple truth: accountability remains a non-delegable human responsibility. Assuming a tool can shoulder that responsibility is a key factor in many modern security failures, a dynamic that often leaves leaders accountable for risks they can't control.
An adversarial mindset: Javed has shifted away from the traditional 'balanced' approach to security, emphasizing proactive, security-first thinking and execution. "Teams need to learn to 'think evil' before they think better. You have to build a system as if it were going to court tomorrow, considering every security and privacy aspect from the outset." He notes that human oversight remains the cornerstone of secure systems, even as AI is joining in and handling more tasks. "In our rush to be proactive, we often let the computer take control, and critical things can get missed," he says. "At the end of the day, a computer cannot be held accountable for its actions. Only the humans who built and deployed it can be."
Adopting this new mindset is important because modern development practices can inadvertently scale insecure coding patterns. These vulnerabilities are rarely simple one-off mistakes, and often surface as symptoms of deeper, systemic flaws. Research indicating that AI-generated code introduces security flaws in nearly half of its tests provides quantitative evidence for a problem practitioners see daily. Javed argues that these patterns are not just technical debt but organizational debt, accumulating when leadership fails to create an environment that protects its teams from the pressure to prioritize speed over safety.
Identifying vulnerability patterns: Security flaws often appear in clusters, revealing deeper weaknesses in a system's framework. "When we find one authorization vulnerability, we often find many more because it points to a flawed framework," Javed says. This exercise also highlights the importance of teams tracking and addressing root causes, not just individual errors.
AI amplification of risks: While AI accelerates development and helps identify issues at scale, it can also propagate mistakes faster than humans can catch them. That’s why integrating human oversight at every stage is essential to ensure security keeps pace with speed. Javed warns, "An AI is not trained to think like a security researcher. You cannot simply command the AI to build something securely and trust the result. Every line of AI-generated code must be manually reviewed to verify it is secure."
Systemic pressures, not developer blame: Javed is quick to point out that security issues often stem from organizational pressures rather than individual mistakes. "Developers have time constraints, KPIs to fulfill, and features to deliver, so security gets left behind," he says. These pressures can lead to real-world consequences, like a dating app accidentally exposing a user's location data in an API response. "It’s not just the developer's fault; the whole ecosystem has to become security-aware."
A significant push within the industry toward an agentic SOC is fueled by the promise of reducing analyst burnout, but that efficiency introduces new risks. As reports of exploits like the "IDEsaster" show, AI development tools can be turned against their users, and some agentic browsers can bypass decades of web security. Javed cautions that a fully autonomous SOC could become a single point of failure, if not managed properly.
The risks of autonomy: The key, he states, is simply defining safe boundaries. A safe use case could be investigation on data that is already collected, compared to the potential risk from using it on live data coming into the system, because that's where AI can be exploited. "When an AI is designed to act like a human, it also introduces vulnerabilities like a human," Javed explains. "If an agentic SOC model receives a payload with a prompt injection and executes it incorrectly, the entire SOC could be compromised."
Get disciplined: Javed centers his solution on discipline at both the human and technical layers. “The one thing that truly works is communication with a security-aware person,” he says. “Sit with the development team at every stage and discuss potential vulnerabilities in the design before a single line of code is written. Having a security-aware person in that room is what makes the real difference, whether it's the project manager or a developer.” That oversight continues after development. “The other half of the solution is a continuous build system with security scanning integrated into the pipeline. After a system is built, you scan it with basic code analysis tools like CodeRabbit or SonarQube. These are things that actually work to save you from pushing vulnerabilities into production.”
Javed's final thoughts highlight the two biggest blind spots he's observing that stem from the inherent security risks of large language models in AI-generated code: broken access control and business logic flaws. With broken access control, the AI builds the components but fails to implement authorization correctly, allowing users to access what they shouldn't. Business logic flaws occur when the AI doesn't understand the application's intent. For example, you can explain a rule about who is allowed to send a message, but the AI won't grasp the nuance and will implement it incorrectly. Javed believes addressing these flaws requires continuous human oversight, clear AI governance, and a security-first mindset to ensure AI remains a powerful tool rather than another source of unchecked risk.