
Burnout in cybersecurity is increasingly being reframed as an operational resilience risk, as fatigue degrades decision-making and creates vulnerabilities during high-stakes incidents.
Amelia Hewitt, Co-Founder and Director of Cyber Consulting at Principle Defence and Founder of CybAid, argues that human cognitive endurance must be treated as a finite resource in security planning.
She outlines how governance-first AI, improved contextual visibility, and scenario-based resilience testing can reduce cognitive load and strengthen human decision-making during incidents.
The conversation around burnout in cybersecurity is changing. Once seen primarily as a personal wellness or HR issue, it is now being reframed as a key vulnerability in operational resilience. This perspective challenges a flawed assumption at the heart of many security models: that human cognitive endurance is a limitless resource. A growing number of security leaders now believe that when responders become fatigued, they make materially worse decisions, creating a major and often unmeasured risk.
Amelia Hewitt regularly advises organizations on this challenge. As the Co-Founder and Director of Cyber Consulting at Principle Defence and Founder of the non-profit CybAid, she specializes in securing critical national infrastructure. Hewitt is also a TechWomen100 award winner, co-author of the book Securely Yours, and co-host of the Cyber Agony Aunts podcast.
Hewitt explains that to build truly resilient organizations, leaders must stop treating burnout as a personal failing and start treating it as a systemic risk. As she notes, when a security analyst is into their 17th consecutive hour of work, their cognitive performance is on par with someone who is legally intoxicated. "Burnout is not only a human issue, it’s a resilience issue within cybersecurity. If we leave it unaddressed, it will impact the cybersecurity space, particularly during incidents," says Hewitt.
Exploiting empathy: Slow-burn attacks are often paired with social engineering, which takes advantage of a fundamental human trait: empathy. Hewitt points to a string of recent attacks where IT help desks were the primary point of entry. "We're people with empathy. If someone calls us in IT, says they're having a terrible day, and they need support, you're more inclined to help them and not question them. And that is why the Help Desk is one of the primary sources for these attacks."
The long game: Hewitt explains that the most acute risk emerges not from the sensational "big bang" attacks, but from "enduring incidents." These prolonged campaigns, where attackers have been inside a system for months, test the limits of a defender’s finite cognitive endurance. It’s a reality that demands a new approach to incident response planning, one that finally accounts for the human limits of the response team.
With threat actors using their own AI to compress attack timelines, the focus moves toward providing better technological support for human defenders. For Hewitt, adopting AI requires a pragmatic, governance-first mindset. "Many vendors say their products are AI-powered as if that’s an inherent benefit," Hewitt says. "As security professionals, what we actually need are well-considered use cases that address not just security, but also AI governance and ethics."
Governing the machine: Hewitt advises leaders to first ask hard questions about accountability and impact before deploying any automation, grounding governance in established frameworks. "Before you automate a process, you have to ask yourself: would I be happy if this went wrong? Would you be happy with an inconsistent output, and what would the real-world impact of that be? Would it have an impact on someone's personal data or an outcome that impacted an individual?"
In Hewitt's view, AI's job is to automate the intensive context-gathering and triage work that precedes a high-stakes human decision. AI handles the autonomous investigation and delivers rich context so the analyst can make a faster, more informed judgment, while final decision-making authority stays with the human.
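The division of labour Hewitt describes, where automation does the legwork and the analyst keeps the final call, can be sketched in a few lines. The following Python sketch is purely illustrative: the alert fields, placeholder data, and function names are hypothetical and not drawn from any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class EnrichedAlert:
    """An alert plus the context an analyst needs to decide quickly."""
    alert_id: str
    affected_assets: list = field(default_factory=list)
    related_events: list = field(default_factory=list)
    suggested_action: str = ""

def auto_triage(alert_id: str) -> EnrichedAlert:
    """Hypothetical enrichment step: the automated part of the workflow."""
    enriched = EnrichedAlert(alert_id=alert_id)
    # A real pipeline would query asset inventories, EDR telemetry, and logs;
    # here we return placeholder data to keep the sketch self-contained.
    enriched.affected_assets = ["hr-laptop-042", "file-server-03"]
    enriched.related_events = ["suspicious login", "unusual file access pattern"]
    enriched.suggested_action = "isolate hr-laptop-042"
    return enriched

def respond(alert_id: str) -> None:
    context = auto_triage(alert_id)                  # machine gathers context
    print(f"Alert {context.alert_id}")
    print(f"  Affected assets:  {context.affected_assets}")
    print(f"  Related events:   {context.related_events}")
    print(f"  Suggested action: {context.suggested_action}")
    decision = input("Approve containment? [y/N] ")  # human keeps the final call
    if decision.strip().lower() == "y":
        print("Containment approved by analyst.")
    else:
        print("Escalated for human review instead.")

if __name__ == "__main__":
    respond("ALERT-12345")
```

The design point is the boundary: the automated step can be as autonomous as governance allows, but no containment action executes without an explicit analyst decision.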
Context is key: Hewitt’s philosophy of providing technological context extends beyond individual alerts to the entire organizational structure. In multi-layered environments like critical infrastructure, siloed IT and OT teams often lack a shared understanding during an incident. Here, a "single pane of glass" approach can be the key to a cohesive response. "From a cognitive load perspective, the goal is to require less human input," Hewitt explains. "Context is key, and the value we get from these AI systems is context. If an analyst can log on and immediately see which assets are affected, they can respond in a much more targeted and appropriate way."
Making risk quantifiable: She also believes risk management should play an active, operational role. "Too often, risk registers just collect dust on a shelf," she says. "Risk management must be made operational. Using a methodology like FAIR gives stakeholders a quantifiable view, in pounds and pennies, of the financial impact of these incidents."
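For readers unfamiliar with it, FAIR (Factor Analysis of Information Risk) quantifies risk as loss event frequency multiplied by loss magnitude, typically estimated as ranges and run through a Monte Carlo simulation. The sketch below illustrates only that arithmetic; the frequency and cost ranges are invented placeholders, not benchmarks.

```python
import random

def fair_style_annual_loss(freq_range, magnitude_range, trials=10_000):
    """Monte Carlo sketch of a FAIR-style estimate:
    annualized loss = loss event frequency x loss magnitude per event."""
    losses = []
    for _ in range(trials):
        frequency = random.uniform(*freq_range)        # events per year
        magnitude = random.uniform(*magnitude_range)    # loss per event (GBP)
        losses.append(frequency * magnitude)
    losses.sort()
    return {
        "mean": sum(losses) / trials,
        "p90": losses[int(trials * 0.9)],
    }

# Invented placeholder ranges: 0.1 to 0.5 enduring incidents per year,
# each costing between 200,000 and 2,000,000 GBP.
estimate = fair_style_annual_loss((0.1, 0.5), (200_000, 2_000_000))
print(f"Mean annualized loss: £{estimate['mean']:,.0f}")
print(f"90th percentile:      £{estimate['p90']:,.0f}")
```

Expressing the output as a mean and a 90th percentile in pounds is what turns a risk register entry into a figure stakeholders can weigh against the cost of mitigation.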
Practice makes perfect: To turn abstract risks into manageable, operational processes, Hewitt recommends rehearsing them through rigorous, auditable exercises. "Human involvement in your response planning is vital. It’s time to move beyond simple tabletop exercises and conduct more robust, scenario-based resilience testing with a much wider audience to build true muscle memory across the organization."
Ultimately, Hewitt’s message asks leadership to change its perspective on burnout. "If you think about it in terms of decision making," she says, "how can we expect our staff to make effective decisions if they don't have the mental capacity to do so because they're feeling burnt out or exhausted?" Hewitt suggests leaders should stop viewing burnout as a personal, emotional problem and instead recognize it as the serious and unaddressed security risk that it is.