
AI Shifts Healthcare Security From Systems to Outcomes, Says Availity CISO

The Security Digest - News Team
Published December 23, 2025

Mike Green, CISO at Availity, explains why healthcare security must protect patient outcomes as AI use grows.


Key Points

  • As healthcare teams use more AI, security leaders are shifting from protecting IT systems to protecting patient outcomes.

  • Mike Green, CISO at Availity, says AI makes attackers more dynamic, but it can also help lean security teams find and fix issues faster.

  • Green's framework sets clear guardrails that keep humans responsible, avoids fully automated decisions, and prioritizes fast recovery so care can continue after an incident.

Anybody can get breached. Just trying to keep from getting breached isn’t enough.

Mike Green

CISO
Availity

AI adoption in healthcare is forcing security leaders to shift their focus. Now, risk management is evolving from protecting systems to protecting outcomes. Subtle AI model errors, hallucinations, and automation shortcuts can quietly degrade diagnostic accuracy and care decisions without triggering traditional security alarms, creating tangible patient safety issues beyond abstract debates about responsible AI.

Mike Green, the Chief Information Security Officer at Availity, the nation's largest health information network, has been navigating these challenges across a two-decade career in information technology and security. Today, his perspective is informed by experience in senior leadership roles at major organizations, including Evernorth Health Services, Express Scripts, Mars, and Deloitte.

AI is a paradox for modern security teams, according to Green. "Anybody can get breached. Just trying to keep from getting breached isn't enough," Green says. "You must be able to recover really quickly." Today, the technology is both a weapon wielded against organizations with increasing sophistication and a powerful shield for defense.

  • A shield and a sword: AI has sharpened the attacker’s sword, Green continues, with AI-perfected social engineering making phishing attacks harder than ever to spot. "No longer can a user identify a phishing email by a poorly placed comma, misspelled word, or grammatical errors. AI papers over that quite nicely." But AI is also emerging as a democratizing force in defense—a much-needed tool, given the foundational constraints that challenge the industry.

Here, Green identifies a few persistent issues: skills gaps, as overworked security teams lack time to reskill for new threats; severe budget limitations, especially for smaller healthcare organizations; and what he calls "a really complex ecosystem" of third parties. It's a world where "people who aren't in your house have access to things that belong in your house," he continues. The result is a massive attack surface that is increasingly difficult to control.

  • The democratized advantage: For organizations facing these realities, AI can offer a tactical advantage. "AI is now bringing the basic pen test into the realm of everyone," Green says. "You can run it continuously to identify a misconfiguration much more quickly and economically."

Green's answer to the hype is pragmatic governance. Leveraging national standards such as NIST's AI Risk Management Framework and CISA's Secure by Design principles, he avoids the liability of unchecked automation by using AI to augment human expertise rather than replace it. "We've leaned much more into using AI to enhance," he says. "Our goal is to inform a medical professional rather than have AI make the discrete decision on its own."

Drawing a direct parallel to the "big data" rush a decade ago, Green explains that this is not the first time the industry has been drawn to technology hype. Then, a similar race to unlock enterprise insights without proper guardrails led to predictable pitfalls. But the rush also introduced a subtle danger: the risk that authorized users would draw flawed conclusions from misunderstood data. "If you don't know what the data is and you don't know what you're looking at, or you don't know why you're looking at it, you can come to some really specious conclusions."

  • A bit of a boondoggle: Green's realism also extends to his skepticism of vendor hype. Whether it's the promise of a "single pane of glass" or a fully autonomous security operations center, his philosophy is to ground technology decisions in real-world value. "I love the idea of getting to a single pane of glass, but it can be a bit of a boondoggle if you don't know your environment and what you're trying to do with it."

  • Impress the front lines: For instance, trying to view patient-critical IoT devices through the same lens as employee cell phones is often a mistake, he explains. The risk models are fundamentally different. "My analysts would have to be excited about it. The reality is, it's got to impress the folks who are actually on the front lines. You have something valuable when a tool enhances their role in a way that allows them to focus on secondary or tertiary level items, not just operate as an entry-level analyst," he says, emphasizing that team buy-in is the ultimate metric.

The Change Healthcare breach highlighted the limitations of a security model focused solely on prevention. Green champions a "rapid recovery" model, one that can significantly cut the time it takes to obtain a third-party attestation. He frames this metric not as a technical benchmark but as a defining measure of business continuity: without that attestation, partners will not reconnect, and patient care will stall.

Ultimately, providers are still recovering financially from that outage, and basic infrastructure remains a target. Building a resilient operational model is no longer just an IT objective, Green concludes. It is a fundamental requirement for patient safety in an ecosystem that can no longer be perfectly secured. "We're simply too early in the AI space. It is too dangerous, with too many hallucinations, to allow AI to make any discrete decisions."