From Security To Scalability: AI Programs Hold Up When Governance Becomes Architecture
Teri Green, Vice President of Technology at Elevate Energy, says companies can manage AI responsibly by defining how systems operate, ensuring human oversight, and creating practical tools to drive smarter decision-making.

Key Points
The rush to adopt AI without understanding what systems touch, execute, or expose is creating security risks that can lead to data breaches and operational failures.
Teri Green, Vice President of Technology at Elevate Energy, says companies fail when they treat AI as an add-on and skip securing models before scaling.
Effective AI governance relies on clear definitions, human accountability, and frameworks like Green’s T.E.S.T. model to guide safe AI use.
If it’s not a model you can secure, it’s not a model you can scale; and most teams are skipping that step entirely.
The corporate rush to adopt AI is creating a major and avoidable security risk. The issue is a lack of clarity and governance: companies are deploying AI without knowing what these systems touch, execute, or expose, creating blind spots that hide vulnerabilities in plain sight. Actionable frameworks are rising to meet the challenge.
Teri Green, Vice President of Technology at Elevate, a not-for-profit clean energy solutions provider, is an experienced technology executive with past leadership roles at KIPP SoCal Public Schools and Lite Technology Solutions. She argues that governance starts with precise definitions and human understanding, not more tools. Her approach helps organizations reconcile human behavior, technical complexity, and regulatory requirements.
"If it’s not a model you can secure, it’s not a model you can scale; and most teams are skipping that step entirely," Green says. Securing a model also requires clearly defining the data it uses. Otherwise, gaps emerge that attackers can exploit. "I just talked to somebody who said, ‘We don’t want AI touching PII. I said, ‘PII is broad. I need you to name it. Are we talking about names, Social Security numbers?’ If you don’t have concrete definitions, teams will continue down a rabbit hole," Green explains.
The human factor: Green explains that 60% of risk comes from people and 40% from technology, adding, "People are more important than anything within the organization. We’ve veered away from what’s most important. We’re so busy focused on regulatory and compliance that we aren’t thinking about the people. Yet the people are the problem, and the people are the solution." This human factor directly influences how organizations respond to AI adoption pressures.
A CISO's vulnerabilities: Misalignment between teams and leadership is rapidly widening the gap between rapid deployment and proper governance, leaving CISOs to accept unknown exposures. "People are afraid of being obsolete. So you either move fast and accept or minimize the risk, or move slow and become obsolete. This fear is driving adoption, which is pushing risk into the environment without full awareness. That’s the most fragile position for a CISO; they can be the first ones out the door," she says.
That tension between human pressures and organizational oversight collides with technical realities. Green notes that legacy systems can amplify unknown risk if teams aren’t proactive: "When you say legacy equipment, I think about things like COBOL. You have to understand the technical constraints and have strict pipelines and containerization. If you don’t, you’re as blind as you were yesterday." Returning to fundamentals is key to understanding the potential challenges and vulnerabilities ahead.
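What a "strict pipeline" means in code is open to interpretation; one hedged reading, with hypothetical check names, is a deployment gate that blocks any service touching a legacy system until isolation and provenance checks pass:

```python
# Hypothetical deployment gate for services that integrate with legacy
# systems. Check names and criteria are illustrative, not a known standard.

REQUIRED_CHECKS = {
    "runs_in_container",       # legacy adapter is containerized, not ad hoc
    "pinned_dependencies",     # builds are reproducible
    "network_policy_defined",  # service can only reach declared endpoints
    "data_contract_reviewed",  # inputs/outputs to the legacy system are named
}

def may_deploy(passed_checks: set[str]) -> tuple[bool, set[str]]:
    """Allow deployment only when every required check has passed."""
    missing = REQUIRED_CHECKS - passed_checks
    return (not missing, missing)

ok, missing = may_deploy({"runs_in_container", "pinned_dependencies"})
if not ok:
    print("Blocked. Missing:", ", ".join(sorted(missing)))
```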
Back to basics: "A lot of teams have looked at all these new tools and forgotten the basics. I went back to the basics with a DevOps team. A year later, it’s a completely different team. Out of sight is out of mind, and that’s what’s happening in real life," Green says. New tools cannot replace a foundation of cybersecurity basics.
Confidently wrong: Green created the T.E.S.T. framework: Touch, Execute, Store, Trust. She explains that organizations must understand what AI is touching, monitor how it executes on that data, ensure storage is secure, and maintain human oversight to verify outputs. "AI can be completely wrong, but it sounds confident. That can be very dangerous. T.E.S.T. forces clarity and accountability," she adds.
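The article does not specify how T.E.S.T. is operationalized. As one possible sketch, the four questions could be captured as a per-system review record in which any unanswered dimension blocks sign-off; the field names and pass criteria here are this example's interpretation, not a published specification:

```python
from dataclasses import dataclass

# Sketch of Green's T.E.S.T. questions as a review record. The fields
# and the "blank answer fails" rule are assumptions made for this example.

@dataclass
class TestReview:
    system: str
    touch: str    # what data and systems the AI touches
    execute: str  # what actions it can take on that data
    store: str    # where outputs and intermediate data live, and how secured
    trust: str    # which human verifies outputs before they are acted on

    def gaps(self) -> list[str]:
        """Any blank answer is an ungoverned dimension."""
        return [name for name, value in vars(self).items()
                if name != "system" and not value.strip()]

review = TestReview(
    system="billing-assistant",
    touch="customer usage history, tariff tables",
    execute="drafts adjustment recommendations; cannot write to billing",
    store="outputs kept 30 days in an encrypted review queue",
    trust="",  # no named reviewer yet -> fails the Trust check
)
print("Blocked on:", review.gaps())  # -> Blocked on: ['trust']
```

The point of the Trust field is exactly the "confidently wrong" problem: a named human must stand behind the output before anyone acts on it.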
The framework’s output is designed to guide decision-making at every level of the business, translating risk into three dashboards: board-level summaries, technical engineering views, and executive-focused overviews, so that clarity drives action, not just awareness. "It gives leaders the information they need so they can drive decisions, not just sit back and watch," says Green.
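One way to picture those three views, purely as an illustration since the article names the audiences but not the dashboard contents, is a single risk record rendered at three levels of detail:

```python
# Sketch: one risk record, three audience-specific renderings.
# The fields and wording are illustrative assumptions.

risk = {
    "system": "billing-assistant",
    "finding": "model output used without human review",
    "severity": "high",
    "owner": "data-platform team",
    "control": "add named reviewer before outputs reach billing",
}

def board_view(r: dict) -> str:       # trend-level summary for the board
    return f"{r['severity'].upper()} risk on {r['system']}"

def executive_view(r: dict) -> str:   # decision-oriented: what to approve
    return f"{r['system']}: {r['finding']} -> {r['control']} ({r['owner']})"

def engineering_view(r: dict) -> dict:  # full detail for the team doing the work
    return r

print(board_view(risk))
print(executive_view(risk))
```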
Companies must re-center on the fundamentals, establishing clear data definitions, reinforcing human accountability, and adhering to foundational security principles before layering on additional AI capabilities. Doing so ensures innovation can be scaled responsibly, preserving control, accountability, and operational integrity. "Start with the people. Start with clarity. Then build your tools on a foundation that makes sense," Green concludes.