Security Leaders Guide AI Adoption With Smarter Controls and Real-World Deployment Models
Olha Kolomoiets, Delivery Manager and VP of AI Engineering at Apriorit, explains why responsible AI deployment depends on transparency, compliance discipline, and knowing where human judgment must stay in control.

Key Points
The AI conversation in cybersecurity is moving past hype into harder operational questions about compliance, sensitive data exposure, and secure deployment of models in production.
Olha Kolomoiets, Delivery Manager and VP of AI Engineering at Apriorit, identifies two dangerous trends pulling organizations in opposite directions: moving too slowly out of fear, or rushing into third-party tools they don't fully understand.
Teams that get the most value from AI focus on well-defined processes with historical data, keep decision-making human, and combine automation with experienced practitioners.
SOC teams are drowning in alerts and logs. They don’t need more data. They need smarter preprocessing and automation to make sense of it all.
A year ago, the AI conversation at the RSA Conference was about presence. Every booth, every panel, every pitch deck featured AI prominently. In 2026, that conversation is narrowing. The question is no longer whether to adopt AI but how to deploy it without creating new attack surfaces, leaking sensitive data to third parties, or breaking the compliance frameworks that organizations are still figuring out how to follow.
Olha Kolomoiets is Delivery Manager and VP of AI Engineering and Integration at Apriorit, a software engineering firm specializing in low-level development, cybersecurity R&D, and AI integration across kernel, network, and systems-level projects. Olha leads AI project delivery and oversees how the company adopts and deploys AI tooling responsibly, including building custom code review automation for sensitive development environments. Her priority is ensuring AI systems hold up under real-world security and compliance pressure.
“The market is shifting from just adopting AI to adopting it responsibly,” Kolomoiets says. “How it affects outcomes, revenue, and security really matters.” That shift is creating a central tension she sees play out across her customer base. Organizations are getting pulled toward two extremes, both risky. Some are paralyzed by the complexity of compliance requirements and avoid new tooling entirely. Others rush to deploy AI solutions from third-party vendors they barely understand, sometimes exposing sensitive telemetry through cloud processing pipelines in the process.
The cost of inaction: “Some companies are just afraid. They fear compliance, they fear the difficulty, and they don’t know how to implement these new tools,” Olha says. “That’s a vulnerability in itself, because attackers are not waiting. They’re scaling faster and more intelligently using AI.”
The cost of recklessness: The opposite extreme is just as dangerous. “Other companies rush into AI adoption and deploy solutions they don’t fully understand. They trust third-party tools, they break compliance just to release something quickly,” she says. In cybersecurity environments, where endpoint telemetry routinely contains sensitive data, sending that information to external cloud services without proper controls is a compliance failure waiting to happen.
Olha says the path through both extremes is transparency supported by deep expertise. Organizations need both visibility and the right specialists to properly design, configure, and validate AI systems. They must be explicit about where their AI models are deployed: in which country, through which service, and whether on-premises or in the cloud. They need to document what data trained those models and how those models are tested, including penetration testing of AI components, which are often deployed without adequate security review.
Education and enforcement together: The human side of the equation mirrors the same split. Some employees refuse to engage with new tools; others use anything they can find without organizational oversight. Olha says companies need both education and technical guardrails. “DLP solutions are not ready to protect against data sharing across new chatbots and services, because technologies appear so fast,” she explains. “You need time to prepare new features to technically block dangerous behavior. So both are important: educate people how to act, and put limits in place from the technical side.”
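One way to picture the "limits from the technical side" Olha describes is an egress allowlist: data may only flow to AI endpoints the organization has approved. The sketch below is a minimal illustration, not any vendor's DLP implementation; the host names are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only company-approved AI endpoints may receive data.
# In a real deployment this list would be managed centrally and enforced at
# the proxy or DLP layer, not in application code.
APPROVED_AI_HOSTS = {"ai.internal.example.com"}

def egress_allowed(url: str) -> bool:
    """Return True only if the request targets an approved AI host."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS
```

An allowlist rather than a blocklist matches the problem she raises: new chatbots and services appear faster than denylists can be updated, so defaulting to "deny unless approved" fails safe.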
Smarter alerts, not more alerts: For SOC teams, the priority is clear. Analysts are buried in telemetry and alert volume, and adding more detection layers only compounds the problem. Olha sees the focus shifting toward automating sub-processes: smarter preprocessing, better filtering, and contextual triage that helps analysts understand what is happening faster.
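The preprocessing and contextual triage she describes can be sketched in a few lines: collapse repeated alerts into groups, rank the groups, and drop low-volume noise. This is a hypothetical illustration with an invented alert schema (`host`, `rule`, `severity`), not a description of any specific SOC product.

```python
from collections import defaultdict

def triage(alerts, min_count=3, severity_floor=8):
    """Collapse repeated alerts by (host, rule) and rank the groups.

    Keeps a group only if it is high-volume (>= min_count) or contains at
    least one high-severity alert (>= severity_floor), so analysts see a
    short ranked list instead of a raw alert stream.
    """
    groups = defaultdict(list)
    for alert in alerts:
        groups[(alert["host"], alert["rule"])].append(alert)

    summaries = []
    for (host, rule), items in groups.items():
        summaries.append({
            "host": host,
            "rule": rule,
            "count": len(items),
            "max_severity": max(a["severity"] for a in items),
        })
    # Highest severity first, then highest volume.
    summaries.sort(key=lambda s: (s["max_severity"], s["count"]), reverse=True)
    return [s for s in summaries
            if s["count"] >= min_count or s["max_severity"] >= severity_floor]
```

The point of the sketch is the shape of the pipeline, not the thresholds: deduplicate first, add context (counts, peak severity), and only then put anything in front of a human.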
Her own team at Apriorit offers a working example of what controlled adoption looks like. The company develops at the kernel and network level in C, C++, and Rust, working with code that constitutes intellectual property. Sensitive workloads run on-premises. Where AI-assisted tooling is used, such as automated code review for vulnerability and standards checking, the team built its own solution rather than trusting a generic third-party tool. “We use a combination of the deep experience of our senior developers and new tools,” she says. “Sometimes the automated reviewer catches vulnerabilities humans miss. But the reverse is also true. It’s about control and using tools in a smart way.”
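To make the idea of automated standards checking concrete, here is a deliberately simple sketch: a reviewer pass that flags calls to C functions commonly banned in secure coding standards. It is a hypothetical illustration of the technique, not Apriorit's tool, and real reviewers work on parsed code rather than regex matches over text.

```python
import re

# Hypothetical denylist of C functions often flagged in security review,
# with the replacement a reviewer would suggest.
UNSAFE_CALLS = {
    "strcpy": "use strncpy or strlcpy with an explicit buffer size",
    "gets": "use fgets",
    "sprintf": "use snprintf",
}

def review(source: str):
    """Return (line_number, function, suggestion) for each unsafe call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for func, fix in UNSAFE_CALLS.items():
            if re.search(rf"\b{func}\s*\(", line):
                findings.append((lineno, func, fix))
    return findings
```

A pass like this is cheap and deterministic; the division of labor Olha describes puts such automation on the routine checks and keeps senior developers on the judgment calls the automation cannot make.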
The broader takeaway is that AI works best when it meets a clear operational foundation. Olha says the strongest candidates for AI integration are processes that are already defined, where historical data exists and repetitive work consumes time. Where those conditions are missing, AI amplifies disorder rather than resolving it. “The best parts to integrate AI are where processes are defined, historical data exists, and routine tasks can be automated,” she says. “AI will not fix your processes. It will not structure chaos. Automate the daily routine that takes time, and leave the decision-making to yourself.”
