This blog post is the second part of a three-part series discussing The Rise of the AI-enabled, Distributed SOC. The series is based on the roundtable discussion between Brian Cotton (SVP of Global Advisory Services at Frost & Sullivan), Lucas Ferreyra (Senior Cybersecurity Industry Analyst at Frost & Sullivan), Dean De Beer (Cofounder & CTO at Command Zero), Alfred Huger (Cofounder & CPO at Command Zero) and Erdem Menges (VP of Product Marketing at Command Zero). You can watch the full recording here. (video, 34 minutes)
You can read Part I: The Inflection Point – Why Traditional SOCs Must Evolve here.
“AI is certainly making a significant impact in the way we think about cybersecurity operations, but to truly evolve those cybersecurity operations, organizations need to trust that these AI agents with more autonomy are doing what they’re supposed to be doing.” – Lucas Ferreyra, Frost & Sullivan
The evolution toward AI-powered Security Operations Centers represents a fundamental shift in cybersecurity decision-making, with trust emerging as the critical factor that determines whether implementations succeed or fail.
The Trust Imperative and Transparency for the Evolving SOC
“From the perspective of a SOC, trust has always been and continues to be a substantial issue, particularly with human analysts, because the quality of your team is widely varied. What you’re not looking for in any security response or investigation is a wide variety of outcomes in terms of skill. You want them to be highly consistent. They don’t have that today.” – Alfred Huger, Command Zero
The core challenge facing security operations centers involves a critical trust equation. While trusting expert cybersecurity analysts comes naturally through experience and proven expertise, extending similar confidence to AI agents requires overcoming psychological and practical barriers. Organizations grapple with AI systems that might recommend quarantining half their endpoints or halting operational technology—decisions with massive business implications.
Everyone has become familiar with AI mistakes and potential hallucinations, creating natural skepticism about autonomous decision-making in high-stakes environments. Looking up information represents manageable risk, but when AI assistants want to take disruptive actions across operational infrastructure, the trust threshold increases exponentially.
The solution lies in radical transparency becoming the most essential quality for AI deployment in security operations. AI systems must provide complete visibility into their reasoning processes, allowing analysts to understand not just recommendations, but the underlying logic and data analysis that drove those conclusions. This approach transforms opaque “black box” systems into transparent “glass box” operations that build confidence through comprehension.
“Trust underpins the SOC today writ large… models are generally a little more reliable in many cases than some analysts. It’s not to say that you can replace them because I don’t think you can.” – Alfred Huger, Command Zero
Practical experience reinforces this trust-building approach. While models demonstrate greater reliability than some human analysts in specific scenarios, the goal involves augmentation rather than replacement, leveraging AI to enhance human capabilities rather than substitute for human judgment.
Redefining Professional Roles Through AI Augmentation
“A massive part of cybersecurity is managing human risk for instance. And I feel like there will always be an aspect of connecting with other humans within cybersecurity, even if we reach the point of AGI and completely dehumanizing certain aspects of security. I don’t think that we should really go there. So there will always be a place for humans in that.” – Lucas Ferreyra, Frost & Sullivan
Industry leaders consistently reject replacement narratives in favor of human empowerment strategies. The most successful AI implementations focus on transforming existing team members into force multipliers, enabling personnel to tackle challenges previously beyond their capabilities. This approach takes people off the bench and puts them into active roles, allowing them to punch above their weight in complex problem-solving scenarios.
This transformation addresses a persistent challenge in security operations: quality consistency across analyst teams. Traditional SOCs struggle with wide variations in skill levels and investigation outcomes, seeking highly consistent results rather than variable performance quality. AI-augmented operations provide more reliable frameworks while accelerating professional development for junior analysts.
The evolution redefines roles rather than eliminating positions. Analysts transition from reactive ticket processors to Engineering and Operation Specialists, becoming investigation directors who manage complex security incidents, refine AI technologies for their specific environments, and focus on sophisticated, organization-specific threats requiring human intuition and contextual understanding.
“Analysts’ roles are becoming Engineering and Operation Specialists… investigation directors, where they manage investigations and they help refine the technologies specific to their environment.” – Dean De Beer, Command Zero
Overcoming Cultural and Technical Barriers
“We’ve got older individuals of my vintage in these SOCs, and they’ve developed their ways of working… A SOC by nature is going to be risk averse.” – Brian Cotton, Frost & Sullivan
Implementation success requires addressing generational and cultural challenges within security organizations. Seasoned professionals have developed established workflows and trusted tool chains, creating natural resistance to transformative technologies that challenge familiar operational approaches. These experienced team members must learn to trust new systems while potentially feeling like they’re losing control over their established work processes.
Security operations centers maintain inherently risk-averse cultures where new tools with unknown characteristics represent potential threats rather than opportunities. This cultural dynamic requires careful management, emphasizing gradual adoption and demonstrated value rather than wholesale transformation.
“Organizations have always been kind of pragmatic about the tools they’re using and about the features within those tools they choose to use or not use. This should continue to be true for most of the integration with AI. You can’t just throw AI at a problem and expect it to go away.” – Lucas Ferreyra, Frost & Sullivan
Technical barriers encompass concerns about model drift, performance degradation over time, and the static nature of some AI systems that cannot adapt to evolving organizational needs. Organizations often push back against over-automation due to perceived lack of control, leading to reluctance in adopting technologies that might fundamentally change operational approaches.
Addressing these challenges requires systematic approaches that build technical proficiency while providing visibility into AI system operations, limitations, and human oversight requirements. The user experience revolution extends beyond human-AI interfaces to encompass agent-to-agent communications in multi-agent systems, requiring intuitive design that helps analysts trust and interrogate AI decision-making processes.
“I firmly believe in showing your homework for tasks done by AI systems. Allow for those communications to be presented, to be analyzed… That’s the difference between the black box and the glass box.” – Dean De Beer, Command Zero
Part III: The Six-Month Imperative – Practical Implementation and Strategic Vision explores current success stories and offers practical guidance on how every organization can adopt AI for the SOC in a thoughtful way.


