RL Blog

Topics

All Blog PostsAppSec & Supply Chain SecurityDev & DevSecOpsProducts & TechnologySecurity OperationsThreat Research

Follow us

XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBluesky

Subscribe

Get the best of RL Blog delivered to your in-box weekly. Stay up to date on key trends, analysis and best practices across threat intelligence and software supply chain security.

Spectra Assure Free Trial

Get your 14-day free trial of Spectra Assure for Software Supply Chain Security

Get Free TrialMore about Spectra Assure Free Trial
Blog
Events
About Us
Webinars
In the News
Careers
Demo Videos
Cybersecurity Glossary
Contact Us
reversinglabsReversingLabs: Home
Privacy PolicyCookiesImpressum
All rights reserved ReversingLabs © 2026
XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBlueskyRSSRSS
Back to Top
ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why
Skip to main content
Contact UsSupportLoginBlogCommunity
reversinglabs
Security OperationsJanuary 15, 2026

Why governance is key to safe AI adoption

A new CSA report stresses getting out in front of AI risk — and why it matters for SecOps.

John P. Mello Jr.
John P. Mello Jr., Freelance technology writer.John P. Mello Jr.
FacebookFacebookXX / TwitterLinkedInLinkedInblueskyBlueskyEmail Us
AI adoption guardrails

A new report on AI security from the Cloud Security Alliance (CSA) finds security leaders working to secure AI systems even as they begin using AI to strengthen security itself. Some are doing better than others, and governance makes the difference, the CSA says.

“The market is evolving at remarkable speed, and governance is emerging as the foundation that determines whether adoption advances responsibly or outpaces an organization’s ability to manage it,” the CSA writes in its report, “The State of AI Security and Governance.” The report, based on a survey of 300 IT and security professionals from organizations of a variety of sizes and locations, was sponsored by Google Cloud.

Hillary Baron, the CSA’s assistant vice president for research and lead author of the report, said governance is what turns AI from “experimentation” into a repeatable, scalable, and auditable deployment. “In the survey, governance maturity is the clearest predictor of readiness,” she said. Those organizations that have formal governance are twice as likely to adopt agentic AI as those that don’t, three times more likely to train staff, and twice as confident about protecting their AI systems.

In short, [governance] is associated with successful AI adoption.

Hillary Baron

Here's why you need to know about getting ahead of AI risk with effective governance.

See webinar: Modern TPRM: Strategies for Securely Onboarding Software

Make sure governance isn’t a bottleneck

AI has made governance more important than ever, said Stephanie Whitnable, a field data officer for DataBee, a Comcast company. It’s not just about compliance, she said. “It’s about ensuring trustworthy AI outcomes.”

The integrity of AI models depends on accurate, complete, and ethically sourced data. Governance now has to tackle bias, fairness, transparency, and emerging risks like model drift and hallucinations, making it a strategic pillar of AI adoption.

Stephanie Whitnable

Modern governance is automated and integrated, she said, using policy as code to enforce rules in real time, unified visibility to reduce silos, security-first governance to protect data across hybrid environments, and AI-assisted oversight to free teams to focus on higher-value decisions.

Whitnable said organizations needn’t fear that good governance will stifle innovation. “Far from being a bottleneck, governance enables innovation with confidence,” she said. “In the era of AI, it’s about safeguarding not just data but the integrity of decisions that shape the future.”

Iftach Ian Amit, founder and CEO of Gomboc.ai, said that having effective AI governance isn’t a matter of slowing down adoption. Instead, it helps to make AI safe and useful.

It’s about ensuring AI behavior is predictable, auditable, and aligned with real-world systems, which is ultimately what allows organizations to use AI safely and confidently.

Iftach Ian Amit

Ryan McCurdy, vice president of marketing at Liquibase, said AI can fail when nobody trusts it in production. “Governance is how you earn that trust,” he said. “It answers the questions executives and security teams actually care about: what data was used, who approved it, what changed, and how we prove it is working safely over time.” In fact, AI that lacks effective governance should not be trusted.

Here’s the part a lot of teams miss: AI multiplies the cost of bad change. If the underlying data or schema shifts without control, you do not just get a broken dashboard. You get confident answers that are wrong, and they spread fast.

Ryan McCurdy

Go all in on AI governance

Governance underlies the thoughtful deployment and use of AI, and because all business areas need to understand the potential risks and impacts, they all should take part in building the governance framework, said Karen Walsh, CEO and founder of Allegro Solutions. 

A governance framework includes the technical users like the security team and the business leadership like the senior management team or board of directors.

Karen Walsh

Jeanette Manfra, senior director for global risk and compliance at Google Cloud, explained in a company blog post that many organizations still don’t have structured AI governance — and they don’t know how to get there. 

To implement AI compliance and risk management properly, the legal, data governance, technical development, and cybersecurity teams should be brought together. Organizations need a structured, comprehensive approach.

Jeanette Manfra

Why governance matters to AI in security operations

The CSA report also found that security teams have become early adopters of AI. Over 90% of the survey’s respondents are testing or planning to use AI for threat detection, red teaming, and access control, the CSA notes. “With only 10% reporting no plans to invest, this represents a major inflection point: AI is not just a future concept for cybersecurity, it is becoming a near-term operational reality,” it added.

Security teams are sold on the idea that AI can provide faster detection, reduced analyst workload, and more scalable response, said the CSA’s Baron. 

And unlike past technology cycles, they don’t have to justify why they want to use AI. Leadership already understands the value and is actively encouraging adoption.

Hillary Baron

Jack E. Gold, founder and principal analyst at J.Gold Associates, said security teams are overwhelmed by false positives — and AI excels at detecting patterns. 

AI has the promise of sorting through a lot of those alerts and saying, ‘These are the ones you need to be thinking about.’

Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions, agrees that security teams are under relentless pressure, with too many alerts, too much data, and not enough people. “AI offers immediate operational leverage — triage, correlation, pattern recognition, and speed,” he said.

Security teams also understand adversarial behavior better than most functions, so they instinctively see both the power and the risk of AI. In many cases, they’re adopting AI not out of enthusiasm, but necessity.

Rosario Mastrogiacomo

Shift gears to focus on modern AI threats

The CSA survey cautions that organizations are prioritizing well-understood risks over newer, AI-specific threats such as model drift, prompt injection, and model theft, which can quietly undermine reliability, integrity, and organizational control. Such risks frequently are out of sight until systems are deployed at scale, the CSA’s Baron said. 

Data exposure and compliance are familiar, well-understood risks, so it’s natural that organizations focus there first. But model risks are newer, and addressing them is less clear.

Hillary Baron

Randolph Barr, CISO of Cequence Security, said traditional weaknesses are indeed responsible for the majority of AI-related incidents, but about one-third are AI-native, including model and data poisoning, prompt injection, and autonomous agents that can chain together API calls while acting with minimal human oversight.

These emerging risks reflect the reality that AI systems are dynamic, self-learning, and interconnected in ways traditional applications never were. When paired with the rapid speed of development, the outcome is a growing attack surface that grows faster than most security programs can respond.

Randolph Barr

The CSA concluded in its report that all the findings point to a single message: Governance maturity stands out as the strongest predictor of readiness and responsible innovation. “Only a minority of organizations report comprehensive AI security governance today,” it says, “but where unified frameworks are in place, outcomes consistently improve — earlier experimentation, higher board awareness, greater confidence in securing AI systems, and more robust staff training.”

Organizations must shift from fragmented policies to a unified governance model that spans all teams involved in AI.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the the webinar to discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:Security Operations

More Blog Posts

ReversingLabs: Home
Solutions
Secure Software OnboardingSecure Build & ReleaseProtect Virtual MachinesIntegrate Safe Open SourceGo Beyond the SBOM
Increase Email Threat ResilienceDetect Malware in File Shares & StorageAdvanced Malware Analysis SuiteICAP Enabled Solutions
Scalable File AnalysisHigh-Fidelity Threat IntelligenceCurated Ransomware FeedAutomate Malware Analysis Workflows
Products & Technology
Spectra Assure®Software Supply Chain SecuritySpectra DetectHigh-Speed, High-Volume, Large File AnalysisSpectra AnalyzeIn-Depth Malware Analysis & Hunting for the SOCSpectra IntelligenceAuthoritative Reputation Data & Intelligence
Spectra CoreIntegrations
Industry
Energy & UtilitiesFinanceHealthcareHigh TechPublic Sector
Partners
Become a PartnerValue-Added PartnersTechnology PartnersMarketplacesOEM Partners
Alliances
Resources
BlogContent LibraryCybersecurity GlossaryConversingLabs PodcastEvents & WebinarsLearning with ReversingLabsWeekly Insights Newsletter
Customer StoriesDemo VideosDocumentationOpenSource YARA Rules
Company
About UsLeadershipCareersSeries B Investment
EventsRL at RSAC
Press ReleasesIn the News
Pricing
Software Supply Chain SecurityMalware Analysis and Threat Hunting
Request a demo
Menu

Crypto group ushers in post-quantum security

Here’s a look at the Ethereum Foundation’s new PQC security effort — and why you need to modernize your SecOps.

Learn More about Crypto group ushers in post-quantum security
Crypto group ushers in post-quantum security

Cybercrime-as-a-service forces a security rethink

With AI-powered tools readily available, sophisticated attacks no longer require sophisticated attackers.

Learn More about Cybercrime-as-a-service forces a security rethink
Cybercrime-as-a-service forces a security rethink

Adversarial AI is on the rise: What you need to know

Researchers explain that as threat actors move to AI-enabled malware in active operations, existing defenses will fail.

Learn More about Adversarial AI is on the rise: What you need to know
Adversarial AI is on the rise: What you need to know
AI technical debt

AI technical debt: What it is — and why it matters

AI platforms exacerbate existing security risks. Here’s what you need to know to stay out of technical debt. 

Learn More about AI technical debt: What it is — and why it matters
AI technical debt: What it is — and why it matters
Post-quantum security
Cybercrime-as-a-service
Adversarial AI rise