A new report on AI security from the Cloud Security Alliance (CSA) finds security leaders working to secure AI systems even as they begin using AI to strengthen security itself. Some are doing better than others, and governance makes the difference, the CSA says.
“The market is evolving at remarkable speed, and governance is emerging as the foundation that determines whether adoption advances responsibly or outpaces an organization’s ability to manage it,” the CSA writes in its report, “The State of AI Security and Governance.” The report, based on a survey of 300 IT and security professionals from organizations of varying sizes and locations, was sponsored by Google Cloud.
Hillary Baron, the CSA’s assistant vice president for research and lead author of the report, said governance is what turns AI from “experimentation” into a repeatable, scalable, and auditable deployment. “In the survey, governance maturity is the clearest predictor of readiness,” she said. Organizations with formal governance are twice as likely to adopt agentic AI as those without, three times more likely to train staff, and twice as confident in their ability to protect their AI systems.
Hillary Baron: “In short, [governance] is associated with successful AI adoption.”
Here's why you need to know about getting ahead of AI risk with effective governance.
See webinar: Modern TPRM: Strategies for Securely Onboarding Software
AI has made governance more important than ever, said Stephanie Whitnable, a field data officer for DataBee. It’s not just about compliance, she said. “It’s about ensuring trustworthy AI outcomes.”
Stephanie Whitnable: “The integrity of AI models depends on accurate, complete, and ethically sourced data. Governance now has to tackle bias, fairness, transparency, and emerging risks like model drift and hallucinations, making it a strategic pillar of AI adoption.”
Modern governance is automated and integrated, she said, using policy as code to enforce rules in real time, unified visibility to reduce silos, security-first governance to protect data across hybrid environments, and AI-assisted oversight to free teams to focus on higher-value decisions.
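To make the idea of policy as code concrete, here is a minimal sketch of how a governance rule such as “training data must come from an approved source, carry a data-owner sign-off, and be free of PII” might be expressed and enforced automatically. The field names, sources, and enforcement hook are illustrative assumptions, not a description of DataBee’s or anyone else’s product.

```python
# Minimal policy-as-code sketch: a governance rule expressed as data and
# code, enforced automatically before a dataset is admitted to AI training.
# All field names, sources, and checks are illustrative, not a real product API.

from dataclasses import dataclass

@dataclass
class DatasetMetadata:
    source: str           # where the data came from
    owner_approved: bool  # data-owner sign-off recorded upstream
    contains_pii: bool    # flagged by a (hypothetical) upstream scanner

# The policy itself lives in version control, reviewed like any other change.
APPROVED_SOURCES = {"internal-warehouse", "licensed-vendor"}

def check_policy(meta: DatasetMetadata) -> list[str]:
    """Return a list of violations; an empty list means the dataset passes."""
    violations = []
    if meta.source not in APPROVED_SOURCES:
        violations.append(f"unapproved source: {meta.source}")
    if not meta.owner_approved:
        violations.append("missing data-owner approval")
    if meta.contains_pii:
        violations.append("PII must be masked or removed before training")
    return violations

if __name__ == "__main__":
    candidate = DatasetMetadata(source="web-scrape",
                                owner_approved=False,
                                contains_pii=True)
    problems = check_policy(candidate)
    if problems:
        # In a real pipeline this would fail the CI job or block the deployment.
        raise SystemExit("policy violations: " + "; ".join(problems))
```

Because the rules are ordinary code under version control, every change to the policy is itself reviewed, logged, and auditable, which is the point of the pattern.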
Whitnable said organizations needn’t fear that good governance will stifle innovation. “Far from being a bottleneck, governance enables innovation with confidence,” she said. “In the era of AI, it’s about safeguarding not just data but the integrity of decisions that shape the future.”
Iftach Ian Amit, founder and CEO of Gomboc.ai, said that having effective AI governance isn’t a matter of slowing down adoption. Instead, it helps to make AI safe and useful.
Iftach Ian Amit: “It’s about ensuring AI behavior is predictable, auditable, and aligned with real-world systems, which is ultimately what allows organizations to use AI safely and confidently.”
Ryan McCurdy, vice president of marketing at Liquibase, said AI can fail when nobody trusts it in production. “Governance is how you earn that trust,” he said. “It answers the questions executives and security teams actually care about: what data was used, who approved it, what changed, and how we prove it is working safely over time.” Put another way, AI that lacks effective governance has not earned that trust and should not receive it.
Ryan McCurdy: “Here’s the part a lot of teams miss: AI multiplies the cost of bad change. If the underlying data or schema shifts without control, you do not just get a broken dashboard. You get confident answers that are wrong, and they spread fast.”
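McCurdy’s warning about uncontrolled change lends itself to a short illustration. The sketch below, a generic pattern rather than anything Liquibase-specific, checks incoming data against an expected, version-controlled schema before an AI pipeline consumes it, so a silent upstream change fails loudly instead of producing confident answers that are wrong. The column names and types are hypothetical.

```python
# Sketch: fail fast when the data schema drifts from what the AI pipeline
# expects. The expected schema is version-controlled and reviewed, so any
# change to it is a deliberate, approved act. Column names are hypothetical.

EXPECTED_SCHEMA = {
    "customer_id": "int64",
    "signup_date": "datetime64[ns]",
    "lifetime_value": "float64",
}

def validate_schema(actual: dict[str, str]) -> None:
    """Compare an actual column -> dtype mapping against the expected schema."""
    missing = EXPECTED_SCHEMA.keys() - actual.keys()
    unexpected = actual.keys() - EXPECTED_SCHEMA.keys()
    retyped = {col for col in EXPECTED_SCHEMA.keys() & actual.keys()
               if EXPECTED_SCHEMA[col] != actual[col]}
    if missing or unexpected or retyped:
        raise ValueError(
            f"schema drift: missing={sorted(missing)}, "
            f"unexpected={sorted(unexpected)}, retyped={sorted(retyped)}"
        )

try:
    validate_schema({
        "customer_id": "int64",
        "signup_dt": "datetime64[ns]",  # renamed upstream without review
        "lifetime_value": "float64",
    })
except ValueError as err:
    print(err)  # surfaces in CI or monitoring, before the model sees the data
```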
Governance underlies the thoughtful deployment and use of AI, and because all business areas need to understand the potential risks and impacts, they all should take part in building the governance framework, said Karen Walsh, CEO and founder of Allegro Solutions.
Karen Walsh: “A governance framework includes the technical users like the security team and the business leadership like the senior management team or board of directors.”
Jeanette Manfra, senior director for global risk and compliance at Google Cloud, explained in a company blog post that many organizations still don’t have structured AI governance — and they don’t know how to get there.
Jeanette Manfra: “To implement AI compliance and risk management properly, the legal, data governance, technical development, and cybersecurity teams should be brought together. Organizations need a structured, comprehensive approach.”
The CSA report also found that security teams have become early adopters of AI. Over 90% of the survey’s respondents are testing or planning to use AI for threat detection, red teaming, and access control, the CSA notes. “With only 10% reporting no plans to invest, this represents a major inflection point: AI is not just a future concept for cybersecurity, it is becoming a near-term operational reality,” it added.
Security teams are sold on the idea that AI can provide faster detection, reduced analyst workload, and more scalable response, said the CSA’s Baron.
Hillary Baron: “And unlike past technology cycles, they don’t have to justify why they want to use AI. Leadership already understands the value and is actively encouraging adoption.”
Jack E. Gold, founder and principal analyst at J.Gold Associates, said security teams are overwhelmed by false positives — and AI excels at detecting patterns.
Jack E. Gold: “AI has the promise of sorting through a lot of those alerts and saying, ‘These are the ones you need to be thinking about.’”
Rosario Mastrogiacomo, chief strategy officer at Sphere Technology Solutions, agrees that security teams are under relentless pressure, with too many alerts, too much data, and not enough people. “AI offers immediate operational leverage — triage, correlation, pattern recognition, and speed,” he said.
Rosario Mastrogiacomo: “Security teams also understand adversarial behavior better than most functions, so they instinctively see both the power and the risk of AI. In many cases, they’re adopting AI not out of enthusiasm, but necessity.”
The CSA report cautions that organizations are prioritizing well-understood risks over newer, AI-specific threats such as model drift, prompt injection, and model theft, which can quietly undermine reliability, integrity, and organizational control. Such risks are frequently out of sight until systems are deployed at scale, the CSA’s Baron said.
Hillary Baron: “Data exposure and compliance are familiar, well-understood risks, so it’s natural that organizations focus there first. But model risks are newer, and addressing them is less clear.”
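One reason model drift stays out of sight is that nothing breaks visibly; it has to be measured. A common way to watch for it is to compare the distribution of a model input or score between a training-time baseline and live traffic. The sketch below uses the population stability index (PSI) for that comparison; the data and the 0.2 alert threshold are illustrative conventions, not CSA guidance.

```python
# Sketch: watching for model drift by comparing the distribution of a model
# input (or score) between a training-time baseline and live traffic, using
# the population stability index (PSI).

import math

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    """Population stability index between two samples over shared bins."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # guard against identical values

    def proportions(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # A small epsilon avoids log(0) for empty bins.
        return [(c + 1e-6) / len(values) for c in counts]

    base_p, live_p = proportions(baseline), proportions(live)
    return sum((lp - bp) * math.log(lp / bp) for bp, lp in zip(base_p, live_p))

score = psi(baseline=[0.2, 0.3, 0.35, 0.4, 0.5],
            live=[0.6, 0.7, 0.75, 0.8, 0.9])
if score > 0.2:  # PSI above 0.2 is commonly read as significant drift
    print(f"drift alert: PSI={score:.2f}")
```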
Randolph Barr, CISO of Cequence Security, said traditional weaknesses are indeed responsible for the majority of AI-related incidents, but about one-third are AI-native, including model and data poisoning, prompt injection, and autonomous agents that can chain together API calls while acting with minimal human oversight.
Randolph Barr: “These emerging risks reflect the reality that AI systems are dynamic, self-learning, and interconnected in ways traditional applications never were. When paired with the rapid speed of development, the outcome is an attack surface that grows faster than most security programs can respond.”
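One straightforward control for the agentic risk Barr describes is to put an allowlist, plus a human-approval gate for sensitive actions, between an agent and the tools it can invoke. The sketch below shows the pattern in generic form; the tool names, risk tiers, and approval hook are hypothetical, not any specific agent framework’s API.

```python
# Sketch: a governance gate between an AI agent and the tools it may call.
# Tool names, risk tiers, and the approval hook are hypothetical.

ALLOWED_TOOLS = {
    "search_tickets": "low",    # read-only, auto-approved
    "update_record": "medium",  # would be logged and rate-limited in practice
    "delete_account": "high",   # requires explicit human sign-off
}

def require_human_approval(tool: str, args: dict) -> bool:
    # Placeholder: a real system would page an operator or open a ticket
    # and wait for the decision rather than returning immediately.
    print(f"approval requested for {tool}({args})")
    return False

def gate_tool_call(tool: str, args: dict) -> bool:
    """Return True only if the agent may proceed with this tool call."""
    tier = ALLOWED_TOOLS.get(tool)
    if tier is None:
        print(f"blocked: {tool} is not on the allowlist")
        return False
    if tier == "high" and not require_human_approval(tool, args):
        print(f"blocked: {tool} is awaiting human approval")
        return False
    # Every decision, allowed or blocked, would also go to an audit log.
    return True

gate_tool_call("search_tickets", {"query": "open incidents"})  # allowed
gate_tool_call("delete_account", {"account_id": "12345"})      # blocked
```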
The CSA concluded in its report that all the findings point to a single message: Governance maturity stands out as the strongest predictor of readiness and responsible innovation. “Only a minority of organizations report comprehensive AI security governance today,” it says, “but where unified frameworks are in place, outcomes consistently improve — earlier experimentation, higher board awareness, greater confidence in securing AI systems, and more robust staff training.”
Organizations must shift from fragmented policies to a unified governance model that spans all teams involved in AI.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.
Get your 14-day free trial of Spectra Assure
Get Free Trial
More about Spectra Assure Free Trial