RL Blog

Topics

All Blog PostsAppSec & Supply Chain SecurityDev & DevSecOpsProducts & TechnologySecurity OperationsThreat Research

Follow us

XX / TwitterLinkedInLinkedInFacebookFacebookInstagramInstagramYouTubeYouTubeblueskyBluesky

Subscribe

Get the best of RL Blog delivered to your in-box weekly. Stay up to date on key trends, analysis and best practices across threat intelligence and software supply chain security.

ReversingLabs: The More Powerful, Cost-Effective Alternative to VirusTotalSee Why
AppSec & Supply Chain SecurityJanuary 28, 2025

AppSec & Supply Chain Security | January 28, 2025 AI is a double-edged sword: Why you need new controls to manage risk

AI can improve cybersecurity outcomes, but it also represents an entirely new threat. Upgrade your security strategy — and tooling — for the AI age.

smiling woman
Ericka Chickowski, Freelance writer.Ericka Chickowski
FacebookFacebookXX / TwitterLinkedInLinkedInblueskyBlueskyEmail Us
two men playing with wooden swords

Like just about every part of business today, cybersecurity has been awash in promises of what AI can do for its tools and processes. In fact, cybersecurity vendors have touted the power of algorithmic detection and response for years.

But risk management professionals and application security (AppSec) teams need to recognize that the relationship between AI and cybersecurity extends far beyond enhancing algorithms or adding generative AI features to the security tool stack. In short, AI can undermine enterprise security just as well as enhance it.

Malcolm Harkins, chief security and trust officer for Hidden Layer, said that while AI is being used by hackers for deepfakes or automated attacks, the bigger threat to organizations is how they develop AI apps and processes.

[AI] itself is a completely different tech stack: different file types, model types, and totally different ways of being susceptible to attack. And to be blunt, the existing enterprise security stack does not protect AI — particularly AI models — from being attacked.

Malcolm Harkins

Don't let the cybersecurity promise of AI make you blind to the need to secure the AI systems deployed in the enterprise, including all AI-developed software running in your organization. Here's why you need to update your strategy — and your security tooling — for the AI age.

Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security

The AI blind spot

As the pace of embedding AI in enterprise systems accelerates, there is a general awareness that AI will add risk to the technology infrastructure and business processes that it supports. As a result, the corporate world has been rolling out AI risk-governance boards. Too many of them, however, have implemented AI governance policies that rely on traditional security controls, Harkins said.

In a recent analysis of the most common types of threats to the AI stack that security researchers have uncovered, Harkins found that they can be grouped in these three categories:

  • Threats to AI models: data poisoning, model evasion, model theft.
  • Threats from malicious input: prompt injection, code injection
  • Threats to artifacts in the AI supply chain: code execution, malware delivery, lateral movement

Harkins then assessed the strength of present controls, including static application security testing (SAST), dynamic AST, and vulnerability and malware scans.

The result was a color-coded spreadsheet showing controls that couldn’t manage a particular AI risk, controls that provided only indirect protection or partial coverage, and, in green, controls sufficient for the AI risk. The spreadsheet was devoid of green.

Models today are not only vulnerable; they're easily exploitable. Our research is proving that all the time.

Malcolm Harkins

The open question is whether attackers are using these flaws. Many security leaders have told Harkins that they currently view threats to AI as a low priority because they aren’t seeing attacks against it with any regularity.

But, Harkins noted, "The absence of evidence doesn’t prove the evidence of absence. If I don’t have logging and monitoring purpose-built for AI models, how am I ever going to know an attack occurred?”

Where to get started on securing AI

This blind spot with AI is the basis for a recent RSA 360 article that Harkins wrote urging enterprises to start getting serious about bolstering the AI-specific controls they have in place. He’s been a champion for best practices and standards free of vested interests and vendor hype.

One effort Harkins hopes security practitioners get behind is the Coalition for Secure AI (CoSAI), which sets security standards and frameworks for securing tech from unique AI risks. More standards are expected from the group on model signing that will be similar to what the AppSec world has done with code signing, Harkins said.

As groups such as CoSAI start to tackle standards and cross-industry cooperation, security leaders can, little by little, start adding AI visibility and controls, Harkins said. His advice: “Start embedding AI visibility and awareness into your existing security practices."

One example: If you have an existing threat intelligence program, you should be embedding more feeds that cover attacks against AI. And third-party risk management programs should be asking questions about how vendors use AI.

Most importantly, security teams with asset management and vulnerability management programs should find a way to build out an AI inventory and ways to enumerate AI flaws. To ameliorate fears that that will further strain the vulnerability management team with even more vulns to prioritize, Harkins said, “We might use AI to help in that.”

Invest in AI security — and the right tools for the job

In order to fund it all, Harkins said, CISOs and other risk leaders need to be crafty and aware of when AI initiatives are being vetted. If an AI initiative gets $25 million, then it should only follow that at least some of those funds should be carved out to manage cyber-risk.

With machine learning driving the next generation of technology, the security risks associated with model sharing — and specifically issues within ML models such as serialization — are becoming increasingly significant, Dhaval Shah, senior director of product management at ReversingLabs, wrote recently. Vulnerabilities in serialization and deserialization are common across programming languages and applications, and they present specific challenges in machine learning workflows. For instance, Pickle, which is frequently used in AI, is especially prone to such risks, Shah wrote.

He said organizations need to stay ahead of these evolving threats with advanced detection and mitigation solutions, such as modern ML malware detection and protection. Shah had more advice:

  • Before you bring a third-party LLM model into your environment, check for unsafe function calls and suspicious behaviors and prevent hidden threats from compromising your system.
  • Before you ship or deploy an LLM model that you’ve created, ensure that it is free from supply chain threats by thoroughly analyzing it for any malicious behaviors.
  • Models saved in risky formats such as Pickle should be meticulously scanned to detect any potential malware before they can impact your infrastructure.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the the webinar to discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags:AppSec & Supply Chain Security

More Blog Posts

Spectra Assure Free Trial

Get your 14-day free trial of Spectra Assure for Software Supply Chain Security

Get Free TrialMore about Spectra Assure Free Trial
Blog
Events
About Us
Webinars
In the News
Careers
Demo Videos
Cybersecurity Glossary
Contact Us
reversinglabsReversingLabs: Home
Privacy PolicyCookiesImpressum
All rights reserved ReversingLabs © 2026
XX / Twitter
LinkedInLinkedIn
FacebookFacebook
InstagramInstagram
YouTubeYouTube
blueskyBluesky
RSSRSS
Back to Top
Skip to main content
Contact UsSupportLoginBlogCommunity
reversinglabsReversingLabs: Home
Solutions
Secure Software OnboardingSecure Build & ReleaseProtect Virtual MachinesIntegrate Safe Open SourceGo Beyond the SBOM
Increase Email Threat ResilienceDetect Malware in File Shares & StorageAdvanced Malware Analysis SuiteICAP Enabled Solutions
Scalable File AnalysisHigh-Fidelity Threat IntelligenceCurated Ransomware FeedAutomate Malware Analysis Workflows
Products & Technology
Spectra Assure®Software Supply Chain SecuritySpectra DetectHigh-Speed, High-Volume, Large File AnalysisSpectra AnalyzeIn-Depth Malware Analysis & Hunting for the SOCSpectra IntelligenceAuthoritative Reputation Data & Intelligence
Spectra CoreIntegrations
Industry
Energy & UtilitiesFinanceHealthcareHigh TechPublic Sector
Partners
Become a PartnerValue-Added PartnersTechnology PartnersMarketplacesOEM Partners
Alliances
Resources
BlogContent LibraryCybersecurity GlossaryConversingLabs PodcastEvents & WebinarsLearning with ReversingLabsWeekly Insights Newsletter
Customer StoriesDemo VideosDocumentationOpenSource YARA Rules
Company
About UsLeadershipCareersSeries B Investment
EventsRL at RSAC
Press ReleasesIn the News
Pricing
Software Supply Chain SecurityMalware Analysis and Threat Hunting
Menu

MCP rug-pull attack worries mount

This new class of AI tool supply chain attack highlights how trust of agents can be exploited.

Learn More about MCP rug-pull attack worries mount
MCP rug-pull attack worries mount

Can AppSec keep pace with AI coding?

AI lets software teams generate code at a rate faster than security can validate it. One way to win the race: more AI.

Learn More about Can AppSec keep pace with AI coding?
Can AppSec keep pace with AI coding?

LLMmap puts its finger on ML attacks

Researchers show how LLM fingerprinting can be used to automate generation of customized attacks.

Learn More about LLMmap puts its finger on ML attacks
LLMmap puts its finger on ML attacks
Request a demo
Vibeware bad vibes

Vibeware: More than bad vibes for AppSec

Threat actors are leveraging the freewheeling vibe-coding trend to deliver malicious software at scale.

Learn More about Vibeware: More than bad vibes for AppSec
Vibeware: More than bad vibes for AppSec
MCP attacks
AI coding racing
Finger on map