Security Operations | June 5, 2023

5 AI threats keeping SOC teams up at night

Your security operations team should be planning how to stay ahead of these emerging AI risks.

Ericka Chickowski, Freelance writer

The explosion in the use of OpenAI's ChatGPT and other large language models (LLMs) — along with a range of other artificial intelligence (AI) and machine learning (ML) systems — is ramping up the security cat-and-mouse game.

AI risks are moving beyond the theoretical and becoming threats in practice, and security operations center (SOC) teams need to start preparing. The problem is lag time, said Ali Khan, field CISO for ReversingLabs.

For the security operations folks, I'm seeing that they're starting to predict and threat-model a lot of the scenarios that could go wrong [with AI technology].

Ali Khan

Proactive SOC teams need to start building the threat models and then skill up and build or procure the right tools for the fight, Khan said. One problem: the business climate. Enterprises are looking to cut costs, potentially stalling security budgets for new AI security capabilities.

These challenges could create yet another innovation catch-up cycle for SOC teams if they don't start getting out in front of the threat posed by AI, Khan said.

Here are five AI threats your security operations team should be planning and budgeting for now to stay ahead of the emerging risk.

Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security

1. AI-enhanced phishing and social engineering

One of the biggest AI threats on the immediate horizon is the use of LLMs and deep learning to scale up highly targeted phishing attacks and other social engineering ploys. Attackers can utilize deep learning to do more automated reconnaissance of their targets and pair that with LLMs to generate emails, phone calls, and video to make their impersonation attacks more realistic than ever, said Petko Stoyanov, CTO for Forcepoint.

We are going to see more targeted phishing. Text-based generative AI is being used to create very personalized emails impersonating CEOs and other executives.

Petko Stoyanov

The potential is hair-raising. If attackers scrape a trove of employee LinkedIn profiles to map out the products, projects, and groups those employees work on, then feed that into an LLM, they could run massive business email compromise (BEC) scams: extremely convincing emails that look as if they come from the employees' bosses or CFOs and that include precise details about the projects they're working on. If an attacker also managed to compromise company data and feed that into the LLM, the attack would look all the more authentic.

Stoyanov said SOC teams are going to need more proactive monitoring, which is challenging because traditional threat intelligence is built on hashes and indicators of compromise.

When you think of advanced persistent threats, every attack is targeted to you and never reused anywhere else. Now what used to be only targeted at certain banks and certain governments can be replicated to smaller businesses because of generative AI. That's scary.

Petko Stoyanov
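
One way to move beyond hash-based intelligence here is behavioral mail filtering. The sketch below is a hypothetical heuristic, not something from the article: it flags inbound messages whose display name matches a known executive but whose sending domain is external, a common first check for BEC-style impersonation. The executive roster, domains, and sample headers are assumptions for illustration.

```python
# Hypothetical BEC heuristic: flag mail whose display name impersonates an
# executive while the message originates from an external domain. The
# executive names, domains, and sample headers are illustrative assumptions.
from email.utils import parseaddr

EXEC_NAMES = {"jane doe", "john smith"}   # assumed executive roster
INTERNAL_DOMAINS = {"example.com"}        # assumed corporate domains

def is_suspected_bec(from_header: str) -> bool:
    """Return True if the From: header looks like executive impersonation."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    impersonates_exec = display_name.strip().lower() in EXEC_NAMES
    sender_is_external = domain not in INTERNAL_DOMAINS
    return impersonates_exec and sender_is_external

print(is_suspected_bec('"Jane Doe" <jane.doe@examp1e-payments.net>'))  # True
print(is_suspected_bec('"Jane Doe" <jane.doe@example.com>'))           # False
```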

2. Generative AI-based malware

The other AI threat that's coming fast is generative AI-based malware. Last month at RSA Conference 2023, Stephen Sims of the SANS Institute demonstrated how easy it was to convince ChatGPT to code ransomware for him with a series of carefully crafted prompts — even though the model is trained to reject requests to build malware.

Based on his research, SANS ranks offensive uses of AI such as this as one of the top five dangerous attack types for 2023. Included in that is not only malware generation but also zero-day exploit discovery.

Khan says that ChatGPT and other generative AI models stand to greatly enhance the way attackers write malware. "We think that's going to really proliferate a lot of new malware that threat actors are going to be able to produce all the more quickly," he said.

So, think of traditional SOCs writing YARA rules to defend against and detect against signature or hashes traditionally. But with LLMs, attackers are producing things so fast, that you could almost write code on the fly and remove the detection logic that security operations would be dependent on.

Ali Khan
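
Khan's point is easy to see with hash-keyed indicators of compromise. In the toy sketch below (illustrative only, not RL tooling), a one-character change to a payload, exactly the kind of variation a code-generating model produces for free, yields a completely different SHA-256, so an IOC list keyed on the old hash no longer matches.

```python
# Toy illustration: hash-keyed IOCs stop matching the moment generated code
# changes by even one character. Payload strings are illustrative only.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original_payload = b"run_payload(); beacon('c2.example');"
regenerated_payload = b"run_payload() ; beacon('c2.example');"  # one extra space

ioc_list = {sha256(original_payload)}            # yesterday's threat intel

print(sha256(original_payload) in ioc_list)      # True  -> detected
print(sha256(regenerated_payload) in ioc_list)   # False -> same behavior, missed
```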

3. AI will unleash new software supply chain attacks

Just as traditional software is exposed to supply chain attacks, AI systems are going to be increasingly vulnerable to attacks that target the supply chain of components feeding their functionality. This includes the AI models, the training data — and the code that goes into building out not just the models but the software that uses them.

Chris Anley, chief scientist for NCC Group, said there are a lot of AI risks associated with the software supply chain and third-party code.

The models themselves can often contain executable code, which can result in supply chain and build security issues. Distributed training can be a security headache — [and training] data can be manipulated to create backdoors, and the resulting systems themselves can be subject to direct manipulation; adversarial perturbation and misclassifications can cause the system to produce inaccurate and even dangerous results.

Chris Anley
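
Anley's warning that models can contain executable code is concrete with Python's pickle format, which many model artifacts still use. The sketch below is an illustrative pre-load check, not a complete scanner: it walks the pickle opcode stream with the standard library's pickletools, without deserializing anything, and flags imports of modules commonly abused to run code at load time. The module list is an assumption for illustration.

```python
# Illustrative pre-load check for pickle-based model artifacts: walk the
# opcode stream without executing it and flag imports of modules commonly
# abused to run code at deserialization time (module list is illustrative).
import pickletools

SUSPICIOUS_MODULES = {"os", "subprocess", "builtins", "posix", "nt"}

def suspicious_pickle_imports(raw: bytes) -> list[str]:
    findings, recent_strings = [], []
    for opcode, arg, _pos in pickletools.genops(raw):
        if isinstance(arg, str):
            recent_strings.append(arg)       # candidate module/name strings
        if opcode.name == "GLOBAL":          # arg is "module name"
            module = arg.split()[0]
        elif opcode.name == "STACK_GLOBAL":  # module/name were pushed as strings
            module = recent_strings[-2] if len(recent_strings) >= 2 else ""
        else:
            continue
        if module.split(".")[0] in SUSPICIOUS_MODULES:
            findings.append(module)
    return findings

# Example: a pickle that would import os.system when loaded
import os, pickle
print(suspicious_pickle_imports(pickle.dumps(os.system)))  # ['posix'] or ['nt']
```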

One of the most discussed risks in this supply chain is data poisoning. Khan said training data is very often publicly sourced, and when enterprises blindly tie a model's output to predictions or actions, compromised training data could have very costly consequences.

LLMs can help produce a certain amount of information that you start to rely on, and then threat actors or insider threats might try to poison the data that you're reliant on as an organization. You're going to have to start writing detection rules to see if this LLM matches what you're actually trying to author for your enterprise.

Ali Khan
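
What such a detection rule might look like in practice is necessarily speculative, but here is a minimal sketch under assumed conditions: LLM-suggested dependencies are checked against an internal allowlist before they reach a build, so hallucinated or poisoned suggestions get routed to review instead of being acted on blindly. The allowlist and package names are made up for illustration.

```python
# Hypothetical guardrail: validate LLM-suggested dependencies against an
# internal allowlist before they reach a build. The allowlist and package
# names below are made-up examples, not real policy.
APPROVED_PACKAGES = {"requests", "numpy", "pydantic"}

def review_llm_suggestions(suggested: list[str]) -> dict[str, list[str]]:
    approved = [p for p in suggested if p.lower() in APPROVED_PACKAGES]
    needs_review = [p for p in suggested if p.lower() not in APPROVED_PACKAGES]
    return {"approved": approved, "needs_review": needs_review}

# An LLM answer might mix approved packages with unknown or hallucinated ones.
print(review_llm_suggestions(["requests", "numpyy", "totally-real-auth-lib"]))
# {'approved': ['requests'], 'needs_review': ['numpyy', 'totally-real-auth-lib']}
```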

4. Adversarial AI attacks

Whether it is a supply chain attack, data poisoning, or other attack types such as sponge attacks, evasion attacks, or prompt injection, the broader field of adversarial AI attacks — attacks against AI systems themselves — will be a problem for the SOC.

Andy Patel, a researcher for WithSecure, said SOC teams need to rapidly build AI expertise to tackle adversarial AI.

They do need experts because none of the current solutions for protecting against adversarial attacks are plug and play. It isn't just something you can go out and buy and stick into your infrastructure, and have it work.

Andy Patel

Patel said different models do different things, and that enumerates the attack surfaces. "Figuring out what sort of attacks you can perform against them, what sort of attacks adversaries would be interested in performing against them, and those sorts of things, that still requires one to look at each system individually," he said.
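
To make the evasion attacks mentioned above concrete, here is a toy example against a hand-set linear classifier (the weights and feature values are assumptions, not any production model): a small, targeted nudge to the input flips the model's decision even though the input barely changes.

```python
# Toy evasion attack against a hand-set linear "malicious / benign" classifier.
# Weights, bias, and feature values are illustrative assumptions; the point is
# that a small, targeted perturbation flips the decision.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # classifier weights (assumed)
b = -0.1

def predict(x: np.ndarray) -> str:
    return "malicious" if float(w @ x + b) > 0 else "benign"

x = np.array([0.4, 0.1, 0.3])    # sample the classifier currently flags
print(predict(x))                # malicious

# Evasion step (FGSM-style for a linear model): nudge each feature slightly
# in the direction that lowers the score, i.e. against sign(w).
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)
print(predict(x_adv))            # benign, despite a barely changed input
print(np.abs(x_adv - x).max())   # 0.25, the largest per-feature change
```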

5. Data theft and IP exposure

One of the big concerns with generative AI such as ChatGPT is that using it can mean feeding sensitive data into an AI system the organization doesn't own. This creates a nightmare tangle of data risk and compliance issues.

Case in point: Samsung, which last month banned ChatGPT use after employees leaked sensitive data by loading it into the platform. Such incidents are just the tip of the iceberg.

Additionally, organizations that are building their own in-house AI systems or working with vendors and partners building AI models collaboratively must worry about a cascading list of new data security issues.

Oftentimes the working environments of data scientists working on AI send data governance right out the window, said Anley. "We now have large data lakes which have to be accessed by either in-house data scientists, or just someone has to be taking care of your customer data in order to use it effectively in an AI system."

[That's] a degree of access to the customer data that probably didn't exist before the AI system came along. It's important to look at those new types of data security problems, because that's another way that you can have a data breach now.

Chris Anley
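
A common first mitigation for the Samsung-style leaks described above is redacting obvious secrets before a prompt ever leaves the organization. The sketch below is a minimal, hypothetical pre-prompt filter; the patterns and placeholder tokens are assumptions, and it is no substitute for a real DLP control.

```python
# Minimal, hypothetical pre-prompt redaction before text is sent to an
# external LLM API. The patterns cover only a few obvious secrets and are
# illustrative, not a substitute for a real DLP control.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[AWS_ACCESS_KEY]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "api_key=[REDACTED]"),
]

def redact(prompt: str) -> str:
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw_prompt = "Summarize this thread: contact jane.doe@example.com, api_key: sk-12345"
print(redact(raw_prompt))
# Summarize this thread: contact [EMAIL], api_key=[REDACTED]
```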

Recognize the risk and recalibrate

With businesses facing yet another cyclical downturn, Khan fears SOCs are heading into the AI adoption explosion at exactly the wrong time. As technology giants embrace generative AI, the risk is ramping up fast.

You really need to think of and plan ahead for the next fiscal year what kind of scenarios your organization can be exposed to as a result of this.

Ali Khan

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.

Tags: Security Operations, Artificial Intelligence (AI)/Machine Learning (ML)
