Security OperationsDecember 6, 2023

6 ways AI helps SecOps punch back

While AI is mostly seen as opening a new front in the threat landscape, it will also be tapped to fight back with advanced threat hunting and more.

John P. Mello Jr., freelance technology writer
From the moment OpenAI let ChatGPT out of the box, the potential for generative AI and large language models (LLMs) to cause harm has dominated conversations about the emerging technology.

Less talked about has been how AI can be a formidable weapon in the hands of the good guys. AI decision makers believe that among departments in the enterprise, IT operations will be affected the most by generative AI — more than security — according to data gathered by Forrester Research.

Forrester principal analyst Allie Mellen said in a new report on the use of generative AI in security tools that, while the tools are not yet widely available, they are coming.

Security leaders need to be prepared for this new technology to affect how their teams operate.

Allie Mellen

Here are six ways AI can be used by security teams to punch back at adversaries.

See related: The AI executive order: What AppSec teams need to know
See webinar: Secure by Design: Why Trust Matters for Risk Management

1. AI can be used to develop more secure code

Using AI for securing code is still in its infancy and has attracted skepticism from some security practitioners, but its potential can't be denied. In a presentation at InfoSec World 24, Mark Sherman, technical director of the Cyber Security Foundations group in the CERT Division at Carnegie Mellon University's Software Engineering Institute, said early experiments show promise but also have limitations. (See his slides from the talk in PDF form).

When using AI to secure code, Sherman cautioned, the output must be reviewed by knowledgeable users. Unfortunately, that often excludes the programmers themselves, because most are not very good at reading and evaluating code.
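Sherman's caution about human review can be made concrete. The sketch below is entirely illustrative — the patterns and function name are assumptions, not anything from his talk — but it shows the shape of a guardrail that queues risky-looking lines in AI-suggested code for a knowledgeable reviewer instead of accepting them outright.

```python
import re

# Illustrative red-flag patterns only; a real review needs far deeper analysis.
RISKY_PATTERNS = {
    "dynamic code execution": re.compile(r"\beval\(|\bexec\("),
    "shell command built from a string": re.compile(r"os\.system\(|shell=True"),
    "hard-coded secret": re.compile(r"(password|api_key)\s*=\s*['\"]"),
}

def review_queue(suggested_code: str) -> list[tuple[int, str]]:
    """Return (line_number, reason) pairs a human reviewer should inspect."""
    findings = []
    for lineno, line in enumerate(suggested_code.splitlines(), start=1):
        for reason, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, reason))
    return findings

suggestion = "import os\npassword = 'hunter2'\nos.system('rm -rf ' + path)"
for lineno, reason in review_queue(suggestion):
    print(f"line {lineno}: {reason}")
```

The point is not the patterns themselves but the workflow: AI output lands in a review queue, never directly in the codebase.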

2. AI can help address security staffing issues

Forrester's Mellen said that many implementations of generative AI in security tools today rely on chatbot-style features built into a separate view in an application. "As unique as this is right now, it ultimately does not naturally fit into the analyst workflow and is little more than a novelty,” she said.

The real value in generative AI is in addressing tasks automatically that were previously part of the analyst workflow. One example: writing draft incident-response reports. Mellen advised security leaders to look for generative AI implementations that fit into the analyst experience, to help analysts make decisions faster — not just force them to use another view or tab.

Generative AI can also improve the performance of less experienced security team members. At the recent Ignite conference, Vasu Jakkal, Microsoft corporate vice president for security, compliance, identity, and management, cited a Microsoft study that found that “new in career” analysts using AI-enabled tools produced responses to security events that were 44% more accurate and responded 26% faster across all tasks.

In addition, a large majority of the analysts said the tools helped improve the quality of their work (86%), reduced the effort needed to complete a task (83%), and made them more productive (86%).

3. AI can aid security teams in report creation

With tasks such as summarizing incidents for reporting purposes and creating human-readable case descriptions, generative AI can be used effectively. It can also convey information in a human-friendly way, which can be valuable in responding to customer service requests and in producing better product documentation.
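As a rough illustration of the draft-report idea, the sketch below assembles structured incident facts into a human-readable draft. A simple template stands in for the LLM call, and all field names and values are hypothetical; the key design point is the explicit DRAFT status, keeping the analyst as the final author.

```python
from datetime import datetime, timezone

def draft_incident_summary(incident: dict) -> str:
    """Assemble incident facts into a readable draft for analyst review."""
    opened = incident["opened"].strftime("%Y-%m-%d %H:%M UTC")
    iocs = ", ".join(incident["iocs"]) or "none recorded"
    return (
        f"Incident {incident['id']}: {incident['title']}\n"
        f"Opened: {opened} | Severity: {incident['severity']}\n"
        f"Indicators: {iocs}\n"
        f"Summary: {incident['summary']}\n"
        "Status: DRAFT - requires analyst review"
    )

report = draft_incident_summary({
    "id": "IR-1042",
    "title": "Suspicious OAuth token reuse",
    "opened": datetime(2023, 12, 6, 9, 30, tzinfo=timezone.utc),
    "severity": "High",
    "iocs": ["198.51.100.7", "login-portal.example"],
    "summary": "Token replayed from an unfamiliar network shortly after issue.",
})
print(report)
```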

4. AI can assist security teams in analyzing pattern behavior

With its predictive abilities, generative AI can help identify privacy risks, attacker activity, and risk scenarios, and it can suggest remediation actions.
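The core of such pattern analysis is comparing current behavior against a learned baseline. A minimal, non-AI stand-in for the idea — assuming nothing about any vendor's actual method — is a z-score of today's activity against an entity's own history:

```python
from statistics import mean, stdev

def anomaly_score(history: list[int], today: int) -> float:
    """Z-score of today's count against this entity's own baseline."""
    mu = mean(history)
    sigma = stdev(history)
    return 0.0 if sigma == 0 else (today - mu) / sigma

baseline = [4, 5, 6, 5, 4, 6, 5]  # typical daily logins for one account
print(round(anomaly_score(baseline, 40), 1))  # a sudden spike scores well above baseline
```

Generative models extend this idea beyond single counters, but the principle is the same: flag what deviates from an entity's normal pattern, then suggest what to do about it.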

5. AI can improve threat hunting

Generative AI can add speed and scale for scenarios such as security-posture management, incident investigation and response, and security reporting.
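At its simplest, a hunt is a sweep of telemetry for known indicators of compromise — exactly the kind of repetitive pass AI-assisted tooling can scale up. A toy version (indicator values and event fields are illustrative):

```python
# Known-bad indicators (illustrative values)
iocs = {"198.51.100.7", "bad-cdn.example"}

events = [
    {"host": "web-01", "dest": "203.0.113.9"},
    {"host": "web-02", "dest": "198.51.100.7"},
    {"host": "db-01", "dest": "bad-cdn.example"},
]

hits = sorted(e["host"] for e in events if e["dest"] in iocs)
print(hits)
```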

6. AI can be used to unify numerous security solutions

The number of security tools organizations have in use can be overwhelming for teams. Generative AI, for one, can bring together all the security signals and threat intelligence siloed in disconnected tools. That allows security teams to streamline, triage, and obtain a complete end-to-end view of threats across the digital estate, making response easier and quicker for analysts of every level.
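The mechanical core of that unification is schema normalization: alerts from differently shaped tools get mapped into one common format before triage. The sketch below is an assumption-laden illustration (the tool names, field names, and severity scale are all invented), not any product's actual pipeline.

```python
def normalize(source: str, alert: dict) -> dict:
    """Map a tool-specific alert into one shared schema for triage."""
    if source == "edr":
        return {"tool": "edr", "host": alert["device"], "severity": alert["sev"].lower()}
    if source == "email_gw":
        return {"tool": "email_gw", "host": alert["rcpt_host"], "severity": alert["priority"].lower()}
    raise ValueError(f"unknown source: {source}")

unified = [
    normalize("edr", {"device": "laptop-7", "sev": "HIGH"}),
    normalize("email_gw", {"rcpt_host": "mail-02", "priority": "Medium"}),
]

# With one schema, triage can run once over everything, highest severity first.
rank = {"high": 0, "medium": 1, "low": 2}
unified.sort(key=lambda a: rank[a["severity"]])
print([(a["tool"], a["severity"]) for a in unified])
```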

The potential is there. It's time to put AI to work

"Generative AI tech has the potential to greatly enhance our ability to detect and respond to cyber threats," said Joseph Thacker, a security researcher with AppOmni.

There are already so many companies building AI security analyst agents. It’s going to be vital to use AI for securing digital assets in the future.

Joseph Thacker

However, Forrester’s Mellen said patience is in order, because despite vendor promises, "this technology is currently available only to a select set of customers, if at all.”

Every vendor we mentioned in the report (and have spoken with) has at least a press release related to the generative AI offering that it’s building, but none are generally available, or likely to be generally available, before the first half of 2024.

Allie Mellen

Tags: Security Operations
John P. Mello Jr.

Keep learning

  • Get up to speed on the state of software security with RL's Software Supply Chain Security Report 2026. Plus: See the webinar discussing the findings.
  • Learn why binary analysis is a must-have in the Gartner® CISO Playbook for Commercial Software Supply Chain Security.
  • Take action on securing AI/ML with our report: AI Is the Supply Chain. Plus: See RL's research on nullifAI and watch how RL discovered the novel threat.
  • Get the report: Go Beyond the SBOM. Plus: See the CycloneDX xBOM webinar.

Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.
