Wednesday, March 18 @ 1-2pm ET

Building Trustworthy AI

Detecting Hidden Threats in AI Models Before They Strike

The models you build and distribute are the new frontier for sophisticated, hard-to-detect attacks.

As a foundation model provider, securing your core artifacts and the platforms that host them is a non-negotiable responsibility. The models at the base of your AI stack are a potent attack vector, with threats often hidden inside serialized model formats such as Pickle or NumPy files.
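To make the risk concrete: Pickle, a common serialization format for model weights, can embed arbitrary code that runs the moment a file is loaded. The minimal sketch below (not taken from any real attack, and using a harmless `eval` stand-in) shows the `__reduce__` mechanism that malicious model files abuse:

```python
import pickle

class Payload:
    """A class whose __reduce__ tells pickle to call an arbitrary
    callable on load -- the core mechanism abused in poisoned models."""
    def __reduce__(self):
        # Harmless stand-in; real attacks substitute os.system, urllib
        # downloads, reverse shells, etc.
        return (eval, ("40 + 2",))

blob = pickle.dumps(Payload())

# Simply deserializing the file executes the embedded call -- no method
# on the object ever needs to be invoked by the victim.
obj = pickle.loads(blob)
print(obj)  # the result of the attacker-chosen expression
```

Loading an untrusted `.pkl` file is therefore equivalent to running untrusted code, which is why the Python documentation itself warns never to unpickle data from an untrusted source.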

Join ReversingLabs to learn how to detect and neutralize these threats before they ever reach your customers.

In this webinar, we will explore:

  • Detecting Hidden Threats: Learn how Spectra Assure uses deep binary analysis to scan static files, flag unsafe function calls, and find hidden threats without ever executing the dangerous code.
  • Defending Against Real-World Attacks: Understand how to prevent attacks like nullifAI, where malicious models uploaded to platforms like Hugging Face execute implants to establish backdoors or exfiltrate data.
  • Protecting Your Hosting Platform in Production: Discover how Spectra Detect provides critical, in-line protection for the platforms hosting your models, defending against multi-modal malicious inputs — including prompt injections and exploits hidden in images or uploaded PDFs.
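The static-analysis idea behind the first bullet can be illustrated with Python's standard library alone. This is a simplified sketch, not Spectra Assure's actual engine: it walks a pickle's opcode stream without ever deserializing it, flagging the opcodes (`GLOBAL`/`STACK_GLOBAL`, `REDUCE`) that import and invoke callables:

```python
import pickle
import pickletools

class Payload:
    """Toy poisoned object standing in for a malicious model file."""
    def __reduce__(self):
        return (eval, ("1 + 1",))

blob = pickle.dumps(Payload())

# pickletools.genops yields (opcode, arg, position) tuples from the raw
# bytes -- the dangerous payload is never executed during this scan.
DANGEROUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}
flagged = [op.name for op, arg, pos in pickletools.genops(blob)
           if op.name in DANGEROUS]

# A "weights-only" file should not need to import or call anything, so
# any hit here is a red flag worth quarantining.
print(flagged)
```

Production scanners go much further (nested archives, NumPy payloads, obfuscated imports), but the principle is the same: inspect the serialized bytes statically rather than trusting a load-and-see approach.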

Attendees will receive an attendance certificate that can be used toward CPE credit.

Meet the Speakers

Register Now
Dhaval Shah
Patrick Enderby