EO on AI: What security teams need to know
In this episode, Matt Rose digs into the White House’s new executive order on AI, and what it means for software supply chain security.
- Blog: AI needs transparency: How supply chain security tools can protect ML models
- Blog: OWASP Top 10 for LLM update bridges the gap between app sec and AI
- Related ReversingGlass: Trust in Your Software Must be Complete
MATT ROSE: Hi, everyone. Welcome back to another episode of ReversingGlass. I'm Matt Rose, Field CISO at ReversingLabs. And in this installment, we're going to be talking about the EO on AI. Sounds like "E-I-E-I-O, Old MacDonald had a farm," but we're going to go a little different way. A little early Halloween present was delivered to everybody on October 30th, the day before Halloween.
So was it a trick or was it a treat? I'll leave that up to you guys to decide. But it's really about: what the heck is AI? The executive office has seen the explosive growth of, and discussion around, AI and the applications where it can be used, and it wants to put some guardrails and boundaries around it, to understand what the heck it actually is. AI, or artificial intelligence, is a platform that uses large language models to generate content, come up with ideas, and scour the internet to create new things.
Well, from a governmental perspective, that's getting a little scary. We all know Terminator and Skynet, where the computers took over the world. And I think this is the executive office trying to make a statement about how we use this correctly. How does it not infringe upon people's jobs, on intellectual property, and so on and so forth?
But the interesting thing I see with this executive order is that it's very much like the executive order and follow-on memorandum around SBOMs. We're developing software faster and faster every day. Every company is a technology company. Everyone is using software and creating software applications to do their basic activities: to run businesses, to run governments, to run infrastructure.
And the big thing is, these things are just trusted. From an SBOM standpoint, the government wanted people to self-attest, to understand what the heck is in their applications. Are there supply chain breaches? Same thing with AI. What is this stuff that we're creating with AI? Is it valuable? Has it been compromised from the software side of the house?
Is the software itself that the AI is created and written in compromised? Or are the large data lakes, the data models that it's actually using, compromised? So this is just another lens of risk to try to understand. We're in the beginning stages of the adoption of AI across many different use cases, industries, and governmental agencies.
But how do we actually do that? The thing I like to say is, you need to be able to trust your software. The first step in trusting your software was an SBOM, a complete SBOM: a final exam of the thing you're actually developing for your clients or using as an entity. Same thing with AI. If you are leveraging AI to write documents or create research based on large language models, is that secure?
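[Editor's note: to make the SBOM "final exam" concrete, here is a minimal sketch of what one entry in a CycloneDX-style SBOM looks like. The component name, version, and hash value shown are hypothetical, chosen only to illustrate how an AI/ML dependency could be inventoried and verified.]

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-llm-runtime",
      "version": "1.2.3",
      "purl": "pkg:pypi/example-llm-runtime@1.2.3",
      "hashes": [
        { "alg": "SHA-256", "content": "hypothetical-digest-value" }
      ]
    }
  ]
}
```

An SBOM like this lets a consumer check each component (including AI/ML libraries and model artifacts) against known-good hashes and known vulnerabilities, which is the transparency Matt is describing.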
And it's about transparency. Do you trust your AI platform? Do you trust the software you're creating or consuming? The government thinks you should, and it's trying to put together the correct processes, guardrails, and details on how to trust AI and how it can be used in an effective way. Because, hey, we all know the train has left the station with AI.
It's here, it's doing great things, but is it doing wrong things? Food for thought. I'm Matt Rose, Field CISO, ReversingLabs. Thanks for watching and have a great day everybody.