<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=1076912843267184&amp;ev=PageView&amp;noscript=1">

RL Blog

|

When GenAI and low-code collide: What could go wrong for AppSec?

Here's why the duo results in a perfect storm, key considerations — and expert advice on how engineering and application security teams can tackle the problem.

Ericka Chickowski
Blog Author

Ericka Chickowski, Freelance writer. Read More...

perfect-storm-genai-lowcode-appsecIf application security (AppSec) professionals thought the problems of code complexity, code bloat, and the poor state of software supply chain security (SSCS) were bad enough, they had better strap in. Things are about to get a heck of a lot worse with the cross-pollination of generative AI (GenAI) code creation and low-code/no-code development.

Each is a big threat vector in its own right. But when GenAI and low-code development are used concurrently in an organization — and especially when they intersect — they threaten to create a perfect storm for AppSec teams.

Here's why the combination of low-code and GenAI results in a perfect storm, plus key considerations to keep in mind — and expert advice on how AppSec and engineering teams can start tackling the problem.

[ Related: Gartner outlines AI as No. 1 cybersecurity trend | See Special Report: The State of SSCS 2024 ]

Low-code and GenAI are worries by themselves

Known security problems exist with low-code and GenAI. One fundamental problem raised by the simultaneous use of these development techniques is the drastic amplification of code output and speed of development. This is great for agility, but it creates a cumulative security nightmare if risks aren't appropriately accounted for, said Michael Bargury, an expert on low-code/no-code security and CTO of Zenity.

"AI is pushing on the same pain point that low-code/no-code pushes on, which is development output and business enablement."
Michael Bargury

Bargury said he has found that it's generally true that for every enterprise application built by a professional developer, an organization could easily produce 10 low-code/no-code apps. These software applications are produced by both professional developers and so-called citizen developers — non-development staff within organizations who overcome development backlogs by building their own apps through the simplicity of low-code/no-code drag-and-drop interfaces.

When professional developers are armed with GenAI, output goes up another order of magnitude. This both scales up the amount of code and components that need to be governed for risk and increases the vulnerabilities and malware pushed into production, said David Lindner, CISO for Contrast Security

"The rapid application development and deployment facilitated by low-code platforms, coupled with GenAI's code generation and process automation, can inadvertently scale security vulnerabilities if not properly managed."
David Lindner

The lack of AppSec governance in many low-code/no-code environments — and the tendency for low-code applications to overprovision connections and accounts — had started to raise eyebrows among security researchers even before GenAI-driven coding entered the mainstream. Apps built in poorly governed low-code environments are seen by security researchers as an ideal breeding ground for initial compromises that lead to devastating lateral attacks across the enterprise.

Zenity's Bargury presented research at last year's Black Hat conference that illustrated how easily low-code provisioning problems can pave the way for lateral movement — demonstrating a problem in the environment that makes it possible for guest users in Microsoft Power Apps to gain easy access to corporate secrets.

Tools for GenAI-driven code generation such as Microsoft's GitHub Copilot, ChatGPT, and Google's new Gemini Code Assist are all equally lacking in risk governance, said Eric Schwake, director of cybersecurity strategy at Salt Security.

"Low-code development with generative AI poses unique security challenges that AppSec teams must prepare for. AI-generated code may introduce vulnerabilities, and the lack of control over the underlying code makes it difficult to analyze for weaknesses. Low-code development can also lead to shadow IT, increasing the attack surface and making it challenging to mitigate security risks."
Eric Schwake

Contrast Security's Lindner said that there is a huge veil of obscurity over how GenAI code generation works and that where the code is pulled from is particularly problematic.

"The opaque nature of AI decision making complicates understanding and vetting of the generated code for security best practices, heightening the risk of introducing vulnerabilities. The reliance on prebuilt components for efficiency by both low-code platforms and GenAI can further expose applications to risks if these elements are outdated or inherently flawed, showcasing the need for rigorous security review processes."
—David Lindner

This low standard for code is accelerating the creation of more vulnerable code, and GenAI is even creating new classes of software flaws. Lasso Security researchers recently showed that four major AI models commonly used to power GenAI-assisted coding regularly hallucinate software component package names. The researchers offered  a proof of concept to show how trivial it would be for attackers with some knowledge of this hallucination predilection to create malicious packages with these hallucinated names for download on open-source repositories by organizations that unsuspectingly use them to generate their code.

When GenAI and low-code collide

While security problems with low-code and GenAI are singularly concerning, when the two worlds collide risk management gets far more complicated. In many type of low-code/no-code environments, GenAI is built directly into the low-code platform to further abstract technical details and simplify the application-creation process. However, in many cases, that abstraction is also obscuring governance from the organization, Bargury said.

"If you look at the low-code platforms themselves, they have all integrated AI into their business stream. So if you want to create an app with them, you often no longer have to drag boxes. Instead, you write what you want it to do and it will generate an app for you. And when it does so, it has to make choices."
—Michael Bargury

For example, if a user asks a platform to create an automation that bridges different kinds of business software, the AI will choose the connection for the user. This issue of API connections and calls to components outside of the walled garden of the low-code platform itself can rapidly increase the complexity of security problems that can't be easily scanned for in traditional code scanning tools, Salt Security's Schwake said.

"Low-code apps rely heavily on APIs, and AI-generated API integrations may introduce unforeseen security issues."
—Eric Schwake

Low-code platforms already struggle to appropriately set access controls for connections between applications and accounts, Bargury said. They usually create connections that are over-permissioned because they're essentially impersonating a user to do whatever the user can do in any given system. Layering AI into the mix exacerbates the problem — and adds another threat vector, he said.

"If the over-permissioned connections get embedded in an app, then the app user can do whatever they want with the permission. But then once AI gets hold of these connections, then an interesting thing happens. Now, if you give overprivileged connections to AI, what happens is that prompt injection becomes a way to do privilege escalation. And this is happening all over the place. We've seen this with Microsoft Copilot. We've seen this with OpenAI plugins."
—Michael Bargury

Upgrade your AppSec tools and approach to take on evolving risk

The convergence of low-code development and GenAI code development will potentially create a powerful one-two punch to AppSec and DevOps teams if they rely on traditional application security testing (AST) methods for reviewing code and don't move toward a comprehensive approach to software supply chain security. The big worry, said Grant Goodes, innovation architect for Zimperium: Organizations will see the efficiency of low-code and GenAI but fail to add new security measures to their software engineering and AppSec processes.

"It allows a potentially dramatic reduction in the size of the development team but results in a double-whammy when it comes to performing code reviews. Not only is there almost no one left to review the code; those that are left didn’t even write the code so will struggle to understand it. This is a recipe for disaster when it comes to the security of the resulting application."
Grant Goodes

But even with a very healthy contingent of security pros charged with AppSec, there's no way that an organization is going to keep pace with the output of low-code and GenAI-powered development without taking their software risk management approach to the next level, with a comprehensive review of all software components in a package before release — one that accounts for real and hallucinated calls and robust API reviews.

The Enduring Security Framework (ESF) said in recent guidance, titled "Securing the Software Supply Chain: Recommended Practices for Managing Open-Source Software and Software Bill of Materials," that organizations should adopt complex binary analysis and reproducible builds to advance their AppSec practices.

The document builds on previous efforts by the federal government to foster formal standards for bolstering software security against current and emergent threats, including the most recent push for Secure by Design, which seeks to shift liability for software compromises to software teams.

Additional human resources could be required as well. At some organizations, overseeing and coordinating added protection around low-code and GenAI-created apps may require a special role or even a new team to get it right. Bargury said that he's increasingly seeing AppSec teams dedicate someone to the task.

For development organizations with robust DevOps org structures and practices in place, getting started on this may not be a heavy lift. At the same time, they'll need to ensure that they're training everybody on their teams to fully understand the new crop of risks, Salt Security's Schwake warned.

"Mature DevOps teams typically have a solid foundation in integrating security and automation principles. However, they need to expand their skillset to include the specific security risks associated with low-code development. This involves comprehending how AI-generated code may introduce vulnerabilities that traditional security tools could overlook. Additionally, they must understand the unique attack surface created by low-code applications."
—Eric Schwake

Get up to speed on key trends and learn expert insights with The State of Software Supply Chain Security 2024. Plus: Explore RL Spectra Assure for software supply chain security.

More Blog Posts

    Special Reports

    Latest Blog Posts

    Chinese APT Group Exploits SOHO Routers Chinese APT Group Exploits SOHO Routers

    Conversations About Threat Hunting and Software Supply Chain Security

    Reproducible Builds: Graduate Your Software Supply Chain Security Reproducible Builds: Graduate Your Software Supply Chain Security

    Glassboard conversations with ReversingLabs Field CISO Matt Rose

    Software Package Deconstruction: Video Conferencing Software Software Package Deconstruction: Video Conferencing Software

    Analyzing Risks To Your Software Supply Chain