

Developers behaving badly: Why holistic AppSec is key

Mature organizations recognize that their AppSec approach has to keep pace with modern development teams. Here's why.

Ericka Chickowski, Freelance writer


A recent survey shows that untested software releases, rampant pushing of unvetted and uncontrolled AI-derived code, and poor developer security hygiene are combining to seriously expand risk across software development. Add in the explosion of low-code/no-code development and economic pressures that are forcing developers to deliver features with less support, and you have an AppSec world facing a tumultuous 2024.

While the buzz around shifting security to the left, or earlier in the development lifecycle, is still pushed by many DevSecOps advocates — and for good reason — the mantra of "test early and test often" can only get an application security or product security team so far in moving the needle on software risk. 

Comprehensive application security (AppSec) is much more than squashing bugs early in the development lifecycle. Mature organizations recognize that their AppSec approach has to keep pace with modern development and release practices. Here's why a more holistic AppSec approach is key.

[ See related: How legacy AppSec is holding back Secure by Design | See Webinar: Secure by Design: Why Trust Matters for Software Risk Management ]

Curbing bad developer security behavior

A recent survey of 500 developers worldwide, conducted by SauceLabs and dubbed "Developers Behaving Badly," illuminated a lot about developer security habits. One of the key themes to bubble up from the report had absolutely nothing to do with when or how testing is conducted. It had to do with the security hygiene developers practice daily.

The fact is, it's not so great. About three-quarters of developers admit to circumventing security measures by doing things such as disabling multifactor authentication (MFA) or going around the VPN to speed up their work. Similarly, 70% admit they've shared credentials — with 40% saying they do so regularly.

This report points to a huge need for security support in creating developer guardrails that are embedded in the CI/CD pipeline, so that developers can still move quickly but do so safely. That means putting in place well-architected identity and access management (IAM) functionality, as well as thoughtful permissions throughout the entire development workflow — but especially when it comes to touching the highest-value assets.
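
To make that kind of visibility and guardrail concrete, here is a minimal sketch, assuming the organization hosts its code on GitHub: it reports which organization members have not enabled two-factor authentication, so security teams can spot the MFA-disabling behavior the survey describes. The org name and token environment variable are placeholders, not anything from the report.

    import os
    import requests

    # Minimal sketch: list GitHub organization members who have not enabled 2FA.
    # Assumes the org lives on GitHub and the token belongs to an org owner.
    # ORG_NAME and the GITHUB_TOKEN environment variable are placeholders.
    ORG_NAME = "example-org"
    TOKEN = os.environ["GITHUB_TOKEN"]

    headers = {
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    }

    # The "filter=2fa_disabled" query returns only members without two-factor auth.
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG_NAME}/members",
        headers=headers,
        params={"filter": "2fa_disabled", "per_page": 100},
    )
    resp.raise_for_status()

    for member in resp.json():
        print(f"2FA disabled: {member['login']}")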

Nir Valtman, founder of the software security firm Arnica, said that the key is to minimize the attack surface by reducing permissions to source code, the place where the problem starts.

"If the company culture is to provide access to push code for all developers, then apply branch protection policies to require pull request-reviews by the right owners and review the CI/CD permissions and triggers."
—Nir Valtman
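
As one illustration of that advice, the sketch below applies a branch protection policy through the GitHub REST API so that changes reach the protected branch only after a pull-request review from code owners. It assumes a GitHub-hosted repository; the owner, repo, branch, and token values are placeholders.

    import os
    import requests

    # Minimal sketch, assuming GitHub: require code-owner pull-request reviews
    # on the protected branch. OWNER, REPO, BRANCH, and GITHUB_TOKEN are placeholders.
    OWNER, REPO, BRANCH = "example-org", "example-repo", "main"
    TOKEN = os.environ["GITHUB_TOKEN"]

    protection = {
        "required_pull_request_reviews": {
            "require_code_owner_reviews": True,   # reviewers come from CODEOWNERS
            "required_approving_review_count": 1,
        },
        "enforce_admins": True,        # admins cannot bypass the policy
        "required_status_checks": None,
        "restrictions": None,          # no extra push restrictions in this sketch
    }

    resp = requests.put(
        f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json=protection,
    )
    resp.raise_for_status()
    print(f"Branch protection applied to {OWNER}/{REPO}@{BRANCH}")

Pairing a rule like this with a CODEOWNERS file puts the "right owners" Valtman mentions directly into the review path.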

A big part of this holistic approach to curbing bad operational security is visibility. Valtman said organizations should also monitor for abnormal behavior in development tooling and code repositories. Ideally, security teams should get developer buy-in on the approach they choose.

"An abnormal behavior can be the result of an insider threat, account takeover, or a malicious third-party library. Use an anomaly-detection mechanism across your development ecosystem, but make sure the developers like the selected approach. Empower developers to own security in a simple and scalable way — let them pick the right security solution for them. 
—Nir Valtman
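
What such anomaly detection might look like in its simplest form is sketched below: flag any developer whose daily push volume jumps well above their own history. The push counts here are invented; a real deployment would draw on the audit logs of source control and CI/CD tooling, and would use a far richer signal than push counts alone.

    from statistics import mean, pstdev

    # Invented sample data: pushes per developer per day (placeholder values).
    daily_pushes = {
        "alice": {"01-05": 1, "01-06": 1, "01-07": 2, "01-08": 1},
        "bob":   {"01-05": 1, "01-06": 2, "01-07": 1, "01-08": 2, "01-09": 1, "01-10": 9},
    }

    # Flag any day more than two standard deviations above that developer's own mean.
    for user, counts in daily_pushes.items():
        values = list(counts.values())
        mu, sigma = mean(values), pstdev(values)
        for day, count in counts.items():
            if sigma > 0 and (count - mu) / sigma > 2:
                print(f"Unusual activity: {user} pushed {count} times on {day}")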

Shift everywhere with your testing

Security testing — and the remediation and refactoring that follows — is obviously a core part of every application security program. Unfortunately, in spite of the best efforts of DevSecOps pundits and AppSec advocates, many of the security tests mandated of developers remain out of phase with the CI/CD pipeline and are still conducted manually. When the "Developers Behaving Badly" survey asked, 67% of developers said they could and did push code to production without conducting security testing, and nearly a third reported that they do it often or very often.

When the "Developers Behaving Badly" survey asked developers, 67% said they could and did push code to production without conducting security testing, and nearly a third of them reported that they do it often or very often.

The goal of the shift-left movement is to build security gates into the pipeline as early in the development process as possible, and to automate testing. But early tests at the code and component levels won't catch every AppSec risk. Shifting right — or shifting everywhere — allows AppSec teams to identify risk in the context of how software will be deployed, said Saša Zdjelar, Chief Trust Officer at ReversingLabs.

"As you shift right, you lose componentry, or unit-level control, but you gain context, as people add more and more code. As first-party code gets combined with third-party commercial and open-source imports and includes, that container size grows and it becomes something closer and closer to a full-built product."
—Saša Zdjelar

Whether an organization consumes or produces software, introducing testing at the very end, before pushing to production, makes it possible to check for malware that may have infiltrated the software supply chain, for tampering, for problems with digital signatures, and for the inclusion of sensitive information or development secrets.

"Those are the characteristics of software that we believe should be checked at the very end."
—Saša Zdjelar
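
As a rough illustration of one such end-of-pipeline check, the sketch below scans a finished build directory for embedded development secrets and fails the stage if any are found. The directory name and regex patterns are illustrative only, and the other final-stage checks mentioned above, such as malware, tampering, and signature validation, need dedicated tooling that this sketch does not attempt.

    import re
    from pathlib import Path

    # Minimal sketch: scan a build output directory for likely development secrets
    # before release. BUILD_DIR and the patterns are placeholders for illustration.
    BUILD_DIR = Path("dist")

    SECRET_PATTERNS = {
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "Generic API token": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
    }

    findings = []
    for path in BUILD_DIR.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append((path, label))

    for path, label in findings:
        print(f"Possible secret ({label}) in {path}")

    # Fail the pipeline stage if anything suspicious surfaced.
    raise SystemExit(1 if findings else 0)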

Account for development risks from generative AI

Further complicating the testing issue is the addition of generative AI to the development cycle. Tools such as GitHub Copilot and ChatGPT stand to greatly accelerate developer productivity, but utilizing code produced through GenAI adds more to the risk equation.

In a recent episode of the Security Table podcast, longtime AppSec veteran Jim Manico, founder of Manicode Security, explained the scenario succinctly.

"To be a developer and not to use AI is going to put you behind the eight ball real fast. To use AI as a developer is necessary because if you don't your productivity is going to be one-third to one-fourth of your peers. But if you're using AI without security review, you're screwed in a bad way."
—Jim Manico

The "Developers Behaving Badly" report found that most developers are failing to do that review. Approximately 61% of respondents said they've used untested code generated by ChatGPT, and more than a quarter do it regularly.

Holistic AppSec programs will need the policies, developer education, tooling, and security guardrails necessary to meet these AI risks head on, because with tools like GitHub Copilot, it is inevitable that generative AI will be embedded in developer processes.

Low-code/no-code: A call to action on guardrails

Speaking of inevitability, another major risk looming for organizations is low-code/no-code development, for both professional developers and citizen developers. The issue didn't make it into the "Developers Behaving Badly" survey, but when combined with generative AI, it is poised to cause the number of applications needing security scrutiny to mushroom.

Michael Bargury, founder of low-code/no-code security firm Zenity and author of the OWASP Top 10 for Low-Code, said the situation was already getting out of control.

"How does application security look when you are taking all your business users under your umbrella and allowing them to push code? And we are seeing [generative AI] make this even more of an issue — we're seeing thousands of applications being developed by AI in low-code/no-code environments and being directly deployed to production."
—Michael Bargury

Bargury said Zenity is working with many Fortune 100 companies that are grappling with how to create a holistic AppSec program that covers the enormous body of apps produced this way. He described one engagement with a security team that had been looking at applications built by generative AI across its entire organization — 500 AI-derived applications, "and that was before they realized they hadn't accounted for low-code apps."

Once the company was able to get a software bill of materials (SBOM) on the low-code environment, it found that it had about 7,000 applications that were built by low-code with generative AI.

"The magnitude is enormous."
—Michael Bargury

At the same time, there's no stopping the tide of low-code/no-code. Just as with the rest of their development environments, modern AppSec teams will need to start building automated guardrails and testing into low-code/no-code development in order to attain holistic AppSec.

Get up to speed on key trends and learn expert insights with The State of Software Supply Chain Security 2024. Plus: Explore RL Spectra Assure for software supply chain security.
