The current software development landscape can best be described by one word: velocity. Code is now produced at an unprecedented rate with the assistance of artificial intelligence (AI). AI coding has enabled dev teams to produce and publish software faster. It's also empowering people with limited or no coding knowledge to craft apps.
This development reality is now a perfect storm for software risk management. With pipelines flooded with code, application security (AppSec) teams are hard-pressed to secure it — especially if they can't persuade developers that security is a team sport.
Neil Carpenter, a principal solution architect at the secure container firm Minimus, said it’s an oft-made joke in security circles that developers don’t care about security. "I don’t think that’s actually true," he said.
Neil Carpenter: "I think we’ve often failed to provide consumable security visibility, telemetry, and intel for developers. We either show them nothing or overwhelm them with too much information."
Here are five action items that can help AppSec teams cope with the current state of application development and persuade developers to buy in to security.
Ensar Seker, CISO of SOCRadar, said that implementing “shift-left” effectively means weaving security into development workflows in a developer-friendly way. He recommends automating security checks in CI/CD pipelines and providing developers with tools that give clear, actionable feedback early in the coding process.
Ensar Seker: "For example, integrating static analysis or dependency checks that run during builds can catch issues before they reach production without requiring developers to context-switch or become security experts overnight."
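As a minimal sketch of what that can look like in practice, the script below runs a dependency audit during the build and turns the raw findings into short, actionable messages for the developer. It assumes a Python project and the open-source pip-audit tool; the JSON field names follow recent pip-audit releases and may need adjusting for your version, so treat this as an illustration rather than a drop-in gate.

```python
#!/usr/bin/env python3
"""Illustrative CI gate: audit dependencies and give developers actionable feedback.

Assumes a Python project and the open-source pip-audit tool; JSON field names
follow recent pip-audit releases and may need adjusting for your version.
"""
import json
import subprocess
import sys


def main() -> int:
    # Run the audit and capture machine-readable results.
    proc = subprocess.run(
        ["pip-audit", "--format", "json"],
        capture_output=True, text=True,
    )
    try:
        report = json.loads(proc.stdout or "{}")
    except json.JSONDecodeError:
        print("Could not parse audit output; failing closed.")
        return 1

    findings = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulns", []):
            fixes = ", ".join(vuln.get("fix_versions", [])) or "no fix published yet"
            findings.append(f"{dep['name']} {dep['version']}: {vuln['id']} (fix: {fixes})")

    if findings:
        print("Vulnerable dependencies found -- address these before merging:")
        for line in findings:
            print("  -", line)
        return 1  # non-zero exit fails the pipeline stage

    print("No known-vulnerable dependencies detected.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```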
Stuart McClure, CEO of Qwiet AI, said that by integrating security tools directly into developer workflows, security is not an afterthought "but an end-to-end part of the development process."
Stuart McClure: "This approach prioritizes actionable, high-quality findings, avoiding alert fatigue and enhancing security posture."
Seker also emphasized the importance of fostering a supportive culture. He said security and development teams need to collaborate, with security helping developers interpret scan results and prioritize fixes, "truly making security a team sport."
Sean Wright, head of application security at Featurespace, said that embedding a security culture is vital for shift-left. It helps developers understand the importance of security — and the importance of "thinking about security from the very beginnings of their work."
Sean Wright: "As this would be a cultural shift, bringing developers on board as opposed to forcing them is going to be very important for any chance of success."
Chris Romeo, CEO of the threat modeling firm Devici, said AppSec leaders must keep any activities as lightweight and tuned as possible. "Instead of telling developers to 'threat model,' define a lightweight process that is time-boxed and has a defined output, with guidance and assistance from the AppSec team to coach developers to success," he advised.
Efficient management of software bills of materials (SBOMs) depends on automation, said Eric Schwake, director of cybersecurity strategy at Salt Security. He said organizations should automate SBOM creation during each build, integrate them into existing dependency management and CI/CD systems, and concentrate on using SBOMs for actionable vulnerability insights rather than merely raw information.
Eric Schwake: "Focusing on critical vulnerabilities within SBOMs helps development teams tackle the most significant risks without feeling overwhelmed."
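A sketch of the generation side might look like the step below, which produces a CycloneDX SBOM on every build and stores it alongside the build artifacts. The Syft CLI, the CI_BUILD_ID variable, and the output path are assumptions used for illustration; substitute whatever SBOM generator and CI conventions your pipeline already uses.

```python
#!/usr/bin/env python3
"""Illustrative build step: generate a CycloneDX SBOM and keep it with the artifacts.

Assumes the open-source Syft CLI is on the PATH; the CI_BUILD_ID variable and
output location are placeholders for whatever your pipeline uses.
"""
import os
import subprocess
import sys

BUILD_ID = os.environ.get("CI_BUILD_ID", "local")  # placeholder CI variable
OUTPUT = f"artifacts/sbom-{BUILD_ID}.cdx.json"


def main() -> int:
    os.makedirs("artifacts", exist_ok=True)
    # Ask Syft to scan the project directory and emit CycloneDX JSON on stdout.
    result = subprocess.run(
        ["syft", "dir:.", "-o", "cyclonedx-json"],
        capture_output=True, text=True,
    )
    if result.returncode != 0 or not result.stdout:
        print("SBOM generation failed; treat this build as incomplete.")
        return result.returncode or 1

    with open(OUTPUT, "w") as fh:
        fh.write(result.stdout)
    print(f"SBOM written to {OUTPUT}; attach it to the build record.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```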
Dwayne McDaniel, developer advocate at GitGuardian, said that while the security community loves to talk about shifting left, SBOMs are one place that needs to shift right. SBOMs are too often treated as static documents produced during development, going stale as soon as they are generated. Companies need to move to live introspection of code as it moves toward production, keeping a living record of what is actually in the code executing on the server, not just the code the developer pushed days or weeks ago, he said.
Dwayne McDaniel: "Automation of SBOM generation as point-in-time artifacts would then be as simple as an automated script run from the developer's perspective."
SBOMs are essential for understanding your software supply chain and spotting vulnerable components, but they shouldn’t become a bottleneck, said SOCRadar's Seker.
He recommends generating SBOMs automatically as part of the build process. Then use policy-as-code to automatically evaluate those SBOMs against security and compliance requirements. "This lets you, for instance, automatically fail a build if a disallowed library version appears — all without manual effort," Seker explained.
Ensar Seker: "Automating SBOM management dramatically reduces friction between security and development teams."
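A minimal sketch of the policy-as-code check Seker describes is below: it reads a CycloneDX JSON SBOM and fails the build when a disallowed component version appears. The deny-list entries and file name are hypothetical examples, not recommendations.

```python
#!/usr/bin/env python3
"""Illustrative policy-as-code check: fail the build on disallowed component versions.

Reads a CycloneDX JSON SBOM; the deny-list entries below are hypothetical.
"""
import json
import sys

# Hypothetical policy: component name -> version prefixes that must not ship.
DENY_LIST = {
    "log4j-core": ["2.14", "2.15"],
    "examplelib": ["1.0"],
}


def violations(sbom_path: str) -> list[str]:
    with open(sbom_path) as fh:
        sbom = json.load(fh)
    problems = []
    for comp in sbom.get("components", []):
        name, version = comp.get("name", ""), comp.get("version", "")
        for banned in DENY_LIST.get(name, []):
            if version.startswith(banned):
                problems.append(f"{name} {version} is disallowed by policy")
    return problems


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "sbom.cdx.json"
    found = violations(path)
    for item in found:
        print(item)
    sys.exit(1 if found else 0)  # non-zero exit fails the CI job without manual effort
```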
Seker also advised keeping SBOM data in a centralized, accessible repository so developers and security can easily query components and vulnerabilities. "By treating SBOM checks as an invisible part of CI," he said, "you maintain a rapid development pace while gaining confidence that components are tracked and safe."
Larry Maccherone, founder and CTO of Transformation.dev, said the SBOM itself isn't the important thing. "Even vulnerabilities reported in the SBOM are just a part of the workflow," he said.
What's important is that each actively developed application stays relatively current with its dependencies, Maccherone said. "You should be no more than one major release behind current, and as soon as there is a major release, you schedule the work to upgrade to that. That way, when there is a dangerous vulnerability, you can easily upgrade without breaking anything."
Larry Maccherone: "The only way I've seen development teams achieve this is to have a fairly robust automated test suite. Without that, no one is willing to take the risk to upgrade."
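For a Python project, Maccherone's one-major-release rule could be checked with something like the sketch below, which compares installed versions against the latest release on PyPI. The version parsing is deliberately naive and the package list is passed in by the caller; a real implementation would use packaging-aware comparison and your dependency manifest.

```python
#!/usr/bin/env python3
"""Illustrative check: flag dependencies more than one major release behind.

Uses PyPI's public JSON API and naive major-version parsing; real projects
would use packaging-aware version handling and their own manifest.
"""
import json
import sys
from importlib import metadata
from urllib.request import urlopen


def major(version: str) -> int:
    try:
        return int(version.split(".")[0])
    except ValueError:
        return 0  # non-numeric schemes are skipped by this naive check


def latest_on_pypi(package: str) -> str:
    with urlopen(f"https://pypi.org/pypi/{package}/json") as resp:
        return json.load(resp)["info"]["version"]


def main(packages: list[str]) -> int:
    lagging = []
    for pkg in packages:
        installed = metadata.version(pkg)
        latest = latest_on_pypi(pkg)
        if major(latest) - major(installed) > 1:
            lagging.append(f"{pkg}: installed {installed}, latest {latest}")
    for line in lagging:
        print("More than one major release behind:", line)
    return 1 if lagging else 0


if __name__ == "__main__":
    # Example: python dependency_lag.py requests flask
    sys.exit(main(sys.argv[1:]))
```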
Developers play a pivotal role in API security, Seker said. He recommended APIs be built using secure coding practices: enforce strong authentication and authorization, and avoid exposing unnecessary data. Many common API vulnerabilities — like broken authentication, injection flaws, or excessive data exposure — can be prevented by careful coding and design, he said.
For example, a developer can ensure that an endpoint verifies a user’s token and access level before returning sensitive data, or that query parameters are sanitized to prevent SQL/NoSQL injection, Seker said.
Ensar Seker: "In essence, when developers treat security as a core feature of the API — just like performance or functionality — they significantly harden the application. It’s encouraging to see more developers thinking this way and even collaborating with security teams during design and code reviews to get API security right from the start."
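Seker's example translates into something like the Flask sketch below: the endpoint checks the caller's token and access level before returning data, binds user input with a parameterized query, and returns only the fields the client needs. The token lookup, routes, and schema are placeholders, not a production design.

```python
"""Illustrative Flask endpoint: verify the caller and keep input out of the SQL string.

The token lookup, routes, and schema are placeholders, not a production design.
"""
import sqlite3

from flask import Flask, abort, jsonify, request

app = Flask(__name__)


def caller_from_token(token: str) -> dict | None:
    # Placeholder: a real service would validate a signed token against its
    # identity provider and return the caller's claims.
    valid_tokens = {"example-admin-token": {"user_id": 1, "role": "admin"}}
    return valid_tokens.get(token)


@app.get("/users/<int:user_id>/payments")
def payments(user_id: int):
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    caller = caller_from_token(token)
    if caller is None:
        abort(401)  # no valid token: reject before touching data
    if caller["role"] != "admin" and caller["user_id"] != user_id:
        abort(403)  # authenticated, but not authorized for this record

    conn = sqlite3.connect("app.db")
    # Parameterized query: user input is bound, never concatenated into SQL.
    rows = conn.execute(
        "SELECT id, amount, created_at FROM payments WHERE user_id = ?",
        (user_id,),
    ).fetchall()
    conn.close()
    # Return only the fields the client needs -- no excessive data exposure.
    return jsonify([{"id": r[0], "amount": r[1], "created_at": r[2]} for r in rows])
```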
Salt Security's Schwake explained that the current development landscape is characterized by a rapid increase in API creation, driven primarily by machine-to-machine communication and the adoption of AI agents.
He said that security strategies must go beyond traditional application security by incorporating continuous, real-time API security and posture governance.
Without visibility into how AI agents and applications interact through APIs, organizations risk unrecognized vulnerabilities that threaten data integrity, operational resilience, and trustworthiness.
GitGuardian's McDaniel said that teaching developers to have an adversarial mindset goes a long way to improving overall API security. "Devs love thinking in terms of the 'happy path' when it comes to functionality. They test for what they believe the user is likely to do based on what they themselves would do," he said.
Getting development teams in the mindset that every API call is a chance for an attacker to do something not on the happy path will encourage them to test unusual scenarios, such as enumeration and changing random values, to see what happens, McDaniel said. "This will close a lot of security holes that they currently leave open."
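In practice, that mindset shows up as tests like the ones sketched below, written here with pytest and the requests library against a hypothetical /users/<id>/payments endpoint in a staging environment. The URL, token, and expected status codes are assumptions; the point is to probe what an attacker would try, not just the happy path.

```python
"""Illustrative unhappy-path API tests (pytest + requests).

The base URL, token, endpoint, and expected status codes are hypothetical;
adapt them to your own API and test environment.
"""
import requests

BASE = "https://staging.example.com"  # hypothetical test environment
USER_TOKEN = {"Authorization": "Bearer example-user-token"}  # ordinary user


def test_cannot_enumerate_other_users_records():
    # A normal user asks for other people's data by guessing IDs.
    for other_id in (1, 2, 9999):
        resp = requests.get(f"{BASE}/users/{other_id}/payments",
                            headers=USER_TOKEN, timeout=10)
        assert resp.status_code in (403, 404), (
            f"ID {other_id} returned {resp.status_code}; possible enumeration hole"
        )


def test_rejects_unexpected_parameter_values():
    # Throw odd values at the endpoint and make sure it fails safely.
    for bad in ("0 OR 1=1", "-1", "abc", "9" * 40):
        resp = requests.get(f"{BASE}/users/{bad}/payments",
                            headers=USER_TOKEN, timeout=10)
        assert resp.status_code in (400, 403, 404)


def test_missing_token_is_rejected():
    resp = requests.get(f"{BASE}/users/1/payments", timeout=10)
    assert resp.status_code == 401
```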
Gartner notes in a recent research report that by 2027, more than 75% of all AI deployments will use container technology as the underlying compute environment, up from less than 50% in 2024.
McDaniel said to remember that AI is just another new type of application. "At its heart, it is still just running code requesting resources. Getting back to basics and thinking through container security, such as running minimal images with no root permissions and where no credentials are exposed, is going to make any application more secure," he said.
Using open-source frameworks like Container Hardening Priorities (CHP) to perform container configuration scanning can help ensure that anything going to production is as secure as possible, regardless of whether it involves AI or not.
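As one concrete basics check, the sketch below inspects a built image with docker inspect and flags images that run as root or that bake credential-looking values into environment variables. The secret-name heuristic is deliberately simple and illustrative; a real pipeline would pair this with a proper secrets scanner and configuration policy.

```python
#!/usr/bin/env python3
"""Illustrative pre-ship container check: flag root users and credential-looking env vars.

Parses `docker inspect` output; the secret-name heuristic is intentionally
simple and should be backed by a real secrets scanner in practice.
"""
import json
import subprocess
import sys

SECRET_HINTS = ("PASSWORD", "SECRET", "TOKEN", "API_KEY")  # illustrative heuristic


def check(image: str) -> list[str]:
    out = subprocess.run(["docker", "inspect", image],
                         capture_output=True, text=True, check=True).stdout
    config = json.loads(out)[0]["Config"]
    problems = []

    user = (config.get("User") or "").strip()
    if user in ("", "0", "root"):
        problems.append(f"{image}: runs as root (User={user or 'unset'})")

    for env in config.get("Env") or []:
        name = env.split("=", 1)[0].upper()
        if any(hint in name for hint in SECRET_HINTS):
            problems.append(f"{image}: env var {name} looks like a baked-in credential")
    return problems


if __name__ == "__main__":
    issues = check(sys.argv[1])
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)
```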
Carpenter said to focus on universal security problems as a starting point. "Machine learning and large language models may introduce new, exotic risks, but you still have to cover the basics," he said.
Seker recommends starting with minimal, trusted base images and keeping them updated. "Many AI containers bundle large frameworks, so ensure those are the latest, patched versions," he said.
He advised that teams regularly scan their container images, and the machine learning libraries within them, for known vulnerabilities, since outdated libraries in AI workloads can be a hidden risk. He also said to run AI containers with least privilege, avoid running them as root, and restrict their access to only the resources they truly need, such as GPU access, specific data volumes, and network calls.
Ensar Seker: "Isolation is critical. Containerize AI services instead of letting them run directly on the host, which provides a security boundary."
Salt Security's Schwake advocates that AppSec teams prepare for GenAI vulnerabilities by understanding emerging attack methods like prompt injection, data poisoning, and model exfiltration. This includes securing data pipelines that supply GenAI models, safeguarding the APIs used to access these models and connect with other services, and implementing clear governance policies for AI use and data management. "Proactive threat modeling of GenAI systems is essential," he said.
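One small example of that thinking: treat anything a user sends to an LLM-backed endpoint as untrusted input. The sketch below keeps system instructions separate from user text and applies a naive screen for injection-style phrases before the model is called. The patterns and prompt are made up for illustration, and pattern matching alone is not a robust defense against prompt injection; it belongs alongside model-side and output-side controls identified through threat modeling.

```python
"""Toy illustration: treat user input to an LLM-backed endpoint as untrusted.

The patterns and prompt below are hypothetical, and pattern matching alone is
not a robust prompt-injection defense; it is one layer among several.
"""
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"reveal (your )?(system prompt|credentials|api key)", re.I),
]

SYSTEM_PROMPT = "You are a support assistant. Answer only from the provided context."


def build_messages(user_text: str, context: str) -> list[dict]:
    # Screen obviously malicious phrasing; anything flagged goes to review
    # instead of straight to the model.
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            raise ValueError("Input flagged for review: possible prompt injection")

    # Untrusted input never gets merged into the system instructions; the
    # context comes from our own (secured) data pipeline.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion:\n{user_text}"},
    ]
```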
Devici's Romeo said AppSec teams must focus on security education and threat modeling. "Teach your developers about the OWASP Top Ten for LLM, and expand into the MAESTRO framework for agentic AI threats. Wrap all this together by sharing potential threats against AI that developers can consider in the threat modeling process," he said.
Seker said that GenAI is changing how we develop software, and AppSec teams need to adapt quickly. One big concern is that AI-assisted coding can introduce insecure code at a much faster rate. Code generation tools might suggest vulnerable snippets or use outdated libraries, so governance and review of AI-generated code are crucial, he said. "I advise treating AI contributions like any other third-party code. Run them through rigorous code review, static analysis, and even dynamic testing to catch flaws," he said.
GenAI is really changing the game, said Transformation.dev's Maccherone. "If you haven't already shifted left, you're in big trouble because code is coming out faster than ever. A traditional, 'check it after it's built' approach is doomed," he said.
Security must now operate at the speed of code, which means adopting new ways of working with developers. Shifting security everywhere will help, but overburdening developers with multiple tools is not the way, said Qwiet AI's McClure. He recommended security teams prioritize education and fixes in the IDE to help developers avoid the "swivel chair effect" and build better collaborative practices.
GitGuardian's McDaniel said he has never seen this kind of acceleration in the development space, driven by AI tooling, which is letting people vibe code their way to insecure apps faster than ever. "We need people to communicate better and review each other's work more than ever. Unfortunately, teams are seeing reduced headcount and loss of business logic expertise that no machine is going to replace," he said.
Dwayne McDaniel: "Unless we can retrain the AI on only secure code, using it to produce applications faster and faster, as business demands, is going to continue to make us less secure."
Trends like vibe coding, with individuals who have little coding knowledge churning out code, are a worry, said Featurespace's Wright. "This personally has me worried, as we may see a resurgence of applications that are not particularly secure. Current AI models are not fantastic at producing secure code and combining that with someone who has little to no knowledge regarding security is not going to end well," he said.
However, modern development technology can offer helpful tooling that can aid developers, such as code scanning tools that can provide recommended code fixes for security-related findings, Wright said.
Sean Wright: "It is going to be an interesting few years ahead, certainly with many challenges, but also opportunities."
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.