A trio of AI experts raised eyebrows earlier this year when they revealed their ambitious plans to use artificial intelligence (AI) tools to automate all white-collar jobs "as fast as possible." At the top of the hit list: software developers. "[W]e’ll eventually reach a point when AIs can perform the full range of activities involved in software engineering," the three — Matthew Barnett, Tamay Besiroglu, and Ege Erdil — wrote on the website of Mechanize, the company they founded.
So what will happen to software engineers? They will "transition into adjacent positions that rely on similar expertise but are significantly harder to automate, such as software engineering management, product management, or executive leadership within software companies," the experts said.
However, fully automating software engineering — completely eliminating the need for people with development expertise — is a bigger deal than building AI that can write code, they wrote. "We’ll only truly know we’ve succeeded once we’ve created AI systems capable of taking on nearly every responsibility a human could carry out at a computer."
"[W]hile at some point the software engineering profession will become fully automated, this milestone may only occur at a surprisingly late point in time — likely after AIs have already taken over a large share of white-collar jobs throughout the broader economy."
—Matthew Barnett, Tamay Besiroglu, and Ege Erdil
Here's what you need to know about fully autonomous coding — and how your organization can get out in front of the application security (AppSec) risk it is sure to add.
[ Report: How AI Impacts Supply Chain Security | Blog: How to Secure your AI with an ML-BOM ]
A 'vulnerability factory'
In laying out their expansive vision for AI in software development, the trio made no mention of security, even though AI-generated code can turn automated development into "an automated vulnerability factory," in the words of Ensar Seker, CISO of SOCRadar. “Automation significantly accelerates the pace of software delivery, but it can also amplify security risks if not governed properly," he said.
“Vulnerabilities introduced by AI-generated code, insecure default configurations, and overly trusted dependencies can all scale faster than traditional SDLC models. This means that security must shift even further left, embedding validation, threat modeling, and anomaly detection directly into AI pipelines.”
—Ensar Seker
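To make that "shift even further left" concrete, here is a minimal Python sketch of a pre-merge gate that inspects only the added lines of a diff for obvious secret patterns before AI-generated code is allowed to land. Everything here is illustrative: the patterns are a toy subset, and a production pipeline would delegate to dedicated secret-scanning and SAST tools rather than a handful of regexes.

```python
import re
import sys

# Illustrative-only patterns; a real gate would rely on a dedicated
# secret scanner, not a handful of regexes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|token)\s*=\s*['\"][^'\"]{16,}"),
]

def scan_diff(diff_text: str) -> list[str]:
    """Return findings for lines added in a unified diff."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        # Only inspect added code; skip the '+++' file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"diff line {lineno}: matches {pattern.pattern}")
    return findings

if __name__ == "__main__":
    problems = scan_diff(sys.stdin.read())
    for p in problems:
        print(f"BLOCKED: {p}")
    sys.exit(1 if problems else 0)  # nonzero exit fails the pipeline stage
```

Wired into CI as a required stage (for example, piping `git diff origin/main` into the script), a nonzero exit blocks the merge. That is the basic mechanism behind embedding validation directly into the pipeline rather than reviewing AI output after the fact.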
Elango Senthilnathan, a solutions architect with the security firm Qwiet AI, said that while automated software development offers some advantages, the challenges it presents outweigh them for now.
“To effectively address security challenges, AI engines need extensive tuning and contextual input. We strongly advocate for shifting left, addressing security as early as possible in the development lifecycle."
—Elango Senthilnathan
Stephen Kowski, field CTO at SlashNext, said that automated software development demands a significant shift in how security is approached.
“Instead of just addressing human coding mistakes, the emphasis is on securing the automation pipeline and verifying that all automatically produced code and configurations are secure. This requires security to be integrated from the very beginning, adopting a Secure by Design philosophy that is enforced through automated tools.”
—Stephen Kowski
Automating software development also means automating the tests and scrutiny applied to the development pipeline, said Iftach Ian Amit, founder and CEO of Gomboc.AI.
“Since right now software development automation is riddled with security issues — by nature it learns from other software code, which is also prone to security issues — that means security has more work to do on code created by AI."
—Iftach Ian Amit
Next steps for AppSec teams
SOCRadar’s Seker advised security teams to evolve in parallel with automated software development by integrating automated code-scanning tools, setting strict governance policies for AI-generated code, and embedding real-time monitoring into the continuous integration/continuous delivery (CI/CD) process.
Training is also key. Seker said developers and security analysts alike need to understand how large language models (LLMs) make decisions, what their limitations are, and how to validate their outputs. "Just as DevOps birthed DevSecOps, the next wave might be AISecOps, where securing autonomous development becomes its own discipline,” Seker said.
SlashNext’s Kowski recommended that AppSec teams embed security controls into the automated development pipeline, including integrated security testing for code, configurations, and APIs within CI/CD systems. “It's vital to establish strong API posture-governance controls to ensure that generated APIs are secure from the start and continuously monitored for vulnerabilities and unusual activity. Additionally, protecting the AI agents and tools powering this automation is crucial,” he said.
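One way to read that advice concretely is to run each class of check (code, configurations, and APIs) as a blocking stage in CI/CD. The Python sketch below aggregates the gates so that a finding in any one of them stops the merge. The command names are placeholders standing in for whatever SAST, secret-scanning, and API-linting tools an organization actually runs; they are hypothetical, not real CLIs.

```python
import subprocess
import sys

# Placeholder commands: substitute your organization's actual SAST,
# secret-scanning, and API-linting tools. These names are hypothetical.
CHECKS = {
    "sast":        ["run-sast", "--target", "src/"],
    "secrets":     ["run-secret-scan", "--target", "."],
    "api_posture": ["run-api-lint", "--spec", "openapi.yaml"],
}

def run_checks() -> dict[str, int]:
    """Run each check, collecting exit codes; nonzero means findings."""
    results = {}
    for name, cmd in CHECKS.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = proc.returncode
        if proc.returncode != 0:
            print(f"[{name}] findings:\n{proc.stdout}{proc.stderr}")
    return results

if __name__ == "__main__":
    failed = [name for name, rc in run_checks().items() if rc != 0]
    # Fail the pipeline if any gate reports findings, so insecure
    # generated code or API definitions never reach deployment.
    sys.exit(1 if failed else 0)
```

The design point is that each gate is independent and the pipeline fails closed: any finding blocks promotion by default, instead of relying on a human to notice a scanner's warning.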
The rapid rise of AI coding is highlighting the need for standards, Seker stressed.
“As more organizations embrace autonomous software engineering, a lack of common frameworks for validation, explainability, and secure agent interaction poses a systemic risk. We should prioritize the creation of security benchmarks and red-teaming guidelines tailored specifically for AI-driven development environments.”
—Ensar Seker
RIP, software developers?
Despite the efforts of Mechanize, software developers are unlikely to be replaced anytime soon. “Human creativity and judgment are still required to build modern software,” argued Gomboc.AI’s Amit. “Automation definitely helps with the rote tasks and the basic elements, but just like some recent Apple research concluded, none of the promises of ‘reasoning’ by AI providers are grounded, and the coding assistants are prone to a high rate of mistakes when it comes to security and reliability.”
Seker agreed: “We may eventually reach a point where large portions of the software development lifecycle are automated, especially in tasks like code generation, testing, and deployment, but full automation remains unlikely in the near future.”
“Software engineering is not just syntax and logic,” Seker continued. “It involves abstract problem-solving, prioritization, ethics, and context awareness that AI still struggles with. Instead of full automation, we’re heading toward co-piloted development, where humans remain in charge of high-level reasoning and architectural decisions, while AI handles repetitive tasks at scale.”
Modern AppSec tooling is necessary
A better approach to managing software risk is to employ next-generation technologies such as complex binary analysis and reproducible builds to complement traditional AppSec testing tools: static and dynamic application security testing (SAST and DAST), as well as software composition analysis (SCA).
The Enduring Security Framework, a public/private working group led by the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA), has called for the use of binary analysis and reproducible builds to identify and manage risk. These more modern tools produce actionable threat information about the software and services deployed within IT environments, including the presence of active malware, evidence of software tampering, the absence of application hardening, and exposed secrets. This strategy makes security teams more proactive in mitigating risk.
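To make the reproducible-builds idea concrete: the core check is that two independent builds of the same source produce bit-for-bit identical artifacts, so any divergence flags nondeterminism or possible tampering. Below is a minimal Python sketch of that comparison; the file names and usage are illustrative, and real reproducible-build verification also has to pin toolchains, timestamps, and build paths so that honest builds actually converge.

```python
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 to avoid loading it whole."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_repro.py artifact_from_build_a artifact_from_build_b
    a, b = Path(sys.argv[1]), Path(sys.argv[2])
    hash_a, hash_b = sha256_of(a), sha256_of(b)
    if hash_a == hash_b:
        print(f"MATCH {hash_a}: builds are bit-for-bit identical")
        sys.exit(0)
    print(f"MISMATCH:\n  {a}: {hash_a}\n  {b}: {hash_b}")
    sys.exit(1)  # divergent builds: investigate nondeterminism or tampering
```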
In contrast, SAST and DAST typically apply only to a small subset of internally developed systems and applications at many organizations, said Saša Zdjelar, chief trust officer at ReversingLabs. He said the recommended use of binary analysis and reproducible builds marked a significant step forward in ensuring better software supply chain security.
"Our ability to analyze binaries is key to understanding risk in third-party software."
—Saša Zdjelar
New NIST guidance identifies key AI and ML challenges. Learn why ReversingLabs Spectra Assure should be an essential part of your solution.
Keep learning
- Read the 2025 Gartner® Market Guide to Software Supply Chain Security. Plus: See RL's webinar for expert insights.
- Get the white paper: Go Beyond the SBOM. Plus: See the webinar: Welcome CycloneDX's xBOM.
- Go big-picture on the software risk landscape with RL's 2025 Software Supply Chain Security Report. Plus: See our webinar for a discussion of the findings.
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and learn how RL discovered the novel threat.
Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.