

The Week in Security: Google Cloud Build permissions can be poisoned, WormGPT weaponizes AI

Kate Tenerowicz, Former Summer Intern at ReversingLabs


Welcome to the latest edition of The Week in Security, which brings you the newest headlines from both the world and our team across the full stack of security: application security, cybersecurity, and beyond. This week: Google Cloud Build permissions can be abused to poison production environments. Also: A new AI model allows cybercriminals to launch sophisticated phishing attacks.  

This Week’s Top Story

Attackers can abuse Google Cloud Build to poison production environments

Security researchers at Orca Security have uncovered a new vulnerability that can compromise production environments in Google Cloud Build — a CI/CD platform that is part of Google Cloud. Cloud Build allows development organizations to integrate source code from different code repositories or cloud storage spaces and run builds, and it integrates with Google Cloud services such as Artifact Registry, Google Kubernetes Engine, and App Engine. Orca researchers discovered that Cloud Build's user permissions can be abused to poison production environments, with potentially catastrophic results.

The Orca researchers discovered the flaw in a Cloud Build permission, cloudbuild.builds.create, which gives users the ability to create new builds. An attacker could leverage this permission to elevate the privileges of a compromised, lower-privileged account, allowing it to masquerade as a Cloud Build service account and access source code and resources such as software artifacts. Using this flaw, for example, an attacker could pull container images used by Google Kubernetes Engine (GKE) from the Artifact Registry and inject them with malicious code. That code then executes when the compromised image is launched by GKE, creating a backdoor that malicious actors can use for remote code execution.

Any application built from these manipulated images is vulnerable to backdoor deployment, which can result in denial-of-service (DoS) attacks and data theft. If those applications are deployed in customer environments, the risk grows exponentially. Malicious actors can then deliver the final blow: a software supply chain attack with an impact similar to the SolarWinds or 3CX incidents.

This discovery highlights the potential risks lurking within cloud-based infrastructure and the need for constant vigilance as the threat landscape continues to shift. Users of Google Cloud Platform are advised to restrict the permissions granted to the Cloud Build service account according to the principle of least privilege.
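As a rough illustration of that least-privilege advice, the sketch below shows how a project's default Cloud Build service account (which typically takes the form PROJECT_NUMBER@cloudbuild.gserviceaccount.com) could be audited and trimmed with the gcloud CLI. The project ID, project number, and the specific roles shown here are placeholders for illustration only; the roles your builds actually need will differ.

    # Hypothetical project values used only for illustration.
    PROJECT_ID="example-project"
    PROJECT_NUMBER="123456789012"
    BUILD_SA="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com"

    # List the roles currently granted to the Cloud Build service account.
    gcloud projects get-iam-policy "$PROJECT_ID" \
        --flatten="bindings[].members" \
        --filter="bindings.members:${BUILD_SA}" \
        --format="table(bindings.role)"

    # Remove an overly broad grant the build account does not need.
    gcloud projects remove-iam-policy-binding "$PROJECT_ID" \
        --member="${BUILD_SA}" --role="roles/editor"

    # Re-grant only the narrow roles the pipeline actually requires,
    # for example permission to write build logs.
    gcloud projects add-iam-policy-binding "$PROJECT_ID" \
        --member="${BUILD_SA}" --role="roles/logging.logWriter"

The same audit applies to any custom service accounts used by builds; the point is simply that the build identity should not hold broad project-level roles it never exercises.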

News Roundup

Here are the stories we’re paying attention to this week…    

WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks (The Hacker News)

A new generative AI cybercrime tool called WormGPT is being advertised on underground forums as a way to launch sophisticated phishing and business email compromise (BEC) attacks. It operates without the ethical boundaries that limit most public large language models (LLMs), such as ChatGPT. WormGPT can be used in place of these ‘ethical’ LLMs to draft highly convincing fake emails personalized to the individual recipient, warned Daniel Kelly of the firm SlashNext.

FIN8 Modifies 'Sardonic' Backdoor to Deliver BlackCat Ransomware (Dark Reading)

FIN8 has made a resurgence online, using a revised version of its ‘Sardonic’ backdoor to launch BlackCat ransomware attacks. FIN8 is a well-known, financially motivated cybercrime group that has a habit of constantly reinventing its tactics. The revamped ‘Sardonic’ — first made public in 2021 — retains many of its original characteristics, but it evades detection methods designed for the 2021 version and expands the hackers' flexibility and capabilities.

US govt bans European spyware vendors Intellexa and Cytrox (Bleeping Computer) 

The U.S. government has banned European commercial spyware manufacturers Intellexa and Cytrox, citing risks to U.S. national security and foreign policy interests. The decision was motivated by the involvement of the four corporate entities behind the two vendors in trafficking cyber exploits, and by their role in sustaining a global climate of repression and human rights violations.

Linux Ransomware Poses Significant Threat to Critical Infrastructure (Dark Reading)

Linux runs on about 80% of web servers, often in the government, manufacturing, energy, and banking sectors. It is the backbone of the Internet, and it is the new frontier for cybercriminals operating ransomware, warns Jon Miller, CEO of the firm Halcyon, in an opinion piece. Ransomware gangs have been introducing Linux versions of their malware at an increasing pace, with attacks now coming from some of the most infamous groups, Miller said. The cybersecurity field needs to get ahead of this major threat by focusing more attention on Linux defenses and security.

If George Washington Had a TikTok, What Would His Password Be? (Dark Reading) 

An experiment run on ChatGPT found that the AI model can — if given the correct parameters and wording — generate a password list for a specific individual on a specific platform. In this case, the experimenters generated a list of candidate passwords for George Washington's TikTok account. Despite the goofy premise, the implications are serious: any individual could stand in for George Washington, with AI crafting hundreds of potential passwords for that specific user. This kind of information could be handed to a hacker on a silver platter. Other tools can create password lists, but none with this level of ease and simplicity. The experiment further demonstrates the weakness of password-based authentication, and how AI weakens it even more.

