Toyota stands accused of lax DevOps standards, as the company reveals it stored prod database credentials in a public GitHub repo. That’s bad enough, but the leak also went undetected for five years.
Easy to mock, but could it happen to you? What DevOps processes do you use to prevent a similar incident? And do those processes have management support?
It’s not the first time this has happened. In this week’s Secure Software Blogwatch, we know it won’t be the last.
Your humble blogwatcher curated these bloggy bits for your entertainment. Not to mention: World of tomorrow.
Do: Detect daft devs defying doctrine
What’s the craic? Satoshi Sugiyama reports — “Toyota says about 296,000 pieces of customer info possibly leaked”:
“Possibility of spamming, phishing”
Toyota said 296,019 email addresses and customer numbers of those using T-Connect, a telematics service that connects vehicles via a network, were potentially leaked. … It added that third-party access "could not be completely ruled out." … The affected customers are individuals who signed up to the service's website using their email addresses since July 2017.
The Japanese automaker … cautioned that there is a possibility of spamming, phishing scams and unsolicited email messages being sent to the users' email addresses. [It] said a contractor that developed the T-Connect website accidentally uploaded parts of the source code with public settings.
Sounds like the details got mangled in reporting. Bill Toulas managed to uncover the real issue — “Access key exposed on GitHub”:
“GitHub has begun scanning published code for secrets”
An access key was publicly available on GitHub for almost five years. … This made it possible for an unauthorized third party to access the details of 296,019 customers between December 2017 and … September 17, 2022, [when] the database's keys were changed.
This type of security incident has become a large-scale problem that places troves of sensitive data at risk of exposure. … This is typically the result of developer negligence, storing credentials in the code to make asset fetching, service access, and configuration updating quick and easy while testing multiple app iterations. These credentials should be removed when the software is ready for actual deployment.
GitHub has begun scanning published code for secrets and blocking code commits that contain authentication keys to better secure projects. However, if a developer uses non-standard access keys or custom tokens, GitHub will not be able to detect them.
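For those non-standard keys that pattern matching misses, entropy is one common fallback heuristic: random-looking, high-entropy strings stand out from ordinary identifiers. Here’s a minimal sketch in Python — the length and entropy thresholds are illustrative guesses, not anything GitHub actually uses:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_secret(token: str, min_len: int = 20, threshold: float = 4.0) -> bool:
    """Heuristic: long, high-entropy strings resemble random keys.
    Both cutoffs here are illustrative choices, not standard values."""
    return len(token) >= min_len and shannon_entropy(token) >= threshold

# A random-looking key trips the heuristic; an ordinary identifier doesn't.
print(looks_like_secret("AKxF9t2qLmZ8vRw3JnPdYb6s"))  # True
print(looks_like_secret("configuration_update"))       # False
```

The catch, of course, is false positives: UUIDs, hashes, and minified code are all high-entropy too, which is why real scanners combine entropy with known-prefix pattern rules.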
Ouch. How did it happen? Simon Sharwood says — “When your contractor leaks site source code”:
The automaker … explains an outsourced developer tasked with building T-Connect uploaded the source code for the site to a GitHub public repo in December 2017. … Thankfully, the customer management numbers stored on the server aren't much use to third parties.
But email addresses are – especially if criminals decide to fire up some Toyota-themed phishing. Perhaps the car maker needs to scrutinize its own affairs more closely too, given it experienced a cyberattack in March 2022 that shuttered its plants, sold cars susceptible to losing wheels while in motion, and faked emissions data.
What a mess. chatterhead sounds slightly sarcastic:
Oh good, glad all the keys were changed and now the folks who had access for 5 years finally don't. Phew.
When are these companies going to realize we don't give a **** if phone numbers and credit cards are leaked - numbers can be changed and purchases can be rolled back. The exposure of 5 years of behavioral data on almost 300K people is the threat. Behavior dictates economics, politics, and everything in between. Behavioral data is what real manipulative models are built around.
How can devs avoid this sort of SNAFU? u/sometimesanengineer suggests a little list:
To prevent publishing secrets:
• IDE-side pre-commit checks
• Pipeline checks for secrets
• Periodically re-scanning your repos
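That list can be wired up in surprisingly few lines. Here’s a minimal pre-commit scanner sketch in Python — the patterns are illustrative, not exhaustive; dedicated tools such as gitleaks, trufflehog, or detect-secrets ship far more comprehensive rule sets:

```python
import re
import sys

# Illustrative patterns only -- real scanners maintain hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),            # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),         # GitHub personal access token
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(password|secret|api_key)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def scan_text(text: str) -> list[str]:
    """Return every pattern match found in the given text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def main(paths: list[str]) -> int:
    """Exit non-zero if any file contains a likely secret, so the
    commit is blocked when this runs as a pre-commit hook."""
    failed = False
    for path in paths:
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for hit in scan_text(fh.read()):
                print(f"{path}: possible secret: {hit[:12]}...")
                failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Saved as (say) a hypothetical `.git/hooks/pre-commit` wrapper invoked on staged files, a non-zero exit stops the commit cold — exactly the “IDE side pre commit checks” step above.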
And, of course, fixing any problems as soon as you discover them. Unlike Toyota did. Jamesit asks the obvious question:
Why did it take two days to change the key? I thought changing the key would be a priority.
And what about the five years beforehand? TwistedGreen calls it “Massive Mismanagement”:
Not only did a developer have access to production database credentials containing customer data, but these credentials were not rotated in 5 years? Sorry, but the problem is way bigger than a "subcontractor messed up." Heads [should] roll for this.
Aye. There’s the rub. u/srgevipr argues that management must give space for devs to follow sound processes:
It's more culture than tools. … Everyone in a company should clearly understand the benefits. … Also, it should be supported by management to build it into the process.
Meanwhile, Toyota has a special place in drinkypoo’s heart:
This is the same Toyota where — when they were accused of unintended acceleration — a code review found that there were multiple code paths that could cause it, which were caused in part by Toyota engineers not … following Toyota's own coding standards — let alone well-established industry standards.
You have been reading Secure Software Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi or email@example.com. Ask your doctor before reading. Your mileage may vary. Past performance is no guarantee of future results. Do not stare into laser with remaining eye. E&OE. 30.
Image sauce: IIHS.