To be fair, we only know of this one. There may well be other open source backdoors floating around undetected. Was Heartbleed really an accident?
True. And “given enough eyeballs, all bugs are shallow” is a neat-sounding slogan from a past when there were far fewer lines of code than there are now. Sometimes it is scary to see that a vulnerability in the Linux kernel had been there for years, “waiting” to be exploited.
Still far better than a proprietary kernel made by a tech corp, carried nearly unchanged from release to release and maintained by even fewer people, who might well be adding a backdoor themselves for their government agency friends.
I’ve gotten back into tinkering on a little Rust game project; it has about a dozen dependencies on various math and gamedev libraries. When I go to build (just like with npm in my JavaScript projects), cargo needs to download and build just over 200 crates. Three of them build and run “install scripts”, which are themselves just Rust programs. I know this because my antivirus flagged each of them and I had to allow them through so my little roguelike would build.
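(For context: those cargo “install scripts” are just build.rs files that cargo compiles and runs as ordinary host programs before building the crate, so they can do anything my user account can do. A minimal, purely hypothetical sketch of a benign one:)

```rust
// build.rs: cargo compiles and runs this on the host before the crate
// itself is built. It runs with the building user's full privileges,
// which is exactly why antivirus tools get twitchy about it.
use std::env;
use std::fs;
use std::path::Path;

fn main() {
    // Typical legitimate use: generate code into cargo's OUT_DIR.
    let out_dir = env::var("OUT_DIR").expect("cargo sets OUT_DIR");
    let dest = Path::new(&out_dir).join("generated.rs");
    fs::write(&dest, "pub const BUILT_BY_SCRIPT: bool = true;\n")
        .expect("failed to write generated file");

    // Only re-run this script when it changes.
    println!("cargo:rerun-if-changed=build.rs");
}
```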
Like, what are we even supposed to tell “normal people” about security? “Yeah, don’t download files from people you don’t trust, and never run executables from the web. How do I install this programming utility? Blindly run code from over 300 people and hope none of them wanted to sneak something malicious in there.”
I don’t want to go back to the days of hand-chiseling every routine into bare silicon, but I feel like there must be a better system we just haven’t devised yet.
Do you really need to download new versions at every build? I thought it was common practice to use the oldest safe version of a dependency that offers the functionality you want. That way your project can run on less up-to-date systems.
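As far as I know, cargo mostly works that way already: Cargo.toml declares version ranges and Cargo.lock freezes the exact versions, so repeated builds don’t fetch anything newer until you run cargo update. A sketch, with made-up crate versions:

```toml
# Cargo.toml: "0.8" means any semver-compatible 0.8.x (caret semantics),
# while "=1.0.197" pins exactly that version and nothing newer.
[dependencies]
rand = "0.8"
serde = "=1.0.197"
```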
Okay, but are you still going to audit 200 individual dependencies even once?
Most software does not include detailed security fixes in the changelog for people to check; and many of these security fixes are in dependencies, so they are unlikely to be documented by the software the end user actually sees.
So most of the time, the safest “oldest safe” version is just the latest version.
So only projects like Debian do security backports?
Edit: why the downvote? Is this not something upstream developers do? Security fixes on older releases?
Debian actually started to collect and maintain packages of the most important Rust crates. You can use those as a source for cargo.
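If I understand the setup correctly, the crate sources ship in Debian’s librust-*-dev packages, and cargo’s source replacement can point at them; something like this, assuming they land under /usr/share/cargo/registry:

```toml
# ~/.cargo/config.toml: replace crates.io with the locally installed,
# Debian-maintained crate sources (the path is an assumption about
# where Debian's rust packaging unpacks them).
[source.crates-io]
replace-with = "debian"

[source.debian]
directory = "/usr/share/cargo/registry"
```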
Researchers have found a malicious backdoor in a compression tool that made its way into widely used Linux distributions, including those from Red Hat and Debian.
Yeah, they messed up once. It’s still miles better than just not having anyone look at the included stuff.
THIS.
I do not get why people don’t learn from Node/npm: if your language has no exhaustive standard library, the community ends up reinventing the wheel, and each real-world program has hundreds (or thousands) of dependencies.
Instead of throwing new features at Rust, the maintainers should focus on growing a trusted standard library and improving tooling, but that is less fun, I assume.
Easily, just look at the standard libraries of Java/Python and Golang! :-P
To get one thing out of the way: every standard library has dark corners with bad APIs and outdated modules. IMHO it is a tradeoff, and in my experience even a bad standard library works better than everyone reinventing their own small modules. If you want to compare it to human languages: having no standard library is like agreeing on English grammar while everyone mostly makes up their own words, which makes communication challenging.
My examples of things missing from the Rust standard library (correct me if I am wrong; I am not a Rust user, for many reasons):
- Cross-platform GUI library (see Swing/Tk)
- Enough bits to create a server
- Full set of data structures and algorithms
- Full set of serialization format processing: XML/JSON/YAML/CSV/INI files
- HTTP(S) server for production, with support for Let’s Encrypt etc.
Things I don’t know about if they are provided by a Rust standard library:
- Go-like communication channels (see the sketch after this list)
- High-level parallelism constructs (like Tokio etc.)
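For what it’s worth, answering my own question: Rust’s standard library apparently does ship a basic multi-producer, single-consumer channel (std::sync::mpsc), though nothing like Go’s select or Tokio’s async runtime. A minimal sketch:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // std::sync::mpsc gives Go-style message passing between threads.
    let (tx, rx) = mpsc::channel();

    let producer = thread::spawn(move || {
        for i in 0..3 {
            tx.send(i).expect("receiver hung up");
        }
        // Dropping tx here closes the channel.
    });

    // Iterating the receiver ends once every sender is dropped.
    for msg in rx {
        println!("got {msg}");
    }
    producer.join().unwrap();
}
```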
My point is: provide good-enough defaults in a standard library that everybody knows and that is well documented and taught. If someone has special needs, they can always come up with a library. Further, if something in the standard library becomes obsolete, it can easily be deprecated.
It’s a really wicked problem, to be sure. There is work underway in a bunch of places on different approaches to this; take a look at SBOMs (software bills of materials) and reproducible builds. That doesn’t totally address the trust issue (the malicious xz releases had good GPG signatures from a trusted contributor), but it makes it easier to spot binary tampering.
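(For the curious: an SBOM is basically a machine-readable inventory of everything that went into a build. A trimmed, illustrative CycloneDX entry, with made-up field values, looks roughly like this:)

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "xz-utils",
      "version": "5.6.0",
      "purl": "pkg:deb/debian/xz-utils@5.6.0"
    }
  ]
}
```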
+1
Shameless plug for the OSS Review Toolkit project (https://oss-review-toolkit.org/ort/), which analyzes your package manager setup, builds a dependency tree, and generates an SBOM for you. It can also check for vulnerabilities with the help of VulnerableCode.
It is mainly aimed at OSS compliance, though.
(I am a contributor)
The only real downside on the open source side is that the fix is also public, and thus so is the recipe for exploiting the backdoor.
If there’s a massive CVE on a closed source system, you get a super high-level description of the issue and that’s it.
If there’s one on an open source system, you get ready-made “proofs of concept” on GitHub that any script kiddie can run.
And since not every piece of software can be updated instantly, you are left with millions of vulnerable servers/PCs and a lot of happy script kiddies.
See, for example, Log4Shell.
If your security relies on hidden information, then it’s at risk of being broken at any time by someone who finds that information one way or another. Open source security is so much stronger because it works independently of knowledge of the system. See, for example, all the open source cryptography that secures the web.
Open source PoCs and fixes increase awareness of issues and help everyone make progress. You also get many more eyes verifying your analysis and fix, as well as people checking whether there could be other consequences in other systems. Some security specialists will probably develop techniques to detect this kind of sophisticated attack in the future.
This doesn’t happen with closed source.
If some company/administrator is too lazy to update their systems, the fault is on them, not on the person who made all the information available for you to understand and fix the issue.
Crowd sourcing vulnerability analysis and detection doesn’t make open source software inherently more secure.
Closed source software has its place and it isn’t inherently evil or bad.
This event shows the good and bad of the open source software world but says NOTHING about closed source software.
“Crowd sourcing vulnerability analysis and detection doesn’t make open source software inherently more secure.”
It does, because many more eyes can find issues, as illustrated by this story.
Closed source isn’t inherently bad, but it’s worse than open source in many cases including security.
I think you’re the only one here thinking publishing PoC is bad.
In this case it seems the backdoor is only usable by someone who has the correct key. Spotting and reverting something fishy is, in cases like this, easier than finding an exploit; it took a lot of time in this case to figure out what was actually going on.
Fixing a bug never automatically gives script kiddies an easy-to-use exploit.
Ever wondered why ${insert_proprietary_software_here} takes so long to boot?