The malicious changes were submitted by JiaT75, one of the two main xz Utils developers with years of contributions to the project.

“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system,” an official with distributor Openwall wrote in an advisory. “Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’” provided in recent updates.

On Thursday, someone using the developer’s name took to a developer site for Ubuntu to ask that the backdoored version 5.6.1 be incorporated into production versions because it fixed bugs that caused a tool known as Valgrind to malfunction.

“This could break build scripts and test pipelines that expect specific output from Valgrind in order to pass,” the person warned, from an account that was created the same day.

One of the maintainers of Fedora said Friday that the same developer approached them in recent weeks to ask that Fedora 40, a beta release, incorporate one of the backdoored utility versions.

“We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added),” the Fedora maintainer said.

He has been part of the xz project for two years, adding all sorts of binary test files, and with this level of sophistication, we would be suspicious of even older versions of xz until proven otherwise.

124 points

Dude seems like a foreign asset

88 points

Jia Tan, University of Hong Kong in China. He’s been the sole maintainer of the package for almost two years.

86 points

Looks like he’d done a lot for various US companies on his LinkedIn.

I would not be surprised if he was previously legit but pressured into doing this by the CCP.

29 points

Maybe he wasn’t sloppy by accident if he was indeed coerced by someone. I don’t think we’ll ever find out the backstory of this though.

32 points

It would make more sense to compromise developers in trusted positions, or steal their credentials, than to spend the time and effort building up trusted users and projects only to burn them with easily spotted vulnerabilities.

16 points

This wasn’t easily spotted. They use words like sloppy, but it all started with someone digging in because ssh session startup was about half a second slower than it used to be. I could easily imagine 99.99% of people shrugging and deciding something in the chain of session startup took a bit longer for a reason not worth digging into.

Also, this was a maintainer who only started two years ago. xz is much older than that; he just took over.
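For a sense of how subtle that signal is, here's a rough sketch of the kind of measurement involved: timing repeated connections and watching for a consistent regression on the order of half a second. This is only an illustration; the actual discovery reportedly came from profiling sshd CPU usage, and the host/port below are placeholders for a machine you control.

```python
# Illustrative only: time from TCP connect to receiving the SSH banner,
# repeated, to spot a consistent latency regression. The real backdoor's
# delay happened during login processing, so banner timing is just a
# crude stand-in for the idea.
import socket
import statistics
import time

def banner_time(host: str = "127.0.0.1", port: int = 22) -> float:
    t0 = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as s:
        s.recv(64)  # server sends "SSH-2.0-..." immediately on connect
    return time.perf_counter() - t0

samples = [banner_time() for _ in range(10)]
print(f"median handshake time: {statistics.median(samples) * 1000:.1f} ms")
```

A half-second jump in numbers like these is exactly the sort of thing most people would shrug off, which is the commenter's point.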

5 points

foreign to whom?

42 points
Deleted by creator
-13 points

Maybe she’s a bitch.

98 points

From the article…

Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview: “BUT that’s only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”

Is auditing for security reasons ever done on any open source code? Is everyone just assuming that everyone else is doing it, and hence no one is really doing it?


EDIT: I’m not attacking open source, I’m a big believer in open source.

I’m just trying to start a conversation about a potential flaw that needs to be addressed.

Once the conversation was started I was going to expand the conversation by suggesting an open source project that does security audits on other open source projects.

Please put the pitchforks away.

Edit2: This is not encouraging.

61 points

You’re making a logical fallacy called affirming the consequent where you’re assuming that just because the backdoor was caught under these particular conditions, these are the only conditions under which it would’ve been caught.

Suppose the bad actor had not been sloppy; it would still be entirely possible that the backdoor gets identified and fixed during a security audit performed by an enterprise grade Linux distribution.

In this case it was caught especially early because the bad actor did not cover their tracks very well, but now that that has occurred, it cannot necessarily be proven one way or the other whether the backdoor would have been caught by other means.

22 points

Also they are counting the hits and ignoring the misses. They are forgetting that sneaking a backdoor into an open source project is extremely difficult because people are reviewing the code and such a thing will be recognized. So people don’t typically try to sneak backdoors in. Also, backdoors have been discovered in an amazing number of closed-source projects where no one was even able to review the code.

8 points

They are forgetting that sneaking a backdoor into an open source project is extremely difficult because people are reviewing the code and such a thing will be recognized.

Everyone assumes what you have stated, but how often does it actually happen?

By how many people, how often, and how rigorously are code reviews actually done? Especially on high-volume projects?

11 points

It’s maybe possible, but perhaps still unlikely.

Overwhelmingly thorough security review is time consuming and expensive. It’s also not perfect, as evidenced by just how many security issues accidentally live long enough to land even in enterprise releases. That’s even without a bad actor trying to obfuscate the changes. I think this general approach had several aspects that would have made it likely to pass scrutiny (a sketch of the mechanics follows the list):

  • It was in xz, which was likely not perceived as a security-critical library. A security person would recognize that anything is potentially security critical, but they don’t always have the resources and so are directed to focus on obviously security-related code and historical magnets for security incidents.
  • It was carried out by someone who spent years building up an innocuous reputation. Investigation may even show previous “test samples” to be malicious but not caught, or else they were a red herring to get people used to random test samples being placed in the project.
  • The only “source code” he touched was “just build scripts”. Even during a security audit, build shell scripts are likely to be ignored; they’re just build scripts, and maybe you run some tests on all scripts, but those tests aren’t going to catch this sort of misbehavior.
  • The actual runtime malicious code was delivered as portions of ostensibly throwaway test sample xz files, and applied as a binary patch to the build output. A security audit won’t be thinking too hard about a sea of binary files that are just throwaway samples as fodder for tests.
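To make the last two points concrete, here is a deliberately simplified, hypothetical sketch of the pattern being described: a “corrupt” test sample secretly carrying extra bytes, and a build step that splices them into the compiled output. Every filename, marker, and detail below is invented for illustration; the actual xz attack staged and obfuscated this far more carefully.

```python
# Hypothetical illustration of the general technique, NOT the real xz code.
# A build helper pulls hidden bytes out of an ostensibly corrupt test
# sample and splices them into a compiled object. All names are made up.

def extract_payload(test_file: str, marker: bytes) -> bytes:
    """Return any bytes hidden after a magic marker in a 'test sample'."""
    with open(test_file, "rb") as f:
        data = f.read()
    start = data.find(marker)
    if start == -1:
        return b""                      # looks like ordinary test data
    return data[start + len(marker):]   # hidden payload after the marker

def patch_build_output(obj_file: str, payload: bytes) -> None:
    """Append the payload to the build output after compilation.

    The real attack rewrote object code during the build; appending is
    enough here to show the point: the misbehavior never appears in any
    source file an auditor would read.
    """
    with open(obj_file, "ab") as f:
        f.write(payload)

payload = extract_payload("tests/files/sample-corrupt.xz", b"\xde\xad\xbe\xef")
if payload:
    patch_build_output("build/liblzma_internal.o", payload)
```

The takeaway matches the bullets above: the checked-in source looks innocuous, because the malicious behavior only exists in the built artifact.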

So while I see the point about the logical fallacy (being caught this way doesn’t prove the enterprise release process wouldn’t have caught it), I think we know track records well enough to deem this approach likely to get through. Now that it has been caught, I could see some changes that may mitigate this in the future, like package build scripts deleting all test samples and skipping tests when building for release, as well as broader scrutiny.

There’s also the reality that a lot of critical applications deem themselves too cool to settle for “old crusty enterprise distributions”. They think that approach is antiquated and living on the edge is better. Admittedly I doubt they’d go as far as Arch, Tumbleweed, or Rawhide, but this one could have easily made it into Debian testing, a Fedora release, or an Ubuntu release.

2 points

I think we know track records well enough to deem this approach likely to get through.

That was my concern, and why I brought up my point.

Human nature, especially where volunteer rather than paid work is being done, combined with someone who is purposely being devious over the long term, could be a potent recipe for disaster.

I still wonder if there should be an actual open source project that does nothing but security audits of all other open source projects. Hence my original question, which was meant as an opener to a conversation I never got to elaborate on, because I was attacked almost immediately by people who are very sensitive about any criticisms or concerns about open source being brought out in the open.

9 points

Have those audits you allude to ever caught anything before it went live? Cuz this backdoor has been around for a month and Red Hat is affected, too. Plus this was the single owner of a package who is implicitly trusted; it’s not like it was a random contributor whose PRs would get reviewed.

The code being open source helps people track it down once they try to debug an issue (performance issues and crashes, because in their setup the memory layout was not what the backdoor was expecting), that’s true. But what actually triggered the investigation was the bug. After that it’s just a matter of time to trace it back to the backdoor. You underestimate reverse engineers. Or maybe I’m just spoiled.

How long until US bans code from developers with ties to CN/RU?

5 points

How long until US bans code from developers with ties to CN/RU?

That won’t happen because it would effectively mean banning all FOSS, which isn’t remotely practical.

1 point

That link doesn’t prove whatever you think it’s proving.

The open source ecosystem does not rely (exclusively) on project maintainers to ensure security. Security audits are also done by major enterprise-grade distribution providers like Red Hat. There are other stakeholders in the community as well who have a vested interest in security, including users in military, government, finance, health care, and academic research, who will periodically audit open source code that they’re using.

When those organizations do their audits, they will typically report issues they find through appropriate channels which may include maintainers, distributors, and the MITRE Corporation, depending on the nature of the issue. Then remedial actions will be taken that depend on the details of the situation.

In the worst case scenario if an issue exists in an open source project that has an unresponsive or unhelpful maintainer (which I assume is what you were suggesting by providing that link), then there are several possible courses of action:

  • Distribution providers will roll back the package to an earlier compatible version that doesn’t have the vulnerability if possible
  • Someone will fork the project and patch the fix (if the license allows), and distribution providers will switch to the fork
  • In the worst case scenario if neither of the above are possible, distribution providers will purge the vulnerable package from their distributions along with any packages that transitively depend on it (this is almost never necessary except as a short-term measure, and even then is extremely rare)

The point being, the ecosystem is NOT strictly relying on the cooperation of package maintainers to ensure security. It’s certainly helpful and makes everything go much smoother for everyone if they do cooperate, but the vulnerability can still be identified and remedied even if they don’t cooperate.

As for the original link, I think the correct takeaway from that is: If you have a vested or commercial interest in ensuring that the open source packages you use are secure from day zero, then you should really consider ways to support the open source projects you depend on, either through monetary contributions or through reviews and code contributions.

And if there’s something you don’t like about that arrangement, then please consider paying for licenses on closed-source software which will provide you with the very reassuring “security by sticking your head in the sand”, because absolutely no one outside the corporation has any opportunity to audit the security of the software that you’re using.

1 point

You’re making a logical fallacy called affirming the consequent where you’re assuming that just because the backdoor was caught under these particular conditions, these are the only conditions under which it would’ve been caught.

No, I’m actually making that comment based on a career as a software developer, who has actually worked on a few open source projects before.

5 points

Your credentials don’t fix the logical fallacy.

4 points

That’s another logical fallacy: Argument from authority

29 points

Having once worked on an open source project that dealt with providing anonymity - it was considered the duty of the release engineer to have an overview of all code committed (and to ask questions, publicly if needed, if they had any doubts) - before compiling and signing the code.

On some months, that was a big load of work and it seemed possible that one person might miss something. So others were encouraged to read and report about irregularities too. I don’t think anyone ever skipped it, because the implications were clear: “if one of us fails, someone somewhere can get imprisoned or killed, not to speak of milder results”.

However, in the case of a utility not directly involved with security-critical functions, it might be easier for something to pass through the sieve.

7 points

I don’t think anyone ever skipped it, because the implications were clear: “if one of us fails, someone somewhere can get imprisoned or killed, not to speak of milder results”.

However, in the case of a utility not directly involved with security-critical functions, it might be easier for something to pass through the sieve.

I’ve actually seen people check in code that doesn’t get properly reviewed on mission-critical apps before (like in the health industry).

My understanding is basically the same as yours, and in theory I agree with you. However, the problem is we all tend to hand-wave away any possibility of bad things happening, because it’s open source, and don’t take into account human nature, especially when it comes to volunteer versus paid work.

20 points

Auditing can be done only on open source code. No code = no audit. Reverse engineering doesn’t count.

1 point

True, but does it actually get done, or is everyone just assuming it gets done because it’s open source?

17 points

Bystander effect, yes.

16 points

The answer is the same as for closed source software: sometimes.

But that’s beside the point: a security audit is not perfect. Plenty of audited codebases are the source of security vulnerabilities in the wild. We know based on analysis that the malicious actor’s approach would have had a high chance of successfully hiding from a typical security audit.

1 point

Oh, I know security audits are not perfect. I’m just wondering if they actually get done, or whether everyone just assumes they get done because of “open source” when they don’t.

1 point

There are security researchers looking for vulnerabilities constantly, but their efforts are inconsistent and informal. Issues usually get caught eventually, but sometimes that’s only after a vulnerability is out in the wild.

93 points

Thankfully this was discovered before hitting stable distros but I’m hoping it increases scrutiny across the board. We dodged a bullet on this one.

22 points

Across the board indeed. Scrutiny of code is one thing; where this story, as far as is known right now, really went south is the abuse of a trusted, but vulnerable, member of the community.

I know the (negative) spotlight is targeting Jia Tan right now (and who knows if they (still) exist), but I really hope Larhzu, whose name is mentioned in the same articles, is doing okay.

Mental health is a serious issue that, if you read the backstory, is easily ignored or abused. And it wasn’t an unknown in this story. Don’t only check the code; check up on your people too.

13 points

This is why I run debian oldstable.

Or maybe it’s because I’m too lazy to do a dist-upgrade.

81 points

Long-game supply chain attacks are pretty much going to be state actors. And I wouldn’t chalk it up only to the usual malicious ones like China and Russia. This could be the NSA just as easily.

34 points

I honestly think the NSA has changed. If you look at the known backdoors, they haven’t been caught making any new ones since around 2010. Their MO also seems to be more hardware and encryption (more of an observational charter) than manipulation.

There’s also evidence US Congress acted to stop the NSA from doing these underhanded tactics at least once https://www.wired.com/story/nsa-backdoors-closed/

They’re not idiots; there are lots of smart people there who surely understand the risk of something like this to US national security interests. It’s not the NSA that’s been asking for encryption to be broken in recent years. They’ve been warning about quantum threats and, from what I’m aware of, actually taking on the defensive role they were constituted to perform https://gizmodo.com/nsa-plans-to-act-now-to-ensure-quantum-computers-cant-b-1757038212

This seems like something that could actually be weaponized against predominantly Western technology companies, so I’d be very surprised if it was them, and very surprised if they used someone who appears to be a Chinese-born resident to do it.

33 points

I really can’t believe they’ve stopped. Their mentality is “national security has no morals”. They’ll do everything they can to facilitate that mission, though not getting caught is a big part of the facade they need to put on to keep or rehabilitate their image.

Maybe they’re being more careful, and simple things like putting in timestamps that emulate working hours in other timezones are certainly the first thing they’re going to think about. That one has always cracked me up; security researchers point to it like it’s proof of something, which is ridiculous. Just like our people are smart, I don’t think the foreign actors are dumb either.

And before you say it, I’d be all over not being paranoid if it hadn’t been proven to me time and again that these agencies won’t change, that they don’t give a shit about what’s right if it gets in the way of their mandate. The only thing that might change is how well they hide things now and intimidate their people into staying quiet. Because potential whistleblowers have seen the examples that have been made.

15 points

Personally I suspect they’re getting all the information they care about via subpoenas on big data and social media companies. They don’t have a need to compromise security on a technical level anymore because the justice system itself is compromised. That means backdoors only benefit national enemies at this point, so the NSA of today would rather those not exist at all.

Of course that’s not to say anyone should trust those agencies at their word on anything.

6 points

Backdoors at a nation-state level are a double-edged sword. In order to successfully implement a backdoor, you need to ensure that you are more clever than your adversaries, because those same backdoors can be used against you. You must assume that they will eventually be discovered and leveraged against you. Then you must be able to identify that a backdoor has been compromised, and “responsibly disclose” the vulnerability before too much damage is done.

Much better to be on the defensive. Discover 0days first, either accidental or intentional, and then use them until someone else discloses them and they get patched to hell.

13 points

That’s not true; the Shadow Brokers leaks, for example, contained 0-days found by the NSA well after 2010. And that’s only what got published; there’s probably more!

6 points

There is a difference between finding something you can take advantage of and putting it there, though, no? This sounds like the former.

But still, it’s a good point, thanks.

4 points

It’s not the NSA that’s been asking for encryption to be broken in recent years.

I remember the NSA’s backdoored crypto coming to light in 2013. Getting caught less often doesn’t mean they’re making fewer backdoors.

EDIT: it was discovered in 2007 and revoked as a standard in 2014.

They also owned a corporation that made backdoored crypto algorithms until 2018. And the only reason they stopped is FOIA.

21 points

I don’t know man. Imagine you could have ssh access to every Debian and Fedora server on the planet, and all you had to do was write tests for some compression library for 2 years and sneak in a clever patch. I’d guess such an exploit is worth millions. You wouldn’t work 2 years for millions of dollars?

This is sophisticated but it doesn’t have to be a state actor.

1 point

Yup. I think it’s an independent hacker, probably hired by a state actor, but not a state actor themselves.

My understanding is that state actors generally look for exploits, not create them. I also think they’d be a little more clever than this.

7 points

If you throw enough money at the right person you can get shit done.

3 points

I think you are greatly underestimating FSB incompetence.

65 points

There are no known reports of those versions being incorporated into any production releases for major Linux distributions

A stable release of Arch Linux is also affected.

… BTW.

15 points

The malicious code is only thought to have affected deb/rpm packaging (i.e., the backdoor only included itself with those packaging methods). Additionally, Arch doesn’t link ssh against liblzma, which means this specific vulnerability wasn’t applicable to Arch. Arch may have still been vulnerable in other ways, but this specific vulnerability targeted deb/rpm distros. (A quick way to check a local install is sketched below.)
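If you do want to check a machine anyway, here is a minimal sketch, assuming the affected releases are 5.6.0 and 5.6.1 and that `xz --version` prints its usual “xz (XZ Utils) 5.6.1” style banner:

```python
# Minimal sketch: flag the known-backdoored xz releases (5.6.0 / 5.6.1).
# Assumes `xz --version` output looks like:
#   xz (XZ Utils) 5.6.1
#   liblzma 5.6.1
import subprocess

BACKDOORED = {"5.6.0", "5.6.1"}

out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
tokens = out.split()
version = tokens[3] if len(tokens) > 3 else "unknown"
status = "KNOWN BACKDOORED RELEASE" if version in BACKDOORED else "not a backdoored release"
print(f"xz {version}: {status}")
```

Version strings are only a first-pass heuristic (distros shipped patched or downgraded builds after the disclosure), so treat the output as a prompt to read your distro’s advisory, not a verdict.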

14 points

I liked the joke, but ya arch is not compromised. Check out this user’s detailed comment.

https://feddit.de/comment/8782369

