Hacking News



  • @BernieTheBernie

    “The data our firewall collects shows that bots are trying different passwords to log onto accounts and different systems, or trying to find the vulnerabilities, tens of thousands of times a week.”

    Welcome to running an internet facing service sometime during the last 20 or so years.



  • @cvi Yeah, seeing the same thing in my own logs taught me that there is no such thing as "too obscure to be targeted": bots don't care, they simply target everyone.
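That kind of bot noise is easy to quantify from your own logs; a minimal sketch that tallies failed logins per source IP (the log lines, IPs, and regex below are made up for illustration; real sshd/syslog formats vary by distro and configuration):

```python
import re
from collections import Counter

# Hypothetical sshd-style log lines; treat the regex as a starting point,
# real formats differ between distros and syslog setups.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def count_attackers(lines):
    """Tally failed login attempts per source IP address."""
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

sample = [
    "Mar 29 03:14:15 host sshd[123]: Failed password for root from 203.0.113.7 port 52312 ssh2",
    "Mar 29 03:14:17 host sshd[124]: Failed password for invalid user admin from 203.0.113.7 port 52313 ssh2",
    "Mar 29 03:14:20 host sshd[125]: Accepted publickey for alice from 198.51.100.2 port 40000 ssh2",
]
```

Run it against a real auth log and the "tens of thousands a week" figure stops sounding surprising.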



  • tl;dw: backdoor in liblzma (xz) that affects/targets ssh.

    https://www.youtube.com/watch?v=jqjtNDtbDNI

    It appears(tm):

    • version 5.6 and newer
    • ssh doesn't normally link to liblzma, but liblzma may be pulled in indirectly (systemd loggers?)

    But don't bet your systems on that.

    Edit: OP: https://www.openwall.com/lists/oss-security/2024/03/29/4



  • Oww, that one is nasty.



  • @cvi A reply in the comments:

    Important Clarification (since I feel this isn't clarified): upstream OpenSSH doesn't use liblzma, however many distros like Debian patch OpenSSH to use SystemD Notifications through libsystemd, which in turn uses liblzma. Distros like Arch (which don't patch OpenSSH) or distros without SystemD like Void should be fine with regards to SSH (however most distros are already downgrading xz anyway for obvious security reasons)

    Source: the latest Arch News post regarding this backdoor

    EDIT: to quote directly from the Arch news post:
    "Arch does not directly link openssh to liblzma, and thus this attack vector is not possible"



  • Comments on Hacker News are interesting.

    Several of them (like this one) mention elements that show the attack was likely carefully planned and implemented in steps, over a long time.

    Others have found out that the library is loaded by a lot of stuff, not just ssh. So there may be additional nasty stuff that hasn't been found yet.

    There's even speculation that one of the maintainers doesn't actually exist, and is a fake identity created just to infiltrate the project.

    The whole thing looks more and more like a state-actor operation.



  • @Zerosquare Yeah, it's been an interesting read. Waiting for the full analysis of the injected code (and potentially if it tried to do something else besides backdooring sshd).

    The whole thing looks more and more like a state-actor operation.

    Scary part: If it is, this is unlikely to be their only play.

    Funny part: It was foiled because somebody thought that SSH taking 0.8s instead of 0.3s was a bit too much and then started digging. (Performance matters! Optimize your software!)


  • BINNED

    @cvi all of this is pretty insane. I think the part where the “many eyes” idea fails hardest is the apparently crazy build process that even makes this possible. Turing complete build processes were a terrible idea. (They said Thompson’s “Trusting Trust” was purely theoretical, but this “backdoor isn’t even in the code” is close enough in spirit)
    I’m not sure how bad the audit hook part of the dynamic linker is, but sounds problematic too.


  • Discourse touched me in a no-no place

    @topspin said in Hacking News:

    @cvi all of this is pretty insane. I think the part where the “many eyes” idea fails hardest is the apparently crazy build process that even makes this possible. Turing complete build processes were a terrible idea. (They said Thompson’s “Trusting Trust” was purely theoretical, but this “backdoor isn’t even in the code” is close enough in spirit)

    The backdoor was in blobs for "testing", so it was in the codebase, even if largely not in the part that's supposed to be readable.

    I'm still waiting for someone to figure out a way to land such an attack in the Rust build system. The wailing will be very funny, and they aren't immune even with static linking; proving that your build tools didn't add something "special" is difficult...

    I’m not sure how bad the audit hook part of the dynamic linker is, but sounds problematic too.

    That seems to be part of how it figures out when to turn on.



  • @dkf said in Hacking News:

    I'm still waiting for someone to figure out a way to land such an attack in the Rust build system. The wailing will be very funny, and they aren't immune even with static linking; proving that your build tools didn't add something "special" is difficult...

    does anyone still think updating often is good advice? supply chain attacks seem a lot worse than some random vulnerability in a dependency that isn't exposed to users



  • The weirdest hot take I’ve seen from this is “and this is why you should use SaaS”.

    As in, it’s their problem to worry about, not yours, that this is a thing. On one level I guess that’s true but on the other you have no way of knowing what backdoors exist on such systems that will exfiltrate god-knows-what.



  • @Arantor said in Hacking News:

    The weirdest hot take I’ve seen from this is “and this is why you should use SaaS”.

    As in, it’s their problem to worry about, not yours, that this is a thing. On one level I guess that’s true but on the other you have no way of knowing what backdoors exist on such systems that will exfiltrate god-knows-what.

    TempleOS.



  • @Carnage poor sod has passed away now, though, and I fear we shall not see his like again. (Terry Davis had a very long, very complex battle with mental health issues, and that is in no small part why his death was ruled suicide rather than accident at the time. Nonetheless, he represents a mindset and an age of development that are fading quite fast.)



  • @Arantor said in Hacking News:

    @Carnage poor sod has passed away now, though, and I fear we shall not see his like again. (Terry Davis had a very long, very complex battle with mental health issues, and that is in no small part why his death was ruled suicide rather than accident at the time. Nonetheless, he represents a mindset and an age of development that are fading quite fast.)

    Yep.
    We need a bunch of crazies that are the right kind of crazies to build what he did, but with less insanity.


  • Considered Harmful

    @Arantor said in Hacking News:

    The weirdest hot take I’ve seen from this is “and this is why you should use SaaS”.

    As in, it’s their problem to worry about, not yours, that this is a thing. On one level I guess that’s true but on the other you have no way of knowing what backdoors exist on such systems that will exfiltrate god-knows-what.

    It's a bit like the suggestion that instead of seat belts, cars should have a sharp, poisoned metal spike in the middle of the steering wheel. The way it should work—everybody knows they don't have a chance in case of an accident, so they make extra sure to be super careful—is the same as with SaaS: you know they can basically siphon off whatever they like, so you adjust your level of trust accordingly.
    In practice it would of course work just as well as with the clouds.



  • @topspin said in Hacking News:

    I think the part where the “many eyes” idea fails hardest is the apparently crazy build process that even makes this possible.

    On the other hand, the "many eyes" was in a sense what uncovered it in the end. Assuming this is in fact state sponsored and that people are prepared to throw years at the problem ... how well would commercial software fare? Any software? (How many people in a business understand the full build process? What about their supply chains?)


  • BINNED

    @cvi sure. The point was that this probably would've been discovered more or less immediately if you put the backdoor code in the actual code. Not in a binary that's only getting compiled in due to some makefile fuckery that nobody is reading because of an insane build system.


  • Discourse touched me in a no-no place

    @topspin said in Hacking News:

    @cvi sure. The point was that this probably would've been discovered more or less immediately if you put the backdoor code in the actual code. Not in a binary that's only getting compiled in due to some makefile fuckery that nobody is reading because of an insane build system.

    It would have been just as easy to squirrel away inside a CMake-based build chain. Easier. I know of no build system where it would have been impossible; all of them need ways to decorate build outputs...


  • BINNED

    @dkf a sane build system would make it

    • extremely simple and declarative for your 99% of normal projects
    • stick out like a sore thumb - and cause extra scrutiny - for the few exceptions

    If your bog standard normal project looks insane, nobody blinks if you add stuff like this.



  • @topspin said in Hacking News:

    @dkf a sane build system would make it

    • extremely simple and declarative for your 99% of normal projects
    • stick out like a sore thumb - and cause extra scrutiny - for the few exceptions

    If your bog standard normal project looks insane, nobody blinks if you add stuff like this.

    And then you have the sane looking build systems that support plugins in the build system. Your build specification can look entirely sane and be completely declarative, it can still inject stuff that shouldn't be in there.
    One of the reasons to do a check of the generated deliverable every once in a while. You can't trust your build system any more than you can trust your least skilled cow-orker.



  • An additional factor in this mess is that apparently, some Linux distros aren't built from Git repos; they use tarballs provided by developers instead, and it's not unusual for those tarballs' contents to differ from what's in Git repos, because they include such things as already auto-generated files and test stuff. The rationale appears to be "it's easier for distro maintainers not to have to run the whole build process from scratch".

    In xz's case, the code in Git was clean, the backdoor was only present in the tarballs.

    My opinion is:

    • needing such workarounds is a good sign that your build process is too convoluted (Pikachu is unavailable for comment)
    • even without foul play, distributing software that isn't guaranteed to match what's in source control is a bad practice, and can create hard-to-find bugs
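That second point can be checked mechanically: extract the tarball and diff it against a repo checkout. A sketch (hypothetical helper names; a real check would also need to allowlist legitimately generated files such as configure):

```python
import hashlib
import os

def tree_digests(root):
    """Relative path -> sha256 of content, for every file under root."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

def diff_manifests(tarball, repo):
    """Compare two {path: digest} manifests: files only in the tarball,
    files only in source control, and files whose contents differ."""
    extra = sorted(set(tarball) - set(repo))
    missing = sorted(set(repo) - set(tarball))
    changed = sorted(p for p in set(tarball) & set(repo) if tarball[p] != repo[p])
    return extra, missing, changed
```

Anything in `extra` or `changed` is exactly the "already auto-generated files and test stuff" category, and the place a payload like this one hides.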


  • @topspin Fair. Makefiles + autotools aren't the greatest. But I suspect that's only half the problem. The other half being that build code is often just given less attention. (Maybe this changes now for a while?)

    @dkf said in Hacking News:

    It would have been just as easy to squirrel away inside a CMake-based build chain. Easier. I know of no build system where it would have been impossible; all of them need ways to decorate build outputs...

    But also this.

    They did in fact sabotage the CMake-based build subtly. One of the feature tests would never compile due to a misplaced '.'. This caused one of the (new) sandbox features to not be detected and thus left disabled.
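That failure mode is generic to configure-time probes: "does this test program compile?" can't tell "feature missing" apart from "probe sabotaged". An illustrative sketch, using Python's own compile() as a stand-in for the C compiler (this is not the actual xz or CMake code):

```python
def probe_ok(source):
    """Configure-style feature probe: the feature counts as available
    iff the probe source compiles. Note that ANY failure, including a
    deliberately broken probe, silently reports 'unavailable'."""
    try:
        compile(source, "<conftest>", "exec")
        return True
    except SyntaxError:
        return False

features = {
    "SANDBOX": probe_ok("x = 0"),              # clean probe: feature detected
    "SANDBOX_SABOTAGED": probe_ok(". x = 0"),  # stray '.': silently disabled
}
```

Nothing errors out; the build just proceeds with one less sandbox, which is what made the misplaced '.' so hard to notice.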



  • @Zerosquare said in Hacking News:

    An additional factor in this mess is that apparently, some Linux distros aren't built from Git repos; they use tarballs provided by developers instead, and it's not unusual for those tarballs' contents to differ from what's in Git repos, because they include such things as already auto-generated files and test stuff. The rationale appears to be "it's easier for distro maintainers not to have to run the whole build process from scratch".

    In xz's case, the code in Git was clean, the backdoor was only present in the tarballs.

    My opinion is:

    • needing such workarounds is a good sign that your build process is too convoluted (Pikachu is unavailable for comment)
    • even without foul play, distributing software that isn't guaranteed to match what's in source control is a bad practice, and can create hard-to-find bugs

    Oh yes, any project whose build requires anything beyond the compiler and build toolchain installed and then running a single command is by my definition broken. If I have to spend time to fiddle with knobs and dig through years old forum threads to find out how to make the fucking thing build, then it's broken.
    If it's a project that is also deployed in some cloud or some such shit, the deploy should also be a single command or a single button press.
    Not reading miles of readmes to do the standard thing.
    Oh and building or deploying should not have side effects. If I build one project, another project should not stop compiling.


  • BINNED

    I wonder if, come Tuesday and IT back at work, we’ll get a security advisory that everybody needs to change their ssh keys and passwords. 🤔


  • ♿ (Parody)

    @topspin it wouldn't surprise me, because security policies almost never reflect reality. In this case, you're probably not running any distros that got the exploit.


  • BINNED

    @boomzilla didn’t even think of that. Is RHEL affected?
    They’ll probably make me change passwords anyway, since there’s never enough password changes. 🐠



  • @topspin They'll probably disable ssh for a few hours of emergency patches too.


  • Discourse touched me in a no-no place

    @topspin said in Hacking News:

    Is RHEL affected?

    Apparently not. It had only made its way into a few distros so far, mainly the ones that go all out for the bleeding edge in everything. That is why this isn't a full-blown panic.


  • ♿ (Parody)

    @topspin said in Hacking News:

    @boomzilla didn’t even think of that. Is RHEL affected?
    They’ll probably make me change passwords anyway, since there’s never enough password changes. 🐠

    The compression utility, known as xz Utils, introduced the malicious code in versions 5.6.0 and 5.6.1, according to Andres Freund, the developer who discovered it. There are no known reports of those versions being incorporated into any production releases for major Linux distributions, but both Red Hat and Debian reported that recently published beta releases used at least one of the backdoored versions—specifically, in Fedora Rawhide and Debian testing, unstable and experimental distributions. A stable release of Arch Linux is also affected. That distribution, however, isn't used in production systems.

    Because the backdoor was discovered before the malicious versions of xz Utils were added to production versions of Linux, “it's not really affecting anyone in the real world,” Will Dormann, a senior vulnerability analyst at security firm Analygence, said in an online interview. “BUT that's only because it was discovered early due to bad actor sloppiness. Had it not been discovered, it would have been catastrophic to the world.”



  • @Zerosquare said in Hacking News:

    An additional factor in this mess is that apparently, some Linux distros aren't built from Git repos; they use tarballs provided by developers instead, and it's not unusual for those tarballs' contents to differ from what's in Git repos, because they include such things as already auto-generated files and test stuff. The rationale appears to be "it's easier for distro maintainers not to have to run the whole build process from scratch".

    It is the release tarballs that are the official output of the project, not the Git repository. And while most projects these days do use a version control repository, and have it somewhere in public, that is a relatively recent thing. It used to be that most projects only had a server with the release tarballs, and a mailing list for sending patches that were somehow applied by the maintainer.

    In xz's case, the code in Git was clean, the backdoor was only present in the tarballs.

    My opinion is:

    • needing such workarounds is a good sign that your build process is too convoluted (Pikachu is unavailable for comment)

    That's because portability is pain. There were a lot of platforms, and they all had slightly different C compilers with slightly different options; the way to link code differed a lot too. So, to be able to write programs and compile them on all those platforms, autoconf, automake and libtool were born. They are, and have always been, an ungodly mess, but they were the only way to work on all sorts of obscure unices.

    Many of the systems are now dead, but the projects continue to use the tools because they are already set up and work, and for the occasional odd use on some odd ancient system.

    • even without foul play, distributing software that isn't guaranteed to match what's in source control is a bad practice, and can create hard-to-find bugs

    The generated files in autoconf, automake and libtool solve a chicken-and-egg problem: the target system may not yet have the tools needed to generate the configure script and the makefiles, but to install those tools, you have to configure their build systems too. That's why autoconf+automake+libtool compile to scripts that get included in the distribution tarballs.

    @Carnage said in Hacking News:

    Oh yes, any project whose build requires anything beyond the compiler and build toolchain installed and then running a single command is by my definition broken. If I have to spend time to fiddle with knobs and dig through years old forum threads to find out how to make the fucking thing build, then it's broken.

    But that's the point of the convoluted build system—that it does not require anything beyond a compiler, and then you run, well, three commands (configure, make, sudo make install). You don't fiddle with any knobs; it adjusts the dozens of knobs itself.

    @Carnage said in Hacking News:

    If it's a project that is also deployed in some cloud or some such shit, the deploy should also be a single command or a single button press.
    Not reading miles of readmes to do the standard thing.

    Sure, and I am the Chinese God of Fun. You can totally have a single button press when it's your project deploying to your specific cloud. Where the button is on the CD server and it took someone two weeks to set up the deployment pipeline (because debugging those things is a huge sink of time). But if you are distributing it, well, any non-trivial software will have a lot of parameters for whoever is installing it to set—which will be properly described in the miles of readmes if you are lucky.

    @Carnage said in Hacking News:

    Oh and building or deploying should not have side effects. If I build one project, another project should not stop compiling.

    Building should not, and can mostly be prevented from having them.

    Deploying … if you deploy each application to a separate container, yes. Installing libraries onto a system … no way to really prevent the possibility.


  • BINNED

    @cvi said in Hacking News:

    @topspin said in Hacking News:

    I think the part where the “many eyes” idea fails hardest is the apparently crazy build process that even makes this possible.

    On the other hand, the "many eyes" was in a sense what uncovered it in the end. Assuming this is in fact state sponsored and that people are prepared to throw years at the problem ... how well would commercial software fare? Any software? (How many people in a business understand the full build process? What about their supply chains?)

    Another take on this:

    [image attachment: IMG_1324.webp]



  • @Zerosquare said in Hacking News:

    There's even speculation that one of the maintainers doesn't actually exist, and is a fake identity created just to infiltrate the project.
    The whole thing looks more and more like a state-actor operation.

    It's just as possible that a state actor decided to replace a maintainer who just happened to live in said state. How many contributors and/or maintainers live in Russia, Belarus or China? Because, politics be damned, there are wars going on.

    To trust a maintainer without being in daily contact with them is, effectively, to trust the government of the nation they live in.



  • That's a possibility, but if you know that contributors are unlikely to ever meet, creating a fake identity is easier.



    @Zerosquare But that requires planning ahead of time. Granted, hostilities in e.g. parts of Ukraine have gone on long enough that this person could have been a malicious state actor from the start.

    But let's say you for some reason realized that you need some way to electronically infiltrate another state right now, without that kind of preparation time. How hard would it be to scan publicly available material for FOSS developers residing in a given country? Or, if we get particularly nasty, FOSS developers with close relatives living in a given country?


  • BINNED

    @acrow you don’t need to decide that “right now, without preparation time”, because backdoors are already in place for everything.
    CPUs have US backdoors, Cisco routers have US backdoors, telco stuff has Chinese backdoors, so do all the phones.

    ETA: Russia might not have backdoors in place, because IT equipment doesn’t run on vodka and oil. But they still have an army of cyber warfare hackers and culture war shitposters.



  • @acrow said in Hacking News:

    But that requires planning ahead of time.

    The whole operation took several years, and was obviously carefully planned anyway. Whoever is behind this thing wasn't interested in the quickest way to get access to their target(s). Instead, they wanted to compromise the supply chain in a durable and hard-to-detect way. And they almost got away with it, if it wasn't for those ~~blasted kids~~ obstinate developers.

    @acrow said in Hacking News:

    But let's say you for some reason realized that you need some way to electronically infiltrate another state right now, without that kind of preparation time. How hard would it be to scan publicly available material for FOSS developers residing in a given country? Or, if we get particularly nasty, FOSS developers with close relatives living in a given country?

    Sure, that's another way of doing it. But if you're not in a hurry, it's also messier and riskier than the alternative.



  • @Zerosquare Riskier and messier, true. But, well, after a war starts, messy will have already happened. And risk of consequences isn't what it was in peacetime.

    WW3 will be extra interesting. During WW2, Germany was able to use some U.S. citizens with German roots as spies. In the next big war, it won't be enough to catch the spies; you'll also have to find any surprises they may have left in the codebases.


  • 🚽 Regular

    In possible future backdoor news:



  • @Zecc said in Hacking News:

    In possible future backdoor news:

    It does say

    Bun Shell escapes all strings by default, preventing shell injection attacks.

    though. So at least some sanity is included.


  • Discourse touched me in a no-no place

    @Bulb said in Hacking News:

    It does say

    Bun Shell escapes all strings by default, preventing shell injection attacks.

    though. So at least some sanity is included.

    Escapes... as opposed to using an API like execve() which doesn't have the problem in the first place.
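The distinction is easy to demonstrate with Python's shlex standing in for the shell's parser (the filename below is made up): a shell-command string gets re-parsed and therefore needs escaping, while an argv list reaches execve() untouched and needs none.

```python
import shlex

hostile = "notes.txt; rm -rf ~"  # attacker-controlled "filename"

# Shell-string style: the command is re-parsed by a shell, so every
# interpolated value must be escaped or the ';' starts a second command.
shell_cmd = "wc -l " + shlex.quote(hostile)

# argv style (what execve actually takes): each argument is already a
# separate string; there is no parser to inject into, so no escaping.
argv = ["wc", "-l", hostile]
```

The argv version has nothing to get wrong; the shell-string version is only safe for as long as the escaping routine is perfect.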


  • Java Dev

    @dkf That's why in a previous product I replaced all system() calls with an NIH library which used fork()/execve() as well as redirecting any output to our inhouse logging functions. All that remained was one or two popen() calls which the NIH library wasn't set up to replace.

    Though the real reason I wrote the NIH library was managing multiple parallel jobs.



  • @dkf said in Hacking News:

    @Bulb said in Hacking News:

    It does say

    Bun Shell escapes all strings by default, preventing shell injection attacks.

    though. So at least some sanity is included.

    Escapes... as opposed to using an API like execve() which doesn't have the problem in the first place.

    It does use execve. It also parses the command. Whether what they describe as “quoting” is actually implemented as quote-interpolate-parse (dumb) or parse-substitute-tokens (clever) … :kneeling_warthog: hunting the source, but the fact that they say quote does not really imply they actually do the dumb thing.
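The two strategies can be sketched like this (toy code; the naive double-quote wrapping below stands in for a hypothetical buggy quoting routine, not for what Bun actually does):

```python
import shlex

value = 'pattern" /etc/passwd "'   # hostile interpolated value

# Dumb: quote-interpolate-parse. With naive quoting (just wrapping in
# double quotes), the embedded '"' closes the quote early and the
# re-parse splits the value into extra argv entries.
dumb = shlex.split(f'grep "{value}" log.txt')

# Clever: parse-substitute-tokens. The template is parsed first; each
# value is dropped in as one finished token and never re-parsed.
template = ["grep", None, "log.txt"]   # None marks the interpolation slot
clever = [value if tok is None else tok for tok in template]
```

In the dumb variant the attacker just made grep read /etc/passwd; in the clever variant the value stays one opaque argument no matter what it contains.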


  • Discourse touched me in a no-no place

    @Bulb said in Hacking News:

    @dkf said in Hacking News:

    @Bulb said in Hacking News:

    It does say

    Bun Shell escapes all strings by default, preventing shell injection attacks.

    though. So at least some sanity is included.

    Escapes... as opposed to using an API like execve() which doesn't have the problem in the first place.

    It does use execve. It also parses the command. Whether what they describe as “quoting” is actually implemented as quote-interpolate-parse (dumb) or parse-substitute-tokens (clever) … :kneeling_warthog: hunting the source, but the fact that they say quote does not really imply they actually do the dumb thing.

    If they support running on Windows, they'll be doing the dumb thing because there isn't really a better option there in general (command line strings are parsed into arguments by the receiving application, and not all applications use the same rules). Once you have that for one platform, it's tempting to use it for all; it seems simpler, even though it isn't really.

    So many programmers don't understand what the basic systems they're working with actually do.



  • @dkf said in Hacking News:

    If they support running on Windows, they'll be doing the dumb thing because there isn't really a better option there in general (command line strings are parsed into arguments by the receiving application, and not all applications use the same rules)

    That isn't the dumb way.

    On Windows, the arguments to each process need to be concatenated and best-guess-quoted, because Windows passes arguments as a single string rather than as a list of strings like Unix. That's dumb, but it's Windows being dumb, not the library being dumb.
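Python exposes exactly this best-guess concatenation as subprocess.list2cmdline, which follows the MS C runtime convention (one convention among several, as noted):

```python
import subprocess

# Windows CreateProcess takes ONE command-line string; each program
# re-parses it into argv itself. list2cmdline implements the MS C
# runtime rules: quote args containing whitespace, escape embedded
# quotes with backslashes.
argv = ["findstr", "hello world", 'say "hi"']
cmdline = subprocess.list2cmdline(argv)
```

A program that parses by different rules (cmd.exe builtins, for one) will split that string differently, which is why it's best-guess quoting rather than real argument passing.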

    But the point of the library is that you can set up redirections and pipelines and process substitutions in the command, and that is what they need to avoid parsing the interpolated values for.


  • Discourse touched me in a no-no place

    @Bulb said in Hacking News:

    But the point of the library is that you can set up redirections and pipelines and process substitutions in the command, and that is what they need to avoid parsing the interpolated values for.

    Well, except they might instead compose a shell command and pass that to popen(). Sometimes the stupid really hurts.



  • @dkf said in Hacking News:

    @Bulb said in Hacking News:

    But the point of the library is that you can set up redirections and pipelines and process substitutions in the command, and that is what they need to avoid parsing the interpolated values for.

    Well, except they might instead compose a shell command and pass that to popen(). Sometimes the stupid really hurts.

    That would be dumb indeed, but I don't think they do that, because then they'd need a very different implementation for Windows.




  • Considered Harmful

    @Watson

    proposed Cook Islands legislation, drafted by a US-based company

    :surprised-pikachu:


  • Notification Spam Recipient

    @Watson said in Hacking News:

    So have we decided that cryptocurrencies are real now? :drop_monocle:

