Security by obscurity fails again



  • Is nothing beyond the awesome power of JavaScript?



  • The researchers said the side channel attack is much more damaging than previous ASLR bypasses, because it exploits a micro-architectural property of the CPU that's independent of any operating system or application running on it.

    Translation: We're all fucked.



  • Some guy said on that site:

    Ok, this bypasses ASLR, but by itself it cannot actually exploit anything.
    For that a memory bug in the browser is still needed.
    Solution: Fix the browser to avoid such bugs e.g. by using safe programming languages such as Rust?

    How'd that get 16 downvotes and counting?
    Switching the entire browser code (or at least most of it) to use a safe programming language would be a huge boon to security.

    After that is done, ASLR and its bypassing should not matter.



  • @CreatedToDislikeThis said in Security by obscurity fails again:

    How'd that get 16 downvotes and counting?

    Rust haters, probably.


  • sockdevs

    @RaceProUK said in Security by obscurity fails again:

    The researchers said the side channel attack is much more damaging than previous ASLR bypasses, because it exploits a micro-architectural property of the CPU that's independent of any operating system or application running on it.

    Translation: We're all fucked.

    not all of us, just those of us that thought ASLR was a silver bullet and didn't bother putting secure coding practices in place beyond that.

    ASLR was never intended to be the first line of defense anyway; it was intended to be a fallback that made bugs much harder to exploit. Unfortunately too many dev shops saw "crashes instead of successful exploit" and thought "silver bullet!"



  • @accalia said in Security by obscurity fails again:

    not all of us, just those of us that thought ASLR was a silver bullet and didn't bother putting secure coding practices in place beyond that.

    Like browser makers? :shopping_cart:


  • sockdevs

    @RaceProUK said in Security by obscurity fails again:

    @accalia said in Security by obscurity fails again:

    not all of us, just those of us that thought ASLR was a silver bullet and didn't bother putting secure coding practices in place beyond that.

    Like browser makers? :shopping_cart:

    among others, yes.

    privacy and malware blocker addons just got 9000% more important.



  • @accalia said in Security by obscurity fails again:

    privacy and malware blocker addons just got 9000% more important.

    And NoScript* :slight_smile:

    *Other script blocking add-ons are available


  • Winner of the 2016 Presidential Election

    @accalia said in Security by obscurity fails again:

    not all of us, just those of us that thought ASLR was a silver bullet and didn't bother putting secure coding practices in place beyond that.

    I think you're underestimating the problem. With the combination of Rowhammer and AnC, you may not even need an exploit for remote code execution.

    Switching to ECC memory may be a (costly) solution, but that's not going to happen any time soon.



  • More on this from Tired.com.



  • And a link to the original research:



  • @accalia said in Security by obscurity fails again:

    privacy and malware blocker addons

    To a very good first approximation, that means advertising blockers.


  • sockdevs

    @flabdablet said in Security by obscurity fails again:

    @accalia said in Security by obscurity fails again:

    privacy and malware blocker addons

    To a very good first approximation, that means advertising blockers.

    i meant what i said.

    i've seen FAR too many malware attacks use ad networks as their distribution vector. I don't trust them, any of them.

    I'm okay with the concept of ads, and i would be happy to view unobtrusive ones (like the old static text/static image ones, none of this animated shite, and audio ads can fuck off) but as it stands i cannot trust the ad networks not to serve me malware alongside the ads.

    Sure, i know it's not the ad network code that does that, it's the code of whoever paid them to place their malware-laden ad, but if they place the ad without vetting that it is safe then they are culpable in my not-at-all-legal opinion.


  • Impossible Mission - B

    I've never understood how ASLR provides any protection at all in the first place.

    It used to be, you jump to 0xdeadf00d, the location where the routine you want to maliciously call into is located in memory. But now that's not at a static location anymore, so you don't know where it is.

    But you know what does know where it is? The application itself. It has to have that knowledge stored at some predictable location so that it can make calls, so now instead you jump to the indirect location of the place to look that up. That costs an additional byte or two of assembly; it doesn't seem like any meaningful :barrier: to return-to-libc attacks.


  • sockdevs

    @masonwheeler said in Security by obscurity fails again:

    I've never understood how ASLR provides any protection at all in the first place.

    it doesn't.... not really.

    it makes ONE class of exploit harder, that's about it. See, ASLR as i understand it basically randomizes the address space (including that of stdlib and the application) such that one cannot use a buffer overflow to write a JMP instruction that will reliably jump you to code that runs in a different assembly/code unit, because you don't know where to jump to.

    of course you can still JMP within a code unit, and other shenanigans, so there's that.

    basically ASLR is an "oh shit we messed up the security of our app bad! hopefully this attack just makes us crash instead of getting sploited (but there's no guarantee, as this technique has shown)"
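
    The class of bug being discussed can be sketched as a toy model in C: a fixed-size buffer sitting next to a saved return address, so that writing past the buffer replaces the address the CPU will "return" to. The struct here is a stand-in for a real stack frame (assuming a typical 64-bit layout with no padding), not an actual exploit — it just shows the memory layout that makes the attack possible:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Toy model of a stack frame: a small buffer followed by a "saved
       return address". In a real frame the return address sits just past
       the locals, which is exactly what a classic overflow overwrites. */
    struct fake_frame {
        char buf[8];
        unsigned long ret_addr;
    };

    int main(void) {
        struct fake_frame f;
        f.ret_addr = 0xdeadf00d;   /* pretend saved return address */

        /* 8 bytes fill buf; the next 8 spill into ret_addr — the
           "overflow" (kept inside the struct, so this stays well-defined
           C while still showing the adjacency problem) */
        memcpy(f.buf, "AAAAAAAAAAAAAAAA", 16);

        printf("ret_addr is now 0x%lx\n", f.ret_addr);
        return 0;
    }
    ```

    Without ASLR the attacker writes a meaningful address instead of 0x41s; with ASLR they still control the bytes, they just don't know what value to put there — which is all the protection amounts to.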


  • Impossible Mission - B

    @accalia Yes, I understand that that's what it's supposed to do in theory. But look at what I wrote: how does that work at all, even in theory, when all you need is an indirect call rather than a direct one?



  • @masonwheeler said in Security by obscurity fails again:

    But you know what does know where it is? The application itself.

    Nope. ASLR gets done as the app is loaded; it's the linking loader that decides where all the pieces go, not the app per se. References to stuff get patched by the loader on initial load (for static links) or first call (for dynamic links), just as they would in the non-randomized-layout case; as far as the app itself is concerned, ASLR is completely transparent.


  • sockdevs

    @masonwheeler said in Security by obscurity fails again:

    @accalia Yes, I understand that that's what it's supposed to do in theory. But look at what I wrote: how does that work at all, even in theory, when all you need is an indirect call rather than a direct one?

    ooooh.

    well, in theory most of the exploitable bugs will be in a different code unit than the one that has the ASLR map, so you don't know where the ASLR map is to consult, unless you walk the stack/fixup jumps from wherever you are to wherever you want to go. See, in Windows at least the ASLR map is in kernel memory; the images that get loaded into userspace are fixed up so all their jumps are correct for the particular address space randomization that the OS laid down, so there's no map to read, you gotta make your own.

    IIRC Linux takes a slightly different approach to ASLR that has the same end effect in that there's no way to do an indirect jump directly to your destination, you gotta find it first.

    oooor, ignore this in favor of @flabdablet's explanation..... it's better than mine.



  • @masonwheeler said in Security by obscurity fails again:

    It has to have that knowledge stored at some predictable location so that it can make calls

    Key thing to understand here is that linking loaders actually patch the jumps and calls inside the loaded app precisely so that it doesn't suffer the overhead of having to trampoline every single jump or call. Once an app is loaded and running, there is no central indirection map.



  • @accalia said in Security by obscurity fails again:

    not all of us, just those of us that thought ASLR was a silver bullet and didn't bother putting secure coding practices in place beyond that.

    Having a weak security measure often makes people care less about the strong ones, thus reducing security.


  • Impossible Mission - B

    @flabdablet Yes, I know this.

    Somewhere in the program image there's a bit of assembly that says CALL Fixup00123, and that gets fixed-up by the linking loader with the actual call, so it now says CALL 0xDEADF00D.

    There are two ways that the compiler can implement this. It can drop fixups everywhere throughout the codebase, which both makes the compiler's job harder and makes the loading process slower. Or it can create little stub functions in the output that do nothing but call a fixup. That way, there's only one fixup per external routine to be called in the entire codebase, which makes everything a lot simpler.

    Either way, all the attacker needs to know is the location of a fixup to the routine he wants to call, anywhere in the codebase being attacked. That will contain the new address of the routine to be called.

    @flabdablet said in Security by obscurity fails again:

    Key thing to understand here is that linking loaders actually patch the jumps and calls inside the loaded app precisely so that it doesn't suffer the overhead of having to trampoline every single jump or call. Once an app is loaded and running, there is no central indirection map.

    Nope, see above. Once the loader is done, the codebase itself becomes your central indirection map. There's simply no way to have this not happen and still have dynamic linking work.

    I suppose this would protect against cases where the attacker is not able to (legitimately or otherwise) get his hands on a copy of the program being remotely exploited, but with most targets these days being either commercial or open-source software, that's kind of a small consolation.
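
    The fixup-slot indirection described in this post can be sketched with a hypothetical function-pointer slot in C. The names (`import_slot`, `simulate_loader_fixup`, `external_routine`) are made up for illustration; assigning the pointer stands in for the linking loader's patch at load time:

    ```c
    #include <stdio.h>

    typedef int (*import_fn)(const char *);

    /* stands in for a routine in an external library, e.g. from libc */
    static int external_routine(const char *s) {
        return printf("called: %s\n", s);
    }

    /* the slot lives at a fixed offset within the binary image; only its
       *contents* change from run to run */
    static import_fn import_slot;

    static void simulate_loader_fixup(void) {
        import_slot = external_routine;   /* what the loader's patch does */
    }

    int main(void) {
        simulate_loader_fixup();
        import_slot("hello");   /* indirect call through the patched slot */
        return 0;
    }
    ```

    The argument is that an attacker who can read the slot gets the routine's live address for free — much like reading a vtable entry.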


  • Winner of the 2016 Presidential Election

    @masonwheeler
    At least it's not trivial to find the correct address without accidentally causing a segmentation fault.


  • Winner of the 2016 Presidential Election

    @masonwheeler said in Security by obscurity fails again:

    Nope, see above. Once the loader is done, the codebase itself becomes your central indirection map.

    But how do you even read the code? Let's say you're able to read from and write to random stack addresses due to a security issue. So you can read the return address of the current function and modify it. You still don't know where to jump to even get the information you need from the text segment.

    As I understand it, it'd be a chicken-and-egg problem: You'd need to know where certain, known (vulnerable) parts of the code are to figure out where other known parts of the code are which you want to use for your attack.

    Due to the NX bit, you cannot just execute arbitrary instructions you write on the stack. You need to use the application's code itself. To my limited knowledge, that's what Return-oriented Programming is all about. Which is what ASLR is trying to prevent by not giving you predictable return addresses to jump to.
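
    A toy model of the return-oriented programming mentioned above, using an array of function pointers in place of a smashed stack. Real gadgets are fragments of existing instructions ending in `ret`, not whole C functions, so this only illustrates the chaining idea — each "return" transfers control to the next attacker-chosen address:

    ```c
    #include <stdio.h>

    /* "gadgets": tiny pieces of existing code the attacker reuses,
       since NX prevents executing anything written to the stack */
    static int acc = 0;
    static void gadget_add5(void)  { acc += 5; }
    static void gadget_double(void) { acc *= 2; }

    int main(void) {
        /* the fake smashed stack: a list of return addresses the
           attacker wrote; each "ret" pops and runs the next gadget */
        void (*chain[])(void) = { gadget_add5, gadget_double, gadget_add5 };
        size_t n = sizeof chain / sizeof chain[0];

        for (size_t i = 0; i < n; i++)
            chain[i]();

        printf("acc = %d\n", acc);   /* computation built purely from
                                        reused code */
        return 0;
    }
    ```

    ASLR's contribution is making the gadget addresses unpredictable, which is why an address leak (like AnC provides) defeats it.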


  • Impossible Mission - B

    @asdf said in Security by obscurity fails again:

    But how do you even read the code?

    The exact same way you read a vtable to do an indirect call to a virtual method. This has been specifically supported in hardware since pretty much forever.

    Let's say you're able to read from and write to random stack addresses due to a security issue. So you can read the return address of the current function and modify it. You still don't know where to jump to even get the information you need from the text segment.

    You do if you have your own copy of the binary

    As I understand it, it'd be a chicken-and-egg problem: You'd need to know where certain, known (vulnerable) parts of the code are to figure out where other known parts of the code are which you want to use for your attack.

    See "have your own copy of the binary," above.

    Due to the NX bit, you cannot just execute arbitrary instructions you write on the stack.

    Yes, that is a legitimate security improvement.

    To my limited knowledge, that's what Return-oriented Programming is all about. Which is what ASLR is trying to prevent by not giving you predictable return addresses to jump to.

    ...unless there's a nice, juicy fixup at a well-known address (because you have your own copy of the binary) just sitting there waiting to be returned to, which ASLR does nothing at all to prevent.


  • Winner of the 2016 Presidential Election

    @masonwheeler said in Security by obscurity fails again:

    Let's say you're able to read from and write to random stack addresses due to a security issue. So you can read the return address of the current function and modify it. You still don't know where to jump to even get the information you need from the text segment.

    You do if you have your own copy of the binary

    No, you don't. Your own copy of the binary doesn't help you at all. (Unless the executable was compiled without -pie, in which case ASLR is completely useless.) The whole point of ASLR is that you cannot predict where the dynamic loader will place certain parts of the text and data segments, even if you know the whole binary.

    BTW: I should probably modify my example scenario. I would assume that in reality, most security problems allow you to either write to some part of the memory or read from it, not both at the same time.

    If you can do both at the same time, then you at least know the location of some instructions, which makes things easier. You'll still have to find a way to read those instructions using the instructions themselves and the stack you control, to eventually maybe figure out the address of libc.



  • @masonwheeler said in Security by obscurity fails again:

    You do if you have your own copy of the binary

    ...except you won't have your own copy of the binary as actually loaded, because the loader will put all its load segments (including trampoline segments, if it uses those) at random addresses before calling it.

    Your remarks would be applicable only to statically linked executables, which are vanishingly rare in 2017.


  • Winner of the 2016 Presidential Election

    @flabdablet said in Security by obscurity fails again:

    Your remarks would be applicable only to statically linked executables, which are vanishingly rare in 2017.

    Well, there are still enough executables out there which are not relocatable themselves. If you can figure out the address of the PLT, ASLR is useless. Android didn't require executables to be relocatable before 5.0, for example.



  • @RaceProUK said in Security by obscurity fails again:

    @CreatedToDislikeThis said in Security by obscurity fails again:

    How'd that get 16 downvotes and counting?

    Rust haters, probably.

    Proposal for a new language: Galvanized


  • sockdevs

    @HardwareGeek said in Security by obscurity fails again:

    @RaceProUK said in Security by obscurity fails again:

    @CreatedToDislikeThis said in Security by obscurity fails again:

    How'd that get 16 downvotes and counting?

    Rust haters, probably.

    Proposal for a new language: Galvanized

    i'd program in that.


  • Impossible Mission - B

    @flabdablet said in Security by obscurity fails again:

    ...except you won't have your own copy of the binary as actually loaded, because the loader will put all its load segments (including trampoline segments, if it uses those) at random addresses before calling it.

    Your remarks would be applicable only to statically linked executables, which are vanishingly rare in 2017.

    Not sure if I'm just not describing it well or what, but that's got nothing to do with static linking. Static linking wouldn't need fixups at all. What I described is exactly how dynamic linking is implemented at the binary level.


  • Winner of the 2016 Presidential Election

    @accalia said in Security by obscurity fails again:

    i'd program in that.

    You'd most likely accidentally choose @Gaska's esoteric language "Gaskanized" instead and take months to realize your mistake. :P



  • @asdf said in Security by obscurity fails again:

    and the stack you control

    This is what I don't get.

    I don't get why ASLR was ever considered a better mitigation for stack smashing and buffer overflow attacks than two other things:

    1. Keep call stacks and local variable stacks completely separate, each with its own stack pointer register and its own MMU segment, so that there is simply no way to write into the call stack except by making a procedure call.

    2. Lay out the local variable stack so it starts at the lowest address within its segment and grows toward higher addresses, with array variables allocated last in any stack frame and char arrays last among those. That way, the worst that your typical char array buffer overflow could possibly do is trash other buffers in its own function's current stack frame, and overflowing the entire frame would then just write into unallocated stack memory (perhaps triggering a segfault, if it wrote all the way off the end) rather than trashing outer frames.
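
    The two-stack scheme proposed above can be sketched as a toy model in C: return addresses live in their own array (the "call stack"), while locals sit in a separate upward-growing data stack with buffers allocated last. All names and sizes here are invented for illustration; a real implementation would of course live in the compiler and ABI, not application code:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* separate regions: return addresses can't be reached from locals */
    static unsigned long call_stack[8];   /* return addresses only */
    static char data_stack[64];           /* locals, grows upward */
    static size_t data_top = 0;

    int main(void) {
        call_stack[0] = 0x401000;   /* pretend saved return address */

        /* "allocate" an 8-byte char buffer at the top of the data stack,
           i.e. last in its frame, per point 2 above */
        char *buf = &data_stack[data_top];
        data_top += 8;

        /* overflow by 8 bytes: spills into *unallocated* data-stack
           space above the frame, nowhere near any return address */
        memcpy(buf, "AAAAAAAABBBBBBBB", 16);

        printf("return address untouched: 0x%lx\n", call_stack[0]);
        return 0;
    }
    ```

    The design point: with the overflow directed away from control data, smashing a buffer corrupts at worst other locals, not the control flow.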



  • @HardwareGeek said in Security by obscurity fails again:

    Proposal for a new language: Galvanized

    Zinc exists.


  • Winner of the 2016 Presidential Election

    @flabdablet
    Well, that's impossible to implement without completely changing the ABI of everything including the hardware (processors), which I guess is not feasible.



  • @masonwheeler said in Security by obscurity fails again:

    Static linking wouldn't need fixups at all.

    Correct, which is why your statically linked binary would let you work out where stuff was in somebody else's loaded copy of it.


  • Winner of the 2016 Presidential Election

    @flabdablet
    Identifiers with blanks? Ruby-like syntax? Do not want!



  • @flabdablet said in Security by obscurity fails again:

    @HardwareGeek said in Security by obscurity fails again:

    Proposal for a new language: Galvanized

    Zinc exists.

    Facts :barrier: jokes


  • sockdevs

    @asdf said in Security by obscurity fails again:

    @accalia said in Security by obscurity fails again:

    i'd program in that.

    You'd most likely accidentally choose @Gaska's esoteric language "Gaskanized" instead and take months to realize your mistake. :P

    HEY!

    I RESEMBLE THAT REMARK!


  • Impossible Mission - B

    @flabdablet said in Security by obscurity fails again:

    Correct, which is why your statically linked binary would let you work out where stuff was in somebody else's loaded copy of it.

    And your dynamically-linked binary does too, because the fixups are still at a known location.

    Why is this hard to understand? Have you ever actually taken apart a compiled EXE and looked at how this works? If you had, you'd know what I'm talking about, and if not... you'll just have to trust me on it, because I've been doing this for years. That is how dynamic linking is done, and there is no way to make dynamic linking work while avoiding it.


  • Winner of the 2016 Presidential Election

    @masonwheeler said in Security by obscurity fails again:

    And your dynamically-linked binary does too, because the fixups are still at a known location.

    You're talking about the PLT (procedure linkage table), I assume. Well, that's not at a known location at runtime if you compile your program as a position-independent executable. Which is what you should do.


  • Impossible Mission - B

    @asdf No, I'm not talking about the PLT. I'm talking about the places in the .text where the PLT is referring to, that have fixups that get patched by the loader.



  • @asdf said in Security by obscurity fails again:

    completely changing the ABI of everything including the hardware

    Not at all so. The call stack could grow from the top down just like it does now, so you still get to use CALL and RET instructions. Local variables inside the data stack could still be referenced relative to FP just like they are now (assuming you don't compile with -fomit-frame-pointer). FP could still point to the lowest address in a local variable frame, just like it does already. Even procedure entry and exit boilerplate would remain much as it is already; it would just add to FP rather than subtract from it on entry, and vice versa on exit.

    Compilers already support multiple ABI conventions for doing cross-language stuff. This shouldn't be any harder.



  • @HardwareGeek said in Security by obscurity fails again:

    Facts :barrier: jokes

    Depends whether the fact concerned is, as @asdf hints, a joke language.


  • Winner of the 2016 Presidential Election

    @masonwheeler said in Security by obscurity fails again:

    I'm talking about the places in the .text where the PLT is referring to

    Now I'm completely confused. Why would the PLT refer to anything inside the same object file?

    If you meant "referred", then see my post above: You'll still have to figure out how to read the correct parts of the text segments, which is the opposite of trivial and only potentially possible if you have both read and write access to the stack.

    In most cases, you'll have to guess memory locations if you want to bypass ASLR.


  • Winner of the 2016 Presidential Election

    @flabdablet said in Security by obscurity fails again:

    Not at all so. The call stack could grow from the top down just like it does now, so you still get to use CALL and RET instructions.

    But the processor expects the return address to be on the stack, doesn't it?



  • @masonwheeler said in Security by obscurity fails again:

    @asdf No, I'm not talking about the PLT. I'm talking about the places in the .text where the PLT is referring to, that have fixups that get patched by the loader.

    You don't know where the .text gets put, and typically you only get to do one thing: overflow some buffer and overwrite a return address that's on the stack. The return address is an absolute address, so you need to put an absolute value there (not a relative jump).



  • @asdf It expects the return address to be pointed to by the SP, yes. And it still could be. The difference is that instead of SP and FP pointing into different parts of the same stack frame, the SP would point into a frame reserved solely for holding return addresses, and the FP would point into a separate data stack frame that looks very much like what's already conventional except (a) no embedded return addresses and (b) data stack frames are allocated from the bottom of their memory region going up, instead of from the top coming down.

    I believe the only reason stacks almost always start at high addresses and grow downwards is convention; it started before MMUs were a thing, and provided a convenient way to manage stack vs heap. Given that every important processor now has memory management hardware, I can't see a reason to stick with that convention except when using dedicated processor instructions like CALL and RET that require it.

    On processors where CALL sticks the return address in a register, and you have to push it manually before doing a nested call and pop it manually before doing a return, even that doesn't apply.


  • Impossible Mission - B

    @asdf OK, let's start at the lowest levels.

    If you disassemble a program that makes calls to an external library, you'll find it's implemented with a CALL instruction, obviously. But you can't call the address of the external function directly, because even without ASLR you don't necessarily know what that address is, due to relocation. (ASLR is essentially "force relocation upon all libraries whether they need it or not.") So instead, the compiler puts a fixup token, a value with the same number of bytes as a pointer, into the binary.

    There's a table that lets the loader know where all those fixup tokens are located and which routines they refer to, and when the binary gets loaded into memory, the loader uses this table to overwrite the token values with the real addresses of the routines to call.

    If you have the binary, you know where these tokens are located, and you know that, at runtime, they will contain real addresses of external routines, including the ones you want to call as part of your exploit. So you use the location of the fixup in exactly the same way as you would use a vtable slot containing a pointer to the desired routine. Bada bing, bada boom, so much for ASLR moving your target out from under you.
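
    A sketch of that attack idea, assuming (as this post does) that the slot's location is known: reading the patched slot leaks the live address, and jumping through it reaches the routine regardless of where it was loaded. The names are hypothetical, and the function-pointer-to-integer cast is implementation-defined C that happens to work on mainstream platforms; note also that in a position-independent executable the slot itself moves too, which is the point of contention in this thread:

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* hypothetical stand-in for a routine worth reaching, e.g. system() */
    static int target_routine(void) { return 42; }

    /* the patched fixup slot, at a known offset in a non-PIE image */
    static int (*fixup_slot)(void);

    int main(void) {
        fixup_slot = target_routine;   /* loader patches the slot */

        /* attacker reads the slot: the routine's live address */
        uintptr_t leaked = (uintptr_t)fixup_slot;

        /* jumping to the leaked value reaches the routine wherever
           ASLR happened to put it */
        int (*jump)(void) = (int (*)(void))leaked;
        printf("result: %d\n", jump());
        return 0;
    }
    ```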

    Which part of this does not make sense?



  • @masonwheeler said in Security by obscurity fails again:

    If you have the binary, you know where these tokens are located

    ...relative to the start of the binary. You don't know their actual load address ahead of time, which you'd need to do in order to work out what to stuff into your stack smashing buffer payload in order to RET to one.


  • Winner of the 2016 Presidential Election

    @masonwheeler said in Security by obscurity fails again:

    Which part of this does not make sense?

    The part where you completely ignored my post about the difficulties of reading the text segment and extracting the address.

