Security by obscurity fails again



  • @asdf said in Security by obscurity fails again:

    @CreatedToDislikeThis said in Security by obscurity fails again:

    My understanding is that it works just as well natively as in JS:

    That's obvious, but you need a sandboxed environment, right? Which means, an exploit using corrupt video files cannot use it, right?

    Hmm, yeah, that's true. You can still discover the ASLR layout using this technique, but there's no real point if you're not in some sandbox from which you can subsequently exploit a buffer overflow in your host.


  • Winner of the 2016 Presidential Election

    @CreatedToDislikeThis Which would mean that ASLR is at least still effective for programs processing binary data, like media libraries. If you can only exploit this from a sandbox, then the only new issue here is the potentially fatal combination with Rowhammer: random JavaScript/Java code always had to be treated with care, since it could potentially exploit zero-days; it's just become easier now.



  • @RaceProUK said in Security by obscurity fails again:

    NX is easy to understand: it's a flag on a page that says No eXecute

    Right.

    Before NX/DEP/W^X was a thing, it used to be possible to run a buffer overflow sploit by including two things in the crap you run off the end of the buffer: (a) some executable code (often called "shellcode" though it has nothing to do with any command shell) and (b) an absolute pointer to the first instruction of that code, carefully positioned so as to overwrite the return address that the poorly coded function you just persuaded to overflow its buffer would otherwise have used to return to its caller. So when that function finally hits its RET instruction, the RET picks up the entry address of your shellcode instead of the caller's return address, and your sploit now has control.

    To make that work, all you need to know is the absolute address of the stack frame containing the buffer you just overflowed. Stack addresses have nothing to do with code addresses, so that's quite often feasible even given code loaded using ASLR.

    Now that NX/DEP is everywhere, that doesn't work any more; as soon as the CPU tries to execute the first instruction of your shellcode, it barfs because it's trying to execute code from a memory page marked No Execute. So instead of overwriting a single return address with a simple pointer to your own code, you have to write a list of absolute pointers starting at the same place, each one pointing to some useful instruction sequence inside the existing code that ends with a RET. Such sequences are called "gadgets".

    So when the function whose buffer you overflowed finally executes its RET, the CPU jumps to the first of the "gadget" sequences you've pointed to rather than back to the function's caller. That gadget sequence runs a few instructions that do something useful to you, then hits its own RET.

    At this point, the stack pointer has been advanced to the second pointer in your little list, so that RET now transfers control to the second of your "gadget" sequences. Which runs more instructions of use to you, then hits another RET. In effect, you're building up your shellcode using pre-existing instructions you found in the app, and using the existing RETs to glue them together. This is "return-oriented programming", and your list of absolute return addresses amounts to direct threaded code with the machine's stack pointer used as the threading IP; note that jump *ip++ in the linked Wikipedia example is exactly what a RET does if you substitute SP for IP.

    ASLR breaks that, because you can't generate absolute addresses to existing code if you've got no idea where in process address space the ASLR loader has put anything.

    The attack documented in the OP would theoretically allow you to run some JavaScript in the browser that decodes the memory layout that the ASLR loader generated when loading the browser itself, then use that to build a local sploit containing a thread of return addresses customized for that specific browser instance. Which means that ASLR is no longer protective against buffer overflow attacks on browsers or their plugins.

    @masonwheeler is all about pointing out that there's theoretically enough information already in memory to build that thread even without the attack documented in the OP, either because the ASLR loader needs it in order to perform the load in the first place, or because your sploit could use position-independent CALL instructions to work out where it is; but @asdf, @darkmatter and I all remain convinced that he's handwaving away the step where a shellcode or direct-threaded sploit actually gets launched in order to do any of these things.

    JavaScript itself is supposed to be restricted enough not to be able to do them, but it turns out that in fact it can, using the timing side-channel attack described in the OP.

    Does that help?


  • FoxDev

    @asdf said in Security by obscurity fails again:

    @RaceProUK said in Security by obscurity fails again:

    However, is this actually what happens, or is the module split into its sections, like .text, .data, and .imports (I've probably got one or more of those wrong, but it'll work as an example), and each section is subject to ASLR separately?

    The details probably depend on the implementation. However, it would make sense to relocate each part independently. Especially the PLT (which tells the program where to find library functions) should be relocated separately from the code, since having it at a known address relative to the return address of a vulnerable function might be risky.

    Now we're getting somewhere 🙂

    So if the PLT is relocated separately from .text, then @masonwheeler's potential exploit is impossible, as you cannot know the relative locations of the PLT and .text ahead of time. If they're relocated together, then in theory you can exploit it, as you'll know the relative offset. Then it comes down to a question of whether you can insert a relative jump, or you have to insert an absolute jump.


  • FoxDev

    @flabdablet said in Security by obscurity fails again:

    Does that help?

    A lot, thanks :D



  • @RaceProUK said in Security by obscurity fails again:

    Then it comes down to a question of whether you can insert a relative jump, or you have to insert an absolute jump.

    ...except that none of that matters, because until whatever you've inserted is actually executing it can't do squat, and until you manage to overwrite a return address with the absolute address of an instruction you want to execute, it won't ever get the chance.



  • @RaceProUK Mostly what @flabdablet said (👍).

    In addition, once you are able to execute instructions of your own choosing, you don't need to care about the PLT or library functions any more, since you can do syscalls directly (library functions may be convenient, though). With return-oriented programming, you never need to execute instructions that you "injected", you just execute parts of the existing binary (which already have the library functions resolved, if that's what you want to use).

    The difficulty really is in the first step, where you need to overwrite an absolute address (i.e., the return address on the stack, or a function pointer somewhere in memory).



  • @flabdablet said in Security by obscurity fails again:

    which, in order to launch sploit code by overwriting a return address in the stack with a buffer overflow, you can't, as I have pointed out multiple times already.

    I'm not sure that's absolutely the only way stack smashing can launch its payload, although it's the most obvious. After all, you are targeting a specific stack frame, so it should be possible to (for example) manipulate relative calls if their target address, or some modifier to it, appears in the frame, no?



  • @tufty said in Security by obscurity fails again:

    it should be possible to (for example) manipulate relative calls if their target address, or some modifier to it, appears in the frame, no?

    I suspect you'd have to go out of your way to write code susceptible to that kind of smashing.

    Local variables containing function pointers, or perhaps tables of function pointers, might be vulnerable but function pointers are, again, absolute. You'd have to find something like a local variable containing an index into a (probably non-bounds-checked) table of function pointers. There might be compilers that pass method references around in a format like that, I guess.

    If you were lucky or persistent enough to find something like that, I doubt you'd end up needing to faff about with ASLR loading tables to make good use of it.

    Again, if the world just did what I fucking told it to, then none of this would be an issue. Compilers would build data stacks starting from the bottom of a memory segment and working upward, they'd lay them out with scalars first and arrays last, and char arrays last of all, and not a return address anywhere within cooee.

    The way we do things now, it's like the roads (stacks) are full of vulnerable fragile pedestrians (return addresses and function pointers) and instead of separating those out so they can't get mown down (alter control flow) we just add stupid rules like requiring all the cars to have red flags front and behind.



  • @tufty said in Security by obscurity fails again:

    After all, you are targeting a specific stack frame, so it should be possible to (for example) manipulate relative calls if their target address, or some modifier to it, appears in the frame, no?

    I'd guess that relative call/jump addresses on the stack are quite uncommon. Function pointers & virtual functions use absolute addresses. Jump tables don't leak the relative address to the stack in the first place. (Computed gotos? Eww. But they probably stick to absolute addresses as well.)

    Don't know about generated (JIT) code. But JIT happens at runtime, when you know all the addresses already, so there's little point in emitting relative instructions there... ?


  • Impossible Mission - B

    @CreatedToDislikeThis said in Security by obscurity fails again:

    Some guy said in that site:

    Ok, this bypasses ASLR, but by itself it cannot actually exploit anything.
    For that a memory bug in the browser is still needed.
    Solution: Fix the browser to avoid such bugs e.g. by using safe programming languages such as Rust?

    How'd that get 16 downvotes and counting?
    Switching the entire browser code (or at least most of it) to use a safe programming language would be a huge boon to security.

    After that is done, ASLR and its bypassing should not matter.

    Fully agreed! As a former coworker of mine liked to say, "Dennis Ritchie's true legacy is the buffer overflow."

    We've known since 1988, nearly 3 decades now, that C and C++ are not suitable for writing any network-facing software or software with security implications. And yet we continue to do so.

    In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.

    -- C. A. R. Hoare, Turing Award lecture, 1980


  • ♿ (Parody)

    @masonwheeler said in Security by obscurity fails again:

    We've known since 1988, nearly 3 decades now, that C and C++ are not suitable for writing any network-facing software or software with security implications. And yet we continue to do so.

    As Abraham Lincoln famously said, "C and C++ are the worst languages for writing browsers in except for all the other languages."


  • Impossible Mission - B

    @boomzilla ... ???



  • @masonwheeler said in Security by obscurity fails again:

    @boomzilla ... ???

    Yeah, he's confused, as usual. It wasn't Abraham Lincoln, it was Winston Churchill, and what he actually said was "tomorrow, Madam, I shall be sober; but you will still be ugly".


  • Impossible Mission - B

    @flabdablet said in Security by obscurity fails again:

    Yeah, he's confused, as usual. It wasn't Abraham Lincoln, it was Winston Churchill, and what he actually said was "tomorrow, Madam, I shall be sober; but you will still be ~~ugly~~ buggy".

    FTFY


  • ♿ (Parody)

    @flabdablet No, now you're thinking of the guy who wrote Andy Capp.



  • @dcon said in Security by obscurity fails again:

    @HardwareGeek said in Security by obscurity fails again:

    Proposal for a new language: Galvanized

    Rejected. Too many syllables.

    But other languages have the same number of syllables:

    • see-plus-plus
    • ja-va-script
    • pee-aitch-pee

    Um, you're right; too many syllables.


  • Impossible Mission - B

    @HardwareGeek In-ter-cal






  • Winner of the 2016 Presidential Election

    @RaceProUK
    @masonwheeler's exploit is impossible either way. But if the PLT is at a known offset from the currently executed instruction and you can read the return address before overwriting it, then you should have a successful return-to-libc.



  • @masonwheeler said in Security by obscurity fails again:

    In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.
    -- C. A. R. Hoare, Turing Award lecture, 1980

    Pffft. That's the guy who thinks inventing null pointers was a mistake.


  • Winner of the 2016 Presidential Election

    @flabdablet said in Security by obscurity fails again:

    That's the guy who thinks inventing null pointers was a mistake.

    Well, it's certainly a good idea to make null a completely different type than references in your type system so that nullable references always have to be checked for null before they're used.



  • @asdf said in Security by obscurity fails again:

    if... you can read the return address before overwriting it

    A simple buffer overflow won't allow you to do that, though.


  • Winner of the 2016 Presidential Election

    @anotherusername
    I already know that and told Mason multiple times, see above. ;) Just saying: It's more risky if that's possible, because then R/W access to the stack immediately gives you a successful exploit without having to resort to ROP.


  • :belt_onion:

    @boomzilla Lincoln followed that tech insight with the unforgettable, "one leaf of paper is more than enough storage for anybody" and then got murdered by an overzealous dba that was tired of 8 letter table & column name limits.


  • ♿ (Parody)

    @darkmatter Sic semper transactions!


  • Winner of the 2016 Presidential Election

    @boomzilla @darkmatter The misquotes thread is :arrows:


  • :belt_onion:

    @boomzilla said in Security by obscurity fails again:

    @darkmatter Sic semper transactions!

    there is no mechanism by which i can like this enough.



  • @masonwheeler said in Security by obscurity fails again:

    Dennis Ritchie's true legacy is the buffer overflow

    You could overflow buffers in assembler long before C came along. Ritchie just made sure you could do so in a portable fashion.



  • By the way, ASLR should also work without NX, because a good ASLR would also randomize the stack's location (and as such, the absolute address with which the return address must be overwritten).



  • But still, it's alarming that sandboxed code that can't access arbitrary addresses (or even know the address of its own objects) is able to guess the location of modules with some timing attack I'm not even sure I'd understand if I read the article.


  • kills Dumbledore

    @darkmatter said in Security by obscurity fails again:

    @boomzilla Lincoln followed that tech insight with the unforgettable, "one leaf of paper is more than enough storage for anybody" and then got murdered by an overzealous dba that was tired of 8 letter table & column name limits.

    (image)



  • @asdf said in Security by obscurity fails again:

    Which is why it's important to compile your programs with the -pie switch.

    Is it the same as, or related to, the -fPIC switch? That one I've heard of, but -pie, never... Is it because it's normally always on, so I shouldn't have to worry about it? PIC seems to also be about the position of code and stuff, so not totally unrelated; but given that I don't understand half of what's being said here, I'm probably mixing things up...


  • Winner of the 2016 Presidential Election

    @remi said in Security by obscurity fails again:

    Is it the same, or has it anything to do with, the -fPIC switch? That one I've heard of, but -pie, never...

    Basically, you should use -pie for executables and -fPIC for libraries. IIRC, if you compile an executable with -fPIC only, GCC will create a non-relocatable ELF file and therefore the text section of the program will not actually be relocated by the loader. Don't ask me why GCC behaves like that.

    Is it because it's normally always on and I shouldn't have to worry about it?

    I think the compile-time switch is off by default, so you'd have to check whether that's different on your distribution.


  • FoxDev

    @asdf said in Security by obscurity fails again:

    I think the compile-time switch is off by default, so you'd have to check whether that's different on your distribution.

    The MSVC equivalent switch is /DYNAMICBASE, and it defaults to on. For 64-bit, you also want /HIGHENTROPYVA, which, as it turns out, also defaults to on. And while you're there, you may as well throw in /NXCOMPAT. Except you don't have to, as it defaults to on.

    Microsoft showing OSS how it's done 🛒 🙂


  • Discourse touched me in a no-no place

    @darkmatter said in Security by obscurity fails again:

    somewhere in there is steps 3 & 4

    3. ???
    4. Profit!


  • @masonwheeler said in Security by obscurity fails again:

    @flabdablet Yes, I know this.

    Somewhere in the program image there's a bit of assembly that says CALL Fixup00123, and that gets fixed-up by the linking loader with the actual call, so it now says CALL 0xDEADF00D.

    There are two ways that the compiler can implement this. It can drop fixups everywhere throughout the codebase, which both makes the compiler's job harder and makes the loading process slower. Or it can create little stub functions in the output that do nothing but call a fixup. That way, there's only one fixup per external routine to be called in the entire codebase, which makes everything a lot simpler.

    There's a third way. Instead of CALL 0xDEADF00D for calling into a DLL, you CALL [0xDEADF00D], where now 0xDEADF00D contains a pointer to the code you want to call, and the square brackets make it mean 'call the subroutine whose address is in the pointer whose location is 0xDEADF00D'. You don't need stub functions. (It's approximately similar, but different in detail.)



  • @Medinoc said in Security by obscurity fails again:

    By the way, ASLR should also work without NX, because a good ASLR would also randomize the stack's location (and as such, the absolute address with which the return address must be overwritten).

    Against the very first buffer overflow attacks, randomising the stack's location was all that was needed, because the first ones(1) relied on scribbling the payload downloader code into the stack, along with a return address that transferred control to the downloader. This was before the NX stuff existed. (And the NX thing is only needed because the people who write operating systems on x86 are too lazy to do their jobs right by separating code from data. If you have code from zero up to HERE, and data from 0xF...F down to THERE, and you size the code and data segment descriptors correctly, the segmentation features in x86 will allow you to absolutely prevent execution of code in the stack: the stack is in the data segment, because it has to be writeable, so stack addresses lie outside the code segment limits and are therefore not executable. That doesn't help with return-to-libc attacks, but it absolutely prevents return-to-stack attacks.)

    (1) I read about them on the Cult of the Dead Cow site about 20 years ago...


  • Notification Spam Recipient

    @asdf said in Security by obscurity fails again:

    @masonwheeler said in Security by obscurity fails again:

    You're talking about the virtues of NX, which I agreed is a useful exploit-mitigation tool, not about the virtues of ASLR. Please keep the strawman population down.

    So you're talking about an attack in a virtual fantasy world in which ASLR is present, but NX is not?

    Maybe he's talking about my IBM T42 units with Windows 7? 'Cause those CPUs definitely don't have NX (Windows 10 barfs a lot when I try to force the issue too).



  • Ok, let's look at our options:

    • Nothing
    • ASLR
    • Not using a language that allows you to write 1000 bytes to a 10 byte buffer
    • ASLR on top of a language that's already sane


  • @Steve_The_Cynic said in Security by obscurity fails again:

    @masonwheeler said in Security by obscurity fails again:

    @flabdablet Yes, I know this.

    Somewhere in the program image there's a bit of assembly that says CALL Fixup00123, and that gets fixed-up by the linking loader with the actual call, so it now says CALL 0xDEADF00D.

    There are two ways that the compiler can implement this. It can drop fixups everywhere throughout the codebase, which both makes the compiler's job harder and makes the loading process slower. Or it can create little stub functions in the output that do nothing but call a fixup. That way, there's only one fixup per external routine to be called in the entire codebase, which makes everything a lot simpler.

    There's a third way. Instead of CALL 0xDEADF00D for calling into a DLL, you CALL [0xDEADF00D], where now 0xDEADF00D contains a pointer to the code you want to call, and the square brackets make it mean 'call the subroutine whose address is in the pointer whose location is 0xDEADF00D'. You don't need stub functions. (It's approximately similar, but different in detail.)

    And in fact, this third way is what Visual Studio's compiler does if you tell it in advance that the function is in a DLL. If it doesn't know, it just calls the stub. So says the Word of Raymond.



  • @Medinoc said in Security by obscurity fails again:

    @Steve_The_Cynic said in Security by obscurity fails again:

    @masonwheeler said in Security by obscurity fails again:

    @flabdablet Yes, I know this.

    Somewhere in the program image there's a bit of assembly that says CALL Fixup00123, and that gets fixed-up by the linking loader with the actual call, so it now says CALL 0xDEADF00D.

    There are two ways that the compiler can implement this. It can drop fixups everywhere throughout the codebase, which both makes the compiler's job harder and makes the loading process slower. Or it can create little stub functions in the output that do nothing but call a fixup. That way, there's only one fixup per external routine to be called in the entire codebase, which makes everything a lot simpler.

    There's a third way. Instead of CALL 0xDEADF00D for calling into a DLL, you CALL [0xDEADF00D], where now 0xDEADFOOD contains a pointer to the code you want to call, and the square brackets make it mean 'call the subroutine whose address is in the pointer whose location is 0xDEADFOOD'. You don't need stub functions. (It's approximately similar, but different in detail.)

    And in fact, this third way is what Visual Studio's compiler does if you tell it in advance that the function is in a DLL. If it doesn't know, it just calls the stub. So says the Word of Raymond.

    Yeah, I know. Debugging compiled-but-sourceless Visual Studio code is why I know about that. (This was around 2000, and younger colleagues were ... boggled ... by my ability and willingness to do this.)



  • @ben_lubar said in Security by obscurity fails again:

    • ASLR on top of a language that's already sane

    This site has me questioning whether that's even possible.




  • Discourse touched me in a no-no place

    @ben_lubar said in Security by obscurity fails again:

    Not using a language that allows you to write 1000 bytes to a 10 byte buffer

    That was always my first line of defence.



  • @asdf said in Security by obscurity fails again:

    @masonwheeler said in Security by obscurity fails again:

    I'm talking about the places in the .text where the PLT is referring to

    Now I'm completely confused. Why would the PLT refer to anything inside the same object file?

    If you meant "referred", then see my post above: You'll still have to figure out how to read the correct parts of the text segments, which is the opposite of trivial and only potentially possible if you have both read and write access to the stack.

    In most cases, you'll have to guess memory locations if you want to bypass ASLR.

    Isn't the whole point that the CPU is being monitored for memory access, and then the access sequence is "decrypted" to determine where the relevant parts are loaded?

