My heart bleeds



  • Our sites use IIS, so I sit happily watching all the Heartbleed Bug mayhem out there.

    Until the next Patch Tuesday of course..



  • That website is.. something special.


    Do you think they had to have the bleeding heart graphic custom-made? Or was one of the Firefox UI guys already using it on his LiveJournal?



  • Is this the first time anyone has ever been rightly smug about using IIS?



  • @morbiuswilters said:

    Do you think they had to have the bleeding heart graphic custom-made? Or was one of the Firefox UI guys already using it on his LiveJournal?
    Is it way too big and clumsy? Did it work better ten versions ago? Were 80,000 lines of XUL necessary to create it?



  • @bstorer said:

    @morbiuswilters said:
    Do you think they had to have the bleeding heart graphic custom-made? Or was one of the Firefox UI guys already using it on his LiveJournal?
    Is it way too big and clumsy? Did it work better ten versions ago? Were 80,000 lines of XUL necessary to create it?

    I dunno, but if any mental health professionals see it the artist is going to be 5150'd away in the time it takes his boyfriend's iPhone to play a Death Cab For Cutie song.



  • The best thing about this bug is that every time I see someone write about it, for a split second I misread it as "Heartbeeps".



  • Did no-one else notice the claim that the bug means:

    "Leaked secret keys allows the attacker to decrypt any past and future traffic to the protected services ..."

    I wonder what happened to the concept of ephemeral session keys and perfect forward secrecy. Or maybe the author of this site felt that a lot more FUD was needed to make a large problem appear to be a few orders of magnitude bigger.

    I was so put off by that patently nonsensical statement that I found it hard to take any of the rest of the page seriously; my initial reaction was to dismiss it as simply the product of massive ignorance on the author's part.



  • @jes said:

    Did no-one else notice the claim that the bug means:

    "Leaked secret keys allows the attacker to decrypt any past and future traffic to the protected services ..."

    I wonder what happened to the concept of ephemeral session keys and perfect forward secrecy. Or maybe the author of this site felt that a lot more FUD was needed to make a large problem appear to be a few orders of magnitude bigger.

    I was so put off by that patently nonsensical statement that I found it hard to take any of the rest of the page seriously; my initial reaction was to dismiss it as simply the product of massive ignorance on the author's part.

    To be fair, they do acknowledge that PFS would protect past sessions. But the whole website reads like a combination of patting themselves on the back and a sales pitch for their software.



  • Use of perfect forward secrecy would fix it, yes. But most sites don't use it.

    Consider this scary scenario:

    • Hacker finds bug, starts using it on affected sites. This could have been happening for nearly two years, as the bug has been present since 2012 - one has to hope this is not the case.. but..
    • In doing this it seems it's sometimes possible (AIUI) to get the private key of the server's SSL certificate.
    • Once you have this, all encrypted traffic between *any* client and the server can be decrypted by the hacker if they can intercept it.
    • Assume they can capture traffic - think public wifi, corporate networks, conference wifi, etc.
    • This means they can read username and password logins .. anything not protected by additional security.
    • The bug leaks a section of server memory, so other information might leak as well, but it would be very implementation-dependent.

    Note that this seems to be a very difficult hack to trace unless you've logged the content of every request ever made.

    The fatal error here is to fix your OpenSSL library and think you're safe. You're not. If they did get your certificate private key it's compromised as well. So you change the certificate. Safe? Nope - if they got that they probably got usernames and passwords as well. So all your users have to be notified and forced to change passwords.



  • Here's the changeset where the bug was introduced:

     

     

    See if you can find it without being told where it is!
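
    For anyone who gives up: below is a self-contained toy model of the flaw - illustrative C of my own, not the OpenSSL source. The real code reads a 16-bit payload length off the wire with n2s() and then does memcpy(bp, pl, payload) without ever checking that many bytes actually arrived; here one local buffer stands in for the heap next to the request.

        #include <stdio.h>
        #include <string.h>

        /* Toy model of an RFC 6520 heartbeat: the record claims a payload
         * length, and the pre-fix code trusts that claim blindly. */
        int main(void)
        {
            /* One buffer standing in for process memory: the 4 bytes that
             * actually arrived, followed by data the peer should never see. */
            unsigned char mem[64] = {0};
            memcpy(mem, "ping", 4);                     /* actual payload  */
            memcpy(mem + 4, "SECRET-KEY-MATERIAL", 19); /* adjacent memory */

            unsigned char *pl = mem;   /* payload pointer              */
            unsigned int claimed = 23; /* length written on the wire   */
            unsigned int actual  = 4;  /* bytes that actually arrived  */

            unsigned char response[64];
            /* THE BUG: echo `claimed` bytes without checking `actual`.
             * In OpenSSL the over-read walks into neighbouring heap. */
            memcpy(response, pl, claimed);
            printf("response echoes: %.*s\n", (int)claimed, response);

            /* THE FIX, in spirit: drop records whose claim exceeds reality. */
            if (claimed > actual)
                puts("post-fix: silently discard the heartbeat instead");
            return 0;
        }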



  • I wonder if this is how they were propagating Darkleech?



  • @aristurtle said:

    Here's the changeset where the bug was introduced:

     

     

    See if you can find it without being told where it is!

    Let's see, dtls1_process_heartbeat()... using data types that differ between the method and the struct it's referencing, using pointer arithmetic and dereference instead of directly accessing struct members by name, FUCKING STILL USING C WHEN THERE ARE A BAJILLION FUCKING BETTER LANGUAGES AVAILABLE THAT WOULD PREVENT STUPID SHIT LIKE THIS FROM EVEN BEING WRITTEN.

    I'm really getting tired of every fucking serious security vulnerability being due to some poorly-written C code that was reviewed by a drunken monkey. It's like the FOSS guys don't want to be taken seriously.



  • @The_Assimilator said:

    @aristurtle said:

    Here's the changeset where the bug was introduced:

     

     

    See if you can find it without being told where it is!

    Let's see, dtls1_process_heartbeat()... using data types that differ between the method and the struct it's referencing, using pointer arithmetic and dereference instead of directly accessing struct members by name, FUCKING STILL USING C WHEN THERE ARE A BAJILLION FUCKING BETTER LANGUAGES AVAILABLE THAT WOULD PREVENT STUPID SHIT LIKE THIS FROM EVEN BEING WRITTEN.

    I'm really getting tired of every fucking serious security vulnerability being due to some poorly-written C code that was reviewed by a drunken monkey. It's like the FOSS guys don't want to be taken seriously.

     

     

    Well, you got the function name right, so you're getting warm. Unfortunately, fixing the data types and accessing struct members by name wouldn't fix this one; the error is more fundamental than that. You're neither a monkey nor drunk, so keep trying -- where's the error?

     

    (Also, I'm curious to know what language you think OpenSSL should have been written in? I'm not saying you're wrong, mind.)

     



  • @bstorer said:

    Is this the first time anyone has ever been rightly smug about using IIS?

    God no. Remember a few years back when IIS totally blew away Apache performance-wise? (I think it was IIS5...) And the open source goons had to spend like the next 3 years doing nothing but optimizing their fat pig of a web server until it could almost barely compete?



  • @morbiuswilters said:

    The best thing about this bug is that every time I see someone write about it, for a split second I misread it as "Heartbeeps".

    BEST MOVIE EVER!!!

    Oh wait it was like on-screen rectal cancer. Sorry, mix-up. For a second there, I was thinking about Deadly Friend.



  • @Quango said:


    Use of perfect forward secrecy would fix it, yes. But most sites don't use it.

    Consider this scary scenario:

    • Hacker finds bug, starts using it on affected sites. This could have been happening for nearly two years, as the bug has been present since 2012 - one has to hope this is not the case.. but..
    • In doing this it seems it's sometimes possible (AIUI) to get the private key of the server's SSL certificate.
    • Once you have this, all encrypted traffic between *any* client and the server can be decrypted by the hacker if they can intercept it.
    • Assume they can capture traffic - think public wifi, corporate networks, conference wifi, etc.
    • This means they can read username and password logins .. anything not protected by additional security.
    • The bug leaks a section of server memory, so other information might leak as well, but it would be very implementation-dependent.

    Note that this seems to be a very difficult hack to trace unless you've logged the content of every request ever made.

    The fatal error here is to fix your OpenSSL library and think you're safe. You're not. If they did get your certificate private key it's compromised as well. So you change the certificate. Safe? Nope - if they got that they probably got usernames and passwords as well. So all your users have to be notified and forced to change passwords.

    Encrypted traffic between a client and a server can be decrypted if the attacker can use this attack to get the ephemeral session key for the session to be eavesdropped. I believe most SSL clients and servers now use Diffie-Hellman key exchanges to set the session key, so in general, sessions established before the attacker could access memory on one end of the connection, or started after the attacker has left, will in fact have perfect forward secrecy.

    While certificate private keys are not ephemeral, and a successful attack could gain the ability to decrypt anything encrypted with that public/private key pair, this does not affect the creation of the session keys which would let you actually decrypt the session traffic. With a DH key exchange, those session keys are not negotiated using the certificate, so knowing the certificate's private key gets you nowhere in decrypting sessions.

    A slight caveat is that some servers reuse the same random number in the DH negotiation for some period of time, to reduce the effort expended and entropy consumed in setting up new connections (I seem to recall reading that this is sometimes the practice). In that case, fishing out the DH random value the server is using will allow recovering the session key for every session started in the same interval between server changes of the DH random value (assuming you have a complete copy of the connection-setup traffic). But sessions set up with a value that changed before the attacker could fish it out, or ones started after the attacker has stopped extracting values, will have perfect forward secrecy.
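
    For anyone who wants to see what opting in looks like in code: a minimal server-side sketch against the OpenSSL 1.0.1-era API is below. The cipher string and curve are illustrative choices, not a recommendation. With (EC)DHE suites the certificate key only signs the handshake, so a later key theft doesn't decrypt recorded traffic, and SSL_OP_SINGLE_ECDH_USE avoids the reused-DH-value caveat above by generating a fresh ephemeral key per handshake.

        #include <openssl/ssl.h>
        #include <openssl/ec.h>
        #include <openssl/objects.h>

        /* Sketch: build a server SSL_CTX that prefers forward-secret key
         * exchange. Call SSL_library_init() first; error handling omitted. */
        SSL_CTX *make_pfs_ctx(void)
        {
            SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());

            /* Prefer ephemeral ECDH/DH suites; refuse unauthenticated ones.
             * (Illustrative cipher string, tune to taste.) */
            SSL_CTX_set_cipher_list(ctx, "EECDH+AESGCM:EDH+AESGCM:!aNULL:!eNULL");

            /* Supply an ECDHE curve; the key is copied, so free our handle. */
            EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
            SSL_CTX_set_tmp_ecdh(ctx, ecdh);
            EC_KEY_free(ecdh);

            /* Fresh ephemeral key for every handshake. */
            SSL_CTX_set_options(ctx, SSL_OP_SINGLE_ECDH_USE);
            return ctx;
        }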



  • @aristurtle said:

    @The_Assimilator said:

    @aristurtle said:

    Here's the changeset where the bug was introduced:

     

     

    See if you can find it without being told where it is!

    Let's see, dtls1_process_heartbeat()... using data types that differ between the method and the struct it's referencing, using pointer arithmetic and dereference instead of directly accessing struct members by name, FUCKING STILL USING C WHEN THERE ARE A BAJILLION FUCKING BETTER LANGUAGES AVAILABLE THAT WOULD PREVENT STUPID SHIT LIKE THIS FROM EVEN BEING WRITTEN.

    I'm really getting tired of every fucking serious security vulnerability being due to some poorly-written C code that was reviewed by a drunken monkey. It's like the FOSS guys don't want to be taken seriously.

     

     

    Well, you got the function name right, so you're getting warm. Unfortunately, fixing the data types and accessing struct members by name wouldn't fix this one; the error is more fundamental than that. You're neither a monkey nor drunk, so keep trying -- where's the error?

     

    (Also, I'm curious to know what language you think OpenSSL should have been written in? I'm not saying you're wrong, mind.)

     

    Ah fuck, I misread it; it's actually indexing into the data array in the SSL3 record, not the SSL3 record itself. I would hang my head in shame, except that indexing into a struct via a pointer and reading arbitrary bytes out of it is entirely possible with C. Also, my comments about data types are valid, because payload is an unsigned int, yet the maximum permissible size of payload according to the RFC is 2 bytes. (I'm making the - perhaps insane - assumption that an int in OpenSSL land is 32 bits. More fuckery that makes attempting to read C code snippets an exercise in frustration.)

    What the flaw boils down to is a failure to check that the size of the data is actually what the user says it is, and this is not a C issue, but a code review issue. So I retract my vitriol against C for today, but I still maintain that there are more appropriate languages to use. Even C++ would be a step up, at least that would bring OpenSSL into the late 20th century.
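
    And for completeness, the check that was missing is tiny. Here's a sketch of the needed validation, assuming the RFC 6520 layout of type(1) | length(2) | payload | padding(>=16) - the names are mine, not OpenSSL's, but the upstream fix is the same idea: silently discard any record whose claimed length doesn't fit in what actually arrived.

        #include <stddef.h>
        #include <stdint.h>

        /* Returns the payload length, or -1 if the record must be
         * silently discarded (per RFC 6520). */
        static int heartbeat_payload_len(const uint8_t *rec, size_t rec_len)
        {
            if (rec_len < 1 + 2 + 16)          /* too short to even parse */
                return -1;
            unsigned payload = ((unsigned)rec[1] << 8) | rec[2]; /* big-endian u16 */
            if (1 + 2 + (size_t)payload + 16 > rec_len) /* the missing bounds check */
                return -1;
            return (int)payload;
        }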



  • @The_Assimilator said:

    @aristurtle said:

    @The_Assimilator said:

    @aristurtle said:

    Here's the changeset where the bug was introduced:

     

     

    See if you can find it without being told where it is!

    Let's see, dtls1_process_heartbeat()... using data types that differ between the method and the struct it's referencing, using pointer arithmetic and dereference instead of directly accessing struct members by name, FUCKING STILL USING C WHEN THERE ARE A BAJILLION FUCKING BETTER LANGUAGES AVAILABLE THAT WOULD PREVENT STUPID SHIT LIKE THIS FROM EVEN BEING WRITTEN.

    I'm really getting tired of every fucking serious security vulnerability being due to some poorly-written C code that was reviewed by a drunken monkey. It's like the FOSS guys don't want to be taken seriously.

     

     

    Well, you got the function name right, so you're getting warm. Unfortunately, fixing the data types and accessing struct members by name wouldn't fix this one; the error is more fundamental than that. You're neither a monkey nor drunk, so keep trying -- where's the error?

     

    (Also, I'm curious to know what language you think OpenSSL should have been written in? I'm not saying you're wrong, mind.)

     

    Ah fuck, I misread it; it's actually indexing into the data array in the SSL3 record, not the SSL3 record itself. I would hang my head in shame, except that indexing into a struct via a pointer and reading arbitrary bytes out of it is entirely possible with C. Also, my comments about data types are valid, because payload is an unsigned int, yet the maximum permissible size of payload according to the RFC is 2 bytes. (I'm making the - perhaps insane - assumption that an int in OpenSSL land is 32 bits. More fuckery that makes attempting to read C code snippets an exercise in frustration.)

    What the flaw boils down to is a failure to check that the size of the data is actually what the user says it is, and this is not a C issue, but a code review issue. So I retract my vitriol against C for today, but I still maintain that there are more appropriate languages to use. Even C++ would be a step up, at least that would bring OpenSSL into the late 20th century.

     

     

    Nah, the vitriol against C is fair, although C++ still has the problem. If you had a language that bounds-checked all array accesses, every time, that would solve the problem right there; the language would throw an error of some kind when you tried to copy 65535 bytes from a 16-byte array. I mean, ALGOL 60 did that, ffs.

     

    C (and C++) weren't really designed for memory safety, which makes them somewhat inappropriate for security software, but they get used anyway because everyone uses them for everything.

     



  • @aristurtle said:

    Nah, the vitriol against C is fair, although C++ still has the problem. If you had a language that bounds-checked all array accesses, every time, that would solve the problem right there; the language would throw an error of some kind when you tried to copy 65535 bytes from a 16-byte array. I mean, ALGOL 60 did that, ffs.

     

    C (and C++) weren't really designed for memory safety, which makes them somewhat inappropriate for security software, but they get used anyway because everyone uses them for everything.

     

     

    But then you'd have the complaint that all those checks are unneeded, and people would turn them off.

    In D there are arrays which are essentially structs with a pointer and a length, so you always know the size of the buffer you are passed; however, it wouldn't solve this in release mode when the checks are turned off.
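
    A C sketch of that D-style fat pointer - a pointer/length pair where every access goes through a check - looks something like this (names illustrative):

        #include <stdio.h>
        #include <stdlib.h>

        /* What D slices (and ALGOL 60 arrays) give you for free: the
         * array knows its own length and every access is range-checked. */
        typedef struct {
            unsigned char *ptr;
            size_t len;
        } slice;

        static unsigned char slice_get(slice s, size_t i)
        {
            if (i >= s.len) {                /* the check plain C never does */
                fprintf(stderr, "index %zu out of bounds (len %zu)\n", i, s.len);
                abort();                     /* fail loudly instead of leaking */
            }
            return s.ptr[i];
        }

        int main(void)
        {
            unsigned char buf[16] = {0};
            slice s = { buf, sizeof buf };
            printf("%d\n", slice_get(s, 3)); /* fine */
            slice_get(s, 65535);             /* aborts instead of reading 64KB */
            return 0;
        }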

     



  • @ratchet freak said:

    @aristurtle said:

    Nah, the vitriol against C is fair, although C++ still has the problem. If you had a language that bounds-checked all array accesses, every time, that would solve the problem right there; the language would throw an error of some kind when you tried to copy 65535 bytes from a 16-byte array. I mean, ALGOL 60 did that, ffs.

     

    C (and C++) weren't really designed for memory safety, which makes them somewhat inappropriate for security software, but they get used anyway because everyone uses them for everything.

     

     

    But then you'd have the complaint that all those checks are unneeded, and people would turn them off.

    In D there are arrays which are essentially structs with a pointer and a length, so you always know the size of the buffer you are passed; however, it wouldn't solve this in release mode when the checks are turned off.

     

     

     

    Don't turn them off in release builds (unless the compiler's static analysis shows that the index can never exceed the array size or something; then I guess it's a potentially valid optimization). Again, this goes back to freaking ALGOL 60, which ran on computers many times slower than the crappiest embedded system that you will ever use today.

     

    @C. A. R. Hoare, from his Turing Award speech 34 years ago said:

    A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interest of efficiency on production runs. Unanimously, they urged us not to—they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.

     

    This isn't a radically new idea.

     



  • The thing with array bound checks, though, like all forms of insurance, is that 95% of the time they're a complete waste of resources, but the 5% of the time you need them is a doozy.

    I do agree with one part of the Hoare quote though - I think software development should have a bit of a licensing or legal requirement on it to prevent so much of the stupidity from getting out into the world. Maybe not as old-school an apprentice-guild model as professional engineers have in the civil engineering areas, but something better than the astonishing "NO WARRANTY OR CLAIM OF MERCHANTABILITY WHATSOEVER, SUCKER" everyone gets away with putting in their software license boilerplate.



  • @aristurtle said:

    @ratchet freak said:

    @aristurtle said:

    Nah, the vitriol against C is fair, although C++ still has the problem. If you had a language that bounds-checked all array accesses, every time, that would solve the problem right there; the language would throw an error of some kind when you tried to copy 65535 bytes from a 16-byte array. I mean, ALGOL 60 did that, ffs.

     

    C (and C++) weren't really designed for memory safety, which makes them somewhat inappropriate for security software, but they get used anyway because everyone uses them for everything.

     

     

    But then you'd have the complaint that all those checks are unneeded, and people would turn them off.

    In D there are arrays which are essentially structs with a pointer and a length, so you always know the size of the buffer you are passed; however, it wouldn't solve this in release mode when the checks are turned off.

     

     

     

    Don't turn them off in release builds (unless the compiler's static analysis shows that the index can never exceed the array size or something; then I guess it's a potentially valid optimization). Again, this goes back to freaking ALGOL 60, which ran on computers many times slower than the crappiest embedded system that you will ever use today.

     

    @C. A. R. Hoare, from his Turing Award speech 34 years ago said:

    A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interest of efficiency on production runs. Unanimously, they urged us not to—they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.

     

    This isn't a radically new idea.

     

    Nor is it a hypothetical question. We've known since nineteen-freaking-eighty-nine that the C language is inherently unsuitable for secure programming. Hoare is exactly right: in any sane world, it would be considered an act of criminal negligence to write an operating system, browser, or anything network-facing or otherwise possessed of important security requirements in C, C++, Objective-C or any of its other misbegotten descendants. The fact that the language was not dead by 1990 tells us a lot.

     



  • @Mason Wheeler said:

    We've known since nineteen-freaking-eighty-nine that the C language is inherently unsuitable for secure programming.
     

    I think that's a bit extreme - C has enough power to let you get yourself in trouble if you're not careful and respectful, but that doesn't say anything about being inherently unsuitable for those tasks. 

    That's like saying the universe is inherently unsuitable for life because it provides the tools for life to end other life.

    Second edit: I think I would classify C as I classify flying: neither are inherently unsafe, but they are very unforgiving.



  • @Quango said:


    Our sites use IIS, so I sit happily watching all the Heartbleed Bug mayhem out there.

    Until the next Patch Tuesday of course..

    When was the last time IIS had a security patch? How many security patches were issued for IIS in the last 12 months?


  • @too_many_usernames said:

    @Mason Wheeler said:
    We've known since nineteen-freaking-eighty-nine that the C language is inherently unsuitable for secure programming.
     

    I think that's a bit extreme - C has enough power to let you get yourself in trouble if you're not careful and respectful, but that doesn't say anything about being inherently unsuitable for those tasks. 

    That's like saying the universe is inherently unsuitable for life because it provides the tools for life to end other life.

    Second edit: I think I would classify C as I classify flying: neither are inherently unsafe, but they are very unforgiving.

     

    It takes years of training to become a pilot, but any idiot who doesn't know what they're doing can pick up a C compiler and write code with security holes in it. I stand by my original claim: the Morris Worm proved that the C language is inherently insecure, and thus unsuitable for any task with inherent security requirements, such as operating systems and network-facing software. Being "careful and respectful" doesn't work; 25 years later we just found a catastrophic hole in Internet security based on the very same fundamental problem.

    You think the guy who wrote that code doesn't know better?  You think he didn't have better tools available than we had 26 years ago?  And yet he still made a mistake and nobody caught it.

     



  • @alegr said:

    When was the last time IIS had a security patch? How many security patches were issued for IIS in the last 12 months?

    Since Quango is a Slashdotter who certainly won't look up the answer, I'll do it for him. The last critical exploit in IIS was from 2010. There are two extremely minor issues newer than that, one in 2012 and one in 2013.

    (The one from 2012 is complete horseshit. You have to be logged in to the server locally to "exploit" it, if that's even the word.)



  • I wonder how Linus would react to this sort of code commit.



  • @too_many_usernames said:

    I think that's a bit extreme - C has enough power to let you get yourself in trouble if you're not careful and respectful, but that doesn't say anything about being inherently unsuitable for those tasks.

    How many developers do you know who are "careful and respectful"? What percentage of the total population of developers are?

    @too_many_usernames said:

    That's like saying the universe is inherently unsuitable for life because it provides the tools for life to end other life.

    The universe contains gamma-ray bursts which can, in seconds, sterilize entire solar systems of life - at least, all forms of life we're aware of.

    @too_many_usernames said:

    Second edit: I think I would classify C as I classify flying: neither are inherently unsafe, but they are very unforgiving.

    Flying's pretty forgiving actually, fixed-wing at least. Unless you go out of your way to pilot planes with nasty envelopes. Helicopters require a lot of careful attention.



  • Juggling running chainsaws is safe, if you're "careful and respectful" enough to not fumble one. 

     edit:

    @ubersoldat said:

    I wonder how Linus would react to this sort of code commit.
     

    It's an input sanitization bug buried in a 500+ line diff. If you're not looking for it, it's not glaringly obvious. I believe that the researcher caught it with some static analysis tool. I only caught it because somebody linked me to the diff and told me that the bug would reveal up to 64k of process memory in the heartbeat response payload, which was a pretty big hint to go on. And I'm in the secure embedded software business (where we all use C and C++, because reality is dumber than fiction).


  • @blakeyrat said:

    The universe contains gamma-ray bursts which can, in seconds, sterilize entire solar systems of life - at least, all forms of life we're aware of.

    Also, magnetars!



  • @aristurtle said:

    Here's the changeset where the bug was introduced:

     

     

    See if you can find it without being told where it is!

    My favorite comment from there:

    @kaepora said:

    I second @garu. Everyone who works on and contributes to OpenSSL is a hero. 👍

    Huge kudos to the OpenSSL team. I'm sure they are doing their best and this bug shouldn't shake our support for the world-supporting work they're doing.

    Participation Trophies for Everyone! Look, it's kind of harsh to rag on the guy for introducing this bug. This is tricky shit and it's not as much his fault as it is the fact that OpenSSL apparently doesn't put much effort into verifying patches.

    Still, that comment really shows us where FOSS has put the bar..



  • @The_Assimilator said:

    Even C++ would be a step up, at least that would bring OpenSSL into the late 20th century.

    You had me until you said "C++". Sorry, no, C++ would be 100x the clusterfuck this would be.

    But, yeah, the default in so much FOSS is just to use C. The wages of C are laughably destructive bugs like this one. *shrug* FOSS should be using a better platform, like .NET, but then they wouldn't be sticking it to The Man, which is the entire reason FOSS exists in the first place.



  • @aristurtle said:

    C (and C++) weren't really designed for memory safety, which makes them somewhat inappropriate for ~~security software~~ hardly any software at all in the year 2014

    FTFY.



  • @Mason Wheeler said:

    @too_many_usernames said:

    @Mason Wheeler said:
    We've known since nineteen-freaking-eighty-nine that the C language is inherently unsuitable for secure programming.
     

    I think that's a bit extreme - C has enough power to let you get yourself in trouble if you're not careful and respectful, but that doesn't say anything about being inherently unsuitable for those tasks. 

    That's like saying the universe is inherently unsuitable for life because it provides the tools for life to end other life.

    Second edit: I think I would classify C as I classify flying: neither are inherently unsafe, but they are very unforgiving.

     

    It takes years of training to become a pilot, but any idiot who doesn't know what they're doing can pick up a C compiler and write code with security holes in it. I stand by my original claim: the Morris Worm proved that the C language is inherently insecure, and thus unsuitable for any task with inherent security requirements, such as operating systems and network-facing software. Being "careful and respectful" doesn't work; 25 years later we just found a catastrophic hole in Internet security based on the very same fundamental problem.

    You think the guy who wrote that code doesn't know better?  You think he didn't have better tools available than we had 26 years ago?  And yet he still made a mistake and nobody caught it.

    Agreed. However, what is the biggest user of C today? It's FOSS, which is mentally stuck in 1976..



  • @ubersoldat said:

    I wonder how Linus would react to this sort of code commit.

    Before or after the exploit? Before, I doubt he'd find it; it's not like the kernel hasn't had tons of security vulnerabilities in it. Afterwards he'd post an epic rant to the mailing list, which would reassure the minions.


  • @morbiuswilters said:

    Sorry, no, C++ would be 100x the clusterfuck this would be.

    C++ lets you hide the fact that something's unsafe inside a million layers of template “goodness”. It combines the lack of safety of C with the complexity and brevity of a high-level language. What Could Possibly Go Wrong?

    C at least has the excuse of being the first vaguely acceptable and portable layer over assembler.



  • @morbiuswilters said:

    @The_Assimilator said:
    Even C++ would be a step up, at least that would bring OpenSSL into the late 20th century.

    You had me until you said "C++". Sorry, no, C++ would be 100x the clusterfuck this would be.

    But, yeah, the default in so much FOSS is just to use C. The wages of C are laughably destructive bugs like this one. *shrug* FOSS should be using a better platform, like .NET, but then they wouldn't be sticking it to The Man, which is the entire reason FOSS exists in the first place.

    OpenSSL needs to run on pretty much everything, and the .NET CLI doesn't.

    Believe me, I would love an environment running on this craptastic embedded processor that gave me safe memory management and garbage collection and free back massages after lunch, but there isn't one, and if there was it wouldn't run fast enough. I can't even get the kind of memory access checks that were commonplace in goddamn 1960. Everyone around me speaks only C and C++, with maybe some Python to write scripting tools or whatever (the Python interpreter doesn't run on the part).

    C (and C++, which is arguably worse) are the default systems programming languages, and in 2014 that's shameful.



  • @Mason Wheeler said:

    @too_many_usernames said:

    @Mason Wheeler said:
    We've known since nineteen-freaking-eighty-nine that the C language is inherently unsuitable for secure programming.
     

    I think that's a bit extreme - C has enough power to let you get yourself in trouble if you're not careful and respectful, but that doesn't say anything about being inherently unsuitable for those tasks. 

    That's like saying the universe is inherently unsuitable for life because it provides the tools for life to end other life.

    Second edit: I think I would classify C as I classify flying: neither are inherently unsafe, but they are very unforgiving.

     

    It takes years of training to become a pilot, but any idiot who doesn't know what they're doing can pick up a C compiler and write code with security holes in it. I stand by my original claim: the Morris Worm proved that the C language is inherently insecure, and thus unsuitable for any task with inherent security requirements, such as operating systems and network-facing software. Being "careful and respectful" doesn't work; 25 years later we just found a catastrophic hole in Internet security based on the very same fundamental problem.

    You think the guy who wrote that code doesn't know better?  You think he didn't have better tools available than we had 26 years ago?  And yet he still made a mistake and nobody caught it.



    The problem is that the same feature set that makes C insecure (direct control over memory access) is also what makes it more useful or suitable for certain applications. I haven't used C in a while, and I'll probably never write another line of C code in my life, but I remember doing small projects in college that were rated on the basis of speed and memory use, and pretty much every time the guys who wrote their shit in C got it to run with the smallest number of CPU cycles. Sometimes just because C didn't have memory-management overhead, but often because they could do interesting tricks idiosyncratic to that specific project to gain a bit of speed here and there that added up.

    And hey, that's probably not such a big deal these days. I sure as fuck don't care about shaving milliseconds off the runtime of any function I write. But some people work in places where that's important. A millisecond shaved from a low-level function that gets called a billion times from multiple threads becomes a pretty significant performance increase.

    The problem is not that "C is inherently unsafe for secure applications"; it's that "C is too robust and low-level to be trusted by people without rigorous training and review processes." This problem isn't solved by just banning C from all use in secure applications, since there is still the problem that C might be the best choice for that application. The problem is solved by not letting idiots write your "super secure" code-base.

     



  • @Mason Wheeler said:

    It takes years of training to become a pilot,
     

    I wasn't aware that a national average of 60 hours was "years."*

    But anyway, I suppose I should say that I'm not defending C (and its ilk) for being open to such issues - really my issue is with the processes that people (don't) follow that give rise to this thing. The whole "do it fast, worry about consequences later" mentality, or the "I'm an OSS guru!" mentality.  It's also a good example of "you don't get what you don't pay for" - in this case, properly vetted and tested code.

    Is it inconvenient, costly, and a general pain? Yes.  But I wouldn't go laying all the blame on the tools - after all, we have known about the ills of C for many decades - so why hasn't anything better replaced it?  I can't say I know how much of it is just preference bias compared to how much it's actually just the "best tool for the job."

    I think some of it is indeed obstinacy, but in my experience, there are things which can be done faster and easier in C than in other languages - mostly related to low-level hardware interfaces, probably because that's what C was designed for.

    I guess what we need is a better meta-language, telling us when we should use C and when we should use something else...

     

    *This doesn't mean you're an excellent pilot, by any stretch, but you're not a "dangerous" one.



  • @dkf said:

    @morbiuswilters said:
    Sorry, no, C++ would be 100x the clusterfuck this would be.

    C++ lets you hide the fact that something's unsafe inside a million layers of template “goodness”. It combines the lack of safety of C with the complexity and brevity of a high-level language. What Could Possibly Go Wrong?

    C at least has the excuse of being the first vaguely acceptable and portable layer over assembler.

    It also has the excuse of being 45 years old. Nobody should go to a 45-year-old prostitute and then act surprised when she dislocates a hip, but that's exactly what FOSStards do.

    By the way, I should reiterate that I write lots of C and work with FOSS. But at least I'm cognizant of the fact that it's a case of mass hysteria and nothing more.



  • @aristurtle said:

    OpenSSL needs to run on pretty much everything, and the .NET CLI doesn't.

    But it could, but people are idiots. Oh, I've known several people who do embedded C#. It's really just a case of mental illness that makes people use C/C++.



  • @dkf said:

    C at least has the excuse of being the first vaguely acceptable and portable layer over assembler.

     

    It certainly wasn't. C was derived from B, which was Ken Thompson's attempt to create a subset of BCPL so that he could fit the compiler into memory on a PDP-7. BCPL had, for example, a range-based for loop. We didn't get that back until, what, C++11?

     



  • @Snooder said:

    The problem is that the same feature set that makes C insecure (direct control over memory access) is also what makes it more useful or suitable for certain applications. I haven't used C in a while, and I'll probably never write another line of C code in my life, but I remember doing small projects in college that were rated on the basis of speed and memory use, and pretty much every time the guys who wrote their shit in C got it to run with the smallest number of CPU cycles. Sometimes just because C didn't have memory-management overhead, but often because they could do interesting tricks idiosyncratic to that specific project to gain a bit of speed here and there that added up.

    Yeah, and those "tricks" are how we get bugs like this. Besides, very few people need to optimize at that level today.

    @Snooder said:

    A millisecond shaved from a low-level function that gets called a billion times from multiple threads becomes a pretty significant performance increase.

    It's more like "microsecond"..

    @Snooder said:

    The problem is not that "C is inherently unsafe for secure applications"; it's that "C is too robust and low-level to be trusted by people without rigorous training and review processes."

    In the vast, vast, vast majority of cases you are wrong. Maybe, somewhere, C really does need to be used (although there's an argument you should just be using assembly in those rare, rare cases), but most of the time it does not. CPU time is cheap, and ultimately you're making a trade-off: an almost-imperceptible gain in performance at the expense of a sane development environment and any claim to security.



  • @too_many_usernames said:

    ..so why hasn't anything better replaced it?

    Outside of FOSSLand and the Unixville slums, it largely has been replaced.



  • @too_many_usernames said:

    @Mason Wheeler said:

    It takes years of training to become a pilot,
     

    I wasn't aware that a national average of 60 hours was "years."*

    *This doesn't mean you're an excellent pilot, by any stretch, but you're not a "dangerous" one.

    I happen to know two pilots, both of whom have less than 100 flight hours, and both of whom have been "in training" for more than two years.

    Realistically, when starting out, you are only flying in clear weather, in daylight, and you are paying a trainer at least $100/hour + costs, so getting in even two flight hours per week is very optimistic, and probably "laughably naive".

    I think the flying comparison is basically perfect. Just as someone with little experience flying should not fly others, someone with little experience dealing with C's pointer fuckery should not write code intended for public use.



  • @Snooder said:

    The problem is that the same feature set that makes C insecure (direct control over memory access) is also what makes it more useful or suitable for certain applications. I haven't used C in a while, and I'll probably never write another line of C code in my life, but I remember doing small projects in college that were rated on the basis of speed and memory use, and pretty much every time the guys who wrote their shit in C got it to run with the smallest number of CPU cycles. Sometimes just because C didn't have memory-management overhead, but often because they could do interesting tricks idiosyncratic to that specific project to gain a bit of speed here and there that added up.

    And hey, that's probably not such a big deal these days. I sure as fuck don't care about shaving milliseconds off the runtime of any function I write. But some people work in places where that's important. A millisecond shaved from a low-level function that gets called a billion times from multiple threads becomes a pretty significant performance increase.

    Yeah, but these days, with modern hardware speeds, a millisecond is long enough that the best way to save it is by improving your algorithm, not doing weird low-level tricks - and I'm saying this as someone who knows his way around weird low-level tricks.

    @Snooder said:

    The problem is not that "C is inherently unsafe for secure applications"; it's that "C is too robust and low-level to be trusted by people without rigorous training and review processes." This problem isn't solved by just banning C from all use in secure applications, since there is still the problem that C might be the best choice for that application. The problem is solved by not letting idiots write your "super secure" code-base.

    Again, do you think the guy who wrote this update was an idiot? I don't; the rest of the code shows he understands what he's doing. No, he's a very smart, talented developer... who made a mistake. To err is human, and it's really that simple. The problem is that the C language doesn't take that into account, and so is fundamentally at odds with reality itself. And when the tasks being performed are so unforgiving, when the consequences of failure are so high, that's unforgivable. C needs to die, and its entire family along with it. It's 25 years overdue for its funeral.

     



  • @morbiuswilters said:

    @aristurtle said:
    OpenSSL needs to run on pretty much everything, and the .NET CLI doesn't.

    But it could, but people are idiots. Oh, I've known several people who do embedded C#. It's really just a case of mental illness that makes people use C/C++.

     

    Using what runtime on what processor and OS?

    This isn't me accusing you of lying; this is genuine curiosity. Are we talking about "embedded C#" but it's really the Microsoft CLR running on "embedded Windows" on an x86 board? Or are we talking actual embedded? (Where I am, the only reasonable OS options are embedded Linux and VxWorks, for example. I am of the opinion that VxWorks is worse.)



  • @aristurtle said:

    @morbiuswilters said:

    @aristurtle said:
    OpenSSL needs to run on pretty much everything, and the .NET CLI doesn't.

    But it could, but people are idiots. Oh, I've known several people who do embedded C#. It's really just a case of mental illness that makes people use C/C++.

     

    Using what runtime on what processor and OS?

    This isn't me accusing you of lying; this is genuine curiosity. Are we talking about "embedded C#" but it's really the Microsoft CLR running on "embedded Windows" on an x86 board? Or are we talking actual embedded? (Where I am, the only reasonable OS options are embedded Linux and VxWorks, for example. I am of the opinion that VxWorks is worse.)

    I don't know, I never asked. I don't think it was x86, though.


  • @morbiuswilters said:

    @Snooder said:
    The problem is not that "C is inherently unsafe for secure applications"; it's that "C is too robust and low-level to be trusted by people without rigorous training and review processes."

    In the vast, vast, vast majority of cases you are wrong. Maybe, somewhere, C really does need to be used (although there's an argument you should just be using assembly in those rare, rare cases), but most of the time it does not. CPU time is cheap, and ultimately you're making a trade-off: an almost-imperceptible gain in performance at the expense of a sane development environment and any claim to security.

    While people typically focus on speed, it seems like portability is the real reason we can't get away from C. Its long history has given it time to get in practically everywhere. Who wants to take on the risk and massive cost of replacing it with something else?

    Bounties seem like the most promising way to find stuff like this in open source. No matter the technology and processes used, there will be crazy bugs that slip through, and bounties at least give all those extra eyes out there a positive incentive. I'm still amazed at how many use-after-free bugs show up in the Chrome bounty list every release.



  • @Buttembly Coder said:

    I think the flying comparison is basically perfect. Just as someone with little experience flying should not fly others, someone with little experience dealing with C's pointer fuckery should not write code intended for public use.
     

    Yes, I agree with this.

    @Mason Wheeler said:

    C needs to die, and its entire family along with it.  It's 25 years overdue for its funeral.

    Nice soapbox, but how would you pull this off? If none of the numerous alternatives to C have managed to knock it off its throne, then what would it take? Perhaps it would indeed be legal pressure, but that sounds "artificial" to me.

    From my standpoint: make me want to give up C.

    • I don't want a language that changes every year by adding (or worse, changing) language features - new libraries are OK, but don't change the language.
    • Don't make me rely on runtime libraries, but provide them. And keep them simple, without dependencies on fifty other libraries. Provide them with fully tested code, and make the test plans and results available to me.
    • Don't tie me to a specific vendor for binary tools.
    • Support all the processors I need to support, with native code (256kB RAM is a luxury on my micros, most only have 64kB, usually only 2-4MB FLASH; heck, I've only got 1024 bytes of FLASH for boot code on one of my modules!).
    • Support real-time operation - and I've got to be running app code in less than 500ms after a reset, so no crazy initialization requirements.

    Give me those things in another language, and I'll look at it. Other languages do have some of those things, but not all of them.

    Until then, C is still the best option.

