Hackers can take over any Chrysler vehicle from the last 2 years. Yes, fully remotely. Yes, including steering, brakes and transmission.



  • @anonymous234 said:

    If you design a car's entertainment system with the proper methodology, in a language and environment focused on security (i.e. not C)

    Please, tell me how I can do systems/embedded programming in anything other than C, C++, or ASM.



  • @lesniakbj said:

    Please, tell me how I can do systems/embedded programming in anything other than C, C++, or ASM.

    Go look at the exploit at the top of the thread.

    The answer is:

    #YOU FIND A FUCKING WAY

    because otherwise you're literally putting lives in danger.



  • @blakeyrat said:

    Go look at the exploit at the top of the thread.

    I did. This can all be remedied with better QA practices, correct coding practices, and using your fucking brain instead of designing systems like an idiot. You can make secure code in C, ASM, C++...



  • @boomzilla said:

    That's a familiar sort of shudder-inducing quote...

    The idea is that industry after industry is going to fall at the hands of ~~programmers~~ planners who automate and rationalize it.

    ...they automated the Chrysler, but I did not drive a Chrysler, so I said nothing.


    Filed under: I seem to be mixing my ~~metaphors~~ despots



  • @lesniakbj said:

    I did.

    I don't think it's sunk in yet.

    @lesniakbj said:

    This can all be remedied with better QA practices, correct coding practices, and using your fucking brain instead of designing systems like an idiot.

    Let me ask you this: do you think Chrysler's QA practices and coding practices are insufficient? And no ex-post-facto "well obviously it is", I want your answer as if you were answering before this exploit came to light.

    @lesniakbj said:

    You can make secure code in C, ASM, C++...

    Of course it's possible.

    But it's significantly, extraordinarily, incredibly easier in a programming language where you are not managing your own memory.

    The main reason for not switching languages appears to be something like, "waaah! we'd have to upgrade the shitty-ass CPUs we use! It might add $2 to the unit-cost of the car!" That is a reason I would classify as, "bulldung++".



  • All I said was, and I quote:

    Please, tell me how I can do systems/embedded programming in anything other than C, C++, or ASM.

    Now, answer my question. How can I build embedded systems in anything other than a low level language?

    But it's significantly, extraordinarily, incredibly easier in a programming language where you are not managing your own memory.

    Duh, of course, but in the embedded realm you don't have those luxuries. You either program them yourself (introducing errors, using program space you need for other things, etc), or you get by without them.



  • @lesniakbj said:

    Now, answer my question. How can I build embedded systems in anything other than a low level language?

    #YOU FIND A FUCKING WAY

    Why am I repeating myself?

    @lesniakbj said:

    Duh, of course, but in the embedded realm you don't have those luxuries.

    So upgrade the hardware until you do.

    I'm sorry, are you actually retarded? Was that a conclusion you were literally too stupid to reach, or...?



  • I was thinking more in general, not as a way to solve a problem for existing systems.


  • :belt_onion:

    @blakeyrat said:

    YOU FIND A FUCKING WAY

    I'll paraphrase a @Blakeyquote:

    Not unless you pay me I won't.


  • @blakeyrat said:

    So upgrade the hardware until you do.

    Yet that hardware is still going to be running some embedded software that is written in... well, what do you know: C, C++, or ASM.

    Upgrading hardware doesn't give you memory management or any of the other standard facilities you get from an OS. You literally don't even have a standard library; you write that yourself.



  • @lesniakbj said:

    Yet that hardware is still going to be running some embedded software that is written in... well, what do you know: C, C++, or ASM.

    Then you haven't upgraded it far enough.


  • :belt_onion:

    What is every operating system written in?

    1. A low level language.
    2. C#
    3. Java

    Upgrading your hardware won't help that...



  • That is how things currently are; that is not how they must be.



  • @blakeyrat said:

    The auto-park feature in my car does software control of steering in reverse or forward. (So you can back into a parallel spot, then drive forward a bit to straighten out the car.) It only works under about 5 MPH, but God knows if that limitation is hardware or software.

    IIRC, in one of the Black Hat or DEF CON talks (by the same guys, I think), it's mentioned that you could work around that by spamming the bus with messages saying that you are in fact driving at less than 5 MPH, even though you're not. (Because, of course, the only way whatever component is responsible for the electronic steering knows the speed is if it's told the speed by whatever component is measuring it.)
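    To make the failure mode concrete, here's roughly the kind of cross-check the steering module could do instead of blindly believing whatever speed shows up on the bus. Everything below -- names, message layout, thresholds -- is invented for illustration, not Chrysler's actual design:

    ```c
    #include <stdbool.h>
    #include <stdint.h>

    /* All names and thresholds here are made up for illustration. */
    #define PARK_ASSIST_MAX_SPEED_KPH   8     /* roughly the 5 MPH limit */
    #define SPEED_DISAGREEMENT_KPH      3     /* tolerated sensor mismatch */
    #define SPEED_MSG_MAX_AGE_MS        100   /* stale data counts as invalid */

    struct speed_report {
        uint16_t speed_kph;   /* claimed vehicle speed */
        uint32_t age_ms;      /* time since this value was last refreshed */
    };

    /* Decide whether to honour an auto-park steering request.
     * bus_speed   - speed as broadcast on the CAN bus (attacker-influenceable)
     * wheel_speed - speed derived from a sensor this ECU reads directly     */
    bool park_assist_allowed(struct speed_report bus_speed,
                             struct speed_report wheel_speed)
    {
        /* Reject stale or missing data outright. */
        if (bus_speed.age_ms > SPEED_MSG_MAX_AGE_MS ||
            wheel_speed.age_ms > SPEED_MSG_MAX_AGE_MS)
            return false;

        /* The two sources must roughly agree; a flood of spoofed "4 MPH"
         * frames won't match what the wheels are actually doing. */
        uint16_t diff = bus_speed.speed_kph > wheel_speed.speed_kph
                      ? bus_speed.speed_kph - wheel_speed.speed_kph
                      : wheel_speed.speed_kph - bus_speed.speed_kph;
        if (diff > SPEED_DISAGREEMENT_KPH)
            return false;

        /* And both must be under the auto-park limit. */
        return bus_speed.speed_kph <= PARK_ASSIST_MAX_SPEED_KPH &&
               wheel_speed.speed_kph <= PARK_ASSIST_MAX_SPEED_KPH;
    }
    ```

    The design point is simply that the ECU shouldn't treat the bus as a trusted source for a safety-critical input.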


  • ♿ (Parody)

    @blakeyrat said:

    That is how things currently are; that is not how they must be.

    Tomorrow: High level embedded development. Next Tuesday: Unconditional basic income!



  • @lesniakbj said:

    Please, tell me how I can do systems/embedded programming in anything other than C, C++, or ASM

    Ada?

    🛂

    More seriously -- *we need a better systems language, pronto*

    Idea for language designers:

    • Start with a relatively "clean" syntax (Ada is cripplingly verbose, while C++ is too ambiguity-laden)
    • Incorporate a type system that provides Ada's capacity for constraints and at least some of the type composability of say Haskell or ML
    • Build in the static-memory-safety of Rust (it's the only approach that will get you remotely close to writing memory-safe code for that ATmega or PIC24F your boss just ordered a million of)
    • Make it possible to interoperate with C at runtime, as you can't "rewrite everything" overnight, and some hardware (the Cortex-M NVIC, for instance) was designed with C calling conventions in mind
    • And last but not least, build it atop a production-grade toolchain infrastructure, with the ability to operate with minimal runtime support -- you should be able to write a Cortex-M's reset handler in this new lang (for reference, a plain-C version of such a handler is sketched below).
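    Here's roughly what that "minimal runtime support" bar looks like in C today -- a bare-metal Cortex-M reset handler. The linker symbols follow the usual GNU ld naming convention and vary by toolchain, so treat this as a sketch:

    ```c
    #include <stdint.h>

    /* Symbols provided by the linker script; names follow the common
     * GNU ld convention and differ between toolchains. */
    extern uint32_t _sidata;   /* start of .data initial values in flash */
    extern uint32_t _sdata;    /* start of .data in RAM */
    extern uint32_t _edata;    /* end of .data in RAM */
    extern uint32_t _sbss;     /* start of .bss */
    extern uint32_t _ebss;     /* end of .bss */
    extern uint32_t _estack;   /* initial stack pointer (top of RAM) */

    int main(void);

    void Reset_Handler(void)
    {
        /* Copy initialised data from flash to RAM. */
        const uint32_t *src = &_sidata;
        for (uint32_t *dst = &_sdata; dst < &_edata; )
            *dst++ = *src++;

        /* Zero the .bss section. */
        for (uint32_t *dst = &_sbss; dst < &_ebss; )
            *dst++ = 0;

        main();

        for (;;) { }   /* main() never returns on bare metal */
    }

    /* Cortex-M vector table: word 0 is the initial stack pointer,
     * word 1 is the reset vector. The casts rely on the usual GCC
     * behaviour for function-pointer-to-void-pointer conversion. */
    __attribute__((section(".isr_vector")))
    const void *const vector_table[] = {
        (const void *)&_estack,
        (const void *)Reset_Handler,
    };
    ```

    A replacement language has to be able to express exactly this much, with no runtime underneath it.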

    @blakeyrat said:

    But it's significantly, extraordinarily, incredibly easier in a programming language where you are not managing your own memory.

    GC is impractical at the true microcontroller scale, even for smaller 32-bit parts like the Cortex-M0s I own. Try fitting your stack, heap, and static data into 4kB of RAM on a 32-bit machine. (Furthermore, parts this small usually don't even have a heap or malloc() to begin with, because trying to do dynamic allocation on a heap a mere kilobyte or two in size is a nightmare waiting to happen, on top of eating code space you need for application functions!)
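    What you do instead on a part that size is allocate nothing at runtime -- every buffer is sized at compile time, so worst-case RAM use is known before you ever flash the chip. A toy sketch of the usual fixed-pool pattern (sizes invented for illustration, and it assumes a single execution context; a real one would mask interrupts around the bookkeeping):

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* A fixed pool of CAN frame buffers, sized at compile time:
     * no heap, no malloc(), no fragmentation. Sizes are made up. */
    #define FRAME_POOL_SIZE 8

    struct can_frame {
        uint32_t id;
        uint8_t  len;
        uint8_t  data[8];
    };

    static struct can_frame frame_pool[FRAME_POOL_SIZE];
    static uint8_t frame_in_use[FRAME_POOL_SIZE];   /* 0 = free, 1 = taken */

    struct can_frame *frame_alloc(void)
    {
        for (size_t i = 0; i < FRAME_POOL_SIZE; i++) {
            if (!frame_in_use[i]) {
                frame_in_use[i] = 1;
                return &frame_pool[i];
            }
        }
        return NULL;   /* pool exhausted: caller handles it, nothing crashes */
    }

    void frame_free(struct can_frame *f)
    {
        size_t i = (size_t)(f - frame_pool);
        if (i < FRAME_POOL_SIZE)
            frame_in_use[i] = 0;
    }
    ```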

    @blakeyrat said:

    The main reason for not switching languages appears to be something like, "waaah! we'd have to upgrade the shitty-ass CPUs we use! It might add $2 to the unit-cost of the car!" That is a reason I would classify as, "bulldung++".

    Do you realize just how much more a big, fat SOC costs in hardware NRE? You're moving to a much higher end board process (4 layers minimum, with finer pitches and even via in pad), much more sophisticated assembly processes (as your car ECU can't go the way of the XBox360 RROD, even if you're @abarker with an engine chronically short on coolant), and vastly more EMI control efforts as you're talking about a clock 2-3 orders of magnitude faster than the microcontroller solution. Never mind the increased power draw (your average cell phone SoC can't exactly sit there sleeping on a microamp of current waiting for an interrupt to turn up from an accelerometer), or the issues with thermal control (as they generate much more heat themselves, which is a problem when thermal margins are strictly limited by the operating environment).

    Furthermore, these systems-on-chip require the use of a commodity OS as they are far too complex to attempt a full-custom code package for, which paradoxically increases your attack surface as there are many more system utilities, libraries, and kernel components involved, all of which can be vulnerability sources.

    @blakeyrat said:

    So upgrade the hardware until you do.

    Atop all that I already mentioned, consumer-type system-on-chip hardware may not have the reliability or environmental qualifications needed for the task. (Have fun finding a full QML Class V flow ARM SoC, or better yet, one that can survive going down the nearest oil well and back up again. ;) )



  • @tarunik said:

    GC is impractical at the true microcontroller scale, even for smaller 32-bit parts like the Cortex-M0s I own. Try fitting your stack, heap, and static data int

    SO UPGRADE THE HARDWA--

    you know what? Nevermind.

    People who do embedded programming are just too stupid. I give up. No wonder their shit is full of security holes.



  • @powerlord said:

    Edit: For that matter, why are things like transmission and brakes connected to a network in the first place, let alone one with access to the Internet?

    Ah, the Internet of Things. Welcome to The World of Tomorrow.



  • @tarunik said:

    Ada?

    Got me there. Didn't even think of Ada. Wonder why haha.



  • I'll just leave this here, I'm sure you've all seen it:

    https://cosmos.codeplex.com/


  • :belt_onion:

    RTFPYR

    He went on to explain precisely why it is impractical to upgrade the hardware. And cost is not the only factor. Complexity is also a factor, and for something like a vehicle chip, you do not want complexity.



  • @tarunik said:

    even if you're @abarker with an engine chronically short on coolant

    :wtf: was I summoned to this thread that I wasn't participating in?



  • @abarker said:

    was I summoned to this thread that I wasn't participating in?

    Because you live in the ideal environment for heat-torturing @blakeyrat's hypothetical ARM system-on-chip powered car computer. ;)



  • No responses to the operating system written in C#?


  • ♿ (Parody)

    @Magus said:

    No responses to the operating system written in C#?

    Looks like it currently targets x86, which isn't particularly relevant here (I think).



  • I'm rather horrified that it exists, but I also wish people were actually putting appreciable work into it.



  • What if I told you Microsoft had been working on something like this in the past?



  • @blakeyrat said:

    SO UPGRADE THE HARDWA--

    Build me a system based around an ARM applications processor/System on Chip that can survive going down an oil well and back up again. THEN we will talk about upgrading the hardware.

    If you want info on the oil well environs, check out this ADI piece on high temp electronics and this archived appnote from a well logging electronics service provider.

    Or should I just go BDGI at this point?

    P.S. Oh, and you can't exactly send a cooling umbilical up to the surface, not that that'd work anyway. ;)



  • @tarunik said:

    Build me a system based around an ARM applications processor/System on Chip that can survive going down an oil well and back up again. THEN we will talk about upgrading the hardware.

    Jesus Christ you fuckers be stupid.

    Can you hack into it with a cellphone? No? THEN I'M NOT TALKING ABOUT YOU YOU STUPID FUCKER!



  • Oh, I knew about that one, but it's been dead for half a decade.



  • And I quote:

    @tarunik said:

    Do you realize just how much more a big, fat SOC costs in hardware NRE? You're moving to a much higher end board process (4 layers minimum, with finer pitches and even via in pad), much more sophisticated assembly processes (as your car ECU can't go the way of the XBox360 RROD, even if you're @abarker with an engine chronically short on coolant), and vastly more EMI control efforts as you're talking about a clock 2-3 orders of magnitude faster than the microcontroller solution. Never mind the increased power draw (your average cell phone SoC can't exactly sit there sleeping on a microamp of current waiting for an interrupt to turn up from an accelerometer), or the issues with thermal control (as they generate much more heat themselves, which is a problem when thermal margins are strictly limited by the operating environment).

    Furthermore, these systems-on-chip require the use of a commodity OS as they are far too complex to attempt a full-custom code package for, which paradoxically increases your attack surface as there are many more system utilities, libraries, and kernel components involved, all of which can be vulnerability sources.


    Still want to upgrade the hardware at this point?



  • @tarunik said:

    Still want to upgrade the hardware at this point?

    I want to fix the fucking problem. I don't give a shit how it's done.

    But the best way to fix brain-dead bullshit like this is to STOP USING TERRIBLE PROGRAMMING LANGUAGES. And if that requires upgrading the hardware, then yes, go fucking do that you useless lumps of ass.

    At the very least, stop making excuses here based on "we've always done it that way" or based on "but it'd cost a tiny bit more!" Stop it. It's pissing me off.


  • BINNED

    @blakeyrat said:

    STOP USING TERRIBLE PROGRAMMING LANGUAGES

    We'd have to come to some agreement about what the terrible programming languages are first. 🚎


  • ♿ (Parody)

    @blakeyrat said:

    But the best way to fix brain-dead bullshit like this is to STOP USING TERRIBLE PROGRAMMING LANGUAGES.

    Only if you put your head in the sand and refuse to understand the reasons behind those languages being used.

    Engineering will continue to be about tradeoffs no matter how many tantrums you throw.



  • @blakeyrat said:

    But the best way to fix brain-dead bullshit like this is to STOP USING TERRIBLE PROGRAMMING LANGUAGES. And if that requires upgrading the hardware, then yes, go fucking do that you useless lumps of ass.

    And I already suggested a formula for taking a stab at a non-terrible systems language upthread, in the same post you catastrophically failed to read.

    Or are you going to say that any language that doesn't run on a software virtual machine with full garbage collection and VM-enforced memory and security boundaries is automatically terrible? Are you trying to say that our CPUs should just run MSIL bytecode directly?

    🛂



  • @tarunik said:

    Are you trying to say that our CPUs should just run MSIL bytecode directly?

    Sure why not.

    I had a fucking RAZR phone a decade ago that could do it with Java, and it ran fast enough to play 320x240 games at 15 FPS.

    You're acting as if this is some kind of amazing invention from beyond space when it's something that existed back in 2007.



  • @blakeyrat said:

    I had a fucking RAZR phone a decade ago that could do it with Java, and it ran fast enough to play 320x240 games at 15 FPS.

    Now take the Java-CPU from that RAZR, and make it get 10 years of battery life from a single CR2032.



  • @tarunik said:

    Now take the Java-CPU from that RAZR, and make it get 10 years of battery life from a single CR2032.

    Shocking revelation, but Chrysler cars already all include battery chargers as standard equipment. It's called an "alternator".

    Your time-pod must be REALLY malfunctioning if you know about RAZR CPUs but not alternators.



  • WTF? I'm trying to make a point about embedded apps in general -- keep in mind that the airbag controller in your car has a backup battery in it so that it can still fire the airbag if the main electrical system gets wiped out in a severe crash, for instance. Imagine if some crazed driver trying to flee the police runs into your parked car head-on. Now imagine what'd happen if the airbags didn't fire because your Java-CPU-based airbag controller drained the airbag backup battery, and the airbag computer lost main power in the first microseconds of the crash because the main battery wiring was disrupted.

    Or closer to home: try making that Java-CPU run when attached to the engine block of your car, interpreting the pings from multiple knock sensors in real-time to adjust engine timing for maximum power output without preignition or detonation.
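    To give a flavour of what that code has to do, here's a toy per-cylinder knock control loop. The function names and calibration numbers are all invented; the point is the timing budget, not the specifics:

    ```c
    #include <stdint.h>

    /* Toy sketch -- adc_read_knock(), set_ignition_advance() and every
     * number below are invented, not any real engine controller. */
    #define NUM_CYLINDERS       4
    #define KNOCK_THRESHOLD   620   /* ADC counts indicating detonation */
    #define RETARD_STEP_DEG     2   /* back off quickly when knock is heard */
    #define ADVANCE_STEP_DEG    1   /* creep back toward best power slowly */
    #define MAX_ADVANCE_DEG    28
    #define MIN_ADVANCE_DEG    10

    extern uint16_t adc_read_knock(uint8_t cylinder);               /* hypothetical */
    extern void set_ignition_advance(uint8_t cylinder, int8_t deg); /* hypothetical */

    static int8_t advance_deg[NUM_CYLINDERS] = { 24, 24, 24, 24 };

    /* Called from the crank-position interrupt, once per ignition event.
     * It must finish well before the next spark is due -- at 6000 RPM on a
     * four-cylinder that's roughly 5 ms, with no GC pause budgeted anywhere. */
    void knock_control_isr(uint8_t cylinder)
    {
        if (cylinder >= NUM_CYLINDERS)
            return;

        uint16_t knock = adc_read_knock(cylinder);

        if (knock > KNOCK_THRESHOLD) {
            advance_deg[cylinder] -= RETARD_STEP_DEG;   /* pull timing now */
            if (advance_deg[cylinder] < MIN_ADVANCE_DEG)
                advance_deg[cylinder] = MIN_ADVANCE_DEG;
        } else if (advance_deg[cylinder] < MAX_ADVANCE_DEG) {
            advance_deg[cylinder] += ADVANCE_STEP_DEG;  /* recover gradually */
        }

        set_ignition_advance(cylinder, advance_deg[cylinder]);
    }
    ```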


  • :belt_onion:

    It's almost as if processing power isn't the only factor in this equation


  • Discourse touched me in a no-no place

    @blakeyrat said:

    People who do embedded programming are just too stupid.

    Then blakey screams because his 2027 Ford Fission won't turn on one time in ten due to a race condition in an OnIgnitionKeyTurned event handler.


  • Discourse touched me in a no-no place

    @tarunik said:

    survive going down an oil well and back up again.

    Duh, obviously it'd be oil-cooled, so there's no problem.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    And if that requires upgrading the hardware, then yes, ~~go fucking do that you useless lumps of ass.~~ please make a Ford Fiesta cost $60,000.

    <post can't be empty>



  • @FrostCat said:

    Duh, obviously it'd be oil-cooled, so there's no problem.

    *Laughs* More like oil-heated ;)



  • @tarunik said:

    WTF? I'm trying to make a point about embedded apps in general

    Ok but I'm not so. Good job? Moron?

    The thread is about Chryslers. Not about "embedded apps in general". If you move your dumb little eyeballs to the top of the screen, the topic's written right there for your perusement pleasure.



  • @blakeyrat said:

    STOP USING TERRIBLE PROGRAMMING LANGUAGES

    The language isn't the problem, it's the culture. If you forced them to use a managed language with the current culture, they would screw it up anyways. In today's culture, they look at @tarunik's analysis and decide "it's not worth it". If they gave a shit, they would compare the cost of creating a hardened ARM hardware platform against the cost of writing secure C code and maybe decide ARM is worth it. However, today, they look at it as "cheap embedded C-based system" against "expensive embedded Java-based system" and the winner is obvious.

    The requirements force "scary" features like steering and brakes to be accessible on the internal network. As long as they don't take security seriously and continue to not do stuff like security-specific code reviews, pen testing, and secure auto-updating, these problems will continue to surface.



  • @blakeyrat said:

    The thread is about Chryslers. Not about "embedded apps in general". If you move your dumb little eyeballs to the top of the screen, the topic's written right there for your perusement pleasure.

    You're complaining about the general embedded world though.


  • :belt_onion:

    E_NO_ITS_NOT



  • @tarunik said:

    You're complaining about the general embedded world though.

    No. I'm not.

    But go ahead and keep putting words in my mouth, then arguing against the imaginary words you came up with. By all means. Continue debating with yourself and somehow involving me in the process.



  • @Jaime said:

    If they gave a shit, they would compare the cost of creating a hardened ARM hardware platform against the cost of writing secure C code and maybe decide ARM is worth it. However, today, they look at it as "cheap embedded C-based system" against "expensive embedded Java-based system" and the winner is obvious.

    It's also the cost of creating a hardened software stack to run on that SoC -- just plopping Linux or Windows CE on it simply won't do the business, especially not with the cultural flaws you very rightly point out.

    @Jaime said:

    The requirements force "scary" features like steering and brakes to be accessible on the internal network. As long as they don't take security seriously and continue to not do stuff like security-specific code reviews, pen testing, and secure auto-updating, these problems will continue to surface.

    Not just that, but design against Satan to begin with -- network segmentation and interface restrictions can severely hamper an attacker.
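    To make the segmentation point concrete, here's a toy sketch of a gateway that default-denies traffic from the infotainment side toward the powertrain bus. The message IDs are invented for illustration:

    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct can_frame {
        uint32_t id;
        uint8_t  len;
        uint8_t  data[8];
    };

    /* The only messages the head unit legitimately needs to send toward the
     * powertrain side. Steering, brake, and transmission commands are simply
     * not on the list, so they never cross over. IDs are made up. */
    static const uint32_t allowed_ids[] = {
        0x3F0,   /* hypothetical: cabin noise level request */
        0x3F1,   /* hypothetical: clock/trip display sync */
    };

    static bool id_allowed(uint32_t id)
    {
        for (size_t i = 0; i < sizeof allowed_ids / sizeof allowed_ids[0]; i++) {
            if (allowed_ids[i] == id)
                return true;
        }
        return false;
    }

    /* Returns true if the frame was forwarded, false if it was dropped. */
    bool gateway_forward(const struct can_frame *frame,
                         void (*send_to_powertrain)(const struct can_frame *))
    {
        if (!id_allowed(frame->id))
            return false;        /* default-deny: unknown traffic is dropped */
        send_to_powertrain(frame);
        return true;
    }
    ```

    That's an architecture fix more than a language fix, which is rather the point.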

