Boss thinks there isn't an ANSI standard for C



  • Today my boss (CEO/CTO/Chief Architect/OO book author/high-tech entrepreneur) entered into a debate on the merits of C++ over C, claiming that pretty much nobody programs in C anymore. That point is perhaps debatable (relative to how much Java, C# and Visual C++ is being developed these days), but given the guy actually said "I don't even think there's an ANSI standard for C, is there?" I don't think I'll get very far. He seemed genuinely surprised when my jaw dropped open and I said "of course there is". 

    *sigh*



  • There's still a lot of C work being done.  Also, I suggest you get a copy of the ANSI C standard and hit him with it.  Of course, that's just me. 



  • @Kederaji said:

    There's still a lot of C work being done.  Also, I suggest you get a copy of the ANSI C standard and hit him with it.  Of course, that's just me. 

    Violence is the solution!  That's what my shrink tells me. 



  • I take it you guys don't do embedded systems work? Really, that's where C is surviving.

    There's only one compiler that works with C++ for the processor we use, and it costs three times as much as an equivalent C compiler. It's damn lame.

    Never having heard of ANSI C, though? WTF?



  • I hear that they are still maintaining the C++-to-C compiler...



  • It's going to be a sad age when C goes the way of assembly :/ . I can imagine the equivalents of:

    i = 0;
    for (;;) { //Why is this loop SO SLOOOOOOOW!?
      hahaha = malloc(0x00FFFFFF);
      readCharacterFromDisk(*hahaha);
      putc(hahaha[i]);
      i++;
      free(hahaha);
    }
    

    /me shudders violently.



    Off-topic: Cyrus when I was your name I immediately thought of Lysis, read a few words, wondered how is he going to turn it into a troll post, and then noticed it wasn't.



  •  @Lingerance said:


    Off-topic: Cyrus when I was your name I immediately thought of Lysis, read a few words, wondered how is he going to turn it into a troll post, and then noticed it wasn't.

    Obviously I need a cool avatar to avoid this problem.



  • @Lingerance said:

    Cyrus when I was your name ...

    When I first was that post, I first thought you were going to talk about when you used Cyrus for a screen name. 



  • @belgariontheking said:

    When I first was that post, I first thought you were going to talk about when you used Cyrus for a screen name. 
    I obviously accidentally threw my copy of the word "saw" through my custom ROT-Word encryption, what a blunder.



  • @belgariontheking said:

    Violence is the solution! 

     

    Of course it is.  It has worked for me.

     "Handle your exceptions appropriately or I will beat you" has been my mantra at more than one place of employment.



  • At least 90% of the OS (or at least the kernel) you're using is written in C as well. This is as true for Windows as it is for Linux (discussion here). Ditto device drivers and anything else that's both low level and performance critical.

    Some days I despair where the next generation of device driver or OS authors is going to come from...



  • Boss: We need to write a driver for the new widget. Any suggestions on the technology?

    Newb: Well, we definitely want something with a web interface. How about PHP? And we can use AJAX to communicate off-chip!

    *quiver*



  • Kernels being written in C has more to do with historical reasons: Linux, Windows NT and BSD are all at least 15 years old, back when C++ was pretty crap in many ways.  Device drivers are written in C because they have to interface with the kernel, and because programmers in that domain have more training in pure C.  If one were to write a new kernel now for a non-embedded system, there would be little reason not to do it in C++.

    C++ is not lower performance than C; it is unarguably not in theory and nowadays it is not in practice either.  There are only a few slower-performance features: virtual methods, exceptions, iostreams, RTTI and STL containers.  All are entirely optional, and were carefully designed to not require any more performance hit than necessary (with the exception of exceptions, which admittedly really are slow).  In fact, using the C++ features may sometimes be faster than the C equivalent: for instance, a virtual function call can be faster than a switch statement; a cout series of << can be inlined by the optimizer and requires no string parsing step; an STL sort algorithm can be faster than a C qsort() call because the comparison function calls can be inlined.  Whether these gains are actually realized is implementation-dependent, but anyway no one can accuse the C++ standard of specifying a bloated language.
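
    A minimal sketch of that qsort-vs-std::sort point (my own illustration, not benchmarked, and the actual gain depends on the compiler):

    // Illustrative only: the comparator given to std::sort is a template
    // argument and can be inlined; the function pointer given to qsort()
    // generally cannot.
    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    static int compare_ints(const void* a, const void* b) {
        int x = *static_cast<const int*>(a);
        int y = *static_cast<const int*>(b);
        return (x > y) - (x < y);
    }

    int main() {
        std::vector<int> a = {5, 3, 9, 1, 7};
        std::vector<int> b = a;

        // C style: every comparison goes through an opaque function pointer.
        std::qsort(a.data(), a.size(), sizeof(int), compare_ints);

        // C++ style: the comparator can be inlined at the call sites.
        std::sort(b.begin(), b.end(), [](int x, int y) { return x < y; });
    }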

    As for the proof that implementations are up to standard: in the game industry, another application domain where performance is critical but which evolves more quickly than the don't-fix-what-ain't-broke kernel domain, everyone is using C++ nowadays -- even the C holdout John Carmack has now switched over.



  • @Cyrus said:

     @Lingerance said:



    Off-topic: Cyrus when I was your name I immediately thought of Lysis, read a few words, wondered how is he going to turn it into a troll post, and then noticed it wasn't.

    Obviously I need a cool avatar to avoid this problem.

    You obviously need to be Cyrus from The Warriors.



  • @seaturnip said:

    C++ is not lower performance than C; it is unarguably not in theory and nowadays it is not in practice either.  There are only a few slower-performance features: virtual methods, exceptions, iostreams, RTTI and STL containers.  All are entirely optional, and were carefully designed to not require any more performance hit than necessary (with the exception of exceptions, which admittedly really are slow).  In fact, using the C++ features may sometimes be faster than the C equivalent: for instance, a virtual function call can be faster than a switch statement; a cout series of << can be inlined by the optimizer and requires no string parsing step; an STL sort algorithm can be faster than a C qsort() call because the comparison function calls can be inlined.

    I wouldn't disagree, but a sort algorithm is something that I would expect to be custom-written rather than using a library function (in a performance critical environment), and C function pointers can also achieve much the same performance gains over a switch. Admittedly, C++ has the romping-home advantage that it's a lot safer in the latter case.
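
    A minimal sketch of the function-pointer-table alternative to a switch (the handler names are made up for the example):

    // Illustrative only: C-style dispatch through a table of function
    // pointers -- one indirect call, no branch chain.
    #include <cstdio>

    static void handle_read()  { std::puts("read");  }
    static void handle_write() { std::puts("write"); }
    static void handle_close() { std::puts("close"); }

    static void (*const handlers[])() = { handle_read, handle_write, handle_close };

    int main() {
        int opcode = 1;          // would normally come from input
        handlers[opcode]();      // dispatch: prints "write"
    }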

    In John Carmack's case, I believe that the engine backend is still pure C, but I don't have a link to back that up right now.



  • @mfah said:

    In John Carmack's case, I believe that the engine backend is still pure C, but I don't have a link to back that up right now.
     

    I'm pretty sure he used C++ for the Doom 3 engine. 



  • Many kernel developers use pure C for other reasons. One I have heard often is the more transparent relationship between code and the machine instructions produced: if I type a+b in C, I can be fairly sure that this is a straightforward operation that takes a few cycles. In C++, there could be a giant, expensive function call behind this. C is often referred to as "the world's only portable assembler" for good reason, and that is a valuable asset when coding close to the hardware.
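
    A minimal sketch of the "a + b might hide a giant function call" point (the Matrix type here is made up purely for illustration):

    // Illustrative only: in C, a + b compiles to a few instructions.  In C++
    // the same expression can hide an arbitrarily expensive overloaded operator.
    #include <cstddef>
    #include <vector>

    struct Matrix {
        std::vector<double> data;
        explicit Matrix(std::size_t n) : data(n * n, 0.0) {}
    };

    // Looks innocent at the call site, but allocates and fills a whole matrix.
    Matrix operator+(const Matrix& a, const Matrix& b) {
        Matrix r(64);                                   // heap allocation
        for (std::size_t i = 0; i < r.data.size(); ++i)
            r.data[i] = a.data[i] + b.data[i];
        return r;
    }

    int main() {
        Matrix a(64), b(64);
        Matrix c = a + b;   // a function call, a heap allocation and 4096
                            // additions -- none of it visible on this line
        (void)c;
    }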



  • @Maciej said:

    Many kernel developers use pure C for other reasons. One I have heard often is the more transparent relationship between code and the machine instructions produced: if I type a+b in C, I can be fairly sure that this is a straightforward operation that takes a few cycles. In C++, there could be a giant, expensive function call behind this. C is often referred to as "the world's only portable assembler" for good reason, and that is a valuable asset when coding close to the hardware.

    That's a good point.  Ultimately, though, it's not that hard to avoid unexpected repercussions in C++: one only needs to be aware of, and avoid when necessary, the quite constrained set of situations that lead to overly fancy stuff.  It's easy to show how abuse of operator overloading can lead to insane results, but it's equally easy to just not overload operators beyond the simple and straightforward uses of the feature.  Implicit object creation is the biggest gotcha, but again a bit of care in passing and returning objects by value, and using the 'explicit' keyword, largely eliminates the problem.  Forsaking all the conveniences of C++ to avoid these few gotchas is throwing out the baby with the bathwater.
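
    A minimal sketch of the 'explicit' point (Buffer is a made-up type for the example):

    // Illustrative only: 'explicit' plus pass-by-const-reference stops the
    // compiler from silently constructing temporaries behind your back.
    #include <cstddef>
    #include <vector>

    struct Buffer {
        std::vector<char> bytes;
        // Without 'explicit', `use(1024)` below would quietly construct (and
        // then destroy) an entire Buffer just to make the call compile.
        explicit Buffer(std::size_t n) : bytes(n) {}
    };

    void use(const Buffer& b) { (void)b; }   // const reference: no copy

    int main() {
        Buffer buf(1024);
        use(buf);          // fine: no temporary, no copy
        // use(1024);      // does not compile, thanks to 'explicit'
    }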

    Ultimately this is just another way of phrasing the more common criticism of C++, which is that it's full of arcane quirks and complexity.  Well yes, but you would think smart guys like kernel programmers could manage it as well or better than the rest of us.



  • Without disagreeing with your basic argument, I'd suggest... 

    @seaturnip said:

    C++ is not lower performance than C; it is unarguably not in theory and nowadays it is not in practice either.  There are only a few slower-performance features: virtual methods, exceptions, iostreams, RTTI and STL containers.  All are entirely optional, and were carefully designed to not require any more performance hit than necessary (with the exception of exceptions, which admittedly really are slow). 


    ... that the real killer (at least in most of the bad C++ you encounter in practice) is the way it takes a great deal of care and attention to detail to avoid unknowingly constructing and destructing masses of superfluous temporaries.  *That*'s where all the efficiency gets lost in practice.
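
    One way to actually see those superfluous temporaries is to instrument a class and count the copies (a quick sketch of my own, not from any particular codebase):

    // Illustrative only: pass-by-value quietly copies the argument on every
    // call; pass-by-const-reference does not.
    #include <cstdio>

    struct Tracked {
        static int copies;
        Tracked() {}
        Tracked(const Tracked&) { ++copies; }
    };
    int Tracked::copies = 0;

    void by_value(Tracked t)            { (void)t; }   // copies the argument
    void by_reference(const Tracked& t) { (void)t; }   // does not

    int main() {
        Tracked t;
        for (int i = 0; i < 1000; ++i) by_value(t);      // 1000 hidden copies
        for (int i = 0; i < 1000; ++i) by_reference(t);  // no hidden copies
        std::printf("copies made: %d\n", Tracked::copies);   // prints 1000
    }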



  • @seaturnip said:

    Implicit object creation is the biggest gotcha, but again a bit of care in passing and returning objects by value, and using the 'explicit' keyword, largely eliminates the problem.

    Heh, posts that cross in the ether... you wrote this exactly as I was writing mine about the same thing.

    @seaturnip said:

    Forsaking all the conveniences of C++ to avoid these few gotchas is throwing out the baby with the bathwater.

    Ultimately this is just another way of phrasing the more common criticism of C++, which is that it's full of arcane quirks and complexity.  Well yes, but you would think smart guys like kernel programmers could manage it as well or better than the rest of us.

    I think this is the point where we come up against the theory-vs-practice difference.  In theory all those advantages outweigh the risks, but in practice not everyone in the software industry is a good coder, and you have to deal with that.  Simplicity is of the essence when you're trying to create something that needs to be seriously reliable, such as an OS kernel, and you have to plan on the basis that errors will happen.  C++ is a power-tool that becomes a dangerous weapon in inexperienced hands, and the potential for obfuscation and unexpected or unintended consequences is that much greater than it is for C; there are plenty of contexts in which the relative weights of the different considerations balance out the other way.

    For example, take the embedded field that has been mentioned several times in this thread.  Plenty of people do use C++ in the embedded field these days, but there are also plenty who don't.  It's not about memory or processor limitations any more - like you said, modern C++ compilers and libraries can do a perfectly efficient job - it's about the need for many embedded devices to have absolutely rock-solid reliability and never ever crash no matter what.

    People can deal with having to reboot their PC twice a day, but if your fridge-freezer crashes and a couple of hundred quid's worth of frozen food goes to waste, you're going to be angry, and if the cause is unreliable software embedded in it then that's going to happen to a lot of people and suddenly the manufacturer is facing a business-threatening loss of consumer confidence, never mind lawsuits.  Software reliability is a difficult problem, and making your code simple and plain and direct is one way to address that problem, and with C++, unless every single employee is a genius, you just know that at least some of them will write at least some bad and obfuscatory code.  So in a lot of code shops, C++ is not used for mission-critical code, simply for the added confidence that code reviews will be valid.

    To give a similar example: there are embedded coding standards that ban the use of malloc, and this isn't because of the complexity of understanding what malloc does, nor the memory or speed overhead of using it; it's because it makes the overall behaviour of the code non-deterministic, and that makes it impossible to guarantee.
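
    To make that malloc point concrete, here's a sketch of the kind of fixed-size static pool such coding standards push you towards instead (my own illustration; real implementations vary):

    // Illustrative only: the memory footprint is fixed at compile time, every
    // allocation takes a bounded number of steps, and failure is an explicit,
    // predictable condition rather than heap exhaustion at some random moment.
    #include <cstddef>

    const std::size_t SLOT_SIZE  = 64;
    const std::size_t SLOT_COUNT = 16;

    static unsigned char pool[SLOT_COUNT][SLOT_SIZE];
    static bool          in_use[SLOT_COUNT];

    void* pool_alloc() {
        for (std::size_t i = 0; i < SLOT_COUNT; ++i)
            if (!in_use[i]) { in_use[i] = true; return pool[i]; }
        return 0;   // out of slots: handle it explicitly at the call site
    }

    void pool_free(void* p) {
        for (std::size_t i = 0; i < SLOT_COUNT; ++i)
            if (p == pool[i]) { in_use[i] = false; return; }
    }

    int main() {
        void* p = pool_alloc();
        pool_free(p);
    }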

    Sure.  There are plenty of "smart guys" who can manage C++ just fine.  But there are plenty of even smarter guys who both understand C++ and know about the use of methodologies to mitigate risks and unpredictability, and there are plenty of places where the decision lands the other way, and it's for good reasons.



  • @Arenzael said:

    @mfah said:

    In John Carmack's case, I believe that the engine backend is still pure C, but I don't have a link to back that up right now.
     

    I'm pretty sure he used C++ for the Doom 3 engine. 


    Ah, here we go. Do a search for C++: http://www.scribd.com/doc/479479/John-Carmack-Archive-Interviews.



  • @mfah said:

    Do a search for C++: http://www.scribd.com/doc/479479/John-Carmack-Archive-Interviews.
     

    Hold on, let me fire up SSDS...



  • @MasterPlanSoftware said:

    Hold on, let me fire up SSDS...


    See you in 2018, then!



  • @mfah said:

    @MasterPlanSoftware said:

    Hold on, let me fire up SSDS...

    See you in 2018, then!

     

    I just wish SOMEONE would invent some kind of search program that would search all my documents without merging them all into text files!



  • Historical and practical reasons.  C++ is treacherous for low level stuff, and while I'm sure everyone would like to make a wild guess at what the compiler will do *this* time on *this* hardware and hope to get predictability, it's probably not a great idea. 

    This is from one interview (http://linuxgazette.net/issue32/rubini.html), but if you do some simple googling you can find many other places where this is discussed, and at much greater length.  This is one of the more benign ones I found, I assume because he was being interviewed and not smacking down some C++ programmer.  It gets old when they keep saying "well, just don't use every single feature of C++ and then you won't have any problems!"

    Alessandro: Many people ask why the kernel is written in C instead of C++. What is your point against using C++ in the kernel? What is the language you like best, excluding C?

    Linus: C++ would have allowed us to use certain compiler features that I would have liked, and it was in fact used for a very short time period just before releasing Linux-1.0. It turned out to not be very useful, and I don't think we'll ever end up trying that again, for a few reasons.

    One reason is that C++ simply is a lot more complicated, and the compiler often does things behind the back of the programmer that aren't at all obvious when looking at the code locally. Yes, you can avoid features like virtual classes and avoid these things, but the point is that C++ simply allows a lot that C doesn't allow, and that can make finding the problems later harder.

    Another reason was related to the above, namely compiler speed and stability. Because C++ is a more complex language, it also has a propensity for a lot more compiler bugs and compiles are usually slower. This can be considered a compiler implementation issue, but the basic complexity of C++ certainly is something that can be objectively considered to be harmful for kernel development.

     



  • @DaveK said:

    it's about the need for many embedded devices to have absolutely rock-solid reliability and never ever crash no matter what.

    Maybe things have changed since I did some embedded programming, but back then, if we needed reliability there was always a watchdog circuit which would reset the processor if it locked up. This was mainly to handle hardware situations (voltage surges, EM interference etc) and it didn't excuse bad programming, but it would handle the exceptional circumstances. Don't MCUs have them built in nowadays? (IIRC, it only needs an output pin, capacitor, resistor and a couple of transistors to make one).

    I'm pretty sure it's just as easy to write bad, obfuscated code in C as it is in C++ (#define). Possibly C is more deterministic than C++, but it's less 'expressive' so you can get more obscure constructs (eg arrays of function pointers). If you want it to be totally "deterministic" why not use assembler instead? Most embedded code doesn't need to be portable, so assembler would seem ideal (FWIW, the embedded programming I did WAS in assembler of various types (and to test it you had to blow an EPROM, plug it in and see, ah, those were the days), because the C compilers back then were slow, expensive and people didn't trust them - sound familiar?)
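
    For context, the firmware side of a watchdog usually looks something like the sketch below (the function names are stand-ins; the real kick is typically a write of a magic value to a reload register, which varies by part):

    // Illustrative only: the classic main-loop-plus-watchdog-kick pattern.
    #include <cstdio>

    static void wdt_kick()            { /* e.g. write the magic value to the reload register */ }
    static void do_one_unit_of_work() { std::puts("working"); }

    int main() {
        for (;;) {
            do_one_unit_of_work();   // must finish within the watchdog timeout
            wdt_kick();              // restart the countdown; if this loop ever
                                     // hangs, the hardware times out and resets
                                     // the processor
        }
    }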



  • @belgariontheking said:

    Violence is the solution!  That's what my shrink tells me. 

    Isn't it great when one of the voices in your head is a shrink?



  • @mfah said:

    I wouldn't disagree, but a sort algorithm is something that I would expect to be custom-written rather than using a library function (in a performance critical environment).
     

    Unless you think you know your data very well here, this is not a good idea. Most people think that, say, STL sort uses quicksort - it doesn't, it uses introsort, which has n log n worst-case behaviour. Therefore, unless you know there's a definitive pattern to your data where a hand-tailored sorting algorithm really gives you more performance, and you really need that performance, don't bother. For generalised sorting cases, I know I'm not smarter than Knuth or Musser so I don't bother trying - someone has already done the heavy lifting for you.



  • @pscs said:

    @DaveK said:
    it's about the need for many embedded devices to have absolutely rock-solid reliability and never ever crash no matter what.

    Maybe things have changed since I did some embedded programming, but back then, if we needed reliability there was always a watchdog circuit which would reset the processor if it locked up. This was mainly to handle hardware situations (voltage surges, EM interference etc) and it didn't excuse bad programming, but it would handle the exceptional circumstances. Don't MCUs have them built in nowadays? (IIRC, it only needs an output pin, capacitor, resistor and a couple of transistors to make one).

    This is a whole different can of worms.  Suffice to say a watchdog addresses a pretty narrow range of reliability issues, in a pretty specific way.  And it most definitely doesn't address the 'never ever crash no matter what' question - quite the opposite, in one sense.

    I'm pretty sure it's just as easy to write bad, obfuscated code in C as it is in C++ (#define). Possibly C is more deterministic than C++, but it's less 'expressive' so you can get more obscure constructs (eg arrays of function pointers).

    Complicated or obscure code in C generally *looks* complicated or obscure.  One of the *goals* of C++ was to allow complexity (and obscurity) to be better hidden (erm.. hidden obscurity? you know what I mean), which of course is a double-edged sword.

    If you want it to be totally "deterministic" why not use assembler instead?

    Despite its weaknesses, C does offer a lot of things an assembler doesn't, portability being one.

    Most embedded code doesn't need to be portable, so assembler would seem ideal (FWIW, the embedded programming I did WAS in assembler of various types (and to test it you had to blow an EPROM, plug it in and see, ah, those were the days), because the C compilers back then were slow, expensive and people didn't trust them - sound familiar?)

    The embedded software we write is designed to be as portable as possible.  Just because it's embedded doesn't mean it's single-platform (or even single-purpose).

    Right now we're working on a port of a complex system that spans DSPs, microcontrollers and PC-class platforms, and - surprise surprise - the portability issues are in the C++ code for the PC, not the C code for the other parts.  Of course that's in large part because the customer's target platform's development system is based on a ten-year-old version of GCC - but that just highlights the slowness with which C++ reached a level of stability and standardisation comparable to C.




  • @pscs said:

    @DaveK said:
    it's about the need for many embedded devices to have absolutely rock-solid reliability and never ever crash no matter what.

    Maybe things have changed since I did some embedded programming, but back then, if we needed reliability there was always a watchdog circuit which would reset the processor if it locked up. This was mainly to handle hardware situations (voltage surges, EM interference etc) and it didn't excuse bad programming, but it would handle the exceptional circumstances. Don't MCUs have them built in nowadays? (IIRC, it only needs an output pin, capacitor, resistor and a couple of transistors to make one).

    I'm pretty sure it's just as easy to write bad, obfuscated code in C as it is in C++ (#define). Possibly C is more deterministic than C++, but it's less 'expressive' so you can get more obscure constructs (eg arrays of function pointers). If you want it to be totally "deterministic" why not use assembler instead? Most embedded code doesn't need to be portable, so assembler would seem ideal (FWIW, the embedded programming I did WAS in assembler of various types (and to test it you had to blow an EPROM, plug it in and see, ah, those were the days), because the C compilers back then were slow, expensive and people didn't trust them - sound familiar?)

     

    It can get pretty messy trying to make your code return to the right state after a watchdog reset. Mostly you can't get back to where you were.

    A case in point is the Sky+ box (satellite TV with PVR, for non-UKers), which can sometimes get its knickers in a knot and reboot (probably from a watchdog).  Press the wrong remote keys in the right (or wrong) order too fast and suddenly you lose what you were watching, and after about a minute of nothing you end up back on the "welcome channel", having lost the "delayed-live" programme you were watching. I would bet that the Sky+ box is programmed in Java, but this is a similar game to C++!

    Now consider: rather than your TV rebooting and losing a minute or so of what was going on, what if the box is at the transmission end... I used to program embedded systems used in the live broadcast chain, and you really, really didn't want one of those to reboot!



  • @pscs said:

    Maybe things have changed since I did some embedded programming, but back then, if we needed reliability there was always a watchdog circuit which would reset the processor if it locked up. This was mainly to handle hardware situations (voltage surges, EM interference etc) and it didn't excuse bad programming, but it would handle the exceptional circumstances. Don't MCUs have them built in nowadays? (IIRC, it only needs an output pin, capacitor, resistor and a couple of transistors to make one).

    I'm pretty sure it's just as easy to write bad, obfuscated code in C as it is in C++ (#define). Possibly C is more deterministic than C++, but it's less 'expressive' so you can get more obscure constructs (eg arrays of function pointers). If you want it to be totally "deterministic" why not use assembler instead? Most embedded code doesn't need to be portable, so assembler would seem ideal (FWIW, the embedded programming I did WAS in assembler of various types (and to test it you had to blow an EPROM, plug it in and see, ah, those were the days), because the C compilers back then were slow, expensive and people didn't trust them - sound familiar?)

     

    I think most embedded work is moving in the opposite direction, towards higher-level languages.  "Hannibal's Law of x86 Inevitability" from Ars:

    The moment an x86 processor becomes available that you can squeeze into your implementation's design parameters (cost, performance, features, power, thermals, etc.), then the x86 legacy code base makes that processor the optimal choice for your implementation.

     

    Some people bemoan the shift away from low-level C knowledge, but how many of you can write microcode?  Can you design your own 20-million-transistor chip, too?  It's all about abstraction and specialization of labor.  If I can describe my algorithms and interface in a simpler, higher-level language, why shouldn't I?  Sure, there's some loss of efficiency because I'm not handling my own memory allocation, but what about the loss of productivity from making me handle so many easily-automated tasks?  Modern CPUs are already so complex they can't be designed without extensive automated help.  Obviously there is still value in having some stuff written in assembly and C, but why not write the majority of the OS in Java, C# or Python?

     

    The big shift now is in concurrency, multiple cores and all that.  To take advantage of this power, developers often have to utilize threading.  The problem is that managing concurrency is often complex and fraught with peril.  What will be interesting to see in the coming years is if someone comes up with a simpler abstraction for threading for use in higher-level languages.  Possibly even different abstractions for the different potential uses of threading, one for tasks that block and one for tasks that can be parallelized.  I think this is pretty much a given for the advancement of multi-core architectures.  Something like it may already exist, for all I know.
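
    As a minimal sketch of the kind of abstraction being asked for here: C++11's std::async lets you describe a task and leave the thread management to the runtime (my own example, purely for illustration):

    // Illustrative only: two halves of a sum computed concurrently with no
    // explicit thread creation or joining.
    #include <cstddef>
    #include <future>
    #include <numeric>
    #include <vector>

    long long sum_range(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
    }

    int main() {
        std::vector<int> data(1000000, 1);
        std::size_t mid = data.size() / 2;

        auto left  = std::async(std::launch::async, sum_range, std::cref(data), std::size_t(0), mid);
        auto right = std::async(std::launch::async, sum_range, std::cref(data), mid, data.size());

        long long total = left.get() + right.get();   // 1000000
        (void)total;
    }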



  • @morbiuswilters said:

    Sure, there's some loss of efficiency because I'm not handling my own memory allocation, but what about the loss of productivity from making me handle so many easily-automated tasks?

    It's about more than just memory allocation - I thought that was pretty clear. The big thing is about what the compiler translates your code into behind the scenes. With ASM or C you have far more control over that than with just about anything else.
    @morbiuswilters said:
    but why not write the majority of the OS in Java

    Ask Sun. They tried it and it didn't work.



  • @mfah said:

    @morbiuswilters said:
    Sure, there's some loss of efficiency because I'm not handling my own memory allocation, but what about the loss of productivity from making me handle so many easily-automated tasks?
    It's about more than just memory allocation - I thought that was pretty clear. The big thing is about what the compiler translates your code into behind the scenes. With ASM or C you have far more control over that than with just about anything else.

    Obviously you have more control.  Memory allocation was just a single example of that.  The point is, why do you need that level of control?  The reason almost always comes back to performance, and that becomes less and less important as processing power becomes more abundant.  What other control are you exercising with ASM or C that can't be adequately expressed in a higher-level language?  Does your target CPU not support OoOE?  If so, how long until you think it is supplanted by a CPU that does?  At the end of the day, we exercise a limited amount of control over these machines.  I'm not saying C will disappear forever, but it's certainly going to be phased out wherever it is feasible as complexity of the software becomes more of a concern than performance.  Twenty years ago all serious software was written in languages that compiled down to machine code.  Today, the majority isn't.

     @mfah said:

    @morbiuswilters said:
    but why not write the majority of the OS in Java
    Ask Sun. They tried it and it didn't work.

    I thought it was evident I was talking about future processing power.  Is it feasible to write an OS in Java now?  No.  Twenty years?  Well, no, but only because Java will hopefully be long forgotten by then.  PythonOS for president in '28!



  • It comes down to the difference between "an OS as a means to an end" and "an OS as an end in itself". Sure, processors will one day be powerful enough to run a Java based OS, but wouldn't that processing power be put to better use doing something interesting or useful with data? This is even more true of an interpreted scripting language (and raises the question of what language the interpreter or JVM will be written in).

    Even in the "OS as an end in itself" scenario, low level code will still be essential for hardware. You could probably get away with a Java device driver, but a Java HAL? I think not.



  • @mfah said:

    It comes down to the difference between "an OS as a means to an end" and "an OS as an end in itself". Sure, processors will one day be powerful enough to run a Java based OS, but wouldn't that processing power be put to better use doing something interesting or useful with data? This is even more true of an interpreted scripting language (and raises the question of what language the interpreter or JVM will be written in).

    Even in the "OS as an end in itself" scenario, low level code will still be essential for hardware. You could probably get away with a Java device driver, but a Java HAL? I think not.

     

    There have been silicon implementations of the JVM for a while now.  The more powerful hardware gets, the less people care about lost potential.  Eventually, it's just not worth worrying about.  I'm sure we could get much better performance by fabbing our own processors, but most people are content to just run on x64, which is just decades of cruft piled on top of a crappy, inefficient 8-bit architecture.  And you can write anything in any language; it's all about the cost.  As hardware power increases, it becomes more costly to run around trying to get the most out of the system.

     

    On the other hand, there is the ever-present desire of software engineers to make horrendously overcomplicated solutions, usually to justify their salary.   So maybe we will always maintain a stasis of complexity, with the upper bound defined by the cost of hardware and the lower bound defined by the pool of C.S. graduates.  It makes me wonder if one day some geeks won't get fed up with regulation of the Internet and start up a sort of BBS over VOIP.  Ridiculous, but you know somebody has already done it for fun and all it takes is a little bit of incentive to make it spread.



  • @morbiuswilters said:

    I'm sure we could get much better performance by fabbing our own processors, but most people are content to just run on x64, which is just decades of cruft piled on top of a crappy, inefficient 8-bit architecture.
     

    Agreed.  TRWTF is the continued use of x86. 



  • @bstorer said:

    continued use of x86.

    Call it inertia. We all know that x86 could be better, but it's so widely used and can be mass-produced so cheaply that it's going to take quite a bit to unseat it. Nobody's particularly queuing up to make the next Itanium either.

    One possible future/way out is a move in some quarters to do regular processing and computations on a GPU. These things are blazingly fast with floating-point arithmetic, and ongoing evolutions in both OpenGL and Direct3D are certainly making it a real possibility. Here for some info.



  • @JukeboxJim said:

    Most people think that, say, STL sort uses quicksort - it doesn't, it uses introsort, which has n log n worst-case behaviour.

    Huh? This utterly depends on the library implementation. You can't say *every* C++ standard library does that. The only thing the standard guarantees is that sort is "approximately" O(n log n) and stable_sort is O(n log n), with a worst case of O(n * (log n)^2). qsort can even be a bubblesort and it would still be legal (I guess).



  • @mfah said:

    @bstorer said:
    continued use of x86.

    Call it inertia. We all know that x86 could be better, but it's so widely used and can be mass-produced so cheaply that it's going to take quite a bit to unseat it. Nobody's particularly queuing up to make the next Itanium either.

    One possible future/way out is a move in some quarters to do regular processing and computations on a GPU. These things are blazingly fast with floating-point arithmetic, and ongoing evolutions in both OpenGL and Direct3D are certainly making it a real possibility. Here for some info.

    Did you read the Ars article I linked to?  x86 may be a steaming pile, but it's probably only going to become more ubiquitous with time.  There are billions of lines of code written for the processor and technologies like virtualization extensions as well as economic factors will probably ensure its dominance for decades.  Don't be surprised if in 50 years your cyberbrain has to boot in real mode before switching to protected mode.

     

    I actually read an article recently about the opposite in graphics technology.  Now that x86 CPUs are being created with an ever-increasing number of cores, it makes sense to switch to ray tracing for 3D rendering because it can be more easily parallelized than current GPU techniques.  GPUs only support a small subset of instructions, and the cards are designed with lots of bandwidth to their local memory but can't write back to main memory as quickly.  And honestly, isn't using a GPU for processing even more of a WTF than the Frankensteinian monster x86 has become? 



  • @morbiuswilters said:

    And honestly, isn't using a GPU for processing even more of a WTF than the Frankensteinian monster x86 has become?

    For general cases I wouldn't argue, but there's some calculations I can think of (like FFT) that could greatly benefit. I suppose the key criteria would be the amount of data that needs to be sent, the actual instructions used, and the amount that needs to be read back. It probably says something about x86 architecture that people are seriously considering this.



  • @mfah said:

    @morbiuswilters said:
    And honestly, isn't using a GPU for processing even more of a WTF than the Frankensteinian monster x86 has become?
    For general cases I wouldn't argue, but there's some calculations I can think of (like FFT) that could greatly benefit. I suppose the key criteria would be the amount of data that needs to be sent, the actual instructions used, and the amount that needs to be read back. It probably says something about x86 architecture that people are seriously considering this.

    True.  I really wanted to buy a Powerbook (and put Linux on it) until Apple switched to Intel procs.  I agree that x86 is terrifyingly crufty, but consider the engineering challenge in keeping something like that alive and competitive.  I give credit to Intel and AMD for the technical prowess required to pull that off.  At the end of the day, cost is just another performance metric and I think it's one that x86 will win.  As someone who prefers clean design (as I imagine most people on this site do), it bothers me, but sometimes the suits are right: economic considerations are just as important as purely technical ones.



  • @mfah said:

    It probably says something about x86 architecture that people are seriously considering this.

    Actually, it says something about the difference between a vector processor and a scalar processor.



  • @Carnildo said:

    @mfah said:
    It probably says something about x86 architecture that people are seriously considering this.

    Actually, it says something about the difference between a vector processor and a scalar processor.

     

    Superscalar, excuse me.  Besides, Intel/AMD keep stapling vector processing units onto the corpse of x86.  Sure, the Cell processor is great for some things, but do you really think it will overtake x86?  Twenty years ago your predecessors were shouting the virtues of RISC.  x86 just absorbed RISC and became even more of a monstrosity.  And which do you really think is the more amazing engineering feat?  Dancing bears and all that...



  • @morbiuswilters said:

    True.  I really wanted to buy a Powerbook (and put Linux on it) until Apple switched to Intel procs.  I agree that x86 is terrifyingly crufty, but consider the engineering challenge in keeping something like that alive and competitive.  I give credit to Intel and AMD for the technical prowess required to pull that off.  At the end of the day, cost is just another performance metric and I think it's one that x86 will win.  As someone who prefers clean design (as I imagine most people on this site do), it bothers me, but sometimes the suits are right: economic considerations are just as important as purely technical ones.
     

    True.  As much as I love MIPS because it's so clean and simple, it's been years since I did anything serious in assembly.  I'm probably coding on a Python framework, on top of Python, on top of the OS, so what do I really care about the processor?  In fact, I can't guarantee that it'll only run on x86 anyway.  When it comes down to it, I really don't worry about how ugly and mangled the processor design is.



  • 5 years ago Transmeta was going to destroy the x86.

     Where are they now?

