The abhorrent 🔥 rites of C


  • area_pol

    We, sane programmers, know how bad C is. It’s a dinosaur, which should have died long ago. But, due to the vicious circle of legacy code, implying more C developers, implying more C code, it won’t die, so we may call it a zombie dinosaur. What’s worse than C is the mentality in many (most?) C programmers. They think code should be clever; they think the more bizarre the way of doing things, the better. After all, who cares about things like architecture, design patterns, scalability or maintainability. We have 20-year-old bubbling masses of pure evil and their disconnected-from-reality authors forcing their 20-year-old wisdom on us. We have C programmers constantly reinventing pseudo-solutions to problems the modern world has already solved. The more twisted the solution, the better.

    Let’s take an example: passing data between threads. In modern languages, it’s usually simple. The problem then boils down not to “how to do it”, but “how efficient can I make it”. But we’re talking about C here. What’s worse, we’re talking about Tizen C code at a certain Korean company. Are they the authors? I dare not investigate.

    So, let’s get down to business. How would you pass some data from one thread to another in your favorite language? If your thought was “I will take a pointer to the data, convert it to string, pass it through a custom protocol, so another thread can fetch it, parse the string back into the pointer to memory and have my data”, you’re a born Tizen developer. Of course, don’t mind silly things like memory safety, overflows, crashes, arbitrary code execution, etc. Just behold:

    /* specialized protocol for cross-thread pushing,
     * based on ffmpeg's pipe protocol */
    
    static int
    gst_ffmpeg_pipe_open (URLContext * h, const char *filename, int flags)
    {
      GstFFMpegPipe *ffpipe;
      gint ret;
    
      GST_LOG ("Opening %s", filename);
    
      /* we don't support W together */
      if (flags != AVIO_FLAG_READ) {
        GST_WARNING ("Only read-only is supported");
        return -EINVAL;
      }
    
      if (!(strncasecmp(filename, "https", 5))) {
        ret = sscanf (&filename[8], "%p", &ffpipe);
      } else if (!(strncasecmp(filename, "http", 4)) || !(strncasecmp(filename, "file", 4)) ||
          !(strncasecmp(filename, "dlna", 4)) || !(strncasecmp(filename, "clip", 4)) ||
          !(strncasecmp(filename, "pvod", 4))) {
        ret = sscanf (&filename[7], "%p", &ffpipe);
      } else if (!(strncasecmp(filename, "rtp", 3)) || !(strncasecmp(filename, "tcp", 3)) ||
          !(strncasecmp(filename, "udp", 3)) || !(strncasecmp(filename, "mms", 3)) ||
          !(strncasecmp(filename, "rvu", 3))) {
        ret = sscanf (&filename[6], "%p", &ffpipe);
      } else if (!(strncasecmp(filename, "ra", 2))) {
        ret = sscanf (&filename[5], "%p", &ffpipe);
      } else { //gstpipe
        ret = sscanf (&filename[10], "%p", &ffpipe);
      }
      if (ret != 1) {
        GST_WARNING ("could not decode pipe info from %s", filename);
        return -EIO;
      }
    
      /* sanity check */
      g_return_val_if_fail (GST_IS_ADAPTER (ffpipe->adapter), -EINVAL);
    
      h->priv_data = (void *) ffpipe;
      h->is_streamed = TRUE;
      if (ffpipe->max_avio_buffer_size) {
        h->max_packet_size = ffpipe->max_avio_buffer_size;
      } else {
        h->max_packet_size = 0;
      }
      /*need to check seek availability in push mode*/
      if (ffpipe->check_url_seek == FALSE) {
        GstQuery *query = NULL;
        gboolean seekable = FALSE;
        gint64 start = -1, stop = -1;
    
        query = gst_query_new_seeking (GST_FORMAT_BYTES);
        if (!gst_pad_peer_query (ffpipe->sinkpad, query)) {
          GST_LOG ("query source failed");
        }
        else {
          gst_query_parse_seeking (query, NULL, &seekable, &start, &stop);
        }
        gst_query_unref (query);
        if (seekable && stop != -1)
          h->is_streamed = FALSE;
        ffpipe->check_url_seek = TRUE;
      }
    
      return 0;
    }
    

    Who can count the crashes?



  • I wonder what Torvalds would say. "Just because you can write horrid code in a language doesn't make it a bad language." perhaps? Hmmm, no, far too civilized.


    I'm so sorry you have to deal with that horrid code.

    return -EINVAL;
    return -EIO;

    Why negate the error codes?


  • FoxDev

    @LB_ said:

    Why negate the error codes?

    If I had to guess, the error codes are #defined to be positive integers, but the convention is to return negative integers


  • Winner of the 2016 Presidential Election

    @RaceProUK said:

    If I had to guess, the error codes are #defined to be positive integers, but the convention is to return negative integers

    IIRC, those are not usually used as return values; they're the possible values for a "magic" error variable. Here, they are used as return values, and they probably want to be able to use the following to check for errors:

    if (gst_ffmpeg_pipe_open() < 0) {
      // handle error
    }
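
    For contrast, here are the two conventions side by side; a generic illustration of the pattern, not anything taken from the code above:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    
    int main (void)
    {
      /* classic libc style: return -1 and stash the positive code in errno */
      int fd = open ("/definitely/not/there", O_RDONLY);
      if (fd < 0)
        printf ("open failed: errno=%d (%s)\n", errno, strerror (errno));
    
      /* kernel-ish style, as in the snippet above: encode the code in the return value */
      int ret = -EINVAL;
      if (ret < 0)
        printf ("call failed: %d (%s)\n", ret, strerror (-ret));
    
      return 0;
    }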
    

  • Winner of the 2016 Presidential Election

    @NeighborhoodButcher said:

    Are they the authors? I dare not investigate.

    Looks like the source code of GStreamer to me. So no, probably not written by Samsung.



  • Have you diffed it with the ffmpeg code it came from? I'd be interested in seeing how it compares to the original.


  • area_pol

    Yup, but the question is about the lines themselves. Samsung tends to modify (and break) everything open source.


  • area_can

    Kind of off-topic, but I just noticed how easily you could say the same about web dev:

    We, sane programmers, know how bad ~~C~~ Javascript/CSS/HTML is. It’s a dinosaur, which should have died long ago. But, due to the vicious circle of legacy code, implying more ~~C~~ web developers, implying more ~~C~~ web code, it won’t die, so we may call it a zombie dinosaur … We have ~~C~~ web programmers constantly reinventing pseudo-solutions to problems the modern world has already solved. The more twisted the solution, the better.


  • Winner of the 2016 Presidential Election

    @NeighborhoodButcher said:

    Samsung tends to modify (and break) everything open source.

    <Insert rant about Samsung-specific Android bugs here.>


  • Winner of the 2016 Presidential Election

    @NeighborhoodButcher said:

    Who can count the crashes?

    Actually, for C code, it doesn't look that bad at first glance, iff the assumptions the code makes about the parameters are always guaranteed to be true.


  • area_pol

    Making assumptions about some arbitrary filename really being a stringified pointer to a memory block is one hell of a guarantee to hang onto. I wonder if you can pass one in as a user of some ffmpeg-based player.


  • Winner of the 2016 Presidential Election

    @NeighborhoodButcher said:

    Yup, but the question is about the lines themselves.

    Looks like Samsung "fixed" quite a bit. But the assumptions about the filename parameter are also present in the original code:

    static int
    gst_ffmpeg_pipe_open (URLContext * h, const char *filename, int flags)
    {
      GstFFMpegPipe *ffpipe;
    
      GST_LOG ("Opening %s", filename);
    
      /* we don't support W together */
      if (flags != URL_RDONLY) {
        GST_WARNING ("Only read-only is supported");
        return -EINVAL;
      }
    
      if (sscanf (&filename[10], "%p", &ffpipe) != 1) {
        GST_WARNING ("could not decode pipe info from %s", filename);
        return -EIO;
      }
    
      /* sanity check */
      g_return_val_if_fail (GST_IS_ADAPTER (ffpipe->adapter), -EINVAL);
    
      h->priv_data = (void *) ffpipe;
      h->is_streamed = TRUE;
      h->max_packet_size = 0;
    
      return 0;
    }
    

    Source: https://github.com/genesi/gst-ffmpeg/blob/master/ext/ffmpeg/gstffmpegprotocol.c#L304

    Edit: Wanted to fix my incorrect original statement with a ninja edit. Fuck you too, read-only mode.



  • @NeighborhoodButcher said:

    “I will take a pointer to the data, convert it to string, pass it through a custom protocol, so another thread can fetch it, parse the string back into the pointer to memory and have my data”

    I tried, C# wouldn't let me do that. So I ate some glue instead.



  • @swayde said:

    I tried, C# wouldn't let me do that.

    You know someone on this forum's going to take that as a challenge, right?

    (And some Googling shows it's possible, ugh. The GC's gonna nail you though.)


  • area_pol

    I especially like this part:
    @NeighborhoodButcher said:

    else { //gstpipe
      ret = sscanf (&filename[10], "%p", &ffpipe);
    }

    When you have no idea what you've really got, just move it 10 bytes and parse it!



  • @blakeyrat said:

    You know someone on this forum's going to take that as a challenge, right?

    I know I immediately started to think through how to do something like this.


  • area_pol

    @NeighborhoodButcher said:

    “I will take a pointer to the data, convert it to string, pass it through a custom protocol, so another thread can fetch it, parse the string back into the pointer to memory and have my data”

    I can't find the pointer being converted to a string in that code; where is it?

    Threads of a single process share the address space, so one thread's pointers and variables are valid and accessible in the other threads; one could just pass the pointer values directly.

    So maybe this is inter-process communication? Pipes are often used for inter-process communication. But in that case the address spaces are separate, so sending pointers of any kind would always fail.

    It would, however, make sense to transfer data between the threads through some in-process-memory pipe which handles the mutexes and prevents races between threads. This code looks like something of that sort, and I would assume this is what FFmpeg does. Are you sure your description is accurate?
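
    For comparison, the boring direct hand-off between threads of one process needs no stringification at all; a minimal pthreads sketch (nothing to do with the actual GStreamer code):

    #include <pthread.h>
    #include <stdio.h>
    
    struct work {
      const char *name;
      int payload;
    };
    
    static void *
    worker (void *arg)
    {
      struct work *w = arg;   /* same address space, so the pointer is usable as-is */
      printf ("%s got payload %d\n", w->name, w->payload);
      return NULL;
    }
    
    int
    main (void)
    {
      struct work w = { "worker", 42 };
      pthread_t tid;
    
      pthread_create (&tid, NULL, worker, &w);  /* hand the pointer over directly */
      pthread_join (tid, NULL);
      return 0;
    }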



  • @blakeyrat said:

    You know someone on this forum's going to take that as a challenge, right?

    I was too lazy to do it myself ;)


  • Java Dev

    It sounds like they're passing along pointers to internal memory through some in-process interface that normally only handles filenames.



  • @Adynathos said:

    I can't find the pointer being converted to string in that code, where is it?

    That code is converting back from a string to a pointer. Presumably, the calling code converted it to a string. The important bits are:

    GstFFMpegPipe *ffpipe; /* define a pointer to GstFFMpegPipe called ffpipe */
    
    /* ... */
    
    sscanf (&filename[8], "%p", &ffpipe); /* read the string starting 8 bytes after
    whatever filename points at (presumably the filename was "https://0x0badcode"),
    interpret it as a pointer (the %p), and store it in ffpipe */
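
    And presumably the caller did the forward direction with something along these lines (a hypothetical helper; the gstpipe:// prefix and the function name are my guesses, not taken from the code above):

    #include <stdio.h>
    
    /* hypothetical caller-side helper: stringify a pipe pointer into a
     * URI-shaped string, so it can travel through an API that only accepts
     * filenames/URLs */
    static void
    make_pipe_uri (char *buf, size_t len, void *ffpipe)
    {
      /* "%p" typically prints something like 0x7f3c2a1b0000; the protocol
       * prefix is what the magic &filename[N] on the reading side skips over */
      snprintf (buf, len, "gstpipe://%p", ffpipe);
    }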
    
    

  • Considered Harmful

    @NeighborhoodButcher said:

    They think code should be clever; they think the more bizarre the way of doing things, the better. After all, who cares about things like architecture, design patterns, scalability or maintainability.

    Most C code I have to deal with scales way better than the Java stuff. And don't even mention the JavaScript. Our system architect is a veteran of C and shell and he keeps entertaining us with the crap he finds in the design-patterned hipster code when the developers demand more hardware because they're unable to make it scale.

    @NeighborhoodButcher said:

    We have 20-year-old bubbling masses of pure evil and their disconnected-from-reality authors forcing their 20-year-old wisdom on us.

    20 years? Dude, Java turned 20 last year! If you were complaining about FORTRAN wastelands created in the early 70s you'd have a point, but then again the reason they're still using this stuff is that it Just Works.

    @NeighborhoodButcher said:

    Let’s take an example: passing data between threads. In modern languages, it’s usually simple.

    I'd have been hard pressed to make a better point for why so much crap is produced in "modern" languages: no, it's not simple at all once you pass data around and do non-trivial things. There are plenty of ways to shoot yourself in the foot with synchronization problems, but as everybody seems to think it's the runtime's job to handle all these thread-related difficulties for you, threads are completely overused.

    The code you quoted, incomplete as it is, looks more like it was supposed to pass data between processes. Portable as this stuff is supposed to be, they would surely be using some threading library for actual threads?
    And then it's probably some copy&paste code, judging from the fact that it tries to deal with HTTP and the like.
    I see what you mean. Definitely c&p in a very wrong place.

    Edit: after reading the EFL story, I can fully understand your trauma. No sane person would be the same after such an experience. But be assured, experienced programmers can write assembly in any language.



  • @LaoC said:

    it's not simple

    That right there is the nub of the whole problem.

    Trying to abstract complexity away never really works. The best you can ever do is make a bunch of frequently encountered complex patterns easy to set up.

    But the trouble with making complex patterns easy to design in is that the little bastards all conspire with one another under the hood. Abstractions leak complexity like bacteria leak DNA. The failure modes of an awful lot of modern code, it seems to me, result from the fact that nobody - not even the original coder - is capable of understanding precisely what it is that any given chunk of source code is asking the computer to do.

    None of which is to say that modern languages are inherently and fatally flawed; they generally do what they do about as well as it can be done. It's the scope creep. We're using computers now for things that nobody would have even begun to contemplate using them for when C first appeared, and this is just harder.



  • @asdf said:

    @NeighborhoodButcher said:
    Who can count the crashes?

    Actually, for C code, it doesn't look that bad at first glance, iff the assumptions the code makes about the parameters are always guaranteed to be true.


    It was written by someone who doesn't know how to make sscanf do what he wanted it to do. This is all you need:

    ret = sscanf(filename, "%*[^:]://%p", &ffpipe);

    Sure, it's slightly opaque to people who don't know how to make sscanf do things, but it will do exactly what is needed without the despicable if-else chain.
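
    If anyone doubts it, here is a quick self-contained check of that format string (the gstpipe prefix and the round trip through a local variable are just demo assumptions):

    #include <stdio.h>
    
    int main (void)
    {
      int dummy = 42;             /* stand-in for the real pipe structure */
      char filename[64];
      void *parsed = NULL;
    
      /* producer side: stringify the pointer behind some protocol prefix */
      snprintf (filename, sizeof (filename), "gstpipe://%p", (void *) &dummy);
    
      /* consumer side: skip everything up to "://", then parse the pointer back */
      if (sscanf (filename, "%*[^:]://%p", &parsed) == 1 && parsed == (void *) &dummy)
        printf ("round trip ok: %s -> %p\n", filename, parsed);
      else
        printf ("round trip failed for %s\n", filename);
    
      return 0;
    }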

    And nobody picked up on the fact that the variable filename doesn't actually point to a filename?



  • @NeighborhoodButcher said:

    What’s worse than C is the mentality in many (most?) C programmers. They think code should be clever; they think the more bizarre the way of doing things, the better.

    I admit I used to be like that, back in my learning years. I know that old obsession with cleverness, matched only by the obsession with speed, in that "penny-wise, pound-foolish" way that believes concise is fast and makes code unreadable for a negligible performance increase, if not worse performance.


    At least I've grown up since.



    I could possibly have accepted such a "clever" thing as a stringified pointer in a file name if they had used a custom protocol prefix for it. I've already done sketchy stuff (though not that sketchy) in filenames/URLs with custom protocols.

    But this is an abomination. And a pretty useless one too for mere multithreading within a single process, since the address space is shared: just rely on the fact that malloc() is synchronized, allocate a structure to pass all your data to the thread and hand it as the thread parameter, along with the responsibility for managing its lifetime.

    Edit: And that Samsung "correction" is ridiculous. Even without the scanf-based solution mentioned by @Steve_The_Cynic, that whole if-else chain could have been replaced by a char const * found = strstr(url, "://") and doing the scanf on found + strlen("://") if the result wasn't null...
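
    Roughly like this (a sketch of the strstr idea, not tested against the actual Tizen tree):

    #include <stdio.h>
    #include <string.h>
    
    /* hypothetical replacement for the whole if-else chain: find the "://"
     * separator and parse whatever follows it as a pointer */
    static int
    parse_pipe_pointer (const char *filename, void **out)
    {
      char const *found = strstr (filename, "://");
    
      if (found == NULL)
        return -1;                               /* no protocol separator at all */
      if (sscanf (found + strlen ("://"), "%p", out) != 1)
        return -1;                               /* not a stringified pointer */
      return 0;
    }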


  • Discourse touched me in a no-no place

    @Medinoc said:

    But this is an abomination.

    QFT

    But at least we know that we can send an attack in by using this sort of “URL”: httpsWTF0x12345678. That's… special, especially as the pointer is supposed to refer to a block of memory that is written to without locking (so far as I can see).


  • Winner of the 2016 Presidential Election

    @Steve_The_Cynic said:

    And nobody picked up on the fact that the variable filename doesn't actually point to a filename?

    The whole reason this method exists is that, apparently, the pointer needs to be passed through an API that expects a URI.

    @Medinoc said:

    I could possibly have accepted such a "clever" thing as a stringified pointer in a file name if they had used a custom protocol prefix for it.

    I'm pretty sure they did, which is where the "magic" index 10 is coming from.



    I've never used this protocol, but from the code it's clear that "gstpipe://<pointer>" is what it's parsing.
    The difficulty is to extend an interface designed for filenames/URLs to allow a zero-copy shared-memory-space implementation. It's kludgy, but it's about a million miles from the worst C I've ever seen. By Enlightenment standards, it's good code.



  • That's obviously what they're doing. Also, this is an internal interface used for gluing two internal threads together. The same app reads and writes both ends. With function overloading you'd just have one that takes a string, and one that takes a void pointer.

    Without it, you just lash together a bit of basic serialization. Since you control both end points, who cares if it's a bit Heath-Robinson?
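
    In C that just means two differently named entry points instead of an overload set; something like this sketch (purely illustrative names, not the real API):

    #include <errno.h>
    #include <stdio.h>
    
    /* hypothetical pair of entry points standing in for an overload set;
     * the pipe is just an opaque pointer here, not the real GstFFMpegPipe */
    static int
    pipe_open_from_pointer (void *pipe_ptr)
    {
      /* ... the actual open logic would live here ... */
      return pipe_ptr != NULL ? 0 : -EINVAL;
    }
    
    /* URI-shaped path: the pointer arrives serialized as text and is parsed back */
    static int
    pipe_open_from_uri (const char *uri)
    {
      void *pipe_ptr = NULL;
    
      if (sscanf (uri, "gstpipe://%p", &pipe_ptr) != 1)
        return -EIO;
      return pipe_open_from_pointer (pipe_ptr);
    }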



  • @NeighborhoodButcher said:

    We, sane programmers, know how bad C is.

    No, we know it to be exactly the opposite. It has survived through 30 years of being used as the language for programming operating systems for a reason. In fact, name one modern operating system that isn't programmed in C. No one is going to use a CGI library programmed in C to make a website, but no one is going to use C# for low-level programming.

    @NeighborhoodButcher said:

    After all, who cares about things like architecture, design patterns, scalability or maintainability.

    If you think so, here's an exercise. Try to reimplement the Minix kernel in C#. Or Unix device drivers in Java. Or userland utilities like grep in Haskell. Then measure the speed and see how it scales.

    @LB_ said:

    Why negate the error codes?

    My guess is he's writing code for Linux, which (AFAIK) is the only system where error codes are negative.



  • Oh look, someone swooping in and white-knighting for a programming language. I love the Internet. Dude, C's not going to sleep with you no matter what you post here.

    @LaoC said:

    I'd have been hard pressed to make a better point for why so much crap is produced in "modern" languages: no, it's not simple at all once you pass data around and do non-trivial things. There are plenty of ways to shoot yourself in the foot with synchronization problems, but as everybody seems to think it's the runtime's job to handle all these thread-related difficulties for you, threads are completely overused.

    It's pretty fucking simple in C#. Either await/async or lock() yo shit. (And yes, lock() isn't super-performant, but it's better than not writing worker threads at all because you're huddled into a ball in the corner shivering with fear.)


    Bonus Discourse bullshit: when did using a backtick to make monospace start deleting the space before the word? How do I get the space back? Why does await have the space, but lock() doesn't? But it does here. Not above.



  • @ronin said:

    No, we know it to be exactly the opposite.

    Then you are not a sane programmer.

    @ronin said:

    It has survived through 30 years being used as the language for programming operating systems for a reason.

    Right: backwards compatibility.

    @ronin said:

    In fact, name one modern operating system that isn't programmed in C.

    I like how you put in the word "modern" there to preempt me saying Mac Classic. (PASCAL and assembly. No C.)

    @ronin said:

    No one is going to use a CGI library programmed in C to make a website, but no one is going to use C# for low level programming.

    Yada yada, the real question is: if you were making an OS from scratch today (without any worries about backwards compatibility) would you use C? I move you would not, unless you were a moron.

    @ronin said:

    If You think so, here's an exercise. Try to reimplement the Minix kernel in C#. Or Unix device drivers in Java. Or userland utilities like grep in Haskell. Then measure the speed and see how it scales.

    I like making software that actually does shit.



  • @ronin said:

    In fact, name one modern operating system that isn't programmed in C.

    Microsoft Singularity.

    @ronin said:

    Try to reimplement the Minix kernel in C#.

    They did. Plus drivers, plus userland utilities. Measured speed only showed a 20% drop, and that's because they also used debugging symbols.

    Next question?



  • @TwelveBaud said:

    They did. Plus drivers, plus userland utilities. Measured speed only showed a 20% drop, and that's because they also used debugging symbols.
    Next question?

    Link?



  • @TwelveBaud said:

    Next question?

    What language did they write the runtime in?



  • @blakeyrat said:

    Oh look, someone swooping in and white-knighting for a programming language. I love the Internet.

    This from a man who fellates corporations



  • @blakeyrat said:

    Right: backwards compatibility.

    No, it's been chosen time and time again by experienced programmers to build operating systems from scratch.

    @blakeyrat said:

    I like how you put in the word "modern" there to pre-empt me saying Mac Classic. (PASCAL and assembly. No C.)

    And also to prevent you from name-dropping THE, yes. And also because you care about "doing shit". Mac Classic belongs in a museum, as do Windows 3.1 and Unix System III.

    @blakeyrat said:

    I move you would not, unless you were a moron.

    I would. C is fast, battle-tested and has useful abstractions. I move you do not understand operating systems.

    @blakeyrat said:

    I like making software that actually does shit.

    You're right, Unix is definitely not used in commercial products all around the world, like the PS4 or WhatsApp servers, or smartphones. Oh, wait... it is. If that doesn't qualify as "doing shit" to you, I don't know what does.

    I agree that Minix isn't widely used, however. I should have mentioned QNX, which is used for a lot of things.

    @TwelveBaud said:

    Microsoft Singularity.

    The lowest-level x86 interrupt dispatch code is written in assembly language and C.

    Source

    Sorry, try again ;)

    @TwelveBaud said:

    They did. Plus drivers, plus userland utilities. Measured speed only showed a 20% drop, and that's because they also used debugging symbols.

    Got a link to that?


  • FoxDev

    @ronin said:

    You're right, Unix is definitely not used in commercial products all around the world, like the PS4 or WhatsApp servers, or smartphones.

    All of which run Linux, not Unix.



  • PS4 and WhatsApp use FreeBSD, which is not Linux.

    Also, if you want to get technical, nothing runs Unix. Unix is a brand and the trademark is held by The Open Group. In order to be "a Unix" your operating system has to undergo a certification process (which has to be paid for). When I say Unix I mean the definition 90% of the people agree on: Unix meaning Unix-like. BSD code is derived from an actual Unix system, though no one has bothered (or will bother) to pay for a certification. For FOSS projects it's a waste of money.



  • @gwowen said:

    This from a man who fellates corporations

    Who?

    Oh, me? When did that happen?



  • @ronin said:

    No, it's been chosen time and time again by experienced programmers to build operating systems from scratch.

    Right; because they had no other options. Not because it was good.

    @ronin said:

    And also to prevent you from name dropping THE, yes.

    I can't name-drop THE because I don't know what THE is, assuming it's not just a typo. THE? THE WHAT?

    @ronin said:

    And also because you care about "doing shit". MAC Classic belongs in a museum as do Windows 3.1 and Unix System III.

    Yes; but not because it did anything wrong. It was best-of-class for many, many years.

    @ronin said:

    I would. C is fast, battle tested and has useful abstractions.

    So are a lot of languages.

    @ronin said:

    I move you do not understand operating systems.

    Oh; right. My opinion's different from yours. I must be a dum-dum idiot. Obviously! The ONLY POSSIBLE EXPLANATION for disagreeing with Ronin is that you're just this side of functionally retarded.

    @ronin said:

    You're right, Unix is definitely not used in commercial products all around the world, like the PS4 or WhatsApp servers, or smartphones.

    Ignoring for a moment the Unix/Linux thing (because WTF? Saying the PS4 runs on Unix is a blatant lie. Is this the open-source "let's redefine terms at will to confuse the fuck out of people!" thing?)

    For all of those applications, you could swap it out with another kernel tomorrow and not a single person would notice. All the interesting stuff about those applications is in what kernel developers like to call "user space".



  • @ronin said:

    In fact, name one modern operating system that isn't programmed in C


  • FoxDev

    Nothing what we found is simple enough or modern enough to fit into our needs.

    They can grammar goodly clear


  • Discourse touched me in a no-no place

    @gwowen said:

    Since you control both end points, who cares if it's a bit Heath-Robinson?

    If you control both sides, you can just pass the pointer directly. No loss of safety by comparison with what they're doing (which is already utterly unsafe), but a heck of a lot less screwing around!



  • @blakeyrat said:

    Right; because they had no other options. Not because it was good.

    It was the best option available. If it was so bad, they would have found a suitable replacement in all those decades instead of keeping on using C. They could have used Pascal or Ada, or they could have done like the team that programmed THE...

    @blakeyrat said:

    I can't name-drop THE because I don't know what THE is, assuming it's not just a typo. THE? THE WHAT?

    The THE Operating System, by none other than Dijkstra himself. Written with a modified ALGOL compiler. It didn't catch on.

    @blakeyrat said:

    Oh; right. My opinion's different from yours. I must be a dum-dum idiot. Obviously! The ONLY POSSIBLE EXPLANATION for disagreeing with Ronin is that you're just this side of functionally retarded.

    It's funny you should bring that up, because to quote yourself:

    I move you would not, unless you were a moron.

    Let it be known, if you don't share blakeyrat's idea that operating systems should not be written in C, you are a moron. That is the only possible explanation. I should not have said that you don't know about operating systems, but I don't take it kindly when someone suggests I'm an idiot.

    @blakeyrat said:

    Ignoring for a moment the Unix/Linux thing (because WTF? Saying the PS4 runs on Unix is a blatant lie. Is this the open-source "let's redefine terms at will to confuse the fuck out of people!" thing?)

    I don't see what's confusing. I could have said Unix-like. Ask every Linux/BSD/Solaris user out there. If you say Unix, that's what they'll understand. But that wasn't the important part, anyway.

    @blakeyrat said:

    For all of those applications, you could swap it out with another kernel tomorrow and not a single person would notice.

    That wasn't the point being debated. And you clearly need the kernel to do all that stuff. Try to port an application that makes heavy use of timestamps, or PulseAudio, from Linux to BSD. You'll notice a difference (BSD uses the OSS API, and timestamps are slower because of the increased precision). Better yet, try using MinGW or GCC for Windows... There are surely absolutely no differences between these applications and their native counterparts, right?

    Also, how is Pascal better than C? What features help you write an operating system? Why do you think it's better than C for the task?


  • FoxDev

    @CatPlusPlus said:

    https://github.com/redox-os/redox

    Point of order!

    The electron arrangement there shows only 11 electrons. Assuming a neutral atom, as there is also no charge annotation, that is not a rust Iron atom; rather, it is a Sodium atom.

    If this were to be a proper rust Iron atom it would need to be notated with a +15 charge, a ludicrous charge that for all practical purposes does not occur in any natural circumstances, and indeed is all but impossible to achieve under laboratory settings, as that massive a positive charge would very quickly strip electrons off neighboring atoms in order to return to a neutral electrostatic charge. And that's not even to mention the amount of effort it would take to push the atom past a +8 electric charge, as at that point it would be isoelectronic with Argon, a noble gas.

    Really, calling that an atom of Iron is ludicrous. And if you're trying to call it rust instead, where are the oxygen atoms? Hmm? Is this Fe2O3? Or possibly Fe3O4? Because that's what rust is!



  • Cool! Does it work/boot from anything yet?


  • Notification Spam Recipient

    :pendant: and not the :giggity: kind!



    QEMU, probably; I don't think they have many drivers for other hardware yet. There are screenshots on the blog and in the readme.



  • @ronin said:

    I don't see what's confusing.

    Oh right; and when I said "cow" you should have known I was including goats, sheep, chickens, etc. You must have telepathy to discuss things with mega-genius Ronin. Reading his posts without engaging your telepathy is not allowed!

    @ronin said:

    But that wasn't the important part, anyway.

    The important part to me is that you said the PS4 ran on Unix, which is blatantly untrue. But now I've learned the fault was mine, as I did not properly engage my telepathometer while reading your post.

    @ronin said:

    That wasn't the point being debated.

    It kind of defeats your "Unix is so important to the success of the PS4" argument. Pretty thoroughly, I'd say. (Even ignoring the more obvious rebuttal that the PS4 never ran Unix in the first place.)

    @ronin said:

    And you clearly need the kernel to do all that stuff.

    You need a kernel. Just like a car needs a chip to control the engine timings. But it doesn't need to be that kernel, and none of the customers would notice or care if it switched from that kernel to another kernel. (Just like nobody bought their Ford based on what company made the engine-timing chip.)

    @ronin said:

    Also, how is Pascal better than C?

    It's not.

    @ronin said:

    What features help you write an operating system?

    None? I guess? I dunno.

    @ronin said:

    Why do you think it's better than C for the task?

    I don't think that, and I didn't say that.

    Is your argument so weak that the only way you can support it is to blatantly make up shit and put words into my mouth? Because that's pathetic.

