The most popular OS in the world


  • Considered Harmful


  • FoxDev



  • TFA said:

    That’s right. A web server. Your CPU has a secret web server that you are not allowed to access, and, apparently, Intel does not want you to know about.

    Why on this green Earth is there a web server in a hidden part of my CPU? WHY?

    The only reason I can think of is if the makers of the CPU wanted a way to serve up content via the internet without you knowing about it. Combine that with the fact that Ring -3 has 100 percent access to everything on the computer, and that should make you just a teensy bit nervous.

    ... just... no. Step away from the tinfoil hat.


  • FoxDev

    @julianlam said in The most popular OS in the world:

    Step away from the tinfoil hat.

    but fear sells!

    almost as much as sex does!



  • Let's see if I've got this right: Minix 3, hidden inside the firmware of the Intel Management Engine, and running on a hidden 32-bit x86 CPU core. Part of the hardware of almost every x86-64 processor made since 2007. And the default operation includes running an HTTP server, for... reasons?

    What are you doing, Intel? Go home, Intel, you're drunk!

    Oh, and apparently AMD copied the approach in their own Management Engine equivalent, up to and including using the Minix kernel to run it.

    And it was only figured out because of a security vulnerability that exposed it. Google is talking of dropping the use of x86 for their servers entirely because the vulnerability is likely to be irremediable since it is occurring in otherwise inaccessible hardware.

    Seriously? Is this a joke or something? Am I misunderstanding what they are saying? No, really, please tell me that this isn't as crazy as this is sounding to me right now!

    Comments? Corrections? Antidotes for the mind-altering drugs which someone apparently has been dosed with?

    (No comments on whom - it could be Intel, it could be the people reporting on it, it could be both, it could be me only imagining I am reading this for all I know. Honestly, this sounds like something The Onion's editors would have rejected as too implausible.)


  • FoxDev

    @scholrlea said in The most popular OS in the world:

    Google is talking of dropping the use of x86 for their servers entirely because the vulnerability is likely to be irremediable since it is occurring in otherwise inaccessible hardware.

    and go to what? ARM?

    that's not only going to be hideously expensive to do but i also suspect it is infeasible..... from a sheer sense of scale, if not of performance.



  • @accalia I'm just repeating what the article claimed; I was wondering the same thing, to be honest. While I don't think there is any inherent reason why an ARM-based processor couldn't outperform x86, right now it's no contest.

    Hey, maybe they'll decide to fund development of Mill. Won't that be fun?


  • FoxDev

    @scholrlea said in The most popular OS in the world:

    While I don't think there is any inherent reason why an ARM (or MIPS, or, heaven forbid given who now owns it, SPARC) based processor couldn't outperform x86

    eeeh. i think there kind of is..... ARM has a different design goal than x86, or at least the recent iterations do.

    i think ARM can beat x86 and x86-64 at compute per watt, but i have my doubts that the architecture could beat them in raw performance.

    though i suppose if you want that you can always slap in a GP102 as a coprocessor.....

    that would be an interesting server design........



  • @accalia As I said, there's a lot of debate - and flameage, and WHARGARRBL - about this on the OSDev group. If you ignore the lunatic claiming that it is possible for a SUBLEQ OISC made with ASICs to outperform every other CPU around and do so for $1 per core (he's been banninated, for what it's worth) - most of the disagreement seems to be over whether the same set of superscalar optimizations (or a comparable, but different, set of them) can be applied to a RISC to the same effect as with a CISC.

    While I personally think x86 is wretchedly awful as an ISA, and have a strong architectural preference for MIPS over most of the others anyway, I also don't really see it as relevant to the users or even the majority of the programmers - for better or worse, performance you can see trumps elegance you can't.

    OTOH, this security issue is something at least some people can see, though history tells us that if it is a three-way tussle between good performance, design elegance, and solid security, security always comes dead last - I wish that weren't true, but so far it has been consistently the case. While I like the ideas going into it, they are nowhere near a working system, even as a prototype.

    Also, I don't know if you saw the reference to Mill in my last edit of the previous post. Not that it matters; even if they had the money to go all-out on it (and right now they aren't even close), it will still be several years before we can even say whether it is going to be a viable architecture one day or not.


  • Impossible Mission - B

    @scholrlea said in The most popular OS in the world:

    history tells us that if it is a three-way tussle between good performance, design elegance, and solid security, security always comes dead last - I wish that weren't true, but so far it has been consistently the case.

    So what about C, which is abysmal at both security and elegance?



  • @masonwheeler Performance (or at least, a strong if not really deserved reputation for it), of course. Worse is better etc.



  • The reply by A. Tanenbaum (the author of Minix).

    @scholrlea said in The most popular OS in the world:

    ARM ... take advantages of OoO superscalar implementations or not

    ARM does both out-of-order and superscalar (apparently since the A9 and the A8, respectively), so I guess the answer would be "yes".



  • We can't even get our customers to upgrade to a new version of our software that is fully backwards compatible. Do we really think that anyone is going to spend the effort to switch cpu architecture?


  • area_pol

    @dragoon said in The most popular OS in the world:

    switch cpu architecture?

    Can't someone else make x86_64 architecture cpus?



  • @adynathos
    Nothing is stopping them, but they have to license it from Intel.



  • @cvi OK, good point. My own general thought - which is indeed only general - is that Intel has gone to heroic lengths to keep the x86 ISA going far past its sell-by date, because the alternative is The End of The Desktop World As We Know It, whereas most of the ones who license ARM cores really don't push the design to anywhere near the same degree, and when they do, they focus more on lower energy consumption and lower waste heat rather than sheer speed.

    Thing is, even Intel doesn't want the x86. They never really did, in fact; my understanding is that their original view of it was as a stopgap, a stretched 8080 to cover things until the mighty iAPX432 was ready to take the world by storm (the academic world, mind you, not the home or business worlds - they thought microcomputers were a passing fad, and wanted to get out of the market). They also needed it as a way to keep the microcontroller crowd happy while they worked on the 8051, but they thought their real challenge was going to be against the likes of Xerox, HP, and Symbolics - you know, the big players in the growing workstation market. 🤷‍♂️

    They only signed on for the PC because it was IBM, and in 1981, turning Big Blue down was something you just didn't do. Furthermore, they knew that the IBM PC was aimed at quashing the home market and bringing the small-business market back into line - the goal was to slowly draw the business market back towards relying on mainframes, with the PCs evolving into a class of smart terminals. No one involved in it - except Microsoft, and maybe Digital Research, though given how reluctant they were about CP/M-86 I'm not sure - thought the home market would last.

    Yeah.. how's that working for you there, IBM?

    Since then, Intel has tried twice - once with the i860, and then again with the ~~Itanic~~ Itanium - to replace the x86, to no avail. They are riding a tiger, and they know that if they let go, that tiger is going to turn and eat them, which is why they have poured money into continuing to develop the x86 - they don't really have any alternative that wouldn't be corporate suicide.



  • @dragoon said in The most popular OS in the world:

    We can't even get our customers to upgrade to a new version of our software that is fully backwards compatible. Do we really think that anyone is going to spend the effort to switch cpu architecture?

    They aren't talking about PCs, only their own servers - which, IIUC, all run some *nixoid clustering OS (not sure what, probably a custom Linux distro - does anyone know?), so the software compatibility issue isn't as big a deal in the first place. While it would be a big blow to Intel's sales of Xeon CPUs - Google has unbelievably huge server farms - and to Intel's prestige, it wouldn't really be anything that would impact the rest of the world.



  • Minix, the same Minix that inspired Linux? Whoa. I had no idea it was still a thing.

    @julianlam said in The most popular OS in the world:

    TFA said:

    That’s right. A web server. Your CPU has a secret web server that you are not allowed to access, and, apparently, Intel does not want you to know about.

    Why on this green Earth is there a web server in a hidden part of my CPU? WHY?

    The only reason I can think of is if the makers of the CPU wanted a way to serve up content via the internet without you knowing about it. Combine that with the fact that Ring -3 has 100 percent access to everything on the computer, and that should make you just a teensy bit nervous.

    ... just... no. Step away from the tinfoil hat.

    Except... we know the NSA has been actively putting backdoors in places like this. And this is literally both the most powerful place for a backdoor to be, and the second-hardest place to audit for backdoors (after the silicon itself). So it's not exactly tinfoil hattery.



  • @scholrlea said in The most popular OS in the world:

    They aren't talking about PCs, only their own servers

    If google doesn't want these on their own servers, I don't want these on my own PCs.



  • @scholrlea said in The most popular OS in the world:

    all run some *nixoid clustering OS (not sure what, probably a custom Linux distro - does anyone know?)

    Linux, of course


  • FoxDev

    @dragoon said in The most popular OS in the world:

    @adynathos
    Nothing is stopping them, but they have to license it from Intel.

    IIRC it's actually AMD that they have to license the x86-64 from..... given that AMD64 won the 64 bit race.

    i don't think Intel bought the rights off AMD outright, though..... AFAIK it's a cross-licensing deal: AMD still licenses the base x86 from intel, and intel licenses AMD64 back from AMD..........



  • @scholrlea said in The most popular OS in the world:

    Thing is, even Intel doesn't want the x86. They never really did, in fact; my understanding is that their original view of it when it was designed was as a stopgap, a stretch 8080 to cover things until the mighty iAPX432 was ready to take the world by storm (the academic world, mind you, not the home or business worlds, mind you - they thought microcomputers were a passing fad, and wanted to get out of the market).

    There's nothing more permanent than a temporary solution.



  • Also I want to point out that even if every single x86 processor ran a certain OS, that still wouldn't make it the most popular one. Embedded systems outnumber desktop computers by a lot, and those don't run Minix.



  • CIA backdoor.

    There, somebody had to say it.

    fake edit: @anonymous234 sort of stole my thunder, but who cares.


  • Notification Spam Recipient

    @scholrlea said in The most popular OS in the world:

    Yeah.. how's that working for you there, IBM?

    The cloud seems to be taking off quite well!


  • Notification Spam Recipient

    Speaking of vulnerabilities, by a show of hands, how many people have had their WPA2-secured wifi networks tapped yet?

    Is it as widespread as panic would suggest?



  • @masonwheeler said in The most popular OS in the world:

    @scholrlea said in The most popular OS in the world:

    history tells us that if it is a three-way tussle between good performance, design elegance, and solid security, security always comes dead last - I wish that weren't true, but so far it has been consistently the case.

    So what about C, which is abysmal at both security and elegance?

    Where on this spectrum is BIT?



  • @anotherusername said in The most popular OS in the world:

    CIA backdoor.

    There, somebody had to say it.

    fake edit: @anonymous234 sort of stole my thunder, but who cares.

    CIA doesn't have authority to monitor stuff inside the USA, so any data the CIA would collect via a backdoor from its own citizens would not be admissible in court.

    NSA and FBI, on the other hand...


  • Considered Harmful

    @ben_lubar They can just send anything they get over there, though.



  • @pie_flavor said in The most popular OS in the world:

    @ben_lubar They can just send anything they get over there, though.

    I don't think that's how admissibility works. If it was illegal to obtain the information in the first place, even if the information proves that the person is 100% guilty of a crime, it can't be used as evidence.


  • Considered Harmful

    @ben_lubar said in The most popular OS in the world:

    @pie_flavor said in The most popular OS in the world:

    @ben_lubar They can just send anything they get over there, though.

    I don't think that's how admissibility works. If it was illegal to obtain the information in the first place, even if the information proves that the person is 100% guilty of a crime, it can't be used as evidence.

    Eh, they don't care. They'll just ship your ass to Guantanamo.



  • @scholrlea said in The most popular OS in the world:

    OK, good point. My own general thought - which is indeed only general - is that Intel has gone to heroic lengths to keep the x86 ISA going far past its sell-by date, because the alternative is The End of The Desktop World As We Know It, whereas most of the ones who license ARM cores really don't push the design to anywhere near the same degree, and when they do, they focus more on lower energy consumption and lower waste heat rather than sheer speed.

    Thing is that Intel will have to focus more on lowering energy consumption if they want to stay in the game, or at least, in all the games they are in right now. One example that is going to demand drastically lower energy consumption is the exaflop goal that's floating around (people are aiming at 2020). The requirements on paper say 20 MW / 25 MW (US and China, respectively) at most - an exaflop in a 20 MW budget works out to 50 GFLOPS/W - which means it has to be about 10x more power efficient than the current HPC machines.

    @scholrlea said in The most popular OS in the world:

    Thing is, even Intel doesn't want the x86.

    I would have agreed more readily with this a few years ago. Now, I'm not sure anymore. For sure, x86 is not perfect. But... none of the current chips actually execute x86 instructions directly - they all have an instruction decoder in the front-end that translates x86 to µOps (and they later redo register allocation - according to one of the Agner Fog manuals, a recent Skylake has 180 integer registers per core). This gives quite a bit of freedom in changing up the design (e.g., varying the execution units and whatnot).

    An example is AVX. IIRC some AMD chips support 8-wide vectors by issuing each half to one of their 4-wide vector EUs, whereas Intel chips from around the same time use 8-wide units (but - again, IIRC - more specialized ones). If you look at GPUs, that's essentially what NVIDIA is trying to achieve with their PTX virtual ISA (albeit they have the luxury of being able to translate the PTX to actual machine code offline).
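
    To make the AVX point concrete, here's a minimal sketch (my own illustration, nothing vendor-specific): a single 8-wide instruction at the ISA level, which the decoder is free to execute as one 8-wide µOp or as two 4-wide halves - the programmer never sees the difference.

    ```cpp
    // One AVX instruction at the ISA level: an 8-wide float add.
    // Whether the back-end runs it as a single 8-wide µOp (Intel) or splits it
    // into two 4-wide halves (some AMD designs of the era) is invisible here.
    // Compile with -mavx (or equivalent).
    #include <immintrin.h>

    void add8(const float* a, const float* b, float* out) {
        __m256 va = _mm256_loadu_ps(a);                // load 8 floats
        __m256 vb = _mm256_loadu_ps(b);                // load 8 floats
        _mm256_storeu_ps(out, _mm256_add_ps(va, vb));  // vaddps: one instruction
    }
    ```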

    I unfortunately didn't really find any numbers on the area of the die that's needed for this, nor the power consumption by the front-end. That'd be interesting. (However, for consumer CPUs, the actual CPU cores are already a smallish part of the chip anyway.)

    Cleaning up x86 would be nice. But what you'd likely aim for these days is some sort of virtual ISA that the CPU can easily translate into whatever they're actually executing: you can't change the ISA completely every generation. ARM also does decoding to µOps, but I'm not sure if it's to the same extent. The question is how much you could gain from this new virtual ISA vs x86. Maybe you could save some area in the front-end (my guess is that it's a fairly small part already); the actual execution units would likely stay the same. The only certain thing is that there would be a lot of pain involved in rolling a new ISA out.

    The one area where you could perhaps win something is in the guarantees that x86 makes. I'm told that x86 requires CPUs to bend over backwards to guarantee memory consistency, which requires all sorts of machinery for cache coherency and so on. A few years ago, I was told that this scales really badly with the number of cores. But maybe that's something they solved, considering that we have 20+ core chips now.
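
    As a rough illustration of that guarantee (a sketch in C++ atomics; the x86-vs-ARM mapping is the textbook one, not specific to any one chip):

    ```cpp
    // On x86's strong (TSO-ish) model, the release store and the acquire load
    // below both compile to plain moves - the hardware already orders them.
    // On weaker ISAs (ARM, POWER) the compiler must emit barriers or special
    // load-acquire/store-release instructions instead.
    #include <atomic>

    std::atomic<int>  data{0};
    std::atomic<bool> ready{false};

    void producer() {
        data.store(42, std::memory_order_relaxed);
        ready.store(true, std::memory_order_release);      // plain mov on x86
    }

    int consumer() {
        while (!ready.load(std::memory_order_acquire)) {}  // plain mov on x86
        return data.load(std::memory_order_relaxed);       // guaranteed to see 42
    }
    ```

    That ordering has to be enforced somewhere; when it isn't the compiler emitting fences, it's the coherency hardware doing the work - which is the machinery that reportedly scales so badly.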


  • Discourse touched me in a no-no place

    @cvi said in The most popular OS in the world:

    ARM also does decoding to µOps, but I'm not sure if it's to the same extent.

    Most ARM instructions are pretty simple that way. Not all, but most. (The exceptions are usually for instructions that were originally lawful but pretty meaningless, and so were repurposed.) Also, ARM chips typically have two instruction decoders so they can process THUMB code as well, which is a cut-down representation of the most useful instructions that only requires 16 bits per instruction. It's like there's two instruction sets and you can mix and match between them in the one program.
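
    You can even see the mix-and-match at the source level with GCC's ARM function attributes (a sketch; assumes a GCC ARM toolchain with interworking enabled, which is the default these days):

    ```cpp
    // Two functions in one program: one built with 32-bit ARM encodings,
    // one with 16-bit Thumb encodings. GCC emits interworking branches
    // (BX/BLX) so the two instruction sets can call each other freely.
    __attribute__((target("arm")))   int add_arm(int a, int b)   { return a + b; }
    __attribute__((target("thumb"))) int add_thumb(int a, int b) { return a + b; }

    int mixed(int x) {
        return add_arm(x, 1) + add_thumb(x, 2);  // caller doesn't care which is which
    }
    ```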


  • Discourse touched me in a no-no place

    @cvi said in The most popular OS in the world:

    But maybe that's something they solved, considering that we have 20+ core chips now.

    The usual way of solving it is by stopping trying to have memory shared between cores. I did use thousand-core shared memory supercomputers back in the day (15 years ago now; doesn't time fly!) and the hardware to keep that consistency at speed and scale was by far the most expensive part. Throw away the sharing and costs become manageable again; modern HPC depends almost entirely on message passing.
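
    For anyone who hasn't seen the style: a minimal message-passing sketch (plain MPI, the lingua franca of modern HPC). Note that no memory is ever shared between the two ranks, so there is nothing for coherency hardware to do - the data moves as an explicit copy.

    ```cpp
    // Rank 0 sends a value to rank 1 over the interconnect.
    // Typical build/run: mpic++ demo.cpp && mpirun -np 2 ./a.out
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = 42;
        if (rank == 0) {
            MPI_Send(&value, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, /*src=*/0, /*tag=*/0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            std::printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }
    ```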


  • :belt_onion:

    @ben_lubar said in The most popular OS in the world:

    @pie_flavor said in The most popular OS in the world:

    @ben_lubar They can just send anything they get over there, though.

    I don't think that's how admissibility works. If it was illegal to obtain the information in the first place, even if the information proves that the person is 100% guilty of a crime, it can't be used as evidence.

    Correct, it is not even remotely close to how any of that works.

    More specifically, that's inadmissible in court, and also obscenely illegal to "send it over".


  • :belt_onion:

    @julianlam said in The most popular OS in the world:

    TFA said:

    That’s right. A web server. Your CPU has a secret web server that you are not allowed to access, and, apparently, Intel does not want you to know about.

    Why on this green Earth is there a web server in a hidden part of my CPU? WHY?

    The only reason I can think of is if the makers of the CPU wanted a way to serve up content via the internet without you knowing about it. Combine that with the fact that Ring -3 has 100 percent access to everything on the computer, and that should make you just a teensy bit nervous.

    ... just... no. Step away from the tinfoil hat.

    QFT.

    It's there (but disabled in consumer CPUs, IIRC) to enable remote management stuff, like remote power control, remote VNC, stuff that's useful in datacenters. That's what the Management Engine stuff does - not send data to the NSA...


  • 🚽 Regular

    @cvi said in The most popular OS in the world:

    But what you'd likely aim for these days is some sort of virtual ISA that the CPU can easily translate into whatever they're actually executing

    WebAssembly! 🎺



  • @dkf said in The most popular OS in the world:

    The usual way of solving it is by stopping trying to have memory shared between cores. I did use thousand-core shared memory supercomputers back in the day (15 years ago now; doesn't time fly!) and the hardware to keep that consistency at speed and scale was by far the most expensive part. Throw away the sharing and costs become manageable again; modern HPC depends almost entirely on message passing.

    I was referring to single x86 chips with 20+ cores. You can grab a single Xeon chip with 24 or so cores (plus 4x HT, but those end up using the same cache, so whatever). Then there's the Xeon Phi stuff with 60+ cores. These will still present a single unified view of the memory, so they will need to deal with keeping the promises that x86 makes w.r.t. that.

    Not guaranteeing consistency between "cores" was what GPUs did (and still do to some extent, although they have backtracked a bit in recent generations), even though they had a single shared memory. (I.e., one "core" could keep caches locally indefinitely unless explicitly instructed to stop doing that, as opposed to x86, where caches will be invalidated more or less automatically when somebody else changes the underlying memory.)

    Modern HPC does indeed use message passing extensively between nodes. Doing it between cores in a single node that do share memory can get expensive, though.



  • @zecc said in The most popular OS in the world:

    WebAssembly!

    Considering there were some ARM chips that could decode JVM bytecode in hardware (in addition to the ARM and THUMB instruction sets)...



  • @ben_lubar said in The most popular OS in the world:

    @anotherusername said in The most popular OS in the world:

    CIA backdoor.

    There, somebody had to say it.

    fake edit: @anonymous234 sort of stole my thunder, but who cares.

    CIA doesn't have authority to monitor stuff inside the USA, so any data the CIA would collect via a backdoor from its own citizens would not be permissible in court.

    NSA and FBI, on the other hand...

    Any of those alphabet soup agencies.



  • @ben_lubar said in The most popular OS in the world:

    @pie_flavor said in The most popular OS in the world:

    @ben_lubar They can just send anything they get over there, though.

    I don't think that's how admissibility works. If it was illegal to obtain the information in the first place, even if the information proves that the person is 100% guilty of a crime, it can't be used as evidence.

    Eh... about all they'd really have to do is change the letterhead for the printer and run off a fresh copy, probably.



  • Some of this was covered in my initial response to the previously-mentioned lunatic, Geri.

    The complexity of the instruction set has almost no bearing at all on the cost and time needed to implement the ISA; the real costs are in making it run efficiently. Even a highly CISC ISA such as the VAX can be implemented in a standard FPGA costing $25 USD in about a week of work, but such an implementation will have terrible performance.

    and

    The hard work in developing a new generation of CPU is mostly in development and debugging of the die process - improving the transistor density is not a small task, and not an automatic one despite the impression Moore's Law might give people. The x86 ISA? 90% of that was worked out in 1978; despite the heroic (and fundamentally futile, something even the companies involved are aware of) efforts Intel and AMD have made to extend its life, the basic ISA hasn't really changed all that much compared to things like the memory addressing, register file size, register width, caching, instruction pipelining, branch prediction (especially branch prediction!) and MMU - none of which are part of the ISA, even if they led to some of the changes in it. The ARM and MIPS designs have undergone even fewer changes; in effect, the ISA itself is a done deal.

    As an aside, if memory serves, about half the die of the Kaby Lake design is taken up by cache, and about 10% each by the pipeline, instruction re-ordering, instruction simplification (the modern equivalent of microcode), and branch prediction logic. Actually implementing the ISA? Probably less than 5% of the die, even on CPUs with 6 or more cores.

    and

    the reason 'RISC' is more performant (in principle, though many of the advantages disappear or are less distinct once things like caching, register renaming, multi-path branch prediction, and so on are used) isn't because the ISA is small - several so-called 'Reduced Instruction Set' designs actually have pretty big ISAs - but because all of the instructions can be implemented without microcode; the use of load/store discipline reduces the frequency of memory accesses for data; regularizing the instruction set makes it easier for compilers to target it and optimize the generated code; and eliminating rarely-used instructions means that the whole can be fit onto smaller dies, leading to less propagation delay. The term RISC is really a very misleading and unfortunate one, and the idea that 'URISC' would be somehow inherently even better is a gross misunderstanding of the reasoning behind load/store discipline and the elimination of low-usage-frequency instructions. OISC, by its very nature, is not actually a RISC design at all, because it isn't load/store - the single instruction is actually more CISCy than any of the 56 instructions in the MIPS R2000 ISA.

    In practice, an efficient OISC implementation would need an incredibly hairy multi-branch-predicting code/data pipeline that would make Kaby Lake's instruction decoding look like the RCA 1802's. The die layout would be much larger and more complex than that of even the current Intel designs.
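
    If it helps to see why, here's a minimal SUBLEQ interpreter (my sketch, not Geri's design). Every "single instruction" does two loads, a subtract, a store, and a conditional branch - exactly the kind of memory-heavy operation that load/store discipline exists to avoid:

    ```cpp
    // SUBLEQ: each instruction is three operands (a, b, c) meaning
    //   mem[b] -= mem[a]; if (mem[b] <= 0) goto c;
    // A negative program counter halts the machine.
    // (A real interpreter would bounds-check a and b as well.)
    #include <cstdint>
    #include <vector>

    void run_subleq(std::vector<int64_t>& mem) {
        int64_t pc = 0;
        while (pc >= 0 && pc + 2 < (int64_t)mem.size()) {
            int64_t a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];                  // load, load, subtract, store
            pc = (mem[b] <= 0) ? c : pc + 3;   // data-dependent branch, every time
        }
    }
    ```

    A pipeline has to treat every one of those as both a potential memory hazard and a potential branch, which is where the incredibly hairy prediction machinery would come in.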

    That thread is comedy gold BTW, as Geri seems to be somewhere between @SpectateSwamp and Alec Chiu - he wants to be a Successful Businessman, but he has no idea what he's doing and is fixated on something that will never, ever be what he is trying to convince people it already is.

