Microsoft's three-value boolean



  • @blakeyrat said:

    Win. Except that line was in A New Hope, not Empire Strikes Back. But other than that, win.
     

    What scene was that?



  • @tdb said:

    @davedavenotdavemaybedave said:
    @bannedfromcoding said:
    IF WE COULD GET RID OF ALL THE LEGACY SHIT and deploy a completely modern architecture, computing would move a few thousand light-years forward.
    Hardware would move forwards by a good bit, but software would be set right back to square one. We would literally have to start again from nothing.

    Nothing you say? So this new hardware architecture would be so radically different that it would be impossible to write, say, a C++ compiler for it? Or an OpenGL implementation?

    Er, if it's not going to be completely different, what was the point? It seems logical to me that 'getting rid of all legacy shit' means starting from scratch. If you start by making it compatible with what we already have, then you're still going to have the backwards compatibility/'legacy' shit.



  • @dhromed said:

    @blakeyrat said:

    Win. Except that line was in A New Hope, not Empire Strikes Back. But other than that, win.
     

    What scene was that?

    I can't find it online, and I'm too lazy to get a timecode from my MP4, but IMDB says I'm right about what movie it's in.

    It's shortly after the Empire captures the princess; there's a scene in a conference room where they're talking about the stolen "data tapes" that are in R2D2, and Vader says something about the force, and some snide officer guy goes, "the force ain't helping bro", and the other captain's like "nobody's a Sith anymore bro" and Vader's like "choke bitch!"

    Edit: Oh wow my memory sucks. Well, whatever. It's in the film.



  • @blakeyrat said:

    @Someone You Know said:

    @blakeyrat said:

    RISC was more successful than, say, Itanium, but on the whole they didn't make much of a dent in the market. The most amusing thing to me is that you're still crowing the "we ought to be using RISC-based processors by now" line that we heard all through the mid-90s... did you sleep through the last 15 years? Your great vision for the future has already been and gone, buddy.  Their fire has gone out of the universe. You, my friend, are all that is left of their religion.
     

    ETFY

    Win. Except that line was in A New Hope, not Empire Strikes Back. But other than that, win.

     

    That was "Empire" in the sense of the organization, not the movie.



  • @davedavenotdavemaybedave said:

    It seems logical to me that 'getting rid of all legacy shit' means starting from scratch.
    Er, no. Not "throw away all the languages we already have and make up new ones." Have you ever heard of a cross-compiler?



  • @blakeyrat said:

    @bannedfromcoding said:
    That's half of the truth. Yes, it "scaled the shit out of them", because the architecture was already popular as heck and Intel and AMD could spend a shitload of cash on research.
    Nope.

    AMD has never been able to spend a shitload of money. Intel was too busy throwing their money into the Itanium trashcan to worry about x86 much. When Intel finally figured it out, they did produce some awesome CPUs... but before then they were scaling the shit out of PPC by just clocking their P4's to ridiculous extremes. And it worked.

    Which basically confirms that the shitty tech won out.

    - Intel keeps overclocking to infinity their P4's, wins over PPC. (Well that, and MS didn't release the consumer Windows versions for PPC either)

    - Intel invests in a new arch (Itanium), AMD goes down the cheapo road and puts another patch over x86, calls it amd64. AMD wins.

    Legacy stuff has become so tied into x86 that moving to another arch is pretty much impossible. Of course, a "PC compatibility card" is not only a good option, it has been done before. But for something like that to happen, there would have to be a real push from the software/hardware industry as a whole to a new arch. That doesn't seem to be possible with Intel and AMD waving their dicks adding a fuckton of cores on their chips to scale the shit out of the other one instead of proposing an alternative.

    MS is comfortable with x86 as it means their 20+ year old code will still run without tinkering. Apple has done the arch jump twice and has been able to manage it; MS would probably do it if a real push were to come. It might well come from the ARM side; I'd like to see what happens in the next couple of years. I've mostly given up on the idea of "the year of the Linux Desktop" (though it has become mostly fool-proof), but non-x86 archs do seem to have a fighting chance.



  • @Sir Twist said:

    @davedavenotdavemaybedave said:

    It seems logical to me that 'getting rid of all legacy shit' means starting from scratch.
    Er, no. Not "throw away all the languages we already have and make up new ones." Have you ever heard of a cross-compiler?

    Well, yes, we could hack up software solutions to regain backwards compatibility, if the hardware supports it. That would be 'legacy shit', though. If we're starting from scratch with the hardware, even abstracts like design patterns will need rethinking to some extent - there's no guarantee that what was true under one set of hardware conditions will be true under another.



  • @davedavenotdavemaybedave said:

    even abstracts like design patterns will need rethinking to some extent - there's no guarantee that what was true under one set of hardware conditions will be true under another.
    Yes, fine. But you still don't need to start "from scratch." You build a C (or language-of-your-choice) cross-compiler that understands these supposed new design patterns, compile a new OS (or adjust an existing one to work in the new compiler), then cross-compile the new OS and the compiler. You don't start by toggling in a boot loader to load off of a hand-punched paper tape.



  • @davedavenotdavemaybedave said:

    @Sir Twist said:

    @davedavenotdavemaybedave said:

    It seems logical to me that 'getting rid of all legacy shit' means starting from scratch.
    Er, no. Not "throw away all the languages we already have and make up new ones." Have you ever heard of a cross-compiler?

    Well, yes, we could hack up software solutions to regain backwards compatibility, if the hardware supports it. That would be 'legacy shit', though. If we're starting from scratch with the hardware, even abstracts like design patterns will need rethinking to some extent - there's no guarantee that what was true under one set of hardware conditions will be true under another.

    Exactly how major a paradigm shift are we talking here? A transition to quantum computers or possibly biological neural networks? I realize I'm not really qualified to comment (due to being so firmly rooted in current technology), but I find it hard to imagine any lesser change that would completely destroy all hope of compatibility on the software level.

    As I see it, the most fundamental aspect of popular programming languages is the relatively serial nature of execution. Threading exists, but requires effort and is hard to get right. A shift to GPGPU-like massive parallelism would invalidate the procedural paradigm on a large scale and instead promote functional, flow-based or similar paradigms. It would likely be possible to port existing software with relatively little effort and without specific hardware support, but it would utilize only a small fraction of the execution cores and thus have very limited performance compared to native applications.

    I do agree that current computers and their UIs are hardly optimal for humans to use. Personally I'm waiting for a brainplug interface, which will require a completely new way of manipulating data. Perhaps I will be one of the pioneers in defining this way; who knows?



  • @danixdefcon5 said:

    Which basically confirms that the shitty tech won out.

    The reason it won out is because it's not actually that shitty. That's my point.

    PPC didn't have some huge jump in computing power. And since x86 had the same computing power *and* was backwards-compatible, x86 won out. You don't even need to invoke network effects to explain why x86 won in this case, any rational person will see that x86 was the best choice if you want to run the most software at the highest speed.

    @danixdefcon5 said:

    - Intel keeps overclocking to infinity their P4's, wins over PPC. (Well that, and MS didn't release the consumer Windows versions for PPC either)

    Microsoft didn't do it because Microsoft looked at the PPC chips and said, "what's the point?" (See above: there's no compelling reason to migrate to PPC.) They obviously could do the migration if they thought it was worth their time (since they did it for server versions), but they didn't.

    @danixdefcon5 said:

    - Intel invests in a new arch (Itanium), AMD goes down the cheapo road and puts another patch over x86, calls it amd64. AMD wins.

    After, what, 6 years? Itanium was slower at running software than competing "cheapo" AMD chips. Intel was smart to cancel the program... a CPU that requires a highly-specialized compiler to show any benefit is a bad idea. Even if you get everybody and their dog switched to the CPU, you won't get them all switched to the compiler, and their software will run like crap.

    PPC failed because it was a good idea which didn't happen to work in practice. Itanium was simply a bad idea from day one.

    @danixdefcon5 said:

    Of course, a "PC compatibility card" is not only a good option, it has been done before. But for something like that to happen, there would have to be a real push from the software/hardware industry as a whole to a new arch.

    I had one back in my 68k Mac days. My Quadra 610 had a 486 compatibility card in it. Played Doom fine.

    @danixdefcon5 said:

    That doesn't seem to be possible with Intel and AMD waving their dicks adding a fuckton of cores on their chips to scale the shit out of the other one instead of proposing an alternative.

    I think it would be wiser to realize that there simply isn't an alternative available today. Intel and AMD aren't purposefully wasting time; if they had some magic silver bullet to increase the speed of their CPUs, you bet your ass they'd use it.

    @danixdefcon5 said:

    Apple has done the arch jump twice and has been able to manage it;

    Yeah, since they were breaking everybody's programs anyway, nobody noticed the slightly higher percentage of broken programs from the arch changes. Fucking Apple.

    @danixdefcon5 said:

    I've mostly given up the idea of "the year of the Linux Desktop" (though it does have become mostly fool-proof)

    The only thing preventing the "year of the Linux desktop" is the Linux culture. The software's fine.

    @danixdefcon5 said:

    but non-x86 archs do seem to have a fighting chance.

    Yeah, that's what Linux fans said when netbooks first came out. In fact, not only did they have a fighting chance, but they could have really kicked ass.

    But, lo and behold, the non-x86 CPUs were too slow to market, and Intel sucked up all those customers too. Kind of a shame, really. Slashdot still holds the delusion that the non-x86 netbook is coming "any day now."



  • @danixdefcon5 said:

    - Intel keeps overclocking to infinity their P4's, wins over PPC. (Well that, and MS didn't release the consumer Windows versions for PPC either)

    Microsoft released Windows NT Workstation 3.51 for the PPC.  Windows 3.1 and Windows 95 never stood a chance of being migrated because they weren't a microkernel architecture and there was no way to preserve any amount of backwards compatibility on a different platform.  Microsoft is in the position they are because they are good at compatibility.

    As for overclocking P4s, that almost killed Intel.  As the marketing department at Intel demanded that the engineers create higher clock rate chips (which were easier to market as upgrades because their performance could be described in a single number), AMD almost ate their lunch.  Back then, AMD's labelling strategy was to slap a number on each chip that described the clock rate of a comparable Intel processor.  When Intel released a 2.8GHz P4 that wasn't any faster than a 2.6GHz P4, AMD simply started labelling their 2600 stock as 2800.

    Intel now admits that there is almost no correlation between clock speed and processor usefulness.  The marketing department is now in a world of hurt trying to explain to customers that a Core2Duo E8300 is slower than a Core2Duo E7500, but the Core2Duo E8335 is faster than both.  Anyways, Intel cranking up the clock on the P4 was not a factor in its market dominance over the PPC, as evidenced by that same behavior causing them to go under 50% marketshare in new PCs for the first time ever.



  • @Jaime said:

    Anyways, Intel cranking up the clock on the P4 was not a factor in its market dominance over the PPC, as evidenced by that same behavior causing them to go under 50% marketshare in new PCs for the first time ever.
    Logically that only holds true if the behaviour has a constant effect at all times. It seems to me that unless every part of processor design marches in lock-step with materials advances, there will be different advances that are the most beneficial for the least effort at different times. The reason Intel got into the whole marketing-led clock-speed ramping-up was that, for a very long time, it was a good measure of performance. It was only when the focus was entirely marketing-led, and only on cranking out a certain set of headline numbers, that they ran into trouble.

    The proof of how well Intel did generally is that the major remaining driver for processor speed increases is the server market. For everyday use, processors are fast enough, and have been for a while. Home and business PC life-to-obsolescence is now about the same as, or longer than, the expected lifespan of the components.

    I was involved recently in a purchasing evaluation that had been a regular 18-monthly event for a fair few years, relating to replacing the desktops for a company. Previously, they replaced every 18 months to keep their hardware vaguely current. This time, there was no really good reason to replace, although the cost of replacing the hard disks and PSUs in each, plus the labour to do so, came close enough to the cost of a new PC that the new warranties swung it. In the past, though, it wouldn't even have been close; this time round there was no noticeable benefit from getting faster hardware.



  • @Jaime said:

    @danixdefcon5 said:

    - Intel keeps overclocking to infinity their P4's, wins over PPC. (Well that, and MS didn't release the consumer Windows versions for PPC either)

    Microsoft released Windows NT Workstation 3.51 for the PPC.  Windows 3.1 and Windows 95 never stood a chance of being migrated because they weren't a microkernel architecture and there was no way to preserve any amount of backwards compatibility on a different platform.  Microsoft is in the position they are because they are good at compatibility.

    Yup, that's why I specified the "consumer Windows" versions. Win 3.x and even Win9x were merely glorified shells with a nice GUI on top; they were still running DOS under the hood, though the Win9x variety would do more stuff by itself.

    Windows NT wasn't just a new OS; it also ran on many platforms. I believe it even ran on Alpha in the early days. :) The HAL architecture remains part of NT to this day. MS axing Win9x for the NT branch was one of the smartest moves they made.

    Oh, and Win2008 R2 still supports Itanium, though MS already said that it will be the last WinServer to support it.



  • @danixdefcon5 said:

    Yup, that's why I specified the "consumer Windows" versions. Win 3.x and even Win9x were merely glorified shells with a nice GUI on top; they were still running DOS under the hood, though the Win9x variety would do more stuff by itself.

    Misleading. Windows 95 and up did virtually everything "by itself", unless you had ancient hardware plugged in that required DOS for driver support.

    @danixdefcon5 said:

    Windows NT wasn't just a new OS, it also ran on many platforms, I believe it even ran on Alpha in the early days.

    And PPC. The Xbox 360 runs the NT kernel on top of its PPC Xenons.



  • Actually, release versions of NT 3.51 and NT 4.0 were available for x86, Alpha, PPC, and MIPS, I think. And there was almost a Windows 2000 for them too. Microsoft stopped offering binaries for the other platforms just before putting the Release Candidate out. I think I still have a multiplatform CD of W2K Beta 2 somewhere...



  • @danixdefcon5 said:

    @Jaime said:

    @danixdefcon5 said:

    - Intel keeps overclocking to infinity their P4's, wins over PPC. (Well that, and MS didn't release the consumer Windows versions for PPC either)

    Microsoft released Windows NT Workstation 3.51 for the PPC.  Windows 3.1 and Windows 95 never stood a chance of being migrated because they weren't a microkernel architecture and there was no way to preserve any amount of backwards compatibility on a different platform.  Microsoft is in the position they are because they are good at compatibility.

    Yup, that's why I specified the "consumer Windows" versions.
    That's a pointless complaint.  The only purpose of Windows 95 was to provide a bridge for users with current 16-bit software and lower end hardware so they could start running 32-bit Windows applications.  A PPC port of Windows 95 wouldn't serve any purpose that isn't already served by a PPC port of Windows NT Workstation.



  • @blakeyrat said:

    Windows 95 and up did virtually everything "by itself", unless you had ancient hardware plugged in that required DOS for driver support.

    Even more, Windows 3.11 had 32-bit disk drivers and FAT+VCACHE drivers, so it didn't need to use DOS for that either. W 3.11/386 ran its DOS sessions as separate virtual machines, and virtualized disk, file, keyboard and mouse access.


  • @danixdefcon5 said:

    After, what, 6 years? Itanium was *slower* at running software than competing "cheapo" AMD chips. Intel was smart to cancel the program...

    Um...Not yet... I don't think Itanic is officially canceled.


  • @Jaime said:

    Windows 3.1 and Windows 95 never stood a chance of being migrated because they weren't a microkernel architecture and there was no way to preserve any amount of backwards compatibility on a different platform.

    Besides the fact that there was absolutely no business case for porting, a good part of their VxDs was written in assembly. Good luck porting that to a different architecture.


  • @alegr said:

    @danixdefcon5 said:

    After, what, 6 years? Itanium was slower at running software than competing "cheapo" AMD chips. Intel was smart to cancel the program...

    Um...Not yet... I don't think Itanic is officially canceled.

    Really?

    Flog that dead horse some more, Intel. Maybe you can get it to squeeze some blood from a stone.



  • @blakeyrat said:

    @alegr said:
    @danixdefcon5 said:
    After, what, 6 years? Itanium was *slower* at running software than competing "cheapo" AMD chips. Intel was smart to cancel the program...
    Um...Not yet... I don't think Itanic is officially canceled.

    Really?

    Flog that dead horse some more, Intel. Maybe you can get it to squeeze some blood from a stone.

    Other than its reputation due to the severe overhype followed by a flop, there's nothing really so intrinsically terrible about Itanium.  In fact, my employer is still in the midst of a long-term migration of all our HP-UX servers off PA-RISC and onto Itanium. From what I understand, the problem with Itanium was primarily perception and secondarily the lack of backwards compatibility.



  • March 2012: the next release of Oracle introduces a new type, BOOLEAN2!

    Pre-register now for a 3-day course on Oracle installation, and get a free 7-day course on DB2 installation.

