Microsoft's three-value boolean



  • Found this in the Win32 API documentation for GetMessage():


    BOOL WINAPI GetMessage(__out LPMSG lpMsg, __in_opt HWND hWnd, __in UINT wMsgFilterMin, __in UINT wMsgFilterMax);
    Return Value
    BOOL

    If the function retrieves a message other than WM_QUIT, the return value is nonzero.

    If the function retrieves the WM_QUIT message, the return value is zero.

    If there is an error, the return value is -1. For example, the function fails if hWnd is an invalid window handle or lpMsg is an invalid pointer. To get extended error information, call GetLastError.
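
    For reference, here's a minimal sketch of the message loop that documentation implies (assuming the usual TranslateMessage/DispatchMessage body). The trap is that the tempting while (GetMessage(...)) form treats the -1 error return as "true" and spins forever on a bad handle:

    MSG msg;
    BOOL bRet;

    /* Don't write: while (GetMessage(&msg, NULL, 0, 0)) { ... }
       -1 is nonzero, so an error keeps the loop running. */
    while ((bRet = GetMessage(&msg, NULL, 0, 0)) != 0)
    {
        if (bRet == -1)
        {
            /* Invalid hWnd or lpMsg; GetLastError() has the details. */
            break;
        }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }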


  • @Carnildo said:

    Found this in the Win32 API documentation for GetMessage():

    snip BOOL/long

    I'm pretty sure that in the WinAPI, a BOOL is actually a long. So yes, funny-ish, but not a total WTF.



  • @b-redeker said:

    @Carnildo said:

    Found this in the Win32 API documentation for GetMessage():

    snip BOOL/long

    I'm pretty sure that in the WinAPI, a BOOL is actually a long. So yes, funny-ish, but not a total WTF.

    Actually, it's only an int.

    BOOL

    A Boolean variable (should be TRUE or FALSE).

    This type is declared in WinDef.h as follows:

    typedef int BOOL;



  • @b-redeker said:

    @Carnildo said:

    Found this in the Win32 API documentation for GetMessage():

    snip BOOL/long

    I'm pretty sure that in the WinAPI, a BOOL is actually a long. So yes, funny-ish, but not a total WTF.

    Yes, the compiler permits you to assign three distinct values to a BOOL. This does not make doing so a good idea.


  • @Carnildo said:

    @b-redeker said:

    @Carnildo said:

    Found this in the Win32 API documentation for GetMessage():

    snip BOOL/long

    I'm pretty sure that in the WinAPI, a BOOL is actually a long. So yes, funny-ish, but not a total WTF.

    Yes, the compiler permits you to assign three distinct values to a BOOL. This does not make doing so a good idea.

    It's pretty well-established that WinAPI sucks shit.



  • That's not so much a WTF. The GCC compiler does something neat with booleans:

    #define true !0
    #define false !true

    By the laws of Programming Logic, "true" is simply "Anything that's not zero", but "false" is "Not true" -- so MS's logic works.



  • @Indrora said:

    That's not so much a WTF. The GCC compiler does something neat with booleans:

    #define true !0
    #define false !true

    By the laws of Programming Logic, "true" is simply "Anything that's not zero", but "false" is "Not true" -- so MS's logic works.

     

    Arguable... some programming languages have -1 for True (was it VB6, or Delphi, or something else? I'm not sure).

    True is true, and false is false. Some languages don't have built-in boolean types, so they assume the rule that "anything that is not zero is true". But there are also some which state that "anything that is not one is false" (again, forgot exactly which one it was, maybe VB6?).

     Imho the fact that some programming languages don't even have boolean types is "TRWTF". :)



  • @Indrora said:

    By the laws of Programming Logic, "true" is simply "Anything that's not zero", but "false" is "Not true" -- so MS's logic works.
     

    By the specification of C, !0 is 1, and !1 is 0. It has nothing akin to the "nonzero" used here. I don't know if Microsoft define it anywhere, but the code they give here indicates that it isn't 0 or -1.

    One additional advantage of this function is that it has the same name as a function in the [url=http://msdn.microsoft.com/en-us/library/aa921213.aspx]Windows Mobile API[/url], which is just the same except that it returns 0 on errors instead of -1, leaving the user to work out what just happened. Whether or not -1 counts as "nonzero" is, again, unstated.


  • @__moz said:

    Whether or not -1 counts as "nonzero" is, again, unstated.
     

    What? I agree that Win32 API is sucky but -1 != 0 should be pretty obvious.

     

    Their all-upper-case shit is horrible to read though, IMO. Seriously, stuff like LPCWSTR instead of const wchar_t* ?? We don't have segmented addressing anymore, why declare pointers as "long pointers"? </mini-rant>
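
    For the curious, this is roughly what those names boil down to in the SDK headers (paraphrased from WinDef.h/WinNT.h, not quoted verbatim):

    typedef wchar_t      WCHAR;    /* 16-bit character on Windows            */
    typedef WCHAR       *LPWSTR;   /* "long pointer to wide string"          */
    typedef const WCHAR *LPCWSTR;  /* "long pointer to constant wide string" */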



  • As far as I can tell, that's potentially way more than three distinct return values.



  • @Indrora said:

    That's not so much a WTF. The GCC compiler does something neat with booleans:

    #define true !0
    #define false !true

    By the laws of Programming Logic, "true" is simply "Anything that's not zero", but "false" is "Not true" -- so MS's logic works.

     

     What does this have to do with GCC? The C language spec says

    1. any value that compares equal to 0 is "false"; any other value is "true"
    2. all built-in booley operators yield 0 for false and 1 for true

    Your definitions are equivalent to

    #define true 1
    #define false 0
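
    A quick sanity check of that claim, compilable with any C compiler (no GCC involvement required):

    #include <stdio.h>

    #define true  !0      /* the quoted definitions */
    #define false !true

    int main(void)
    {
        /* ! always yields 0 or 1, so this prints "true=1 false=0" */
        printf("true=%d false=%d\n", true, false);
        return 0;
    }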



  • @topspin said:

    Their all-upper-case shit is horrible to read though, IMO. Seriously, stuff like LPCWSTR instead of const wchar_t* ?? We don't have segmented addressing anymore, why declare pointers as "long pointers"? </mini-rant>

    You do realize all this dates back to Windows 3.0 right? Or even earlier. That's pretty much exactly why it's shit. The simple fact of the matter is, by all rights, computers back then really weren't powerful enough to run windowing GUIs in a reasonable fashion. Apple got around it by putting pretty much 75% of the OS in a ROM chip. (And by having Wozniak, who is by all rights a genius.) Microsoft had to make all kinds of tweaks, compromises, etc because they didn't have the ROM chip option.

    If you aren't reading Raymond Chen's blog The Old New Thing, you should be.



  • @topspin said:

    @__moz said:

    Whether or not -1 counts as "nonzero" is, again, unstated.
     

    What? I agree that Win32 API is sucky but -1 != 0 should be pretty obvious.

    Except that, on Planet Microsoft,  "nonzero" doesn't mean "not 0" because this is not a boolean type. See the link in Carnildo's original message.



  • @blakeyrat said:

    You do realize all this dates back to Windows 3.0 right? Or even earlier. That's pretty much exactly *why* it's shit. The simple fact of the matter is, by all rights, computers back then really weren't powerful enough to run windowing GUIs in a reasonable fashion.

    What?  Windows 3.0 was released in 1990.  By that stage we'd had Amigas, Atari STs, X windows and Apple Lisas for five years or more.  Perhaps you mean specifically x86-based PCs? 

    @blakeyrat said:

    Apple got around it by putting pretty much 75% of the OS in a ROM chip. (And by having Wozniak, who is by all rights a genius.) Microsoft had to make all kinds of tweaks, compromises, etc because they didn't have the ROM chip option.

    Huh?  The original Apple Mac had 128kB of RAM and a 64kB ROM.  A PC could handle 640kB RAM.  Anything that could be done by the Mac's QuickDraw ROM could just as easily have been done by code running out of RAM on a PC.  The reason that Windows was shit is because it was made by Microsoft, which is a huge and bloated corporation where everything is designed by committee.  They just didn't have the imagination or agility to do anything better.

    @blakeyrat said:

    If you aren't reading Raymond Chen's blog The Old New Thing, you should be.

    On the other hand I fully agree with that.

     



  • @DaveK said:

    What?  Windows 3.0 was released in 1990.  By that stage we'd had Amigas, Atari STs, X windows and Apple Lisas for five years or more.  Perhaps you mean specifically x86-based PCs?

    Sorry, no I meant PCs in general, so X11 doesn't count. By the time Windows 3.0 was released, the tide was turning, but I'd say that computers really didn't have the resources to run a good GUI until closer to 1995. I think this is one of the reasons System 7.0 and Windows 95 were so exciting and hype-worthy.

    @DaveK said:

    Huh?  The original Apple Mac had 128kB of RAM and a 64kB ROM.

    Ok? What's your point?

    @DaveK said:

    Anything that could be done by the Mac's QuickDraw ROM could just as easily have been done by code running out of RAM on a PC.  The reason that Windows was shit is because it was made by Microsoft, which is a huge and bloated corporation where everything is designed by committee.  They just didn't have the imagination or agility to do anything better.

    Well, ok, but:

    1) In the timeframe we're talking about, Microsoft wasn't "a huge and bloated corporation where everything is designed by committee." Hell, Microsoft still doesn't design by committee*, so I don't even know where that comes from. You're not confusing Microsoft with IBM are you? Or with the Slashdot stereotype?

    2) Other GUIs at the time (like GEOS for example) also were shit. Mac OS was best of a bad bunch...

    I think it's unfair to say that Microsoft was bad at it; a more accurate position would be that Apple was really good at it.

    *) Well, maybe Windows Live products are designed by committee. But the vast majority of their products aren't.



  • QBasic uses -1 for true. Given that VB "borrows" "some" of its syntax from QBasic, I would think it kept this convention.



  • @blakeyrat said:

    @DaveK said:
    What?  Windows 3.0 was released in 1990.  By that stage we'd had Amigas, Atari STs, X windows and Apple Lisas for five years or more.  Perhaps you mean specifically x86-based PCs?

    Sorry, no I meant PCs in general, so X11 doesn't count.

    Huh?  Do you, like, think X is something that only runs on mainframes?  What does "PCs in general" mean?

    @blakeyrat said:

    By the time Windows 3.0 was released, the tide was turning, but I'd say that computers really didn't have the resources to run a good GUI until closer to 1995. I think this is one of the reasons System 7.0 and Windows 95 were so exciting and hype-worthy.

    I think you're just not very aware of anything beyond what was in the Windows world back then.  You originally said 

    @blakeyrat said:

    computers back then really weren't powerful enough to run windowing GUIs in a reasonable fashion

    and now you've added the qualifier "good" to GUI (without defining it, but you did artfully introduce a comparison to a different OS than we were talking about that didn't come out for another five years) simply to create an ad-hoc criterion to dismiss all the perfectly valid examples I gave you of computers that ran windowing GUIs in reasonable fashions.

    @blakeyrat said:

    @DaveK said:
    Huh?  The original Apple Mac had 128kB of RAM and a 64kB ROM.

    Ok? What's your point?

    It's the next sentence, the one that sentence was establishing the facts for, before your useless interjection, that's what it is.  Try reading ahead at least a few lines before you respond.

    @blakeyrat said:

    @DaveK said:
    Anything that could be done by the Mac's QuickDraw ROM could just as easily have been done by code running out of RAM on a PC.  The reason that Windows was shit is because it was made by Microsoft, which is a huge and bloated corporation where everything is designed by committee.  They just didn't have the imagination or agility to do anything better.

    Well, ok, but:

    1) In the timeframe we're talking about, Microsoft wasn't "a huge and bloated corporation where everything is designed by committee." Hell, Microsoft still doesn't design by committee*, so I don't even know where that comes from. You're not confusing Microsoft with IBM are you? Or with the Slashdot stereotype?

    In 1990, MS had over fifty-six hundred employees and a turnover in excess of $1B.  Considering the product range in those days consisted of a couple of versions each of DOS and Windows 3, and the just-launched Office suite, I think that's a lot of employees, a lot of overheads, a lot of bureaucracy.

    @blakeyrat said:

    2) Other GUIs at the time (like GEOS for example) also were shit. Mac OS was best of a bad bunch...

    Waittaminnit, now you think a valid comparison is that 8-bit OS that ran on Commodore-64s?  GEM was pretty horrid, I'll agree, but Mac, Amiga and X11 were all far in advance of and far more user-friendly than Windows 3.




  • @henke37 said:

    QBasic uses -1 for true. Given that VB "borrows" "some" of its syntax from QBasic, I would think it kept this convention.

    Fairly sure off the top of my head that behaviour goes all the way back to the earliest Microsoft BASICs.  It was certainly the case on all the Commodore machines I used to use.

     



  • @pbean said:

    Arguable... some programming languages have -1 for True (was it VB6, or Delphi, or something else? I'm not sure).
     

    VB does. I guess it's a matter of taste whether you handle booleans as 1 bit signed (-1 and 0) or unsigned (0 and 1) integers.



  • @DaveK said:

    Huh?  Do you, like, think X is something that only runs on mainframes?

    In the timeframe we're talking about, X only did run on mainframes.

    @DaveK said:

    I think you're just not very aware of anything beyond what was in the Windows world back then.

    I never used Windows until Windows 98.

    @DaveK said:

    Waittaminnit, now you think a valid comparison is that 8-bit OS that ran on Commodore-64s?  GEM was pretty horrid, I'll agree, but Mac, Amiga and X11 were all far in advance of and far more user-friendly than Windows 3.

    Yes, I think it's valid to compare one GUI for home computers in 1991 with another GUI for home computers in 1991.

    And X11 was never more user-friendly than Windows 3, not until much later on.



  • @blakeyrat said:

    @DaveK said:
    Huh?  Do you, like, think X is something that only runs on mainframes?
    In the timeframe we're talking about, X only *did* run on mainframes.

    @DaveK said:
    I think you're just not very aware of anything beyond what was in the Windows world back then.

    I never used Windows until Windows 98.

    @DaveK said:
    Waittaminnit, now you think a valid comparison is that 8-bit OS that ran on Commodore-64s?  GEM was pretty horrid, I'll agree, but Mac, Amiga and X11 were all far in advance of and far more user-friendly than Windows 3.

    Yes, I think it's valid to compare one GUI for home computers in 1991 with another GUI for home computers in 1991.

    And X11 was never more user-friendly than Windows 3, not until much later on.

    Wait...X11 and user-friendly in the same sentence?



  • @fatbull said:

    @pbean said:

    Arguable... some programming languages have -1 for True (was it VB6, or Delphi, or something else? I'm not sure).
     

    VB does. I guess it's a matter of taste whether you handle booleans as 1 bit signed (-1 and 0) or unsigned (0 and 1) integers.

    I always thought that some languages came up with -1 for true because -1 is all ones when stored in two's complement format.  It seems to make sense: in this model, false is binary 0000000000000000 and true is 1111111111111111.
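
    A small C sketch of that model (assuming 16-bit two's-complement integers, like the machines those BASICs ran on):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int16_t t = -1;                           /* classic BASIC "true"             */
        printf("%04X\n", (unsigned)(uint16_t)t);  /* prints FFFF: every bit is set    */
        printf("%d\n", (int16_t)~0 == t);         /* bitwise NOT of 0 is -1: prints 1 */
        return 0;
    }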


  • @blakeyrat said:

    @DaveK said:
    Huh?  Do you, like, think X is something that only runs on mainframes?

    In the timeframe we're talking about, X only did run on mainframes.

    Never heard of SPARCstations?  AIX on an RT6150?  There were a host of unix workstations by 1990.

    @blakeyrat said:

    @DaveK said:
    I think you're just not very aware of anything beyond what was in the Windows world back then.

    I never used Windows until Windows 98.

    Ah, and I was just about to ask "You're not saying any of this from experience, are you?"

    @blakeyrat said:

    @DaveK said:
    Waittaminnit, now you think a valid comparison is that 8-bit OS that ran on Commodore-64s?  GEM was pretty horrid, I'll agree, but Mac, Amiga and X11 were all far in advance of and far more user-friendly than Windows 3.

    Yes, I think it's valid to compare one GUI for home computers in 1991 with another GUI for home computers in 1991.

    Except that you aren't.  You're comparing a GUI for an 8-bit home games machine from 1986 with one for a 16-bit small-office workstation from 1990.  Some might consider that relevant to the comparison.

    @blakeyrat said:

    And X11 was never more user-friendly than Windows 3, not until much later on.

    It's no longer clear, since you said that 98 was the first windows you used: have you ever actually used Windows 3?  I mean like, eight hours a day in an office environment doing your job?  I have; it was a hell of a step down from the Amiga I was used to using.

    And hey!  I never even mentioned Archimedes OS yet.

    Seriously: Win3 was years behind the times even when it was brand new. 



  • @blakeyrat said:

    @topspin said:
    Their all-upper-case shit is horrible to read though, IMO. Seriously, stuff like LPCWSTR instead of const wchar_t* ?? We don't have segmented addressing anymore, why declare pointers as "long pointers"? </mini-rant>

    You do realize all this dates back to Windows 3.0 right? Or even earlier. That's pretty much exactly why it's shit.
    [...]
    If you aren't reading Raymond Chen's blog The Old New Thing, you should be.

     

    Yes I do, on both counts.
    Doesn't change the fact that it's horrid shit.



  • @topspin said:

    Doesn't change the fact that it's horrid shit.

    I think that's why people are running towards .NET. If you're working on an older project, then you're kinda screwed though... but hey, at least Microsoft didn't just deprecate the entire goddamned API, provide a lame "transition" API, then constantly try to deprecate the "transition" API as well.



  • @Jaime said:

    @fatbull said:

    @pbean said:

    Arguable... some programming languages have -1 for True (was it VB6, or Delphi, or something else? I'm not sure).
     

    VB does. I guess it's a matter of taste whether you handle booleans as 1 bit signed (-1 and 0) or unsigned (0 and 1) integers.

    I always thought that some languages came up with -1 for true because -1 is all ones when stored in two's complement format.  It seems to make sense: in this model, false is binary 0000000000000000 and true is 1111111111111111.

    This is pretty much why BASIC (at least the dialects I've used, which are all made by m$) has -1 for true. The language doesn't have actual boolean operators, only bitwise ones. Ever tried something like "IF a AND b" when you really meant "IF a<>0 AND b<>0"? If a and b don't happen to have any common one bits, the former will evaluate to 0.
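
    The same trap is easy to reproduce in C with the int-sized BOOL; a quick sketch:

    #include <stdio.h>

    typedef int BOOL;                  /* as in WinDef.h */

    int main(void)
    {
        BOOL a = 4, b = 2;             /* both "true" by the nonzero convention   */

        if (a & b)                     /* bitwise AND: 4 & 2 == 0, branch skipped */
            puts("bitwise says both are true");
        if (a && b)                    /* logical AND: both nonzero, branch taken */
            puts("logical says both are true");
        return 0;
    }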



  • @DaveK said:

    @blakeyrat said:

    @DaveK said:
    Huh?  Do you, like, think X is something that only runs on mainframes?

    In the timeframe we're talking about, X only did run on mainframes.

    Never heard of SPARCstations? AIX on an RT6150? There were a host of unix workstations by 1990.

    In fact, it's never made sense to run the X server on a mainframe. Back in the day, there were X terminals which only ran the X server, with pretty much everything else (including the display manager) running on the mainframe.

    And yes, there were self-contained workstations running X (both server and applications) as well.



  • @topspin said:

    Their all-upper-case shit is horrible to read though, IMO. Seriously, stuff like LPCWSTR instead of const wchar_t* ?? We don't have segmented addressing anymore, why declare pointers as "long pointers"? </mini-rant>
     

    Actually, we do have segmented addressing.  Still.  On x86.  What, you thought the segment registers disappear in 32-bit protected mode?  Don't make me laugh.   On Win32, three of the six segment registers (DS, ES, SS) contain the same selector that selects a read-write segment.  CS can't contain such a thing, because its segment has to be executable, so it contains a *different* selector that selects the same range of memory, but in a read/execute mode.  FS contains a selector for a tiny (16-bit) segment containing the thread-control-block, and it is done this way so that any code that wants thread-specific data can just look in FS:xxxx.  GS isn't used.

    Of course, you can't tell that this is going on most of the time, and in fact, a Win32 LPFROBBLE is not a far (long) pointer.  It is actually a large near pointer. (large because it is 32-bit, near because the segment selector is not included.  Win16 LPFROBBLEs were 16:16.)
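
    If you want to watch the FS business happen, here's a sketch for 32-bit MSVC (assuming the __readfsdword intrinsic and the documented TIB self-pointer at FS:[0x18]):

    #include <windows.h>
    #include <intrin.h>
    #include <stdio.h>

    int main(void)
    {
    #if defined(_M_IX86)
        /* The TIB keeps a pointer to itself at offset 0x18, so reading it through
           FS should give the same address NtCurrentTeb() reports. */
        void *viaFs = (void *)__readfsdword(0x18);
        printf("FS:[0x18]      = %p\n", viaFs);
        printf("NtCurrentTeb() = %p\n", (void *)NtCurrentTeb());
    #endif
        return 0;
    }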



  • I know that TRS-80 BASIC returned TRUE as -1 and FALSE as 0; I used it many times as a shortcut in formulas, with expressions like x = x + 1 * (x > 128), for example.

     

    If I remember correctly, that worked in GW-BASIC in PC DOS as well, and in many other BASIC interpreters available around the same time period.  BUT that may be due to the fact that Microsoft was responsible for a large percentage of the BASIC interpreters across a wide variety of personal computers.



  • @DaveK said:

    @blakeyrat said:
    @DaveK said:
    Waittaminnit, now you think a valid comparison is that 8-bit OS that ran on Commodore-64s?  GEM was pretty horrid, I'll agree, but Mac, Amiga and X11 were all far in advance of and far more user-friendly than Windows 3.

    Yes, I think it's valid to compare one GUI for home computers in 1991 with another GUI for home computers in 1991.

    Except that you aren't.  You're comparing a GUI for an 8-bit home games machine from 1986 with one for a 16-bit small-office workstation from 1990.  Some might consider that relevant to the comparison.

    I have a pretty strong suspicion that the intended comparison was to 16-bit PC/GEOS, or Geoworks Ensemble, which was released in 1990; which my family's second computer used (first one was a C64); and which I personally thought had a better interface than Windows 3.1. I still sometimes miss that little screwdriver icon for bolting a menu open.



  • @kilroo said:

    @DaveK said:
    @blakeyrat said:
    @DaveK said:
    Waittaminnit, now you think a valid comparison is that 8-bit OS that ran on Commodore-64s?  GEM was pretty horrid, I'll agree, but Mac, Amiga and X11 were all far in advance of and far more user-friendly than Windows 3.
    Yes, I think it's valid to compare one GUI for home computers in 1991 with another GUI for home computers in 1991.

     

    Except that you aren't.  You're comparing a GUI for an 8-bit home games machine from 1986 with one for a 16-bit small-office workstation from 1990.  Some might consider that relevant to the comparison.

    I have a pretty strong suspicion that the intended comparison was to 16-bit PC/GEOS, or Geoworks Ensemble, which was released in 1990; which my family's second computer used (first one was a C64); and which I personally thought had a better interface than Windows 3.1. I still sometimes miss that little screwdriver icon for bolting a menu open.

    I had GEOS for the Commodore 64 -- if I had had a mouse way back then I'd STILL be using it.



  • Forget a boolean with three values - Microsoft also has a tristate with five values: http://msdn.microsoft.com/en-us/library/aa432714%28office.12%29.aspx



  • @Quietust said:

    Forget a boolean with three values - Microsoft also has a tristate with five values: http://msdn.microsoft.com/en-us/library/aa432714%28office.12%29.aspx

     

     

    Doesn't count -- three of the five states are unsupported, making it a true boolean -- not even a tri-state, unless unsupported represents the third state.  Wait. What?



  •  .NET has a tri-valued boolean...

    It's called bool? (or Nullable<bool>) and the states are true, false, and null.



  •  TRUE, FALSE, SUPPORT_NOT_FOUND



  • @Quietust said:

    Forget a boolean with three values - Microsoft also has a tristate with five values:

    It's even worse than that, it's a tristate boolean with five values - of which three are unsupported!?

    How can anyone type that shit without realising they're talking gibberish? 

     



  • @Carnildo said:

    Found this in the Win32 API documentation for GetMessage():

    BOOL WINAPI GetMessage(__out LPMSG lpMsg, __in_opt HWND hWnd, __in UINT wMsgFilterMin, __in UINT wMsgFilterMax);
    Return Value
    BOOL

    If the function retrieves a message other than WM_QUIT, the return value is nonzero.

    If the function retrieves the WM_QUIT message, the return value is zero.

    If there is an error, the return value is -1. For example, the function fails if hWnd is an invalid window handle or lpMsg is an invalid pointer. To get extended error information, call GetLastError.

    I don't see a WTF. It's just a case of a later fix. Originally, there was no separate return value for an error case. Just FALSE and non-zero. Later, it was decided that the function should return a distinctive code for an error.

    I wish I lived in an ideal world where you don't have to maintain backward compatibility with tons of existing code (in binary and source form), and the APIs are designed ideally from the start. But in real life, Microsoft has to maintain compatibility with thousands of shitty business-critical and other widely used applications.

    By the way, compatibility is why you have to use LPCWSTR: there was no wchar_t type at the time, and wchar_t's size is implementation-specific, while WSTR is always a string of 16-bit characters.

    Those programmers that used int, long, short in place of LRESULT, LPARAM, WPARAM were unpleasantly surprised when they had to port the code from 16 bit to 32 bit and to 64 bit. The types are there for a reason.
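
    To make the point concrete, a minimal window-procedure sketch written against the SDK typedefs; the same source builds for 32- and 64-bit because the SDK retargets LRESULT/WPARAM/LPARAM per platform, whereas hard-coded int/long breaks on Win64 (LPARAM widens to 64 bits while int stays 32):

    #include <windows.h>

    /* Keep the SDK types rather than int/long; the SDK resizes them per platform. */
    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        default:
            return DefWindowProc(hWnd, msg, wParam, lParam);
        }
    }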



  • @pbean said:

    Arguable... some programming languages have -1 for True (was it VB6, or Delphi, or something else? I'm not sure).

    In Delphi, the value of True is 1, and the value of False is 0.  A boolean is a 1-byte variable, and if you use typecasts or pointers or something ugly like that to change the byte value to anything other than 0, it will still be treated as true.



  • @alegr said:

    @Carnildo said:

    Found this in the Win32 API documentation for GetMessage():

    BOOL WINAPI GetMessage(__out LPMSG lpMsg, __in_opt HWND hWnd, __in UINT wMsgFilterMin, __in UINT wMsgFilterMax);
    Return Value
    BOOL

    If the function retrieves a message other than WM_QUIT, the return value is nonzero.

    If the function retrieves the WM_QUIT message, the return value is zero.

    If there is an error, the return value is -1. For example, the function fails if hWnd is an invalid window handle or lpMsg is an invalid pointer. To get extended error information, call GetLastError.

    I don't see a WTF. It's just a case of a later fix. Originally, there was no separate return value for an error case. Just FALSE and non-zero. Later, it was decided that the function should return a distinctive code for an error.

    I wish I lived in an ideal world where you don't have to maintain backward compatibility with tons of existing code (in binary and source form), and the APIs are designed ideally from the start. But in real life, Microsoft has to maintain compatibility with thousands of shitty business-critical and other widely used applications.

    By the way, compatibility is why you have to use LPCWSTR: there was no wchar_t type at the time, and wchar_t's size is implementation-specific, while WSTR is always a string of 16-bit characters.

    Those programmers that used int, long, short in place of LRESULT, LPARAM, WPARAM were unpleasantly surprised when they had to port the code from 16 bit to 32 bit and to 64 bit. The types are there for a reason.

    ^ this is truth (whether -1 or 1 or non-zero)



  •  Yeah, it's called C.



  • @alegr said:

    I don't see a WTF. It's just a case of a later fix.

    Those two are not mutually exclusive.  In fact, poorly-handled back-compat is one of the prime sources of WTFs.



  • @Steve The Cynic said:

    Actually, we do have segmented addressing.

    Way to miss the point because ...

    @Steve The Cynic said:

    Of course, you can't tell that this is going on most of the time, and in fact, a Win32 LPFROBBLE is not a far (long) pointer.  It is actually a large near pointer.

     

    That's exactly what I said.



  • @blakeyrat said:

    @DaveK said:
    What?  Windows 3.0 was released in 1990.  By that stage we'd had Amigas, Atari STs, X windows and Apple Lisas for five years or more.  Perhaps you mean specifically x86-based PCs?
    Sorry, no I meant PCs in general, so X11 doesn't count. By the time Windows 3.0 was released, the tide was turning, but I'd say that computers really didn't have the resources to run a good GUI until closer to 1995. I think this is one of the reasons System 7.0 and Windows 95 were so exciting and hype-worthy.

    My Mac Plus with 4MB RAM (yes, it was upgraded) was pretty capable of running System 7. Windows 95 was barely reaching parity with System 7; before that, Windows was nothing but a glorified DOS shell. Most non-x86 personal computers managed to have better GUIs than the dog-ugly Windows thingy, and sometimes with less RAM, which seems to be DaveK's point.

    I don't really think that Windows sucking is only because of Microsoft sucking at programming stuff, though. I blame the shitty x86 arch; we got stuck with the computing equivalent of the steam engine, when we should be using RISC-based processors by now.



  • @danixdefcon5 said:

    I blame the shitty x86 arch; we got stuck with the computing equivalent of the steam engine, when we should be using RISC-based processors by now.

    We had RISC processors. Remember? There are still a few scattered around, Xboxes have PowerPCs in them for example. As do Playstation 3s.

    They went away because the "shitty x86 arch" scaled the shit out of them. Actually, to be more accurate, RISC processors added more and more CISC features, and CISC processors added more and more RISC features, that the whole distinction kind of dissolved away into nothing. Then AMD and Intel scaled the shit out of them.

    RISC was more successful than, say, iTanium, but on the whole they didn't make much of a dent in the market. The most amusing thing to me is that you're still crowing the "we ought to be using RISC-based processors by now" line that we heard all through the mid-90s... did you sleep through the last 15 years? Your great vision for the future has already been and gone, buddy.



  • @blakeyrat said:

    RISC was more successful than, say, iTanium, but on the whole they didn't make much of a dent in the market. The most amusing thing to me is that you're still crowing the "we ought to be using RISC-based processors by now" line that we heard all through the mid-90s... did you sleep through the last 15 years? Your great vision for the future has already been and gone, buddy.  Their fire has gone out of the universe. You, my friend, are all that is left of their religion.
     

    ETFY



  • @blakeyrat said:

    They went away because the "shitty x86 arch" scaled the shit out of them.

    That's half of the truth. Yes, it "scaled the shit out of them", because the architecture was already popular as heck and Intel and AMD could spend a shitload of cash on research.

    But the ONLY reason they were (and are) popular is the same fuckin' downward spiral because of which we're stuck with Windows, which is stuck with more legacy-support code than the "current revision" code.

    Backward-FUCKING-compatibility.

    "Who cares <your-favourite-CPU-and-arch-here> WAS 75x faster and cooler to code at? It couldn't run <office-suite-all-our-documents-are-in>/<business-critical-custom-app-we-don't-have-source-code-for>/<my-favourite-game>/<whatever-else>, so I'm not gonna use it!".
    That leads to low sales, which leads to high prices and low profits, which leads to even fewer sales and no money to spend on research, which leads to losing the technological edge.

    I agree, modern x86 is tolerable. Thing is, IF WE COULD GET RID OF ALL THE LEGACY SHIT and deploy a completely modern architecture, computing would move a few thousand light-years forward. But the anchor of back-compat (and lazy devs) will hold it in place for long still... until, one day, it'll drown. The sooner it happens the better.

    What I'd personally consider a good idea would be moving to a new architecture and preserving a hardware "PC emulator card" thing. But, of course, no one wants to invest in new, risky tech, while "what is already here is good enough."


    PS. All modern "x86" CPUs are actually RISC inside, except, IIRC, Transmeta, which are VLIW. The old, crappy CISC instruction set ends at the instruction decoder, which is one of the reasons you (and compilers) are "encouraged" not to use some of the more convoluted CISC instructions.



  • @bannedfromcoding said:

    IF WE COULD GET RID OF ALL THE LEGACY SHIT and deploy a completely modern architecture, computing would move a few thousand light-years forward.
    Hardware would move forwards by a good bit, but software would be set right back to square one. We would literally have to start again from nothing.



  • @davedavenotdavemaybedave said:

    @bannedfromcoding said:
    IF WE COULD GET RID OF ALL THE LEGACY SHIT and deploy a completely modern architecture, computing would move a few thousand light-years forward.
    Hardware would move forwards by a good bit, but software would be set right back to square one. We would literally have to start again from nothing.

    Nothing you say? So this new hardware architecture would be so radically different that it would be impossible to write, say, a C++ compiler for it? Or an OpenGL implementation? Sure, certain parts of the operating system kernels and other core components would need to be rewritten, and compilers would need new backends to emit correct assembly instructions. Hell, even porting userspace software to a new operating system doesn't need starting from scratch, and that often involves a completely new set of system calls. I bet it wouldn't take a month after commercial launch before we'd have at least Linux and *BSD running on the new architecture to some degree.



  • @bannedfromcoding said:

    That's half of the truth. Yes, it "scaled the shit out of them", because the architecture was already popular as heck and Intel and AMD could spend a shitload of cash on research.

    Nope.

    AMD has never been able to spend a shitload of money. Intel was too busy throwing their money into the iTanium trashcan to worry about x86 much. When Intel finally figured it out, they did produce some awesome CPUs... but before then they were scaling the shit out of PPC by just clocking their P4's to ridiculous extremes. And it worked.

    @bannedfromcoding said:

    But the ONLY reason they were (and are) popular is the same fuckin' downward spiral because of which we're stuck with Windows, which is stuck with more legacy-support code than the "current revision" code.

    Backward-FUCKING-compatibility.

    Backwards-compatibility is a good thing. As a long-time Mac user, you'll never convince me otherwise... if you think it's so horrible, try using a platform that obsoletes your applications every 4-5 years like clockwork.

    @bannedfromcoding said:

    "Who cares <your-favourite-CPU-and-arch-here> WAS 75x faster and cooler to code at? It couldn't run <office-suite-all-our-documents-are-in>/<business-critical-custom-app-we-don't-have-source-code-for>/<my-favourite-game>/<whatever-else>, so I'm not gonna use it!".

    That leads to low sales, which leads to high prices and low profits, which leads to even fewer sales and no money to spend on research, which leads to losing the technological edge.

    In some strange theoretical universe, this may lead to losing the technological edge. In this universe, it didn't.

    There's nothing about Windows less technically advanced than other OSes. (Well, I mean, all OSes vary in the little details, but in general Windows isn't any worse than any of the others, and significantly better in a lot of ways.) And Windows is the OS with by far the most code-based backwards compatibility. (Linux backwards-compatibility is all based around brainwashing their users to learn and love 1977 computer interfaces; it doesn't involve software much.)

    @bannedfromcoding said:

    I agree, modern x86 is tolerable. Thing is, IF WE COULD GET RID OF ALL THE LEGACY SHIT and deploy a completely modern architecture, computing would move a few thousand light-years forward.

    What, specifically, would be better? What exactly was I doing all that time with my PowerPC 603 & 604 Macs that was so utterly impossible to do on x86? Specific examples, please. Enlighten me.

    Because I used the fucking RISC machines you're crowing about, and they didn't do anything better than equivalent Intel boxes.

    @bannedfromcoding said:

    But the anchor of back-compat (and lazy devs) will hold it in place for long still... until, one day, it'll drown.

    Backwards-compatibility in Windows uses less resources now than ever before. The shims don't load unless the app actually needs them, and all of the compatibility hacks/fixes that used to be in Win32 have been moved to shims.

    Again: Windows, with its large backwards-compatibility load, should be bloated, slow, RAM-sucking to use according to your philosophy here. But it's not. What gives? Are you just spouting nonsense?

    @bannedfromcoding said:

    What I'd personally consider a good idea would be moving to a new architecture and preserving a hardware "PC emulator card" thing. But, of course, no one wants to invest in new, risky tech, while "what is already here is good enough."

    I'm an investor. Pitch me.

    @bannedfromcoding said:

    PS. All modern "x86" CPUs are actually RISC inside, except, IIRC, Transmeta, which are VLIW. The old, crappy CISC instruction set ends at the instruction decoder, which is one of the reasons you (and compilers) are "encouraged" not to use some of the more convoluted CISC instructions.

    Yes; I said as much. CISC chips adopted RISC features, RISC chips adopted CISC features, and the distinction was made pointless five years ago when Intel Core CPUs started appearing.



  • @Someone You Know said:

    @blakeyrat said:

    RISC was more successful than, say, iTanium, but on the whole they didn't make much of a dent in the market. The most amusing thing to me is that you're still crowing the "we ought to be using RISC-based processors by now" line that we heard all through the mid-90s... did you sleep through the last 15 years? Your great vision for the future has already been and gone, buddy.  Their fire has gone out of the universe. You, my friend, are all that is left of their religion.
     

    ETFY

    Win. Except that line was in A New Hope, not Empire Strikes Back. But other than that, win.

