The Epitome of WTF



  • A few weeks ago someone in the JoelonSoftware forums recommended to me a book called Big Blues, about how IBM nearly went out of business during the rise of the PC industry (and Microsoft).  I'm only about half done with it, and all I can say is that IBM makes my company (and Snoofle's, which sometimes sounds a lot like mine) look pretty tame.  They seem to have written the book on how to kill projects by committeeing them to death.  Here is a really breathtaking example, and the epitome of "worse than failure":

    In 1987, any model of DEC VAX could talk to any other model (using DEC's proprietary-but-good networking protocol called DECnet), and they could also talk to other machines.  Apple machines had AppleTalk.  PCs were starting to be networked via early TCP/IP.  IBM's mainframe and minicomputer models had no networking protocol and couldn't even talk to each other.  People used to say that if you wanted two IBM machines to talk to each other, you had to put a VAX between them.  IBM didn't like the snarky comments, so they attempted to develop their own networking protocol.

    After 4000 people worked for 4 years on it, the project was cancelled as a failure.



  •  Where's the WTF here? That IBM monsters which were never designed to talk to each other weren't able to?



  • Yes, but they're also a shining example of a large company completely changing tack and returning to success.



  • Sheesh, so many times, the WTF is that I have to point out the obvious WTF to some of you people.  Okay, listen closely:

     4000 people * 4 years = 16,000 man-years flushed down the toilet.  If you want some more fun, take a guess at some average salary and multiply that in too.
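
     If you want to play along at home, here's the back-of-the-envelope version in Python (the salary is just a made-up average, not a figure from the book):

         # Back-of-the-envelope cost of the cancelled networking project.
         # The salary is an assumed average, not a number from Big Blues.
         people = 4000
         years = 4
         assumed_salary_per_year = 75_000  # hypothetical fully-loaded cost per person

         man_years = people * years        # 16,000
         cost = man_years * assumed_salary_per_year
         print(f"{man_years:,} man-years, roughly ${cost:,} at the assumed salary")
         # -> 16,000 man-years, roughly $1,200,000,000 at the assumed salary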



  • Not being fond of IBM's big iron (I did like OS/2), I don't know what layer 3 protocols they supported that early.  They did have the PC LAN, DLC, and Token Ring at layers 1 and 2 at that time.  I assume it's that unnamed layer 3 protocol they failed with.  Why they didn't just use TCP/IP (or even NetBIOS, in small LANs) is beyond me.



  • @jetcitywoman said:

    Sheesh, so many times, the WTF is that I have to point out the obvious WTF to some of you people.  Okay, listen closely:

     4000 people * 4 years = 16,000 man-years flushed down the toilet.  If you want some more fun, take a guess at some average salary and multiply that in too.

     

    A "WTF" isn't just a failed project, or a piece of software that doesn't work. It's something that fails spectacularly in a most fascinating and often humorous manner.



  • @Jeff S said:

    @jetcitywoman said:

    Sheesh, so many times, the WTF is that I have to point out the obvious WTF to some of you people.  Okay, listen closely:

     4000 people * 4 years = 16,000 man-years flushed down the toilet.  If you want some more fun, take a guess at some average salary and multiply that in too.

     

    A "WTF" isn't just a failed project, or a piece of software that doesn't work. It's something that fails spectacularly in a most fascinating and often humorous manner.

     

     

    Calling 16,000 wasted man-years less than "spectacular" is TRWTF.

     



  •  http://en.wikipedia.org/wiki/Token_ring

    IBM definitely had networking in the 80s.



  • @ammoQ said:

     http://en.wikipedia.org/wiki/Token_ring

    Are you calling Token Ring a WTF? Because Token Ring is freaking awesome? How many other networking protocols do you know based upon the idea of the talking stick?

    EDIT: Yeah, you better edit your post. Don't make me come over there, ammoQ...



  • @bstorer said:

    Are you calling Token Ring a WTF? Because Token Ring is freaking awesome? How many other networking protocols do you know based upon the idea of the talking stick?

     

    Not at all (calling it a WTF) (sorry, I've edited my post several times...)



  • @emurphy said:

    Calling 16,000 wasted man-years less than "spectacular" is TRWTF

    Not to mention humorous.  I think he works for IBM.

    BTW, by posting this, I'm not trying to slam IBM, really.  I know they've since come back to their core business and are doing really well again.  Cheers to them!  But they did go through a phase of spectacular wtfery.

    Another less breathtaking thing mentioned in the book was when Gates n Crew were trying to work with the IBM developers on OS/2.  The two groups had completely different development methods.  IBM was slow and meticulous.  Their programmers' performance was measured by lines of code.  Gates' guys were fast and efficient.  They would make snarky comments about each other.  At one point (I assume to make a point) the Microsoft guys rewrote some of the IBMers' code, reducing it by 160%.  The developers were not amused.  But I thought it was amusing that when IBM management ran their performance metrics (remember, lines of code), they went back to Gates and asked why his programmers had negative performance.  (Bunch of slackers!) 

    How can anybody not find these things amusing?



  • @jetcitywoman said:

    Microsoft guys rewrote some of the IBMers' code, reducing it by 160%.

     

    Impressive. Instead of 1000 lines, there were only -600 lines left. After IBM developers wrote another 600 lines, the file was empty again.



  • @jetcitywoman said:

    16,000 man-years flushed down the toilet.

    The way you said that was damn sexy.



  • @morbiuswilters said:

    The way you said that was damn sexy.
     

    Oh stop it... you say that about every post I make too.



  • @ammoQ said:

    Impressive. Instead of 1000 lines, there were only -600 lines left. After IBM developers wrote another 600 lines, the file was empty again

    Hey, you must work for IBM! 

    Seriously, I could have the percentage wrong - I'm typing from memory and the book is at home.  But it implied that their metrics somehow combined the Microsoft code output with the IBM code output to end up with a negative number.  I'm starting to learn project accounting here in my company and I can sort of see how this works.  When I was young and naive, I thought that accounting was a hard skill that stressed precision and accuracy.  Now I'm old and cynical and know that it's actually a black art and corporations manage to combine all kinds of sorcery into the numbers.  I'm still puzzled that the SEC hasn't caught on yet....



  • @jetcitywoman said:

    I'm still puzzled that the SEC hasn't caught on yet....

    You're joking, right?  The SEC is the body tasked with coming up with the accounting standards to begin with.  And whatever accounting trickery public corporations use, the feds take the cake when it comes to cooking the books. 



  •  @bstorer said:

    How many other networking protocols do you know based upon the idea of the talking stick?

     http://en.wikipedia.org/wiki/Arcnet

     Beat out token ring by a decade.  Would run over two bare copper wires.  Terminated or unterminated.  Add/remove stations at will.  Could run 2000 feet without repeaters.  Guaranteed ping times.  Had support for longer lengths (with repeaters) by slowing token timing.  Able to determine station availability.  Seriously, the thing was that robust. Unified driver.  Fast as hell (ie: Could reach 100% load).  Freaking awesome.

     Too bad this monstrosity of CSMA/CD called "Ethernet" won.  :-(



  • @shepd said:

    Too bad this monstrosity of CSMA/CD called "Ethernet" won.  :-(

    Who in God's name doesn't use switched Ethernet at this point?  And, yeah, I'm always thinking "boy, Ethernet just doesn't have enough token-passing".  Especially when dealing with hundreds of connected servers. 



  • @morbiuswilters said:

    @shepd said:

    Too bad this monstrosity of CSMA/CD called "Ethernet" won.  :-(

    Who in God's name doesn't use switched Ethernet at this point?  And, yeah, I'm always thinking "boy, Ethernet just doesn't have enough token-passing".  Especially when dealing with hundreds of connected servers.

     

     Just because complicated hardware can band-aid the problem doesn't mean it's not there...  Even with all that lovely extra hardware, it's rare to see ethernet get to 100% throughput.  Arcnet, though, 100% throughput was normal; anything less and you had a network issue.  Just saying...  And even so, Ethernet is still ridiculously picky about cabling, the cabling is VERY expensive (compared to Arcnet), and it requires twice the wiring.  :-)



  • And he's not even a girl (anymore)

    Every time I see his avatar, it reminds me of Wiske, the girl in this Belgian comic. FYI: "einde" means "the end"; every book ends with this picture (or a slight 'funny' variation).



  • @Me said:

    Every time I see his avatar (...)

    MPS' avatar, that is. (Past editing limit)



  • 16,000 man years to develop something that took the authors of *nix - what - a couple of man months to design and code the first time around?

    Spectacular example of [mis]management not having a clue. Any properly designed system can be broken down into chunks sufficiently small and simple that any code monkey can bang it out relatively quickly.



  • At my old company, a client of ours had to deal with IBM, so I did too (by proxy). They seemed to be a seething mass of red tape and bureaucracy who would charge a fortune for a two-line change in a configuration file.

    I'm sure they're full of WTFs!



  • @shepd said:

     http://en.wikipedia.org/wiki/Arcnet

     Beat out token ring by a decade.  Would run over two bare copper wires.  Terminated or unterminated.  Add/remove stations at will.  Could run 2000 feet without repeaters.  Guaranteed ping times.  Had support for longer lengths (with repeaters) by slowing token timing.  Able to determine station availability.  Seriously, the thing was that robust. Unified driver.  Fast as hell (ie: Could reach 100% load).  Freaking awesome.


    Ok, comparing old arcnet to old ethernet:
    100% of 2.5MBit/s = 2.5MBit/s
    50% of 10MBit/s = 5.0MBit/s

    arcnet plus compared to more advanced ethernet:
    100% of 20MBit/s = 20MBit/s
    50% of 100MBit/s = 50MBit/s

     Seems like ethernet still wins.
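
    For what it's worth, here's a minimal Python sketch of that effective-throughput comparison (the utilization figures are the assumptions from the posts above, not measured numbers):

        # Effective throughput = nominal rate * achievable utilization.
        # The utilization figures are the assumptions quoted above, not measurements.
        def effective_mbps(nominal_mbps, utilization):
            return nominal_mbps * utilization

        comparisons = {
            "old ARCNET (2.5 Mbit/s @ 100%)":   effective_mbps(2.5, 1.0),   # 2.5
            "old Ethernet (10 Mbit/s @ 50%)":   effective_mbps(10, 0.5),    # 5.0
            "ARCNET Plus (20 Mbit/s @ 100%)":   effective_mbps(20, 1.0),    # 20.0
            "Fast Ethernet (100 Mbit/s @ 50%)": effective_mbps(100, 0.5),   # 50.0
        }
        for name, mbps in comparisons.items():
            print(f"{name}: {mbps} Mbit/s effective")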



  • @snoofle said:

    16,000 man years to develop something that took the authors of *nix - what - a couple of man months to design and code the first time around?

    Spectacular example of [mis]management not having a clue. Any properly designed system can be broken down into chunks sufficiently small and simple that any code monkey can bang it out relatively quickly.

    Yep, my husband mentioned that about the *nix difference also.

    The book makes no small mention of the "Process" that IBM used at the time for any project.  Got idea?  Run it through every company division to see if anybody had any input or arguments against it.  (And I do mean EVERY division.  The mainframe teams had input over the printer teams and the minicomputer teams, and vice versa.  Which meant that if the mainframe team felt that a minicomputer project idea had the potential to reduce its mainframe market share, it would do everything it could to nix the project.) 

    Everybody agree it's a good idea?  Run it through committees at every level of the company to flesh out the design, including the international portions of the company to make sure the project works for foreign markets.  Disagreements about the design?  Run it through an escalating series of committees that ended with the Management Committee, who were the end-all-be-all decision-makers.  What they said was law. 

    Finally got a workable design?  Collect a team of programmers and analysts to implement it.  And by "team" we're talking about thousands of bodies.  Disagreements during implementation?  Work it up through the same escalating series of committees until it ended up at the Management Committee again.

    Not surprisingly a great many projects failed before seeing the light of day.



  • @whoever said:

    Ok, comparing old arcnet to old ethernet:
    100% of 2.5MBit/s = 2.5MBit/s
    50% of 10MBit/s = 5.0MBit/s

    arcnet plus compared to more advanced ethernet:
    100% of 20MBit/s = 20MBit/s
    50% of 100MBit/s = 50MBit/s

     Seems like ethernet still wins.

    Not to mention gigabit and ten gigabit ethernet.  I cannot understand people who complain about worthless, dead standards that failed because they were not as cheap or fast or easy as the victorious one.



  • @whoever said:

    Ok, comparing old arcnet to old ethernet:
    100% of 2.5MBit/s = 2.5MBit/s
    50% of 10MBit = 5.0MBit/s

    arcnet plus compared to more advanced ethernet:
    100% of 20MBit/s = 20MBit/s
    50% of 100MBit/s = 50MBit/s

     

    Need I remind you that Arcnet was the first comer to the situation?  2.5 mbits in 1976 was, well... faster than many CPUs of the time! 1976 ethernet compared to arcnet? Arcnet was #Division by zero error#% faster than ethernet at the time!

    Again, in 1992, Arcnet had 20 mbits over 2 copper wires.  Ethernet?  10 mbits over 4 copper wires.  Winner?  Arcnet by 400%!

     In 1995, when Ethernet finally caught up to Arcnet (took its time, eh?), Ethernet did 12.5 mbits per wire (Remember 100base-T4?  100base-VG? yay...), as compared to 10 mbits per wire on Arcnet.  A whopping 25% more.  Wait, did I subtract the ~20% performance that you can't get from most ethernet networks?  5% more.  WOoooOOOooo...

    Of course, by this time, everyone had gone Ethernet crazy because it had become cheaper to implement (sad, but true).  And so Arcnet's fate was set.  Oh well...

     (1995 year taken from cisco literature:  http://www.cisco.com/en/US/docs/internetworking/technology/handbook/Ethernet.html )

     Oh, and at each release level, Arcnet was cheaper both card-wise (read the wikipedia article, it was HALF THE PRICE) and cabling-wise (it ran over flat-wire telephone cable 100% fine in my experience).  At each release, it was also faster than ethernet.  It's only dead because it wasn't advertised the way ethernet was.  And it's too bad.  I still can't believe I need ~1,000+ drivers on my linux box for ethernet cards, compared to ONE SINGLE UNIFIED DRIVER for Arcnet cards.  And yes, that unified driver got full performance, so no worries there.  Imagine how much easier it would be to install your windows boxes if it supported every single network card (ie:  One driver built in) out of the box, always.  And how much better PXE/TFTP would be.



  • @morbiuswilters said:

    Not to mention gigabit and ten gigabit ethernet.
     

    Not really a valid comparison.  I doubt we had 10 gigabit ethernet back in the 80s. That would be like complaining about CRT televisions in the 50s because we have LCDs and plasmas now.



  • @skippy said:

    @morbiuswilters said:

    Not to mention gigabit and ten gigabit ethernet.
     

    Not really a valid comparison.  I doubt we had 10 gigabit ethernet back in the 80s. That would be like complaining about CRT televisions in the 50s because we have LCDs and plasmas now.

    WTF?  No, it would be like calling someone a moron who went on and on about CRTs were better than LCDs and how it is unfair LCDs won. 



  • @shepd said:

    Again, in 1992, Arcnet had 20 mbits over 2 copper wires.  Ethernet?  10 mbits over 4 copper wires.  Winner?  Arcnet by 400%!

    As you say 4 wires I'll assume you mean 10-base-T instead of 10-base-2 (which was one coax cable (essentially two wires)).

    10-base-T was 10mbits FULL DUPLEX. In bidirectional use, that works out the same as arcnet's 20mbits single-duplex.

    Wikipedia also says that 10-base-T came out when arcnet was still 2.5mbps, so a lot of people switched (10 vs 2.5 mbps, full duplex vs single, switched vs hubbed, no contest really). When arcnet plus finally came out at 20mbps, not many bothered to switch back.

    @shepd said:

    In 1995, when Ethernet finally caught up to Arcnet (took its time, eh?), Ethernet did 12.5 mbits per wire (Remember 100base-T4?  100base-VG? yay...), as compared to 10 mbits per wire on Arcnet.  A whopping 25% more.  Wait, did I subtract the ~20% performance that you can't get from most ethernet networks?  5% more.  WOoooOOOooo...

    You can't measure transfer rates per-wire, it has to be per-pair (there always has to be a return). And again, ethernet was full-duplex and arcnet was single.

    Even better, it seems arcnet could only allow communications between two pcs on the entire network at once, where ethernet (with switches aka transparent routers) allows simultaneous communication between an unlimited number of non-overlapping pairs of pcs at a time. So ethernet is a lot faster on the whole than it looks from your argument.

    According to wikipedia, very few 20mbps arcnet plus products made it to the market, and were very expensive. Only the 2.5mbps arcnet stuff was cheaper than ethernet.

    @shepd said:

    Of course, by this time, everyone had gone Ethernet crazy because it had become cheaper to implement (sad, but true).  And so Arcnet's fate was set.  Oh well...

    Oh, and at each release level, Arcnet was cheaper both card-wise (read the wikipedia article, it was HALF THE PRICE) and cabling-wise (it ran over flat-wire telephone cable 100% fine in my experience).  At each release, it was also faster than ethernet.  It's only dead because it wasn't advertised the way ethernet was.  And it's too bad.  I still can't believe I need ~1,000+ drivers on my linux box for ethernet cards, compared to ONE SINGLE UNIFIED DRIVER for Arcnet cards.  And yes, that unified driver got full performance, so no worries there.  Imagine how much easier it would be to install your windows boxes if it supported every single network card (ie:  One driver built in) out of the box, always.  And how much better PXE/TFTP would be.

    It was one driver because it was a tightly controlled standard. Ethernet was a free-for-all, and the price fell rapidly because of the competition.

    Also: "at each release level, Arcnet was faster". And? Arcnet and ethernet didn't release new versions together, and each time a new ethernet was introduced it was faster than the current arcnet as well. It's called leapfrog. A is faster, so B releases a faster version. B is now faster, so A releases a faster version. And as I said above, the 20mbps arcnet stuff was really expensive. The "half the price from the wikipedia article" was for 2.5mbps arcnet.

    Basically arcnet got squished when 10-base-T 10mbps full-duplex switched ethernet came out in the mid/late 80s, and arcnet plus (20mbps single-duplex hub-based) released in '92 (hint: that's a lot later) couldn't save it, especially as 100mbps ethernet was standardised in '95 and 1gbps ethernet was standardised in '98.
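
    To make the duplex arithmetic explicit, here's a rough Python sketch of the aggregate-capacity comparison being made above (idealized figures, ignoring protocol overhead):

        # Aggregate capacity of a link: a full-duplex link can carry traffic in
        # both directions at once; a half-duplex / shared medium cannot.
        # Idealized numbers only; real-world overhead is ignored.
        def aggregate_mbps(rate_mbps, full_duplex):
            return rate_mbps * 2 if full_duplex else rate_mbps

        print(aggregate_mbps(10, full_duplex=True))    # 10BASE-T, both directions: 20
        print(aggregate_mbps(20, full_duplex=False))   # ARCNET Plus, shared medium: 20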



  • @shepd said:

     Too bad this monstrosity of CSMA/CD called "Ethernet" won.  :-(

    15-20 years ago, we had lots of RISC-based processors, X-MP Crays, HP-PA, Alphas and the whole DEC hardware variety, SPARCstations and the PowerPC Macs. Oh, and whatever runs on the zSeries mainframes.

    Now most of the hardware runs on the "ugly duckling" of all of them: the x86 architecture. The only survivors seem to be SPARC (server-only), PowerPC, and ARM for embedded systems.



  • @danixdefcon5 said:

    @shepd said:

    Too bad this monstrosity of CSMA/CD called "Ethernet" won.  :-(

    15-20 years ago, we had lots of RISC-based processors, X-MP Crays, HP-PA, Alphas and the whole DEC hardware variety, SPARCstations and the PowerPC Macs. Oh, and whatever runs on the zSeries mainframes.

    Now most of the hardware runs on the "ugly duckling" of all of them: the x86 architecture. The only survivors seem to be SPARC (server-only), PowerPC, and ARM for embedded systems.

    Yes, we know this.  x86 may be ugly but it works.  It won because it was all around the best.  Perhaps not the prettiest, but it was cheap enough, ubiquitous enough and powerful enough to dominate the marketplace.  The market is the only judge of product superiority.  It is the aggregate of millions of individual decisions made by people with all different levels of knowledge, experience and perspective.  These are not decisions made by engineers who value their own arbitrary standard of beauty more than the real costs of a particular technology.  You don't have to like it, but for whatever reasons x86 has satisfied many millions of individual requirements for a CPU.  That is the only real measure of success.



  • @morbiuswilters said:

    @danixdefcon5 said:

    @shepd said:

    Too bad this monstrosity of CSMA/CD called "Ethernet" won.  :-(

    15-20 years ago, we had lots of RISC-based processors, X-MP Crays, HP-PA, Alphas and the whole DEC hardware variety, SPARCstations and the PowerPC Macs. Oh, and whatever runs on the zSeries mainframes.

    Now most of the hardware runs on the "ugly duckling" of all of them: the x86 architecture. The only survivors seem to be SPARC (server-only), PowerPC, and ARM for embedded systems.

    Yes, we know this.  x86 may be ugly but it works.  It won because it was all around the best.  Perhaps not the prettiest, but it was cheap enough, ubiquitous enough and powerful enough to dominate the marketplace.  The market is the only judge of product superiority.  It is the aggregate of millions of individual decisions made by people with all different levels of knowledge, experience and perspective.  These are not decisions made by engineers who value their own arbitrary standard of beauty more than the real costs of a particular technology.  You don't have to like it, but for whatever reasons x86 has satisfied many millions of individual requirements for a CPU.  That is the only real measure of success.

    And that's why Windows is the best operating system.  No further debate is necessary.  Economic success is all that matters.  Those "linux" wankers really ought to close up shop.



  • @merreborn said:

    And that's why Windows is the best operating system.  No further debate is necessary.

    It's certainly the most successful, and it is the best for most people.  How is that not obvious?  It may not be best for everyone, and that's fine, but if we're going to say one particular contender is the best, what criterion is better than satisfying the most people possible?

     

    @merreborn said:

    Those "linux" wankers really ought to close up shop.

    Now you're just putting words in my mouth.  I didn't say people shouldn't compete to try to be the best, that would just be moronic.  I'd be glad if someone overtook Windows.  Not because I hate Microsoft, but because it means that someone did an even better job of satisfying people.  This phenomenon is often known as "progress".  However, Linux is still a long, long way from being able to satisfy as many people as Windows does, so it's kind of a moot point. 



  • @merreborn said:

    Those "linux" wankers really ought to close up shop.
     

    I agree 100%.

    Nice try guys! It is just getting old and annoying now.

