PS3 WTF



  • Surely this is the worst WTF on the planet:

    http://news.cnet.com/8301-13506_3-10173656-17.htm

    Quote:

    "We don't provide the 'easy to program for' console that (developers) want, because 'easy to program for' means that anybody will be able to take advantage of pretty much what the hardware can do, so then the question is, what do you do for the rest of the nine-and-a-half years?" explained Hirai [CEO of Sony Computer Entertainment].

    More bad PR for the 'paperweight'...





  • Surely this is not the worst WTF.  A shovel is easier to use than a Gatorpillar Holemaster 9000 hydraulic digging machine.  But the Gatorpillar can do a lot more if you invest the time to learn to use it.

    That might not be so great for the Gatorpillar customer if he wants to buy a decorative cover for his choice of digging apparatus.  The smaller installed base means that cover designers tend to target the traditional shovel market.  Whether that makes the Gatorpillar a bad digging solution depends on how big a barrier it is to learn and design for the more powerful machine.

    By the way, I'm a console agnostic.  I don't know whether PS3 or XBox is the better gaming system and don't really care.



  • This is more like making the Gatorpillar purposefully hard to use so that digging takes much longer and you spend more on replacement parts. But not exactly like that.

    I'll bet the XBox 360 is also difficult to develop for, but only because Microsoft like to be smartarses ;-p



  • Hirai picked some very bad wording. But what he's saying is correct and no WTF at all. Sony intentionally went with a design that is much harder to develop for. There's no doubt about that. The advantage is that the amount of raw power is much, much greater. The disadvantage is that it's going to take a long time for people to figure out how to get to that power.

    It was simply impractical to develop a system that had as much raw power as the cell has without lots of bizarre quirks at a reasonable cost. So Sony went with bizarre quirks.

    This gave them two things. First, bragging rights. The raw power of the PS3 is unmatched, that's why people use farms of them as supercomputers. Second, over the lifetime of the PS3, it will get better as developers figure out how to use the raw power.

    You can make any engineering decision seem stupid by focusing on the downside and mentioning that it was done intentionally. "Our high-performance cars have low fuel economy intentionally." This is true, but it's not because engineers like low fuel economy. It's because sacrificing fuel economy allows you to get more of the things you would otherwise trade off for fuel economy.



  • The Sony CEO in the article may have tried to spin the situation as "we made it hard so that we'd be special and have more room for improvement".  But I'm pretty sure that the real situation is that the hardware engineers designed a cool new machine and the software engineers didn't produce a cool new platform to match.   So the whole system is harder and/or just different for the application programmers to use.  This doesn't strike me as a WTF because the CEO is just saying what he has to say in the situation.

    As a scientific simulation programmer, a system like the PS3 is more exciting to me than the Xbox.  So I'm probably cutting them more slack for being new, weird, and difficult than a pure gamer would. 


  • :belt_onion:

    @ComputerForumUser said:

    I'll bet the XBox 360 is also difficult to develop for, but only because Microsoft like to be smartarses ;-p
    And I'll bet the XBox 360 is as easy to program for as it is for Windows.

    Or I might have missed the point of the XNA framework.



    Actually the Xbox is a beaut to develop for, purely because it's almost exactly the same as Windows development.  The PS3, however, forces OTT thread usage and effectively has half the memory of the 360, making it extremely difficult to develop for.  The correct answer at interviews, when asked what you think of the PS3, is 'it sucks'.  That way they know you've got experience developing for it.



  • @joelkatz said:

    The advantage is that the amount of raw power is much, much greater. The disadvantage is that it's going to take a long time for people to figure out how to get to that power.
     

     I doubt we'll ever see the PS3 utilized on the same level the PS2 was in the second half of its lifetime.  Damn near everything is multi-platform, built on middleware.  That's great for the developers' bottom line, but bad for gamers who want to see their consoles fully utilized.  I don't see the PS3 claiming enough market share to drive enough exclusive titles for developers to get that familiar with it.



  •  All the power in the world isn't very useful when the mainboard in the thing can't drive said hardware at its optimum.

     The PS3 is a classic Sony case: too much power and flash, not enough actual thought put into it.  A CELL processor in a fuckin gaming console?  Are you joking?  May as well put a 12-cylinder diesel engine in a Mini.



  • The upshot is that I sit with a copy of the PS3 version of Fallout 3, limited to drooling over the DLC that's only available for the XBox and PC versions.



  • @AlpineR said:

    By the way, I'm a console agnostic.  I don't know whether PS3 or XBox is the better gaming system and don't really care.

    Dreamcast FTW!



  • @DaveK said:

    Dreamcast FTW!

    Ever since I saw that "it's thinking" commercial campaign, I've had vague concerns that the Dreamcast might someday evolve into SkyNet... so I'm kind of glad the console failed to take hold in the market.

    I don't see the compelling value in a PS3, myself. I suppose "not Microsoft" has weight with some people, but I'm rather the opposite.



  • @joelkatz said:

    Hirai picked some very bad wording. But what he's saying is correct and no WTF at all. Sony intentionally went with a design that is much harder to develop for. There's no doubt about that. The advantage is that the amount of raw power is much, much greater.
     

    They tell us the amount of raw power is much, much greater, but we (the console game-playing public) sure haven't seen it yet.  Right now, the Xbox 360 and PS3 are right on par with each other, graphics-wise.



  • @AlpineR said:

    As a scientific simulation programmer, a system like the PS3 is more exciting to me than the Xbox.  So I'm probably cutting them more slack for being new, weird, and difficult than a pure gamer would. 

    I tend to agree with this.  One of the appealing factors of the PS3 is that it isn't just another frickin' Intel Inside POS running a 20+ year old crappy architecture.  Also, Sony is the only one of the Big Three that actually lets you put another OS on the thing... the upshot is that the full 256 MB are available to the OS, at the expense of having no 3D graphics.  It might not be bleeding-edge on RAM, but it has served me as a "PC" ever since my desktop PC died a month ago.

    I don't know about game development, but the Cell BE API didn't seem that complicated to me; the most daunting task is multithreading, but that applies to anything multithreaded, it isn't Cell-specific.  In fact, video games are among the applications that benefit most from multithreading; any game developer worth their salt knows that.

    Anyway... before this turns into an XBox 360 vs. PS3 wankfest, I'd like to remind everyone that the Wii mopped the floor with both consoles, despite having the crappiest hardware of the three.  OK, one plus is that none of the three consoles uses Intel stuff, not even the XBox.



  • Well, the Xbox 360 has a trio of PPC cores, so it's not like it's an off-the-shelf PC either. (The original Xbox was though, almost literally.)

    Anyway, the Wii mopped the floor sales-wise, but it's also not really in the same market as the Xbox 360 and PS3 in any case.



  • @blakeyrat said:

    Well, the Xbox 360 has a trio of PPC cores, so it's not like it's an off-the-shelf PC either. (The original Xbox was though, almost literally.)

    Common PCs have had anywhere between 2 and 4 cores for a while now (AMD has chips with 3 cores).  What's the big difference?


  • @tdb said:

    Common PCs have had anywhere between 2 and 4 cores for a while now (AMD has chips with 3 cores).  What's the big difference?

    3 * PPC - ncores * x86



  • @Vempele said:

    @tdb said:
    Common PCs have had anywhere between 2 and 4 cores for a while now (AMD has chips with 3 cores).  What's the big difference?
    3 * PPC - ncores * x86
    The CPU architecture might make a difference if anyone still used assembly today.  Stuff like optimizing algorithms for different cacheline sizes or branch predictors is relatively minor.  With the compiler taking care of producing the machine code and doing a lot of low-level optimizations, the programmers can largely ignore the underlying CPU architecture.  There are bigger differences between Linux and Windows PCs than there are between a Windows PC and Xbox 360.

    It may be that Xbox 360's architecture is devoid of a lot of old cruft (ISA, PS/2, FDC, PIC, ...) that is present in x86 computers.  While that's certainly something I'd welcome in my desktop computers as well, it hardly makes a difference from a software developer point of view.



  • @tdb said:

    @Vempele said:

    @tdb said:
    Common PCs have had anywhere between 2 and 4 cores for a while now (AMD has chips with 3 cores).  What's the big difference?
    3 * PPC - ncores * x86
    The CPU architecture might make a difference if anyone still used assembly today.  Stuff like optimizing algorithms for different cacheline sizes or branch predictors is relatively minor.  With the compiler taking care of producing the machine code and doing a lot of low-level optimizations, the programmers can largely ignore the underlying CPU architecture.  There are bigger differences between Linux and Windows PCs than there are between a Windows PC and Xbox 360.

    It may be that Xbox 360's architecture is devoid of a lot of old cruft (ISA, PS/2, FDC, PIC, ...) that is present in x86 computers.  While that's certainly something I'd welcome in my desktop computers as well, it hardly makes a difference from a software developer point of view.

     

    Well, fine, grumpy-Gus, but that all applies to the Cell CPU too.



  • @blakeyrat said:

    @tdb said:

    @Vempele said:

    @tdb said:
    Common PCs have had anywhere between 2 and 4 cores for a while now (AMD has chips with 3 cores).  What's the big difference?
    3 * PPC - ncores * x86
    The CPU architecture might make a difference if anyone still used assembly today.  Stuff like optimizing algorithms for different cacheline sizes or branch predictors is relatively minor.  With the compiler taking care of producing the machine code and doing a lot of low-level optimizations, the programmers can largely ignore the underlying CPU architecture.  There are bigger differences between Linux and Windows PCs than there are between a Windows PC and Xbox 360.

    It may be that Xbox 360's architecture is devoid of a lot of old cruft (ISA, PS/2, FDC, PIC, ...) that is present in x86 computers.  While that's certainly something I'd welcome in my desktop computers as well, it hardly makes a difference from a software developer point of view.

     

    Well, fine, grumpy-Gus, but that all applies to the Cell CPU too.

    Nah.  The Cell architecture is radically different.  A multi-core PC has symmetric processors and a common memory bus; a Cell chip has asymmetric processors, isolated memory spaces, and a ring-shaped inter-CPU interconnect bus.  The way you design and structure your code on these kinds of machines is radically different to the way you're used to thinking about coding.  All your algorithms need to be incremental and streaming, highly parallelised with unrolled and interleaved loops.  Oh, and you don't do branches; you pretty often calculate something both ways and then mux in the right answer.  You think of everything much more like a signal flow graph.  It's a very real difference and nothing like the "just add a few more identical cores" concept used on desktop platforms.
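
    In case anyone's wondering what "calculate it both ways and mux in the right answer" looks like, here's a minimal sketch in plain C (the names are made up for illustration; on the real SPU, if I remember right, a select instruction does the same trick across a whole 128-bit register):

        #include <stdint.h>

        /* Branchy version: the "normal" way, which stalls a deep in-order
         * pipeline every time the branch goes the wrong way.               */
        static int32_t clamp_branchy(int32_t x, int32_t hi)
        {
            if (x > hi)
                return hi;
            return x;
        }

        /* Branchless version: compute both candidates, turn the comparison
         * into an all-ones/all-zeros mask, and "mux" the right one in.
         * No branch, so the pipeline never has to guess.                   */
        static int32_t clamp_branchless(int32_t x, int32_t hi)
        {
            int32_t mask = -(int32_t)(x > hi);   /* 0xFFFFFFFF if x > hi, else 0 */
            return (hi & mask) | (x & ~mask);    /* select hi or x via the mask  */
        }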

     



  • @DaveK said:

    Nah.  The Cell architecture is radically different.  A multi-core PC has symmetric processors and a common memory bus; a Cell chip has asymmetric processors, isolated memory spaces, and a ring-shaped inter-CPU interconnect bus.  The way you design and structure your code on these kinds of machines is radically different to the way you're used to thinking about coding.  All your algorithms need to be incremental and streaming, highly parallelised with unrolled and interleaved loops.  Oh, and you don't do branches; you pretty often calculate something both ways and then mux in the right answer.  You think of everything much more like a signal flow graph.  It's a very real difference and nothing like the "just add a few more identical cores" concept used on desktop platforms.

     

     

    Jesus Christ, it sounds like trying to design a game on an FPGA.  Or a breadboard.  That's completely retarded.  The computing industry standardized on synchronous hardware and iterative algorithms for a reason.



  • @Aaron said:

    @DaveK said:

    Nah.  The Cell architecture is radically different.  A multi-core PC has symmetric processors and a common memory bus; a Cell chip has asymmetric processors, isolated memory spaces, and a ring-shaped inter-CPU interconnect bus.  The way you design and structure your code on these kinds of machines is radically different to the way you're used to thinking about coding.  All your algorithms need to be incremental and streaming, highly parallelised with unrolled and interleaved loops.  Oh, and you don't do branches; you pretty often calculate something both ways and then mux in the right answer.  You think of everything much more like a signal flow graph.  It's a very real difference and nothing like the "just add a few more identical cores" concept used on desktop platforms.

     

     

    Jesus Christ, it sounds like trying to design a game on an FPGA.  Or a breadboard.  That's completely retarded.

    You're just being a wuss!  Whassamatter, afraid you might learn something new?  You have a compiler and assembler to take care of the tricky scheduling and writebacks for you; you code in standard C or whatever.  But you have to design your algorithms differently.

    @Aaron said:

    The computing industry standardized on synchronous hardware and iterative algorithms for a reason.

    Yes.  Simplicity of conceptual design.  Not performance.  That didn't matter much when nobody could see an end to Moore's Law, but now that it's plateauing we're really going to have to do some new things.  Run standard algorithms on increasing numbers of cores and they max out quite quickly, as the cores all start stomping all over each other for access to the same bus.  The only way we're going to get full performance out of future multiprocessing silicon is to redesign our algorithms to get the most out of the available hardware resources and data-moving bandwidth, which our synchronous iterative algorithms are so wasteful of.

    TL;DR:  The old "for x = 0 to width, for y = 0 to height, do something; wait for vblank" model is hopelessly busted.
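
    To put a sketch behind that, here's roughly what the shift looks like in plain C (the strip size, shade(), and the function names are all made up for illustration).  The first function is the old model; the second carves the frame into strips that get pulled into fast local storage, processed there, and written back in one burst, so the shared bus sees a couple of bulk transfers per strip instead of a trickle of per-pixel traffic.

        #include <stdint.h>
        #include <string.h>

        #define STRIP_BYTES (16 * 1024)   /* made-up strip size, small enough for local store */

        static uint8_t shade(uint8_t p) { return (uint8_t)(255 - p); }   /* toy per-pixel op */

        /* The old model: one big loop over the whole frame in shared memory,
         * touching it pixel by pixel, then wait for vblank.                  */
        void shade_frame_old(uint8_t *frame, int width, int height)
        {
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                    frame[y * width + x] = shade(frame[y * width + x]);
            /* ... wait for vblank ... */
        }

        /* A streaming reformulation: a worker owns one strip at a time, copies
         * it into its own fast local buffer, does all the work there, and
         * writes the result back in one burst.                               */
        void shade_strip(const uint8_t *in, uint8_t *out, int strip_bytes)
        {
            uint8_t local[STRIP_BYTES];              /* stand-in for an SPE's local store          */
            memcpy(local, in, (size_t)strip_bytes);  /* one bulk read (strip_bytes <= STRIP_BYTES) */
            for (int i = 0; i < strip_bytes; i++)
                local[i] = shade(local[i]);
            memcpy(out, local, (size_t)strip_bytes); /* one bulk write */
        }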
     



  • @DaveK said:

    TL;DR:  The old "for x = 0 to width, for y = 0 to height, do something; wait for vblank" model is hopelessly busted.

    The alternative "spawn a bunch of worker threads" model doesn't work too well when your working set is larger than the amount of memory an SPE can access directly.



  • @Carnildo said:

    @DaveK said:
    TL;DR:  The old "for x = 0 to width, for y = 0 to height, do something; wait for vblank" model is hopelessly busted.

    The alternative "spawn a bunch of worker threads" model doesn't work too well when your working set is larger than the amount of memory an SPE can access directly.

    You're still thinking in the old way if you even think about a working set of data that all has to be accessed at once.  Use smaller tiles.  This is a streaming architecture; all you need is enough for double- or maybe triple-buffering a chunk at a time.

    And why would you "spawn threads"?  Streaming algorithms are perfect for low-overhead, zero-context-switching coroutine processing.  Why would you want to be running an OS on the SPEs in the first place, even a lightweight RTOS?

    This is what I mean by it requiring a whole new way of thinking, and you're not there yet if you're still thinking in terms of the old problems.  A lot of the limitations you see are just things you've been trained to perceive as fundamental limits of possibility, when in fact they're mere side-effects of the way our inefficient (but synchronous and iterative!) designs leave 99% of that silicon sitting there doing nothing 99% of the time.  We've been breaking our backs to squeeze an extra GHz out of the clock rate, when we were only ever using a fraction of what it could do anyway.
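
    Just to make the double-buffering point concrete, here's a rough sketch in plain C.  The dma_start_get / dma_start_put / dma_wait calls are hypothetical stand-ins for the real asynchronous MFC DMA primitives (here they're just synchronous memcpy's so the sketch compiles and runs anywhere), and CHUNK is an arbitrary size.  The shape of the loop is the point: while one chunk is being processed, the next is already in flight, with no threads and no OS.

        #include <stdint.h>
        #include <string.h>

        #define CHUNK 4096   /* made-up chunk size that fits comfortably in local store */

        /* Hypothetical stand-ins for the async DMA primitives.  On a real SPE
         * these would queue a transfer on a tag group and wait on that tag;
         * here they're synchronous copies just so the sketch compiles and runs. */
        static void dma_start_get(void *local, const void *remote, int bytes, int tag)
        { (void)tag; memcpy(local, remote, (size_t)bytes); }
        static void dma_start_put(void *remote, const void *local, int bytes, int tag)
        { (void)tag; memcpy(remote, local, (size_t)bytes); }
        static void dma_wait(int tag) { (void)tag; /* would block on the tag group */ }

        static void process(uint8_t *buf, int bytes)   /* toy per-chunk work */
        {
            for (int i = 0; i < bytes; i++)
                buf[i] ^= 0xFF;
        }

        /* Double-buffered streaming: while this core crunches one chunk, the
         * (real) DMA engine is already fetching the next one, so data movement
         * overlaps with compute in a tight coroutine-style pipeline.           */
        void stream_chunks(const uint8_t *src, uint8_t *dst, int nchunks)
        {
            uint8_t buf[2][CHUNK];
            int cur = 0;

            if (nchunks <= 0)
                return;
            dma_start_get(buf[cur], src, CHUNK, cur);             /* prime the pipe */

            for (int i = 0; i < nchunks; i++) {
                int nxt = cur ^ 1;
                if (i + 1 < nchunks) {
                    dma_wait(nxt);                                /* old write-back of that buffer done */
                    dma_start_get(buf[nxt], src + (size_t)(i + 1) * CHUNK, CHUNK, nxt);
                }
                dma_wait(cur);                                    /* current chunk has arrived */
                process(buf[cur], CHUNK);
                dma_start_put(dst + (size_t)i * CHUNK, buf[cur], CHUNK, cur);
                cur = nxt;
            }
            dma_wait(0);                                          /* drain the last write-backs */
            dma_wait(1);
        }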



  • @CDarklock said:

    I don't see the compelling value in a PS3, myself. I suppose "not Microsoft" has weight with some people, but I'm rather the opposite.

     

    Personally, I bought mine for the BD player (having the gaming system is sort of a side effect).  The PS3 remains one of the best BD players to date, and it's one of the cheapest.  Can't go wrong there (assuming you want BD in the first place).



  • @joelkatz said:

    Hirai picked some very bad wording. But what he's saying is correct and no WTF at all. Sony intentionally went with a design that is much harder to develop for. There's no doubt about that. The advantage is that the amount of raw power is much, much greater. The disadvantage is that it's going to take a long time for people to figure out how to get to that power.

    It was simply impractical to develop a system that had as much raw power as the cell has without lots of bizarre quirks at a reasonable cost. So Sony went with bizarre quirks.

    This gave them two things. First, bragging rights. The raw power of the PS3 is unmatched, that's why people use farms of them as supercomputers. Second, over the lifetime of the PS3, it will get better as developers figure out how to use the raw power.

    You can make any engineering decision seem stupid by focusing on the downside and mentioning that it was done intentionally. "Our high-performance cars have low fuel economy intentionally." This is true, but it's not because engineers like low fuel economy. It's because sacrificing fuel economy allows you to get more of the things you would otherwise trade off for fuel economy.

    Well said, my friend.  Well said.



  • @DaveK said:

    @Carnildo said:

    @DaveK said:
    TL;DR:  The old "for x = 0 to width, for y = 0 to height, do something; wait for vblank" model is hopelessly busted.

    The alternative "spawn a bunch of worker threads" model doesn't work too well when your working set is larger than the amount of memory an SPE can access directly.

    You're still thinking in the old way if you even think about a working set of data that all has to be accessed at once.  Use smaller tiles.  This is a streaming architecture; all you need is enough for double- or maybe triple-buffering a chunk at a time.

    And why would you "spawn threads"?  Streaming algorithms are perfect for low-overhead, zero-context-switching coroutine processing.  Why would you want to be running an OS on the SPEs in the first place, even a lightweight RTOS?

    This is what I mean by it requiring a whole new way of thinking, and you're not there yet if you're still thinking in terms of the old problems.  A lot of the limitations you see are just things you've been trained to perceive as fundamental limits of possibility, when in fact they're mere side-effects of the way our inefficient (but synchronous and iterative!) designs leave 99% of that silicon sitting there doing nothing 99% of the time.  We've been breaking our backs to squeeze an extra GHz out of the clock rate, when we were only ever using a fraction of what it could do anyway.

    QFT.

     

    x86 is a dead roadmap.  In the last 12 years, it went from upping GHz to upping cores, both of which are bound to hit physical limits.  It's about time something new came out, and Cell not only achieves that, but also makes it easier to do: C instead of assembly.



  • @danixdefcon5 said:

    QFT.

    x86 is a dead roadmap.  In the last 12 years, it went from upping GHz to upping cores, both of which are bound to hit physical limits.  It's about time something new came out, and Cell not only achieves that, but also makes it easier to do: C instead of assembly.

     

     Yeah, but:

    1) People have been saying that for 15 years, and x86 is still king

    2) You not only have to be better than x86 to win, you have to be better than x86 *and* run x86 apps better than native x86 chips. Ask Intel's Itanium group if you need evidence.



  •  @blakeyrat said:

    2) You not only have to be better than x86 to win, you have to be better than x86 *and* run x86 apps better than native x86 chips. Ask Intel's Itanium group if you need evidence.

    Which is arguably the reason it got used for a games console, which has traditionally had custom architectures.  It's a good consumer niche market in which to gain some ground.  It might also explain why Sony chose to make it "easy" to install Linux, and contributed* to the Linux kernel to get the damn thing to run on a PS3.  It's basically free research.

    I'm not saying that Cell is going to be the future, but if you want to launch a new architecture, then a games console with a free pass to start tinkering with the hardware is a pretty good step in the right direction.

     

    * Might have been IBM who contributed, not sure.

