Why (most) High Level Languages are Slow (article)



  • @RaceProUK said:

    Unless you use Mono. But then, who uses Mono anyway? Y'know, apart from Mono.

    I've seen a lot of people come down with Mono. They note the many advantages it offers, but after about a month, they become disillusioned and are cured of any desire to try it again.



  • @xaade said:

    Well, right off the bat, collision detection is much harder.

    It is, compared to discrete-pixel kinematics, but many of the challenges you go on to describe can exist with 2D dynamics as well.

    @xaade said:

    In 2D, you can have walls and floors, and as long as you don't have gaps the width of a unit (like the player character), then you won't fall. You have to make sure you place the character's bottom pixel one pixel above any surface they are to walk on, and you have to make sure nothing pushes them below the floor. In 2D, grade is determined only by the surface's angle of elevation.

    In 3D, you have a surface, and you need to make sure that there are no cracks anywhere. Then you have to make sure that no downward force can push you through a floor. Then you have to make sure you don't teleport under or partially under a floor. Then you have to make sure one of your bones isn't stuck on the other side of a surface, or you will be stuck jiggling all day. And that's just the surface. Then you have to decide how your character will run on a surface, and implement speed reductions for surface grades. Then you have to calculate the true surface grade based on the angle the player is facing: if a character is angled into a grade that is steeper in one direction than the other, you have to compute the effective grade from the steepness of each edge and the angle between them.

    You don't necessarily need to take angles into consideration - all you need are the collision normals and dot products.

    It's also probably easier to have a single collision volume for player-world collisions with kinematic bones rather than making bone joints into dynamic motors.

    That said, trying to implement a dynamics engine that's comparable to PhysX, Havok or Bullet is going to be a world of hurt that's probably best avoided.
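
    To illustrate the normals-and-dot-products point: a walkability check needs no explicit angle bookkeeping at all, because the dot product of the surface normal with world-up already gives the cosine of the slope. A minimal sketch (the function name is invented for illustration):

```python
import math

def is_walkable(normal, max_slope_deg=45.0):
    """Decide whether a surface is walkable from its collision normal alone.

    `normal` is a unit-length (x, y, z) surface normal. Its dot product
    with the world up vector equals cos(slope angle), so no per-edge
    angle geometry is needed.
    """
    up = (0.0, 1.0, 0.0)
    cos_slope = normal[0] * up[0] + normal[1] * up[1] + normal[2] * up[2]
    return cos_slope >= math.cos(math.radians(max_slope_deg))

# A flat floor (normal straight up) passes; a vertical wall does not.
print(is_walkable((0.0, 1.0, 0.0)))
print(is_walkable((1.0, 0.0, 0.0)))
```

    The same dot product also gives you the speed-reduction factor for grades, if you want one.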



  • Most applications written in high level languages are slow (in the sense of using far, far more than the feasible minimum CPU time required to get any given job done) because most developers (a) have better computers than the end users do (b) have drunk the "optimization? pffft" kool-aid and (c) are living exemplars of Sturgeon's Law.

    This is actually starting to shift a little with the rise of battery-operated devices, because although application developers can still rely on profiling and highly selective optimization to get rid of perceptible performance lag, gross CPU-use inefficiency now shows up as reduced battery runtime.



  • You have heard of Unity 3D, right? Guess what it's built on.

    Even if they're trying to move off it because of issues now.



  • @tar said:

    I don't even want to think about the complexity involved in creating a game in 4D. How do you make a simulation of something which has more dimensions than real life!?!

    Easy. You tap on the walls in morse code.



  • @blakeyrat said:

    A 3D game is a 2D window into a 3D world. So a 4D game would be a 3D window into a 4D world.

    I like it, but it's about as useful as saying:

    We see a 3d world with 2 eyes, so to see a 4d world, we just need 3 eyes.

    Which is actually true now that I think about it, it just has to be a certain distance away from the other 2 eyes in the 4th dimension.



  • Or d) they want to get shit done instead of spending months carefully tweaking their routines to get a negligible difference in performance.

    Programming is about tradeoffs. And most of the time, a new feature has more weight than an optimization tweak.



  • That's essentially (b).



  • It's a surprisingly non-lethal and refreshing Kool-Aid, then.



  • @Maciejasjmj said:

    carefully tweaking their routines to get a negligible difference in performance

    The trouble is that it has now become standard coding dogma that optimization and "tweaking the routines" are the same thing, and that performance is to be assumed unimportant until explicitly measured as otherwise.

    In the hands of a member of the 90%, this dogma results in consistently poor choices of algorithms and data structures. Applying multiple rounds of profiling and hotspot tweaking to code whose architecture is pessimal often results in code that not only still performs badly but is bloody difficult to maintain as well.

    "The simplest thing that can possibly work" overlaps only very occasionally with "the first approach the cheap offshore developer thought of".



  • @blakeyrat said:

    2D window into a 3D world. So a 4D game would be a 3D window into a 4D world

    I can't get the whole quote after about 30 attempts (WP8), so that will have to do.

    I'm surprised nobody's posted this yet. It seems too confusing to use in an actual game though.



  • @Groaner said:

    Ultimecia

    Ultimecia only appeared towards the end of the game, so wasn't really much of a concern for most of it. Sephiroth is one step ahead of you throughout and a constant threat. Ultimecia did have some awesome music though.

    I think her story would have been much more interesting if Square hadn't flat out refuted this theory. They could at least have left it open for interpretation.



  • @Groaner said:

    RakNet has functionality to compress quaternions and other things into those types. I have little desire to attempt to use them until they become an actual need.

    Makes sense. 😄

    That is, the desire not to use them unless absolutely necessary.

    No fixed point support? That'd at least let you handle textures up to 65536x65536.

    Ah, that was in a GLSL (ES) shader. There's highp, which deals with the problem. Unfortunately doing the associated computations in highp was also measurably slower on the test device.

    FWIW, I didn't try doing the calculations in fixed point by hand, but considering that it would likely have required (highp) ints in the shader, I'm fairly certain it wouldn't have improved performance. (I'm quite certain it wouldn't have improved my sanity.)
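
    For anyone curious what hand-rolled fixed point looks like, here's a generic 16.16 sketch (not RakNet's or GLSL's actual API; names invented). Sixteen integer bits cover 0..65535, which is what lets fixed point address texels in a 65536x65536 texture without floats:

```python
# 16.16 fixed point: value = raw / 2**16.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x):
    """Convert a float to 16.16 fixed point."""
    return int(round(x * ONE))

def fixed_mul(a, b):
    """Multiply two 16.16 values; shift right to drop the extra fraction bits."""
    return (a * b) >> FRAC_BITS

u = to_fixed(123.75)
print(fixed_mul(u, to_fixed(2.0)) / ONE)  # 247.5
```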



  • @flabdablet said:

    In the hands of a member of the 90%, this dogma results in consistently poor choices of algorithms and data structures.

    Well of course it's better if we use the optimal data structure. The point is, it's often not much better.

    If you have a list of 5 items and want it in alphabetical order, it doesn't matter much whether you use quicksort, bubble sort, the sort that comes with your standard library, or a homegrown one-liner in O(n³). The end user won't notice, and the only variable really worth optimizing is development time.

    @flabdablet said:

    "The simplest thing that can possibly work" overlaps only very occasionally with "the first approach the cheap offshore developer thought of".

    Well, fair enough. But writing maintainable and sensible code is a different problem than optimization.
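
    To put a number on "the end user won't notice": even a deliberately naive quadratic sort is instantaneous at n = 5. A throwaway sketch:

```python
def bubble_sort(items):
    # Deliberately naive O(n^2) sort -- perfectly fine for tiny lists.
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

names = ["pear", "apple", "mango", "fig", "kiwi"]
# Agrees with the library sort; the difference at this size is nanoseconds.
print(bubble_sort(names) == sorted(names))
```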



  • @xaade said:

    Skyrim

    The NPC that was following me, swam into a lake and was killed by a fish.

    One of my favourite moments in gaming was playing Oblivion for the first time with my brother watching, robbing the Imperial City, swimming away through the moat-lake thing around it, laughing about how I got away scot free... then turning around and seeing a guard (in full plate armour no less) swimming after me with rage-face.

    Was laughing for a solid 5 minutes; the way those games' systems produce random moments like that is utterly fantastic.



  • @Magus said:

    Okay, cool, thanks for clearing that up. I now know that you will never understand the flaw in what you're saying.

    There is no flaw. Only fact, that you are too emotional and irrational to accept. Oh well.



  • @tar said:

    I don't even want to think about the complexity involved in creating a game in 4D. How do you make a simulation of something which has more dimensions than real life!?!

    Wasn't that Fable's bs promises? You could plant seeds and trees would grow or whatever, cos it was the only game that included the dimension of time? Lol, Molyneux was funny.



  • @Maciejasjmj said:

    writing maintainable and sensible code is a different problem than optimization.

    Indeed. Though in my personal experience (most of which has been with not-particularly-high-level languages, it must be said) it's been quite pleasing to find how often a bit of time spent on thinking about optimization during the design stage has resulted in code that is sensible and requires very little maintenance after the fact.

    This should not actually be a huge surprise. The underlying machine architecture is what it is, and keeping half an eye on it during design can be quite a useful way to keep any tendency toward abstraction astronautics firmly grounded.

    Writing code that's clear and straightforward and easy for the next coder to read is a laudable aim. But again, that's now become dogma to such an extent that many coders behave as if achieving some moderate level of source code clarity is the coder's only legitimate aim; implementation issues be damned! Optimization eeeeeeevil! ...as if optimization is necessarily something that must be done after first release to QA.

    Premature optimization is bad when it detracts from code clarity. If it improves code clarity, which it often can do by reducing the stack of abstractions that a coder must juggle in order to understand what the code in front of them is actually supposed to do, or by finding concise ways to map this layer's required behavior onto the abstractions provided by layers below, then it's a good thing - even if the time or RAM or battery savings to the end user are negligible.

    Development time optimization is a tricky thing, and quite often very badly served by attempting to minimize the amount of time any one developer spends on writing code. If I take twice as long to knock out a module as the guy in the next cubicle, but the extra time means that my code is leaner and clearer and needs ten times fewer revisits than his, I've done well.



  • All that still has nothing at all to do with optimization. Optimizing might or might not result in cleaner, more readable code. Cleaning up the code certainly will, if you're doing it right.

    So why optimize, when your goal is to obtain cleaner and more maintainable code? Why not just clean up the code and focus on clarity, and if that results in a more performant solution, well, good for you?



  • @tar said:

    I don't even want to think about the complexity involved in creating a game in 4D. How do you make a simulation of something which has more dimensions than real life!?!

    Real life has more than three dimensions. Orthogonal to length, width, breadth and each other, you have at least time, mass and charge. Mapping the last two onto spatial dimensions would probably create quite an unintuitive UI, but you could certainly have fun with time.

    The canonical example of a 4D surprise is the ability to move between the inside and outside of a building with no openings in 3D, by "going around the side" in 4D. A game that allowed movement in time would let you do this in a way that makes intuitive sense: you move pastward until before the building was built, then move to a location that will be inside it, then move futureward again; or go futureward until after the building was destroyed, move to where its inside used to be, then go pastward. The visual time travel effect would look like a movie running backwards or forwards at insane speed, easily doable with any of today's 3D game engines.
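
    The "go around the side in time" trick falls out naturally if the world is stored as 4D bounding volumes with time as the fourth axis. A toy sketch (all names invented):

```python
def inside_4d(point, box):
    """Point-in-box test where the fourth axis is time.

    `box` = (mins, maxs), each a 4-tuple (x, y, z, t); the t-range is the
    interval during which the building exists. Moving pastward to before
    t_min puts you 'outside' even at the same spatial coordinates.
    """
    mins, maxs = box
    return all(lo <= p <= hi for p, lo, hi in zip(point, mins, maxs))

# A building standing from 1900 to 1990, occupying a 10x10x10 volume.
building = ((0, 0, 0, 1900), (10, 10, 10, 1990))
print(inside_4d((5, 5, 5, 1950), building))  # inside while it stands
print(inside_4d((5, 5, 5, 1895), building))  # same spot, before construction
```

    The pastward/futureward moves described above are then just ordinary movement along the fourth coordinate.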



  • @Maciejasjmj said:

    Why not just clean up the code and focus on clarity, and if that results in more performant solution, well good for you?

    Because it's been my experience that searching for more performant solutions frequently results in clean code and clarity from the get-go, rather than requiring any after-the-fact cleanup at all.

    If your general approach involves understanding the layers underneath you well enough to avoid actively fighting with them, the resulting with-the-grain code is more often clean and performant than either alone.

    Many people seem to believe, and to code as if, optimization is strictly orthogonal to clarity. In my experience, they're rather more closely related than that.



  • @flabdablet said:

    A game that allowed movement in time would let you do this in a way that makes intuitive sense

    Anyone here played the old DOS game "Day of the Tentacle"? You spend most of the game controlling three characters, one in present time, one 200 years in the past, and one 200 years in the future. Lots of this stuff going on - for instance, the one in the future starts off stuck in a kumquat tree, and you have to get her down by getting the one in the past to convince George Washington to chop it down. [spoiler] Paint the kumquats red so they look like cherries, and then tell him you don't believe the story about the cherry tree so he goes to prove it to you.[/spoiler]

    The game does have a bit of an inbuilt cheat in that you can send small objects from one time to another. Makes the gameplay much more interesting though, except for the character in the future who's really tedious to manoeuvre until you get her a tentacle suit. (Humans aren't allowed out without their keepers.)


  • FoxDev

    @Scarlet_Manuka said:

    tentacle suit

    Is this game Japanese by any chance?


  • Discourse touched me in a no-no place

    @flabdablet said:

    Though in my personal experience (most of which has been with not-particularly-high-level languages, it must be said) it's been quite pleasing to find how often a bit of time spent on thinking about optimization during the design stage has resulted in code that is sensible and requires very little maintenance after the fact.

    Choosing sensible data structures and algorithms is not optimization. Optimization is where you've got a number of similar solutions and you need to pick between them (e.g., whether to use a tree or a hash table to implement a mapping) or where you've got to chisel at the code a bit to make that inner loop fit in the L1 cache on your key target system. That latter point is certainly something that shouldn't be done until you're getting close to shipping product; it's way too sensitive to everything else that's happening.

    When you're coding in a high-level language, they've usually already chosen pretty good low-level algorithms for you. You still have to pick good high-level algorithms (as always) and there's always the worry that something will bite at the base level; that's where you usually hold your nose and hope that the compiler/JITter will come good.
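
    The tree-versus-hash point can be made concrete: both structures expose the same mapping interface, so swapping one for the other is a pure optimization decision that doesn't ripple into calling code. A minimal sketch (class name invented; a real tree map would balance, this uses a sorted list for brevity):

```python
import bisect

class SortedListMap:
    """Ordered mapping: O(log n) lookup, keys iterate in sorted order."""

    def __init__(self):
        self._keys, self._vals = [], []

    def __setitem__(self, key, val):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            self._vals[i] = val          # update existing key
        else:
            self._keys.insert(i, key)    # insert, keeping keys sorted
            self._vals.insert(i, val)

    def __getitem__(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._vals[i]
        raise KeyError(key)

# Same interface as dict, different tradeoffs: dict gives O(1) average
# lookup, SortedListMap gives ordered iteration for free.
m = SortedListMap()
m["b"] = 2
m["a"] = 1
print(m["a"], m["b"])  # 1 2
```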



  • @flabdablet said:

    The canonical example of a 4D surprise is the ability to move between the inside and outside of a building with no openings in 3D, by "going around the side" in 4D. A game that allowed movement in time would let you do this in a way that makes intuitive sense: you move pastward until before the building was built, then move to a location that will be inside it, then move futureward again; or go futureward until after the building was destroyed, move to where its inside used to be, then go pastward. The visual time travel effect would look like a movie running backwards or forwards at insane speed, easily doable with any of today's 3D game engines.

    That sounds extremely fun. Go make that game, please :)



  • @RaceProUK said:

    Is this game Japanese by any chance?

    Heh. No, LucasArts. In fact the tentacle suit is a repurposed American flag.


  • FoxDev

    @Scarlet_Manuka said:

    Heh. No, LucasArts. In fact the tentacle suit is a repurposed American flag.

    Really? Who's barmy enou-
    *sees it's the sequel to Maniac Mansion*
    Ah, that explains it 😄



  • It's a fun game. Especially when you microwave the hamster. (In the future; the character doing it makes a comment along the lines of how you couldn't do that with the unsafe microwaves of our time.)


  • Discourse touched me in a no-no place

    It's one of the barmiest of that genre of games ever. Awesome.



  • @dkf said:

    Choosing sensible data structures and algorithms is not optimization.

    True. My point is that keeping performance in mind as one of your goals, as opposed to believing that performance is something you can always get when you need it by optimizing after the fact, can often be a help when it comes to deciding which of the data structures, algorithms or architectures you know about are sensible.

    Even after moving out of active development into netadmin, I've found the same kind of thing applies. If I find myself half an hour into some messy scripting or software config task where all my options are just looking terrible, my old "search for performance" instinct will kick in and remind me to step back from what I'm doing because it cannot possibly be supposed to work this badly which clearly means I need to go and learn something new to make more options available.

    It often strikes me that Blakey is simply missing that instinct. He'd rather wrestle with years-out-of-date instructions found by some random method, subdue whatever it is by main force, then spend hours ranting about what a terrible time he had and how sucky all these tools are than to step back and learn something in order to give himself better options.

    I'm not saying that optimization is identical with that process. I'm saying that keeping performance in mind - your performance, your team's performance, and the resulting software's performance - is nowhere near as harmful as the old saw about premature optimization being the root of all evil would have you believe.


  • ♿ (Parody)

    @flabdablet said:

    In the hands of a member of the 90%, this dogma results in consistently poor choices of algorithms and data structures.

    This is true. The problem with that 90%, IME, is that this is better than the alternative of seeing what they come up with to solve their imagined performance problems. Since they can't figure out actual problems, their imaginations just speed up the WTF generation process, not their code.


  • ♿ (Parody)

    @flabdablet said:

    If it improves code clarity, which it often can do by reducing the stack of abstractions that a coder must juggle in order to understand what the code in front of them is actually supposed to do, or by finding concise ways to map this layer's required behavior onto the abstractions provided by layers below, then it's a good thing - even if the time or RAM or battery savings to the end user are negligible.

    +1

    The best ones are the ones that don't look like optimizations after the fact but inspire a comment like, "Duh, how else would you do this?"


  • ♿ (Parody)

    You say:
    @dkf said:

    Choosing sensible data structures and algorithms is not optimization

    But then:

    @dkf said:

    Optimization is where you've got a number of similar solutions and you need to pick between them (e.g., whether to use a tree or a hash table to implement a mapping)

    So I'm not sure which one of you to believe, but I'm leaning toward the second one. IOW, I reject your pedantic dickweedery because it doesn't make sense.



  • @Arantor said:

    Even if they're trying to move off it because of issues now.

    Real issues? Or delusional "not 1337 enough" issues?

    I'm wagering the latter.

    @flabdablet said:

    The trouble is that it has now become standard coding dogma that optimization and "tweaking the routines" are the same thing, and that performance is to be assumed unimportant until explicitly measured as otherwise.

    That is correct.

    @flabdablet said:

    In the hands of a member of the 90%, this dogma results in consistently poor choices of algorithms and data structures. Applying multiple rounds of profiling and hotspot tweaking to code whose architecture is pessimal often results in code that not only still performs badly but is bloody difficult to maintain as well.

    90% of all programmers are shitty at their jobs; saying 90% of programmers will get it wrong is not refuting the first paragraph, it's just pointing out the obvious: this industry is full of morons. (Hell, apparently 100% of the programmers working at Rockstar aren't aware of Unicode or how Windows paths work. That's the level of skill we're dealing with in games.)



  • @tar said:

    I don't even want to think about the complexity involved in creating a game in 4D. How do you make a simulation of something which has more dimensions than real life!?!

    Mathematically, modelling and simulating a 4D world really isn't any more complicated than modelling a 3D or 2D or even 5D world. Rendering it in a meaningful way to the user is the complicated part.

    (The MUD I've been programming on the side technically uses a 4D world, however there is no cardinal movement axis along that fourth dimension so the user doesn't realize it. It would be simple to add but heads would explode.)
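
    That dimension-agnostic point is easy to see in code: written generically, the same vector math handles 2D, 3D, 4D or 5D without change. A tiny sketch (helper names invented):

```python
def add(a, b):
    """Vector addition over tuples of any dimension."""
    return tuple(x + y for x, y in zip(a, b))

def dot(a, b):
    """Dot product over tuples of any dimension."""
    return sum(x * y for x, y in zip(a, b))

# A 4D position and velocity work exactly like their 3D counterparts.
p4 = (1.0, 2.0, 3.0, 4.0)
v4 = (0.5, 0.0, -1.0, 2.0)
print(add(p4, v4))
print(dot(p4, v4))
```

    Rendering is where the trouble starts, since at some point all of this has to be projected down to a 2D screen.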



  • @Maciejasjmj said:

    Or d) they want to get shit done instead of spending months carefully tweaking their routines to get a negligible difference in performance.

    Programming is about tradeoffs. And most of the time, a new feature has more weight than an optimization tweak.

    There are optimizations (the micro-optimizations you're referring to, where you might rewrite a hot loop to gain a 0.1% performance increase), and there are optimizations -- such as migrating a data cleanup into a stored procedure to get an order of magnitude performance boost.

    Thing is, you can't just look at a piece of code and tell if it's hot or cold, and how much a rewrite will save -- caching will ruin any attempt to do that, but apparently people don't grok that.

    Filed under: MEASURE FIRST, THEN OPTIMIZE

    @flabdablet said:

    The trouble is that it has now become standard coding dogma that optimization and "tweaking the routines" are the same thing, and that performance is to be assumed unimportant until explicitly measured as otherwise.

    In the hands of a member of the 90%, this dogma results in consistently poor choices of algorithms and data structures. Applying multiple rounds of profiling and hotspot tweaking to code whose architecture is pessimal often results in code that not only still performs badly but is bloody difficult to maintain as well.

    "The simplest thing that can possibly work" overlaps only very occasionally with "the first approach the cheap offshore developer thought of".


    Exactly! No amount of "tweaking" is going to get you out of quadratic (or worse!) behavior, or out of code that sits there thrashing the cache!

    @flabdablet said:

    Premature optimization is bad when it detracts from code clarity. If it improves code clarity, which it often can do by reducing the stack of abstractions that a coder must juggle in order to understand what the code in front of them is actually supposed to do, or by finding concise ways to map this layer's required behavior onto the abstractions provided by layers below, then it's a good thing - even if the time or RAM or battery savings to the end user are negligible.

    Development time optimization is a tricky thing, and quite often very badly served by attempting to minimize the amount of time any one developer spends on writing code. If I take twice as long to knock out a module as the guy in the next cubicle, but the extra time means that my code is leaner and clearer and results in ten times less revisits than his, I've done well.


    Exactly! People don't stop and think about what concepts they're using, and how they map to the machine...

    @flabdablet said:

    True. My point is that keeping performance in mind as one of your goals, as opposed to believing that performance is something you can always get when you need it by optimizing after the fact, can often be a help when it comes to deciding which of the data structures, algorithms or architectures you know about are sensible.

    QFT -- too many "programmers" have no clue about Big-O and are more than happy to write an O(n^3) square wheel when some research would have dug up a data structure or algorithm invented 20, 30, even 40 years ago that is more performant than throwing loops at a problem!

    @flabdablet said:

    It often strikes me that Blakey is simply missing that instinct. He'd rather wrestle with years-out-of-date instructions found by some random method, subdue whatever it is by main force, then spend hours ranting about what a terrible time he had and how sucky all these tools are than to step back and learn something in order to give himself better options.

    No kidding! He needs to learn that there are some problems that brute force and ranting won't solve. I wonder if he'd try to pick up a drawbar by himself if someone dropped it in front of him, and then rant about throwing his back out?
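
    A concrete (invented) illustration of the square-wheel point: intersecting three lists with a triple loop versus the decades-old hashing approach.

```python
def common_cubed(a, b, c):
    # The "square wheel": O(n^3) triple loop, plus an O(n) membership
    # test on the output list for good measure.
    out = []
    for x in a:
        for y in b:
            for z in c:
                if x == y == z and x not in out:
                    out.append(x)
    return out

def common_sets(a, b, c):
    # Hash sets -- a data structure far older than most of us -- do the
    # same job in roughly O(n).
    return sorted(set(a) & set(b) & set(c))

a, b, c = [1, 2, 3, 4], [2, 3, 5], [3, 2, 9]
print(common_cubed(a, b, c))
print(common_sets(a, b, c))
```

    At n = 4 nobody cares; at n = 10,000 the triple loop is a trillion comparisons and the set version is still instant.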



  • @tarunik said:

    QFT -- too many "programmers" have no clue about Big-O and are more than happy to write an O(n^3) square wheel when some research would have dug up a data structure or algorithm invented 20, 30, even 40 years ago that is more performant than throwing loops at a problem!

    One of the most fascinating (and slightly disappointing) things I learned in college was that all the fancy algorithms were basically already written before I was born 😛


  • ♿ (Parody)

    @KillaCoder said:

    One of the most fascinating (and slightly disappointing) things I learned in college was that all the fancy algorithms were basically already written before I was born

    You probably thought your generation invented kinky sex, too. We like to think of the past as a dark and stupid place.



  • Not quite, I don't think there are any hardware upgrades between our generations....


  • ♿ (Parody)

    I'm just saying...it's something that every generation tends to believe about their ancestors.



  • Oh sure, I just mean it's a bit more understandable (I hope!) considering how fast hardware improves, that I assumed software was equally fast moving. Didn't help when professors told me books from 5 years ago were "obsolete" 😮



  • I've played Child of Light and it runs perfectly well in my regular old GPU (R9 200) that can't play an old game like Far Cry 3 (another Ubisoft joint) with anti-aliasing on without dropping frames below 60. I'm pretty sure I couldn't play GTA V on highest. And Child of Light is almost a current game (2014 release I believe?). I really don't see it.


  • ♿ (Parody)

    @KillaCoder said:

    Didn't help when professors told me books from 5 years ago were "obsolete"

    Depends on the book. How old is the Kama Sutra these days?

    Oh, wait, you were talking about computer hardware?



  • You mean that one LucasArts game that is one of the best games ever made?

    The game that someone asked me 7+ years ago to do (what we now call) a Let's Play of, but I refused because I didn't think people would actually watch that sort of thing?

    Nope, never heard of it.

    (Yes, I may be slightly bitter about this)



  • @dstopia said:

    an old game like Far Cry 3

    A game that came out in late 2012 is "old" now?

    Makes me wonder what you think of games from before 2010... or before 2000... or before 1990.



  • It is older than Child of Light, for sure. And it is pretty much behind the generational gap from the games coming off the new consoles. So yeah, it is old -- games like GTA V for PC are pretty much a step forward by a very big margin considering the gap between console generations.


  • Discourse touched me in a no-no place

    @powerlord said:

    The game that someone 7+ years ago asked me to do (what we now call) a Lets Play of, but I refused because I didn't think people would actually watch that sort of thing?

    It's never too late. Maybe nobody will watch the video, but you'll still be playing a great game.



  • @tarunik said:

    too many "programmers" have no clue about Big-O and are more than happy to write an O(n^3) square wheel when some research would have dug up a data structure or algorithm invented 20, 30, even 40 years ago that is more performant than throwing loops at a problem!

    Illustrative case

    Related phenomenon: building your designs by thinking only about the interfaces of the abstractions from the next layer down, and not really caring much about their performance, can result in code that has Schlemiel the Painter baked in at so many levels that nothing short of a complete re-architecting is going to be enough to get rid of it.

    Internal inner-platform effects are also depressingly common, especially in web programming. As my friend Don often says: those who fail to understand networking protocols are doomed to re-implement them - poorly - over port 80.
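
    Schlemiel the Painter in miniature (hypothetical helpers): building a string by repeated concatenation redoes all the previous work on every step, which in languages with immutable strings is O(n²) overall, while a single join walks the input once.

```python
def schlemiel_join(parts):
    # Each concatenation copies everything built so far: quadratic total
    # work, like Schlemiel carrying his paint can back to the start.
    s = ""
    for p in parts:
        s = s + p
    return s

def linear_join(parts):
    # str.join measures once, allocates once, copies once: O(n).
    return "".join(parts)

parts = ["paint "] * 1000
assert schlemiel_join(parts) == linear_join(parts)
```

    Bake that pattern into three or four abstraction layers at once and no profiler-guided tweak at the top layer will save you.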



  • @tarunik said:

    He needs to learn that there are some problems that brute force and ranting won't solve

    I've long pictured him as a kind of Jeremy Clarkson figure sans wit or charm.



  • @KillaCoder said:

    considering how fast hardware improves, that I assumed software was equally fast moving.

    It is! Just in the opposite direction.

