Optifine modder rips the Minecraft devs for the code in the newest version.



  • Backstory: Early versions of Minecraft would run beautifully even on low-tier hardware, but after years of "improvements" and feature creep it has become more and more bloated, to the point that people with old, cheap hardware can struggle to get double-digit fps on default graphics settings. Now, in the newest version (1.8), even people with high-end machines are complaining about lag spikes.

    'Optifine' is a mod that was created to help by tweaking and simplifying the game's graphics code to make it run better. The result is a slightly less 'pretty', but much more usable game.

    Since there's no modding API, modding Minecraft involves someone actually unpacking and deobfuscating the minecraft .jar for each new version or subversion to create a source file that can be rebuilt later. Mojang encourages and even assists this process, but won't make the game open source.

    As the one with possibly the deepest knowledge of Minecraft's source outside of Mojang, the creator of optifine took to minecraftforum.net to explain the sudden performance trouble:

    http://www.minecraftforum.net/forums/mapping-and-modding/minecraft-mods/1272953-optifine-hd-a4-fps-boost-hd-textures-aa-af-and#c43757

    Minecraft 1.8 has so many performance problems that I just don't know where to start.

    Maybe the biggest and the ugliest problem is the memory allocation. Currently the game allocates (and throws away immediately) 50 MB/sec when standing still and up to 200 MB/sec when moving. That is just crazy.

    What happens when the game allocates 200 MB of memory every second and discards it immediately?

    1. With a default memory limit of 1GB (1000 MB) and working memory of about 200 MB, Java has to run a full garbage collection every 4 seconds or it would run out of memory. When running at 60 fps, one frame takes about 16 ms. To go unnoticed, the garbage collection should run in 10-15 ms at most. In that time it has to decide which of the several hundred thousand newly created objects are garbage that can be discarded and which are not. This is a huge amount of work and it needs a very powerful CPU in order to finish in 10 ms.
    2. Why not give it more memory?
      Let's give Minecraft 4 GB of RAM to play with. This would need a PC with at least 8 GB RAM (as the real memory usage is almost double the memory visible in Java). If the VM decides to use all the memory, then it will increase the time between the garbage collections (20 sec instead of 4), but it will also increase the garbage collection time by 4, so every 20 seconds there will be one massive lag spike.
    3. Why not use incremental garbage collection?
      The latest version of the launcher enables incremental garbage collection by default (-XX:+CMSIncrementalMode), which in theory should replace the one big GC with many shorter incremental GCs. The problem is that the timing and duration of the smaller GCs are mostly random, and they are not much shorter (maybe 50%) than a full-scale GC. That means the FPS starts to fluctuate up and down and there are a lot of random lag spikes. Stable FPS with an occasional lag spike is replaced with unstable FPS and microstutter (or not so micro, depending on the CPU). This strategy can only work with a powerful enough CPU, so that the random lag spikes become small enough not to be noticeable.
    4. How did that work in previous releases?
      The previous Minecraft releases were much less memory hungry. The original Notch code (pre 1.3) was allocating about 10-20 MB/sec, which was much easier to control and optimize. The rendering itself needed only 1-2 MB/sec and was designed to minimize memory waste (reusing buffers, etc.). 200 MB/sec pushes the limits and forces the garbage collector to do a lot of work, which takes time. If it were possible to control how and when the GC works, then maybe it would be possible to distribute the GC pauses so that they are not noticeable, or at least less disturbing. However, there is no such control in the current Java VM.
    5. Why is 1.8 allocating so much memory?
      This is the best part - over 90% of the memory allocation is not needed at all. Most of the memory is probably allocated to make the life of the developers easier.
    • There are huge amounts of objects which are allocated and discarded milliseconds later.
    • All internal methods which used parameters (x, y, z) have been converted to a single parameter (BlockPos), which is immutable. So if you need to check another position around the current one, you have to allocate a new BlockPos or invent some object cache which will probably be slower. This alone is a huge memory waste. (A rough sketch of the contrast follows this list.)
    • The chunk loading is allocating a lot of memory just to pass vertex data around. The excuse is probably "multithreading", however this is not necessary at all (see the last OptiFine for 1.7).
    • The list goes on and on ...
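
    To make the BlockPos point concrete, here is a minimal sketch of the two styles (illustrative only, not Mojang's actual code; `MutablePos`, `NeighbourScan` and `isSolid` are made-up names):

    ```java
    // Immutable position type: every neighbour lookup allocates a new object.
    final class BlockPos {
        final int x, y, z;
        BlockPos(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
        BlockPos up()    { return new BlockPos(x, y + 1, z); } // new garbage per call
        BlockPos north() { return new BlockPos(x, y, z - 1); } // new garbage per call
    }

    // The older (x, y, z) style, or a reusable mutable cursor, produces none.
    final class MutablePos {
        int x, y, z;
        MutablePos set(int x, int y, int z) { this.x = x; this.y = y; this.z = z; return this; }
    }

    class NeighbourScan {
        private final MutablePos scratch = new MutablePos(); // reused across calls

        boolean hasSolidNeighbourAbove(int x, int y, int z) {
            return isSolid(scratch.set(x, y + 1, z)); // zero allocations on this path
        }

        boolean isSolid(MutablePos p) { return false; } // stand-in for a world lookup
    }
    ```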

    The general trend is that the developers do not care that much about memory allocation and use "best industry practices" without understanding the consequences. The standard reasoning being "immutables are good", "allocating new memory is faster than caching", "the garbage collector is so good these days" and so on.

    Allocating new memory is really faster than caching (Java is even faster than C++ when it comes to dynamic memory), but getting rid of the allocated memory is not faster and it is not predictable at all. Minecraft is a "real-time" application and to get a stable framerate it needs either minimal runtime memory allocation (pre 1.3) or controllable garbage collecting, which is just not possible with the current Java VM.

    6. What can be done to fix it?
      If there are 2 or 3 places which are wasting memory (bugs), then OptiFine can fix them individually. Otherwise a bigger refactoring of the Minecraft internals will be needed, which is a huge task and not possible for OptiFine.
    7. Example
      A sample log of GC activity with effective FPS for the GC lag spikes is available here.
    • the average rendering FPS is about 50 FPS
    • the GC lag spikes have effective FPS of 7-20
    • there are 1-2 lag spikes per second caused by GC activity
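
    As a rough check on those numbers: a GC pause of ~50 ms stretches a 16 ms frame to ~66 ms, i.e. about 15 "effective" FPS for that frame, which is in the quoted 7-20 range. Figures like the ones in that log can be sampled from inside a game loop with the standard JMX API; this is a generic sketch, not OptiFine's actual tooling:

    ```java
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Prints how many collections ran and how long they paused us since the last call.
    public class GcMeter {
        private long lastCount, lastTimeMs;

        /** Call roughly once per second from the game loop. */
        public void sample() {
            long count = 0, timeMs = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                count += gc.getCollectionCount();
                timeMs += gc.getCollectionTime();
            }
            System.out.printf("GCs: %d, total pause: %d ms%n",
                    count - lastCount, timeMs - lastTimeMs);
            lastCount = count;
            lastTimeMs = timeMs;
        }
    }
    ```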

    tl;dr: When 1.8 is lagging and stuttering, the garbage collector is working like crazy, doing work that has nothing to do with the game itself (rendering, running the internal server, loading chunks, etc.). Instead it is constantly cleaning up the mess left behind by code that treats memory allocation as "cheap".

    Mojang devs took to Twitter and Reddit to defend themselves, make snarky comments, and blame the community

    http://www.reddit.com/r/Minecraft/comments/2js5j3/the_creator_of_optifine_sp614x_explains_the_18/cleq0mp

    Games can be either fast or extensible, pick one. I'll let you knife-fight it out with the ones demanding a plugin API.

    http://www.reddit.com/r/Minecraft/comments/2js5j3/the_creator_of_optifine_sp614x_explains_the_18/cleov3c

    I could have said that lots of short-term allocations were a bad thing. Nobody asked me, and I don't control mass changes to the engine like that.

    This one stands out to me, though: "The chunk loading is allocating a lot of memory just to pass vertex data around. The excuse is probably 'multithreading', however this is not necessary at all (see the last OptiFine for 1.7)."

    Since sp614x is so much better a coder than me (according to Twitter), perhaps he can enlighten me as to what this "memory just to pass vertex data around" is that he's referring to, because I don't see it. Is there memory allocated for each block's model, so that we can bulk-transfer the data for individual faces into an IntBuffer in order to construct the final 16x16x16 renderable chunk? Sure. That's simply necessary, what are we supposed to do, recalculate the model data every time we render a block? If he's not referring to that, then what? The fact that there are 5 10-meg groups of BufferBuilder instances so that each thread can peel off a group as necessary and put data into the builder's IntBuffers before the final upload that happens on the main thread? Typically the chunk rebuild performance ends up bottlenecking at the final upload, so we have more builder groups than threads so that there can be multiple threads' worth of outstanding uploads so that the builder threads don't sit idle most of the time. And don't say "just use a thread-safe GL context," that is a gross LWJGL hack that doesn't work on as many hardware setups as it does work. I'd be really curious to hear how he would propose that we construct the buffers prior to uploading them to GL without having buffers in CPU-side local memory with which to do so.

    And what of Optifine's multithreading in 1.7, anyway? Are we referring to the multi-core chunk loading option where you can find countless people in the comments reporting that it causes stuttering or chunk drop-out?

    Since we're on this subject, why are there umpty-nine versions of Optifine for different machines, anyway? It has always struck me as a shotgun-like approach to performance. Does this hack not work? Try this hack! That one doesn't work? Try this other one. Things get a whole lot harder to optimize when you don't have the chance to release 3-4 versions of the same codebase, all with different optimizations, something that some folks don't appreciate.

    Ultimately, the man has some good points about memory management, but I would love to hear an explanation as to this "passing vertex data around" issue that just reads like Buzzword Bingo, meant to gull inexperienced people into lining up the torches and pitchforks at those poor Mojang idiots who don't know what they're doing, if only they had the infallible advice of Optifine. Until then, I'm going to keep on doing what I'm doing.
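
    For anyone trying to picture the setup the dev describes (a fixed set of builder buffers that worker threads fill and the main GL thread uploads), here is a rough, non-authoritative sketch of that producer/consumer pattern; the class name, counts and sizes are made up:

    ```java
    import java.nio.IntBuffer;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // A fixed pool of reusable vertex buffers: mesher threads borrow one, fill it,
    // and queue it for the main (GL) thread to upload. Nothing is re-allocated.
    public class BuilderPool {
        private final BlockingQueue<IntBuffer> free;
        private final BlockingQueue<IntBuffer> readyForUpload;

        public BuilderPool(int builders, int intsPerBuilder) {
            free = new ArrayBlockingQueue<>(builders);
            readyForUpload = new ArrayBlockingQueue<>(builders);
            for (int i = 0; i < builders; i++) {
                free.add(IntBuffer.allocate(intsPerBuilder)); // allocated once, reused forever
            }
        }

        /** Worker thread: borrow a buffer, fill it with vertex data, hand it off. */
        public void buildChunk(int[] vertexData) throws InterruptedException {
            IntBuffer buf = free.take();   // blocks if all builders are in flight
            buf.clear();
            buf.put(vertexData);
            buf.flip();
            readyForUpload.put(buf);
        }

        /** Main thread: upload one finished buffer (e.g. glBufferData) and recycle it. */
        public void uploadOne() throws InterruptedException {
            IntBuffer buf = readyForUpload.take();
            // ... upload buf to the GPU on the GL thread here ...
            free.put(buf);                 // return to the pool, no new allocation
        }
    }
    ```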


  • Discourse touched me in a no-no place

    I saw his post, but not the snark from the Minecraft devs.

    They can call him a hack all they want but he makes their game go a lot faster. And if he's such a hack why did they try to buy his code at one point?



  • If they wrote this in C++, they wouldn't have to debate garbage collection :D

    Only in Java do you have devs arguing how to memory manage a memory managed language.



  • @dookdook said:

    Mojang devs took to Twitter and Reddit to defend themselves, make snarky comments, and blame the community

    Of course they did. "Someone disparaged me on the internet! Quickly, to my Twitter echo chamber!"

    Twitter is one of the sites that the Internet would actually be better off without.


  • Discourse touched me in a no-no place

    @delfinom said:

    Only in Java do you have devs arguing how to memory manage a memory managed language.

    It's really pretty hilarious. I can't see how it can be good coding to allocate 50MB/s and then throw it away every few seconds.

    And a lot of their counter-bitching is ridiculous. "Oh, there's 4 versions of Optifine." Yeah, two years ago, when he was developing the feature set. Now it's coalesced to two: a lite version for people with weak computers, and the full version. "Oh, his stuff is just a bunch of hacks compared to our game." Well, of course, he's working from deobfuscated code, and only doing some performance enhancements.


  • Discourse touched me in a no-no place

    Interesting, the subject bounced back and forth between the Minecraft forums and reddit, but apparently at least one MC dev agrees that some of the things that are happening are dumb: "The move to using BlockPos instances instead of integer triplets has been something that's concerned me since the first time it was implemented. It strikes me as dumb, dumb, da-dumb-dumb-dumb."

    Ridiculous numbers of these BlockPos items are created and destroyed constantly. There's nothing like a regular drop to 10 fps every couple of seconds while the GC runs.



  • Oh! And I was blaming those 10 fps on the number of mods my daughter has installed. Guess she'll have to go back to 1.7.10 or 1.7.2 now.


  • FoxDev

    @delfinom said:

    Only in Java do you have devs arguing how to memory manage a memory managed language.

    That's not limited to Java; in any memory-managed language/framework, you still need to manage memory if you want good performance.

    Remember: just because you have a garbage collector, doesn't mean you don't have to worry about how much garbage you create.



  • And/Or some things aren't collected by the GC automatically.

    .NET Font objects, I'm looking at you.


  • FoxDev

    @trithne said:

    And/Or some things aren't collected by the GC automatically.

    .NET Font objects, I'm looking at you.


    The .NET Font object is a wrapper round a native GDI HFONT, so it's no surprise if it's not auto-GC'd. Then again, that type of object is usually very long-lived, so wouldn't be GC'd anyway.



  • Under normal circumstances, yes, it's fine.

    Unfortunately, I was creating and destroying a lot of Font objects at runtime (just don't ask; either I'm stupid, or runtime-created controls insist on making a copy of the Font object instead of just referencing it), and hitting the 10,000 GDI object limit.

    Not exactly difficult to null the Font when disposing of the controls, but it has to be done by hand; the GC won't do everything for you.


  • Banned

    @RaceProUK said:

    Remember: just because you have a garbage collector, doesn't mean you don't have to worry about how much garbage you create.

    And that's exactly why most game developers hate garbage collection: it doesn't solve any problems.

    Also, I'm surprised that after 19 years, Java still doesn't make memory pools for frequently used classes. I mean, that's a basic optimization technique in C/C++ devs' arsenal. And it's orders of magnitude easier than JIT - a thing they have already made.
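
    For reference, the kind of hand-rolled pool being asked for here is only a few lines. A minimal, single-threaded sketch (`Pool` and the `float[]` usage are illustrative, not from any particular codebase):

    ```java
    import java.util.ArrayDeque;
    import java.util.function.Supplier;

    // Tiny object pool: reuse instances instead of allocating and GCing per call.
    public class Pool<T> {
        private final ArrayDeque<T> free = new ArrayDeque<>();
        private final Supplier<T> factory;

        public Pool(Supplier<T> factory) { this.factory = factory; }

        public T acquire() {
            T obj = free.pollFirst();
            return obj != null ? obj : factory.get(); // allocate only when the pool is empty
        }

        public void release(T obj) {
            free.addFirst(obj); // caller promises not to touch obj after releasing it
        }
    }

    // Usage sketch: reuse scratch vectors instead of allocating one per call.
    // Pool<float[]> vecs = new Pool<>(() -> new float[3]);
    // float[] v = vecs.acquire();
    // ... use v ...
    // vecs.release(v);
    ```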



  • @FrostCat said:

    "The move to using BlockPos instances instead of integer triplets has been something that's concerned me since the first time it was implemented. It strikes me as dumb, dumb, da-dumb-dumb-dumb."

    Ridiculous numbers of these BlockPos items are created and destroyed constantly. There's nothing like a regular drop to 10 fps every couple of seconds while the GC runs.

    This wouldn't be a problem if Java supported non-primitive value types like .Net does.
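
    In the absence of value types, one common Java workaround is to pack the three coordinates into a primitive long, so positions can be passed and stored with no heap allocation at all. A sketch; the 21-bit layout is illustrative and assumes each coordinate fits that signed range:

    ```java
    // Pack (x, y, z) into one long: no objects, no GC pressure, cheap to copy.
    public final class PackedPos {
        private static final int BITS = 21;
        private static final long MASK = (1L << BITS) - 1;

        public static long pack(int x, int y, int z) {
            return ((x & MASK) << (2 * BITS)) | ((y & MASK) << BITS) | (z & MASK);
        }

        public static int unpackX(long p) { return signExtend((int) ((p >>> (2 * BITS)) & MASK)); }
        public static int unpackY(long p) { return signExtend((int) ((p >>> BITS) & MASK)); }
        public static int unpackZ(long p) { return signExtend((int) (p & MASK)); }

        private static int signExtend(int v) {
            return (v << (32 - BITS)) >> (32 - BITS); // restore the sign of a 21-bit value
        }
    }
    ```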



  • @trithne said:

    Twitter is one of the sites that the Internet would actually be better off without.

    I don't know how to use it. The message lengths are closer to a password length than anything coherent. What am I supposed to fit in there? "Today I'm mostly eating sprouts"? And where will people fit the hashtags when they retweet it?

    At least the likes of the G+/FB echo chamber allow you to write essays. That way, you can see whether the comments and likes come from people who can read.

    I tried wading in on #fatshamingweek on twitter (which was at last glance populated with people rightfully complaining that it was a stupid atrocity). I think somebody posted "if your thighs touch, you're fat", and I responded with "if your thighs touch, it's your own fucking business" or something. Somebody else jumped to their defense talking about people making excuses for being fat.

    The point is, twitter does not give you enough space to explain to the world why a person is wrong, fallacious and telling lies. I agree the internet could do just fine without it, and I'm not sure what it says about humans that we use it.



  • @Gaska said:

    And that's exactly why most game developers hate garbage collection: it doesn't solve any problems.

    How did we get from "creating 50MB worth of objects per second will screw up your GC" to "GC is useless for game dev"? Is creating 50MB/s of garbage a staple of game development? Is every single line of code so critical that it just won't stand GC running around? Do you have a fetish for manually deallocating objects?


  • Garbage Person

    @trithne said:

    .NET Font objects, I'm looking at you.
    Whoa. Whoa. Whoa. References?

    My line of work involves code that allocates LOADS of Fonts for a relatively limited lifetime. This may explain the periodic "Production was running like ass, I reset the app and it went back to normal" shit the ops team pulls on me occasionally (no amount of 'I would like to take a memory dump before you do that next time' actually gets them to not rush through because PRODUCTION IS CRITICAL)



  • Don't have a reference per se; I used a GDI object viewer to see where I was hitting the object limit and realised that Fonts simply weren't being disposed of. Basically, if you're making a control programmatically at runtime and assigning a Font to it, throw a font = null into the control's Disposing event handler.

    Similarly with Event Handlers, while we're at it.

    This assumes you're disposing the control. I don't know why you're making all these Fonts, but the long and the short of it is you need to explicitly dispose of them.



  • @Medinoc said:

    This wouldn't be a problem if Java supported non-primitive value types like .Net does.

    Moral of the story: object fetishism is lame and harmful to the ~~health~~ sanity of you and your codebase.



  • @dookdook said:

    Let's give Minecraft 4 GB of RAM to play with. This would need a PC with at least 8 GB RAM (as the real memory usage is almost double the memory visible in Java). If the VM decides to use all the memory, then it will increase the time between the garbage collections (20 sec instead of 4), but it will also increase the garbage collection time by 4, so every 20 seconds there will be one massive lag spike.

    This one is not exactly true. The performance of the garbage collector depends on the number of live objects, not the size of the garbage. So increasing the available memory may in fact lower the time spent on GC, because the number of live objects may be lower. (The longer the time between GCs, the higher the chance that a temp object is no longer alive once the GC starts.)


  • Banned

    @Maciejasjmj said:

    How did we get from "creating 50MB worth of objects per second will screw up your GC" to "GC is useless for game dev"? Is creating 50MB/s of garbage a staple of game development? Is every single line of code so critical that it just won't stand GC running around? Do you have a fetish for manually deallocating objects?

    When you have 16 milliseconds for 3D drawing, physics simulation, network communication and disk read/write, you start to think differently. Even on a Core i7. Also, 50MB/sec means a little less than 1MB per frame - and not even in one block, but rather hundreds of thousands of few-byte blocks, with no more than a few hundred in use at any moment - it certainly could be done on the stack instead of on the heap. And stack allocations are virtually free (no syscalls needed).

    GC is useless for gamedev because the problem it solves (automatically freeing memory of unused objects to make room for new objects) doesn't happen in gamedev (where almost no heap allocations happen after initial setup). Unless you're a shitty developer like Notch and don't care about performance at all - then Java, with its undisableable garbage collector, is a perfect match for you.



  • Sounds like someone over at Mojang recently got their hands on a copy of Effective Java.


  • ♿ (Parody)

    @Shoreline said:

    I agree the internet could do just fine without it, and I'm not sure what it says about humans that we use it.

    It says that you don't like to communicate that way. Probably because your thighs touch, or something.



  • @Martin_Tilsted said:

    The performance of the garbage collector depends on the number of live objects, not the size of the garbage.

    @Gaska said:

    Also, 50MB/sec means a little less than 1MB per frame - and not even in one block, but rather hundreds of thousands of few-byte blocks, with no more than a few hundred in use at any moment - it certainly could be done on the stack instead of on the heap. And stack allocations are virtually free (no syscalls needed).

    GC is useless for gamedev because the problem it solves (automatically freeing memory of unused objects to make room for new objects) doesn't happen in gamedev (where almost no heap allocations happen after initial setup).


    On one hand, this is an excellent point: GC is useless in an environment where most of your objects are long-lived. You're better off using a pooled approach that allows you to junk short-lived objects quickly, while long-lived stuff gets allocated at startup and stays around until you have to do something drastic, say changing maps.

    @Gaska said:

    When you have 16 milliseconds for 3D drawing, physics simulation, network communication and disk read/write, you start to think differently.

    On the other hand, this attitude can get toxic at times, with folks swearing off proper abstractions in favor of spaghetti code and not having any trust at all in their compilers. It's much easier to optimize what your profiler is telling you is a hotspot when you can isolate that hotspot from the rest of your code!

    Filed under: Writing 1998-era C++ in 2014 is a bad idea


  • Grade A Premium Asshole

    No sign of @blakeyrat in here talking about how shitty Java is and how much better C# is in every way? Also how Minecraft has shitty graphics, looks horrible and is only used by people for stupid things like building computer games inside of computer games?

    Needs more blakeyrant.



  • This is going to sound completely crazy, but I'm beginning to think Java might not be the best language to make videogames.



  • HERESY! BURN THE HERETIC!


  • Discourse touched me in a no-no place

    @Medinoc said:

    This wouldn't be a problem if Java supported non-primitive value types like .Net does.

    It is if you're using a stupendous number of immutable objects for a fraction of a second and then throwing them away.

    One of the MC dev's comments included something along the lines of creating 5 10MB chunks of data on the chance there'd be enough cores for the rendering thread(s) to not be starved of data. That doesn't seem like an optimal way to run a railroad.


  • Discourse touched me in a no-no place

    @Maciejasjmj said:

    How did we get from "creating 50MB worth of objects per second will screw up your GC" to "GC is useless for game dev"?

    That is perhaps an unjustified leap.

    If you're making 50-200MB of objects a second only to throw them immediately away, to the point that you're GCing every 4 seconds and killing the framerate, you are, in fact, Doing It Wrong™.

    I think the MC devs even realize that but for some reason haven't fixed it yet. Obviously it's a solvable or at least reducible problem because Optifine absolutely improves the situation.


  • Discourse touched me in a no-no place

    @Martin_Tilsted said:

    This one is not exactly true. The performance of the garbage collector depends on the number of live objects, not the size of the garbage.

    Whatever the issue actually is, it's observably and measurably real. Without enhancements like Optifine, Minecraft will absolutely do what you quoted.



  • @tarunik said:

    GC is useless in an environment where most of your objects are long-lived. You're better off using a pooled approach that allows you to junk short-lived objects quickly, while long-lived stuff gets allocated at startup and stays around until you have to do something drastic, say changing maps.

    To be fair, GCs are generational for this reason. Stuff should only be getting collected when it's fallen out of scope. Keep long-lived objects in scope and the GC shouldn't touch them. But when every frame counts, having the GC pop up at an undesirable moment and eat a bunch of frames isn't really optimal.
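
    One knob that exists even without code changes is sizing the young generation so that per-frame garbage dies in cheap minor collections rather than piling up for a full GC. These are generic, era-appropriate HotSpot flags shown purely for illustration, not a tuning recommendation for Minecraft specifically (the jar name is a placeholder):

    ```
    # Bigger young generation: short-lived garbage dies in cheap minor collections.
    java -Xmx1G -Xmn512M -jar minecraft.jar

    # Log every pause to see where the spikes actually come from.
    java -Xmx1G -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar minecraft.jar
    ```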


  • Discourse touched me in a no-no place

    @Gaska said:

    GC is useless for gamedev because the problem it solves (automatically freeing memory of unused objects to make room for new objects) doesn't happen in gamedev (where almost no heap allocations happen after initial setup).

    GC is useless for the kind of thing Minecraft gets wrong, absolutely. I bet you could fix the problems, though, primarily depending on how you manage your object lifetimes.


  • Discourse touched me in a no-no place

    @Eldelshell said:

    Oh! And I was blaming those 10 fps on the number of mods my daughter has installed. Guess she'll have to go back to 1.7.10 or 1.7.2 now.

    Or get her a good video card? Check settings and see if turning on VBOs helps--for most people, vanilla 1.8 will have a significantly higher FPS than vanilla 1.7.

    DO NOT GO BACK from 1.8 to 1.7. It will destroy your world. (actually 1.7.10 might be safe, but TAKE A BACKUP of your world first!!) Actually, what will happen is all your chests will be emptied out.

    Optifine 1.8 should be out before too much longer.



  • @Intercourse said:

    No sign of @blakeyrat in here talking about how shitty Java is and how much better C# is in every way?
    @blakeyrat's at XNA's funeral, he's not available for comment. I'm skipping that because XNA killed my baby (Managed DirectX).

    Also, you forgot how it dumps all its files in roaming AppData, which means if he plays at work his user profile implodes.

    And the MC devs really need to learn about object pooling. Like, desperately.

    @Gaska said:

    Unless you're a shitty developer like Notch and don't care about performance at all
    Notch does. The Bukkit guys, especially Dinnerbone, don't.



  • XNA's funeral has been going on for how long now?

    I don't even care. I'm keeping XNA's head in my refrigerator for a future time.


  • Banned

    @tarunik said:

    On the other hand, this attitude can get toxic at times, with folks swearing off proper abstractions in favor of spaghetti code

    No abstraction doesn't mean spaghetti code, or vice versa. Actually, I've seen much more abstraction-heavy spaghetti than no-abstraction spaghetti. I blame my young age (I wrote my very first hello world years after C# 2.0). A Structure-of-Arrays component approach really saves you CPU cycles, and isn't any less flexible than classic polymorphism (well, for the stuff you do in games - it wouldn't work too well in business applications, I think; and it's hard to get right - that's important too).
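
    A minimal illustration of the Structure-of-Arrays idea, sketched in Java for this thread (the particle example is made up; in a GC'd language the SoA form also means far fewer objects for the collector to track):

    ```java
    // Array-of-Structures: one heap object per particle, pointer chasing, GC pressure.
    class Particle { float x, y, vx, vy; }

    class AosParticles {
        final Particle[] particles = new Particle[10_000];
        AosParticles() { for (int i = 0; i < particles.length; i++) particles[i] = new Particle(); }
        void step(float dt) {
            for (Particle p : particles) { p.x += p.vx * dt; p.y += p.vy * dt; }
        }
    }

    // Structure-of-Arrays: a few big primitive arrays, contiguous in memory,
    // zero per-particle objects, and a loop the JIT can optimize more easily.
    class SoaParticles {
        final float[] x = new float[10_000], y = new float[10_000];
        final float[] vx = new float[10_000], vy = new float[10_000];
        void step(float dt) {
            for (int i = 0; i < x.length; i++) { x[i] += vx[i] * dt; y[i] += vy[i] * dt; }
        }
    }
    ```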

    @tarunik said:

    It's much easier to optimize what your profiler is telling you is a hotspot when you can isolate that hotspot from the rest of your code!

    It's not that easy if you end up with hotspots everywhere.

    @FrostCat said:

    Or get her a good video card? Check settings and see if turning on VBOs helps--for most people, vanilla 1.8 will have a significantly higher FPS than vanilla 1.7.

    If disabling VBOs increases performance, I don't want to ever, ever, ever see their codebase. Ever.

    @TwelveBaud said:

    Notch does. The Bukkit guys, especially Dinnerbone, don't.

    Correct me if I'm wrong, but wouldn't that mean that vanilla MC without any mods whatsoever shouldn't take more than a couple dozen megs of memory?


  • Discourse touched me in a no-no place

    @Gaska said:

    If disabling VBOs increases performance, I don't want to ever, ever, ever see their codebase. Ever.

    I did say enable, just in case you misread me. I assume anyone for whom disabling them made it worse had some kind of defective display adapter.


  • Discourse touched me in a no-no place

    @Gaska said:

    take more than a couple dozen megs of memory?

    I bet Java Hello World doesn't take only a couple dozen megs of memory.


  • Banned

    @FrostCat said:

    I did say enable, just in case you misread me. I assume anyone for whom disabling them made it worse had some kind of defective display adapter.

    Oh, sorry - read your post as "check settings and see if VBOs are turned on". My apologies.

    Still don't want to see their codebase. Mostly because of Java. I programmed in it for two days, and I don't ever want to again in my life.


  • Discourse touched me in a no-no place

    @Gaska said:

    Oh, sorry - read your post as "check settings and see if VBOs are turned on". My apologies.

    Well, sadly, it's actually worth checking since a small number of people supposedly DO see that fix things.

    I don't have any idea what they're using. Probably a JeForce card or something.


  • Banned

    @FrostCat said:

    I bet Java Hello World doesn't take only a couple dozen megs of memory.

    10708kB exactly. Measured on JDK8u20 Windows 7 x64 using Task Manager.


  • Discourse touched me in a no-no place

    @Gaska said:

    10708kB exactly.

    Snort. I'm kind of tempted to write a .asm version, or something in IL, just for comparison.


  • kills Dumbledore

    @Gaska said:

    VBOs

    Visual Basic Objects?


  • Banned

    @Jaloopa said:

    Visual Basic Objects?

    Vertex Buffer Objects. Close, but no cigar.


  • I survived the hour long Uno hand

    @Gaska said:

    Correct me if I'm wrong, but wouldn't that mean that vanilla MC without any mods whatsoever shouldn't take more than a couple dozen megs of memory?

    Notch, the creator of Minecraft, left the project somewhere around the 1.0 release; I believe Jeb is the current creative director for the product. Most of the current devs were Bukkit developers who were hired into Mojang to expand the team. Performance has gone steadily downhill with every new release since then.


  • Banned

    Just downloaded beta 1.8.1. Takes 40% of my CPU time (Core i5, one of the newest) and 870MB memory after one minute of playing.


  • Discourse touched me in a no-no place

    @Yamikuronue said:

    Notch, the creator of Minecraft, left the project somewhere around the 1.0 release

    I wanna say 1.3. Apparently, ever since 0x10c was cancelled he's just been faffing around; with his share of the buyout, he's going to officially do that full-time.


  • I survived the hour long Uno hand

    I think he started pulling back as soon as 1.0 dropped, but he might not have left officially for a few versions



  • @tarunik said:

    On the other hand, this attitude can get toxic at times, with folks swearing off proper abstractions in favor of spaghetti code

    Most spaghetti code I've seen in the last 5-10 years has been specifically due to abstraction. Dozens of pointless levels of unnecessary abstraction which make it impossible to just grab a debugger and step through what the code is actually doing.



  • @RaceProUK said:

    The .NET Font object is a wrapper round a native GDI HFONT, so it's no surprise if it's not auto-GC'd. Then again, that type of object is usually very long-lived, so wouldn't be GC'd anyway.

    A generic error occurred in GDI+.

    Yeah, I love seeing that.



  • No way man, she's already running a dedicated 1GB NVidia. It used to run quite fine until 1.8.

    Hell, that card moves Skyrim without any problem!

