Shuffling off this mortal coil


  • Discourse touched me in a no-no place

    One of our (thankfully now former) developers was writing some code to randomly distribute things over a grid. That developer was someone who loved to write what he thought were super-fast algorithms that were simultaneously as Pythonic as possible. He never met an advanced feature of a programming language that he didn't want to work into whatever he was doing, whether or not it was useful, and Python is a language that gives you a lot of rope for that sort of mischief. Why would you use classes when you could use something custom? Why would you write a simple sort when you could use a complex list comprehension instead? His code was often extremely short and had a beautiful surface reading, yet would open great chasms of bewilderment once you poked under the covers.

    Now, the grid was of reasonable size, but 256×256 really isn't exactly massive. The only real nuance was that the things that were being distributed would sometimes be much fewer than the number of grid cells, and other times would end up needing to cover the whole grid; a few orders of magnitude of difference in the number of items.

    Now, there's a well-known algorithm for randomly placing a dense collection: shuffle the items or the places and just match them up against each other. It's built on the classic Fisher-Yates shuffle and has been around forever; it's pretty simple: shuffle the grid points in a big list and then match off against the set of things to be placed. It takes linear time; nice and efficient, no surprises. The only time you ought to consider picking anything else is when the number of items to place is much, much smaller than the number of points you could use; then you'd pick random places and hope that there were no collisions. That'd happen some of the time with our application, but often there'd be a much heavier loading.
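
    For reference, a minimal sketch of that shuffle-and-match approach (in C++ rather than the Python in question, with a made-up Item type and function name; std::shuffle does the Fisher-Yates work here):

    #include <algorithm>
    #include <random>
    #include <utility>
    #include <vector>

    struct Item { int id; };  // stand-in for whatever is being placed

    // Shuffle the full list of grid cells once, then pair cells with items.
    // Linear in the number of cells; for a 256x256 grid that's 65536 entries.
    std::vector<std::pair<Item, std::pair<int, int>>>
    placeItems(const std::vector<Item>& items, int gridSize, std::mt19937& rng)
    {
        std::vector<std::pair<int, int>> cells;
        cells.reserve(static_cast<std::size_t>(gridSize) * gridSize);
        for (int x = 0; x < gridSize; ++x)
            for (int y = 0; y < gridSize; ++y)
                cells.push_back({x, y});

        std::shuffle(cells.begin(), cells.end(), rng);  // one O(cells) pass

        std::vector<std::pair<Item, std::pair<int, int>>> placements;
        for (std::size_t i = 0; i < items.size() && i < cells.size(); ++i)
            placements.push_back({items[i], cells[i]});
        return placements;
    }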

    But the developer in this little story just couldn't see past the fact that that algorithm would need to do quite a bit of work in the case where there were very few items to place, so he decided to use the random-selection algorithm for all the items. He just kept a set of locations that he'd already used and retried the placement if the point had already been used.

    Sure, his code worked fine with the test cases he ran, and was extremely fast. But as the number of items increased, things would go strange; the amount of time to place them would increase massively. (What was happening was that as the number of used points increased, the probability of choosing an unused point would decrease and the number of retries would increase. Massively so, as the target loading factor approached 1.) This was unacceptable! Being under a bit of pressure due to his impending departure for pastures new, and unwilling to admit that an up-front shuffle would be better, he came up with a “clever” adjustment to his algorithm to stop the apparently-infinite looping. He put an upper bound on the number of iterations used. That pesky apparently-infinite loop was gone; the amount of work required was now strictly bounded and everything was fine!
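
    To make the failure mode concrete, here's a sketch of that capped-retry scheme (my reconstruction in C++, not his Python; all names invented). Anything that runs out of retries is silently dropped:

    #include <random>
    #include <set>
    #include <utility>
    #include <vector>

    // Rejection sampling with an iteration cap: fast while the grid is mostly
    // empty, increasingly retry-heavy as it fills up, and lossy once the cap bites.
    std::vector<std::pair<int, int>>
    placeByRejection(int itemCount, int gridSize, int maxTries, std::mt19937& rng)
    {
        std::uniform_int_distribution<int> coord(0, gridSize - 1);
        std::set<std::pair<int, int>> used;
        std::vector<std::pair<int, int>> placements;

        for (int item = 0; item < itemCount; ++item) {
            for (int attempt = 0; attempt < maxTries; ++attempt) {
                std::pair<int, int> p{coord(rng), coord(rng)};
                if (used.insert(p).second) {   // true if the cell was still free
                    placements.push_back(p);
                    break;
                }
            }
            // If every attempt collided, this item simply never gets placed.
        }
        return placements;  // may well contain fewer than itemCount entries
    }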

    Our (enormously more competent) intern had a good look at this code yesterday, and noticed that it would tend not to place all the items; a random selection would simply get forgotten. She realised that the code, which could have been using that classic shuffle and working correctly after a few hours' coding, had instead started out with a Bad algorithm (one that would produce a right answer eventually) and then, under mild pressure, had been converted into a Fail of a non-algorithm that didn't produce even a vaguely correct answer.

    I don't remember our intern actually facepalming before. 😆

    Our former developer's code is gradually being converted into things that work, with more conventional code and less craziness, and in the process it is often becoming orders of magnitude faster. Smartass coding is all very well, but paying attention to what things cost and what you can cache actually works better.


  • ♿ (Parody)

    @dkf Sometimes it's fun to do crazy shit like that, but you better not be counting on it. I've personally found that they are good learning experiences and often the lessons become useful at a later date, but rarely do they turn into production code.


  • Discourse touched me in a no-no place

    @boomzilla Agreed. But then I don't go round crowing that my experimental code makes other people's production code obsolete.

    Of course, I'm still employed here too, so there's that… ;)


  • FoxDev

    It seems that former dev forgot the most basic principle of coding:
    (image)



  • @dkf said in Shuffling off this mortal coil:

    He just kept a set of locations that he'd already used and retried the placement if the point had already been used.

    What? That doesn't even make sense. Couldn't you just shuffle the list of all points once, and then continuously draw points from that list for everything? Are you suggesting he ran the shuffle over and over for each item and also kept a list of used-up points and didn't try removing used-up points from the shuffled list? :wtf:


  • Discourse touched me in a no-no place

    @lb_ said in Shuffling off this mortal coil:

    That doesn't even make sense.

    I'm not claiming that this was a good way to do it…



  • @dkf said in Shuffling off this mortal coil:

    shuffle the grid points in a big list and then match off against the set of things to be placed

    Reminds me of a funky piece of code in Red Alert 2.
    Goal: when the map is loading, assign a "secret production option" to specific buildings on the map, so that if a player captures that building, they can build something not usually available.
    Implementation (paraphrased):

    std::vector<TechnoType*> secretOptions; // a list of all possible "secret production options", populated previously
    std::vector<Building*> buildingsWithSecrets; // a list of all those capturable buildings currently on the map, populated previously
    
    std::vector<int> numbers;
    for (int i = 0; i < secretOptions.size(); ++i) {
        numbers.push_back(i);
    }
    
    for (int idxLab = 0; idxLab < buildingsWithSecrets.size(); ++idxLab) {
        int idxBonus = Random_Ranged(0, numbers.size() - 1);
        numbers.erase(numbers.begin() + idxBonus); // remove element at randomly selected index
        TechnoType* bonus = secretOptions[idxBonus]; // !!! use the randomly picked index (not the removed element) to pick a bonus
        buildingsWithSecrets[idxLab]->SecretProduction = bonus;
    }
    

    If the last eligible bonus doesn't get picked in the first iteration, it will become "out-of-bounds" for all further iterations.
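
    Presumably the intent was sampling without replacement, in which case (a guess, reusing the declarations from the snippet above) the loop was probably meant to use the element being erased rather than its index:

    for (int idxLab = 0; idxLab < buildingsWithSecrets.size(); ++idxLab) {
        int idxBonus = Random_Ranged(0, numbers.size() - 1);
        int optionIndex = numbers[idxBonus];        // remember the element first...
        numbers.erase(numbers.begin() + idxBonus);  // ...then remove it
        buildingsWithSecrets[idxLab]->SecretProduction = secretOptions[optionIndex];
    }

    (Still assuming, as the original appears to, that there are at least as many options as buildings.)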



  • @lb_ said in Shuffling off this mortal coil:

    What? That doesn't even make sense.

    I'm hoping that @dkf meant that, for each item, he picked a random set of coordinates, checked listOfUsedCoordinates[] to see if the grid location they described was used, re-picked the coordinates if so, and otherwise added them to listOfUsedCoordinates[] and placed the item. Both because that's the algorithm I would use if I wasn't aware that there'd be a near-1 load factor (since TIL about Fisher-Yates), and because I have no idea why things would be "extremely fast" if he did what you described.



  • @raceprouk said in Shuffling off this mortal coil:

    It seems that former dev forgot the most basic principle of coding:
    [...]

    To rock and roll all night, and part of every day?



  • @dcoder said in Shuffling off this mortal coil:

    std::vector<TechnoType*> secretOptions; 
    std::vector<Building*> buildingsWithSecrets;
    

    Careful. If you were to post this on StackOverflow, someone would immediately complain, "Raw pointers inside STL containers?! My eyes!"

    I'm not one of those people, as I've used similar constructions in my own code, and there are cases where you don't necessarily want everything to be a shared_ptr/unique_ptr.


  • ♿ (Parody)

    @groaner said in Shuffling off this mortal coil:

    Careful. If you were to post this on StackOverflow, someone would immediately complain, "Raw pointers inside STL containers?! My eyes!"

    Motherfucking modern compiler privilege is what that is! 😡



  • @boomzilla said in Shuffling off this mortal coil:

    @groaner said in Shuffling off this mortal coil:

    Careful. If you were to post this on StackOverflow, someone would immediately complain, "Raw pointers inside STL containers?! My eyes!"

    Motherfucking modern compiler privilege is what that is! 😡

    Indeed. My project is still on VS2010, and while that lacks a lot of C++11 features, boost fills in most of the cracks. At some point, I'm going to upgrade to 2015 or beyond, but that means rebuilding all my dependencies (and probably upgrading them as well to more recent versions, where possible). I expect this to be a hassle, but I've done it once before for 2008->2010. There's also the issue of closed-source APIs that are built against a particular version of VS, and certain vendors like to not have builds for more recent versions.


  • ♿ (Parody)

    @groaner

    $ g++ --version
    g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
    Copyright (C) 2010 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    


  • Go Intern!!!!!!!



  • @groaner To be fair, this is just my transcription of reverse-engineered third-party code, which was originally written in 2000/2001.


  • BINNED



  • @twelvebaud said in Shuffling off this mortal coil:

    I have no idea why things would be "extremely fast" if he did what you described.

    I don't claim that it would be super fast - the initial shuffle would take some time based on the number of possible locations - but after that shuffle, you just pick the first element from the list, use it, and then remove it from the list. When the list is empty, all positions are occupied.


  • Discourse touched me in a no-no place

    @twelvebaud said in Shuffling off this mortal coil:

    @lb_ said in Shuffling off this mortal coil:

    What? That doesn't even make sense.

    I'm hoping that @dkf meant that, for each item, he picked a random set of coordinates, checked listOfUsedCoordinates[] to see if the grid location they described was used, re-picked the coordinates if so, and otherwise added them to listOfUsedCoordinates[] and placed the item. Both because that's the algorithm I would use if I wasn't aware that there'd be a near-1 load factor (since TIL about Fisher-Yates), and because I have no idea why things would be "extremely fast" if he did what you described.

    Yes. That's very quick for picking a small number of places. But it's rotten once you want to pick a large fraction of the available locations, since you end up having to back off and try again more and more often. Since we were planning to use this as a kind of allocation algorithm for placing work items onto our (deeply weird) supercomputer's processors, the possibility of putting a lot of work items into it is very real; the code was taking a long time to do what should have been pretty rapid. I think we're talking about something like an hour or more when we were getting toward 100% load, and even if it is running in Python that's just ridiculous. (We have other algorithms that legitimately take that long because they're doing complex optimisations.) I'm guessing that once we fix it, it'll take well under a second.
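
    (Back-of-the-envelope: if k of the M cells are already occupied, each placement needs on average M/(M-k) random draws, so filling the whole grid takes roughly M·(1/M + 1/(M-1) + … + 1/1) = M·H_M ≈ M·ln M draws in expectation, and the last handful of cells account for most of that. With a hard cap on retries, those last placements are exactly the ones that get silently dropped instead.)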

    We're probably not going to end up using (the corrected version of) this as our main workhorse for this code placement problem; we've got other alternatives based on conventional flood fills and Hilbert curves that we think will probably do better in practice, but since we're a university, we need to actually test that hunch against real code and measure which does best. There are some non-obvious consequences elsewhere which might make a random algorithm surprisingly good.

    But not the code as it stands. It is a poster child for Just Plain Wrong™. (The intern probably won't finish the fix this week because we've given her another higher-priority task that involves her doing international travel.)


  • Discourse touched me in a no-no place

    @lb_ said in Shuffling off this mortal coil:

    I don't claim that it would be super fast - the initial shuffle would take some time based on the number of possible locations

    It's linear in the number of locations. That's rather good in practice, as it really doesn't take long to generate even a few million items in shuffled order. I guess I could get some real timing figures, but it was “near instant” when I tried in an interactive shell so I'm content to not bother. 😁



    @dcoder Where did you find this code? RA2 is one of my favourite games, would love to peruse its inner workings.



  • @aapis said in Shuffling off this mortal coil:

    @dcoder Where did you find this code? RA2 is one of my favourite games, would love to peruse its inner workings.

    I found it inside the executable :D Basically, we disassembled the game using IDA and read (parts of) the code therein to see how it worked. We converted some of those findings into C++ headers (here), created a way to insert your own DLLs into the game (here), and started our own DLL to fix bugs and add features for mods (here).

    The full source was not published, though we documented a lot of what we saw in various modding forums for RA2. I started translating bits and pieces to something-resembling-C++ here but that was long ago and I soon gave up on that.



    I remember running afoul of this problem when I first tried to implement random grid filling on my Amstrad CPC (though instead of keeping a list of visited locations, I directly checked the destination matrix to see if there was already something there). I noticed the slowness at the end, but since it was a throwaway program, I didn't dig further.

    I did remember the problem years later, and actually did the "shuffle a list of places" thing that time. Only instead of a Fisher-Yates, thanks to the multi-list sorting functions readily available on the platform I was using (a TI-89), I filled a second list with random values and sorted both lists according to it, turning a linear operation into an n log n one. Good thing my grid was only 6×6...
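
    That trick roughly translates to modern C++ as something like this (a sketch: sort a list of indices by freshly generated random keys, which is the same "sort both lists by the random one" idea):

    #include <algorithm>
    #include <random>
    #include <vector>

    // Shuffle by sorting against random keys: O(n log n) rather than the O(n)
    // of Fisher-Yates. With real-valued keys the chance of a tie is negligible.
    std::vector<int> shuffledIndices(int n, std::mt19937& rng)
    {
        std::uniform_real_distribution<double> key(0.0, 1.0);
        std::vector<double> keys(n);
        for (double& k : keys) k = key(rng);

        std::vector<int> idx(n);
        for (int i = 0; i < n; ++i) idx[i] = i;

        std::sort(idx.begin(), idx.end(),
                  [&](int a, int b) { return keys[a] < keys[b]; });
        return idx;
    }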


  • Discourse touched me in a no-no place

    @medinoc O(N log N) isn't too bad unless you've got utterly massive amounts of data, as the log N term is basically the number of decimal digits in your input value (i.e., even for 25 million it's still only around 8).



  • @groaner said in Shuffling off this mortal coil:

    @boomzilla said in Shuffling off this mortal coil:

    @groaner said in Shuffling off this mortal coil:

    Careful. If you were to post this on StackOverflow, someone would immediately complain, "Raw pointers inside STL containers?! My eyes!"

    Motherfucking modern compiler privilege is what that is! 😡

    Indeed. My project is still on VS2010, and while that lacks a lot of C++11 features, boost fills in most of the cracks. At some point, I'm going to upgrade to 2015 or beyond, but that means rebuilding all my dependencies (and probably upgrading them as well to more recent versions, where possible). I expect this to be a hassle, but I've done it once before for 2008->2010. There's also the issue of closed-source APIs that are built against a particular version of VS, and certain vendors like to not have builds for more recent versions.

    I've just done an upgrade from 2010-Pro to 2017-Community. I strongly recommend two things:

    • Remove 2010 before you start installing 2017.
    • After you do the install, you will undoubtedly have intense failures trying to #include silly things like <windows.h> or <stdio.h> in native C and C++ code. The paths to these files are not migrated correctly in the "default include paths" thing, and of course that's now read-only in the VS UI. The fix is to remove the files that contain the defaults from your per-user LocalAppData folder, specifically [LocalAppData]\Microsoft\MSBuild\v4.0. Mercilessly(1) delete these files after the install.

    (1) You can be merciful if you want, but however you do it, get rid of them. ;)

    EDIT: 2017 also seems to prefer source control by Team Doodah Wotsit, and failing that by Git(Hub). If you prefer something else (e.g. Perforce(2), like me), you'll have to reinstall the VS integration component after all this, AND select it in the source control part of the preferences.

    (2) Git didn't exist when I started using Perforce at home, and I see no reason to undergo the sort of brain-strain associated with migrating it, especially not to git.(3)

    (3) I don't like git. I see why people use it, but any version-control system where a thing called "history rewriting" is considered a normal part of ordinary workflows is suspect in my view.



  • @dkf said in Shuffling off this mortal coil:

    He put an upper bound on the number of iterations used. That pesky apparently-infinite loop was gone; the amount of work required was now strictly bounded and everything was fine!

    So he just dropped them? Didn't even append them sequentially into the remaining empty spots?



  • @boomzilla said in Shuffling off this mortal coil:

    I've personally found that they are good learning experiences and often the lessons become useful at a later date, but rarely do they turn into production quality code.

    FTFY. Unfortunately those clever tricks end up as crucial parts of the application all too often, with a cancerous mass of code grown around them...



  • @groaner said in Shuffling off this mortal coil:

    Careful. If you were to post this on StackOverflow, someone would immediately complain, "Raw pointers inside STL containers?! My eyes!"

    Ugh. Yeah. :-/

    and there are cases where you don't necessarily want everything to be a shared_ptr/unique_ptr.

    Yeah, for instance in an acceleration structure. I have this one code base, where its main developer decided that shared_ptr is a perfectly good replacement for (essentially) all pointers everywhere, because you don't need to worry about leaking memory. Instead, all data is neatly kept alive by all the shared_ptrs, and the working set of objects keeps on growing as more data is streamed in, right up until you shut down the application (which it does via exit() or similar, so at least you don't have to wait for all the shared_ptrs to be chased down and destroyed -- that would be adding insult to injury).

    But that's not the problem. One of The Problems is that the main acceleration structure for managing scene data (LOD, culling, drawing etc.) based on an octree also uses shared_ptrs. Lots of them. Normally, you'd worry about chasing pointers a bit, maybe even pack the octree into a (pointerless) array, or something. Instead, here, you reference the children by shared_ptr, the various elements by shared_ptr, their data by shared_ptr, their data's resources by shared_ptr, etc. To make things even worse, the shared_ptrs are passed around by value. You don't often see atomic adds from shared_ptr turn up at the top of the profiler, but ... yeah. :-(
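
    For anyone who hasn't watched this happen in a profiler, the difference is roughly the one below (a heavily simplified, made-up Node type; the point is the refcount traffic, not the real octree):

    #include <array>
    #include <memory>

    struct Node {
        std::array<std::shared_ptr<Node>, 8> children;
        int payload = 0;
    };

    // Passing shared_ptr by value: every call copies the pointer, i.e. an atomic
    // increment on entry and an atomic decrement on exit, per node, per traversal.
    int sumByValue(std::shared_ptr<Node> node) {
        if (!node) return 0;
        int total = node->payload;
        for (auto& child : node->children) total += sumByValue(child);
        return total;
    }

    // Same traversal with no refcount traffic: ownership stays wherever it was,
    // and the traversal merely observes.
    int sumByRef(const Node* node) {
        if (!node) return 0;
        int total = node->payload;
        for (auto& child : node->children) total += sumByRef(child.get());
        return total;
    }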


  • Discourse touched me in a no-no place

    @maciejasjmj said in Shuffling off this mortal coil:

    Unfortunately those clever tricks end up as crucial parts of the application all too often, with a cancerous mass of code grown around them...

    That's why you get an intern to poke around and look for stuff that looks weird and broken. (After all, if it looks bad to them, it's pretty certain to be actually bad even if you personally think otherwise because you had fun writing it.)



  • @cvi said in Shuffling off this mortal coil:

    shared_ptr ... you don't need to worry about leaking memory.

    Where did he ever get that idea?

    Sure, if your data structures form a perfect "directed acyclic graph" (tree or not), you'll always be able to pull the whole thing apart by just releasing the "root" shared_ptrs, but as soon as there's even one loop,(1) you're SoL, especially since there's no garbage collector. (Plain C++ doesn't have enough introspection to allow an automatic garbage collector. C++/CLI does because it's .NET under the covers, but not plain C++.)

    And of course a structure that can be cleaned up if you release it correctly won't be released if you decide not to release it, which is reasonable. Except that if the conditions in the program mean that you will never find your way back to it (e.g. one of yesterday's "keep for a day" things that you cannot look for because it isn't for today), it's leaked just as surely as the objects in and referenced by that loop.

    (1) There are methods to mitigate the problem of loops without a garbage collector AND with all the pointers being at least partly "strong". (That is, if a smart_ptr<doodad> is NOT "NULL" it can always be dereferenced, unlike a weak_ptr.)
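
    The textbook version of the loop problem (nothing to do with the code base above, just the minimal sketch) and the usual weak_ptr escape hatch:

    #include <memory>

    struct B;
    struct A { std::shared_ptr<B> b; };
    struct B { std::shared_ptr<A> a; };   // strong back-pointer: leaks
    // struct B { std::weak_ptr<A> a; };  // weak back-pointer: breaks the cycle

    int main() {
        auto a = std::make_shared<A>();
        a->b = std::make_shared<B>();
        a->b->a = a;   // cycle: A owns B, B owns A
        // When 'a' goes out of scope, A's count only drops to 1 (B still holds it)
        // and B's count stays at 1 (A still holds it), so neither is ever destroyed.
    }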



  • @steve_the_cynic said in Shuffling off this mortal coil:

    Where did he ever get that idea?

    Don't ask me. 🤷‍♂️

    Pretty sure the tree mentioned above contains cycles (via parent pointers), though I'd have to check to be sure (then again, I'd rather not check). Wouldn't be surprised if there are some fairly hidden cycles via some roundabout ways elsewhere (i.e., widget has a pointer to thing, thing to frob, frob to blarf, blarf back to widget).



  • @cvi said in Shuffling off this mortal coil:

    @steve_the_cynic said in Shuffling off this mortal coil:

    Where did he ever get that idea?

    Don't ask me. 🤷‍♂️

    Pretty sure the tree mentioned above contains cycles (via parent pointers), though I'd have to check to be sure (then again, I'd rather not check). Wouldn't be surprised if there are some fairly hidden cycles via some roundabout ways elsewhere (i.e., widget has a pointer to thing, thing to frob, frob to blarf, blarf back to widget).

    If "blarf back to widget" really is back to widget, you are a gnat's whisker away from an effective solution. (Not necessarily the best solution, but an effective one.)

    Each refcountable object contains two counts, one for "forward" ("owner") references, and one for "backward" ("response") references.

    The owner count is allowed to go from 0 to 1 once and once only; once it has dropped back to zero, it is an error for it to go back to one.

    The response count has no such restriction. Instead, it is not allowed to go from 0 to 1 unless there is a non-zero owner count. This implies that the first reference must be an owner reference, which is entirely reasonable.

    When the owner count becomes zero, it causes a callback into the (formerly) owned object to tell it that it has no owners. It is supposed, then, to release all ownership references it has.

    When both counts become zero, the object is deleted. More specifically, it deletes itself.

    An actual cycle must, by implication, contain at least one response reference. ("blarf back to widget" sounds like a response reference because of that "back".)

    When I worked on a project that had this, we (I was one of the people guilty of designing and building it) also provided a variety of tools to aid in debugging it:

    • Each object managed in this way had to provide a debug function std::string getDescription() const.
    • The base-class objects containing the counts were linked together by an old-school doubly-linked list.
    • The base class provided a method to build a string containing the description strings of all the managed objects.
    • The lifetime-management services provided a means for this method to be called on "external" demand. Push the "doorbell" and a file would be written containing the output of the method.
    • The method would also be called just before main() returned, with output going to std::cout. This was intended to be empty. If it wasn't, you had a leak, probably because you'd left a reference intact when you lost your last owner. It tended to be the case that people leaked nothing or they leaked thousands of objects.

    When we provided the doorbell function and the dump-after-main, the rest of the team almost fell down and worshipped us straight away because it let them see what had leaked and also run tests to bring the system up, press the doorbell, run some test or other, and then press the doorbell again to see what got created, what got deleted, and what got left behind.
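
    A rough sketch of that two-count scheme (names invented; the real thing also kept every object on the debug linked list and wired up the doorbell and dump machinery described above):

    #include <cassert>
    #include <string>

    class TwoCountRefCounted {
    public:
        void addOwnerRef() {
            assert(!ownerEverZeroed_ && "owner count may not go back from 0 to 1");
            ++ownerCount_;
        }
        void releaseOwnerRef() {
            if (--ownerCount_ == 0) {
                ownerEverZeroed_ = true;
                onNoOwners();      // object must now drop its own owner references
                maybeDelete();
            }
        }
        void addResponseRef() {
            // a response ref may only appear on an object that already has an owner
            assert(responseCount_ > 0 || ownerCount_ > 0);
            ++responseCount_;
        }
        void releaseResponseRef() {
            if (--responseCount_ == 0) maybeDelete();
        }

    protected:
        virtual ~TwoCountRefCounted() = default;
        virtual void onNoOwners() = 0;                   // "you have no owners" callback
        virtual std::string getDescription() const = 0;  // debug hook for the leak dumps

    private:
        void maybeDelete() {
            if (ownerCount_ == 0 && responseCount_ == 0) delete this;  // deletes itself
        }
        int ownerCount_ = 0;
        int responseCount_ = 0;
        bool ownerEverZeroed_ = false;
    };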


  • Impossible Mission - B

    @steve_the_cynic That debug output sounds a lot like what you get for free (i.e. without having to implement a refcount or a getDescription() method on everything) with Delphi's FastMM FullDebugMode memory manager.


  • area_pol

    @dkf said in Shuffling off this mortal coil:

    what he thought were super-fast algorithms that were simultaneously as Pythonic as possible

    These 2 goals seem to be in conflict.



  • @steve_the_cynic said in Shuffling off this mortal coil:

    any version-control system where a thing called "history rewriting" is considered a normal part of ordinary workflows is suspect in my view.

    That isn't considered a normal part of ordinary workflows. That's considered a part of unusual-but-surprisingly-common workflows used by people who prefer rebasing over merging, which isn't normal or ordinary. At least, not from the perspective of the original intent of git's features. People will always find ways to abuse things to their liking...

    @groaner said in Shuffling off this mortal coil:

    Careful. If you were to post this on StackOverflow, someone would immediately complain, "Raw pointers inside STL containers?! My eyes!"

    Well it's true in the particular code you quoted. Using raw pointers there only increases the surface area for mistakes and confusion. Though I agree that raw pointers do have some limited use in special circumstances.



  • @steve_the_cynic said in Shuffling off this mortal coil:

    @groaner said in Shuffling off this mortal coil:

    @boomzilla said in Shuffling off this mortal coil:

    @groaner said in Shuffling off this mortal coil:

    Careful. If you were to post this on StackOverflow, someone would immediately complain, "Raw pointers inside STL containers?! My eyes!"

    Motherfucking modern compiler privilege is what that is! 😡

    Indeed. My project is still on VS2010, and while that lacks a lot of C++11 features, boost fills in most of the cracks. At some point, I'm going to upgrade to 2015 or beyond, but that means rebuilding all my dependencies (and probably upgrading them as well to more recent versions, where possible). I expect this to be a hassle, but I've done it once before for 2008->2010. There's also the issue of closed-source APIs that are built against a particular version of VS, and certain vendors like to not have builds for more recent versions.

    I've just done an upgrade from 2010-Pro to 2017-Community. I strongly recommend two things:

    • Remove 2010 before you start installing 2017.
    • After you do the install, you will undoubtedly have intense failures trying to #include silly things like <windows.h> or <stdio.h> in native C and C++ code. The paths to these files are not migrated correctly in the "default include paths" thing, and of course that's now read-only in the VS UI. The fix is to remove the files that contain the defaults from your per-user LocalAppData folder, specifically [LocalAppData]\Microsoft\MSBuild\v4.0. Mercilessly(1) delete these files after the install.

    (1) You can be merciful if you want, but however you do it, get rid of them. ;)

    Thanks, I'll keep that in mind. I foresee the bigger challenges will be around rebuilding and upgrading the dependencies (given that every day I have to pop open CMake is a bad day, and that one dependency might build fine but then you're treated to tens of "unresolved symbol DoStuffzxzxzzfgxdf" errors when you try to build the next dependent dependency).

    EDIT: 2017 also seems to prefer source control by Team Doodah Wotsit, and failing that by Git(Hub). If you prefer something else (e.g. Perforce(2), like me), you'll have to reinstall the VS integration component after all this, AND select it in the source control part of the preferences.

    TFS is the bee's knees, but I use TortoiseSVN for this project and can live without VS integration if need be.

    (3) I don't like git. I see why people use it, but any version-control system where a thing called "history rewriting" is considered a normal part of ordinary workflows is suspect in my view.

    Ain't no reason to have acyclic directed graphs when it's one developer committing at a time from a single or, rarely, two machines.


  • Discourse touched me in a no-no place

    @adynathos said in Shuffling off this mortal coil:

    These 2 goals seem to be in conflict.

    This is quite supremely true. If you want your code to go fast, don't write it in Python unless you are able to take advantage of one of the tricks to accelerate it (e.g., if you can do the actual computations in numpy, which just uses Python as the management and coordination layer and puts the computation mostly in C).


  • Discourse touched me in a no-no place

    @groaner said in Shuffling off this mortal coil:

    Ain't no reason to have acyclic directed graphs when it's one developer committing at a time from a single or, rarely, two machines.

    Depends on whether they are able to finish each bugfix or feature before moving onto the next one. I find it very useful to be able to keep changes on their own working branch before merging to a primary branch (either the main development branch or a long-term-support branch) since it means that I can checkpoint work in progress easily.



  • @masonwheeler said in Shuffling off this mortal coil:

    @steve_the_cynic That debug output sounds a lot like what you get for free (i.e. without having to implement a refcount or a getDescription() method on everything) with Delphi's FastMM FullDebugMode memory manager.

    That's as may be, but since this project was C++, a Delphi feature doesn't really help. And of course the getDescription() method could be tailored to show interesting (that is, introspective) information about the object in question. I'd also suggest that you've conflated two different things, although admittedly that's probably my fault for describing them together. The refcounts and the description method were implemented together, but neither is required for the other to work.

    And of course the "debug" output was also available in release builds, whence the raw-pointer (fast) linked list.


  • 🚽 Regular

    @dcoder said in Shuffling off this mortal coil:

    funky piece of code in Red Alert 2

    I thought it transpired that EA had lost the source when someone tried to buy it. Or did you decompile to get that?

    Edit: Dammit, you already answered that, I still can't believe they just lost the code to all of Westwood's games...



  • @cursorkeys said in Shuffling off this mortal coil:

    @dcoder said in Shuffling off this mortal coil:

    funky piece of code in Red Alert 2

    I thought it transpired that EA had lost the source when someone tried to buy it. Or did you decompile to get that?

    Edit: Dammit, you already answered that, I still can't believe they just lost the code to all of Westwood's games...

    I previously worked at ... let's just say "a large US-based provider of financial information services and news", and their main systems were a bit creaky and distinctly long in the tooth. So they had millions of lines of code still in Fortran(1), and there were a few modules that had been compiled for both Solaris and AIX, and then the source code had been lost.

    However, it wasn't until they tried to port the system to a new platform (HP-UX on Superdome, if memory serves) that they discovered this, and so they created a binary-to-binary translator for the compiled .o files. And they launched a project to find all the files like this and write shiny new source code to do the same thing.

    (1) In 2005 I had reason to look in one of these and I found the remains of a manually-maintained changelog comment in the top of the file, with modifications having dates in 1992 and 1989. That sort of long in the tooth and creaky.



  • @lb_ said in Shuffling off this mortal coil:

    Well it's true in the particular code you quoted. Using raw pointers there only increases the surface area for mistakes and confusion.

    So:

    std::vector<std::experimental::observer_ptr<TechnoType>> secretOptions; 
    std::vector<std::experimental::observer_ptr<Building>> buildingsWithSecrets;
    

    🚎

    Featuring observer_ptr, the world's dumbest smart pointer.


  • Notification Spam Recipient

    @cvi said in Shuffling off this mortal coil:

    world's dumbest

    Wait...

    (screenshot)

    C++98 came before C++11??? OMG I gotta read up on that...

    Edit:

    (screenshot)

    Oh, it just suffers the Y2K bug.

    Wait. WTF?!?!?! :wtf: How moronic do you need to be to base your version name off of the current 2-digit year????!!!


    Filed under: Looking at you, Microsoft...



  • @tsaukpaetra said in Shuffling off this mortal coil:

    Oh, it just suffers the Y2K bug.

    Nah, it's just brillant marketing. After all, C++03 happened way before everybody started copying the idea and going back to lower numbers (like the XBox One in 2013 replacing the older XBox 360). ;-)



  • @cvi said in Shuffling off this mortal coil:

    @tsaukpaetra said in Shuffling off this mortal coil:

    Oh, it just suffers the Y2K bug.

    Nah, it's just brillant marketing. After all, C++03 happened way before everybody started copying the idea and going back to lower numbers (like the XBox One in 2013 replacing the older XBox 360). ;-)

    Could be worse. They could have decided to call it "DXBox 720". If you don't believe me, ask Goggle, although you'll have to tell them that you really meant to search for it with the "D" on there.

    (Summary for the lazy: it's the console that features in the manga/anime "BTOOOM!". The title is a reference to a MMOFPS that features in the story, and gets transformed into a LARP version that takes place on an isolated island, although the hap-deficient(1) "players" of this version find that (a) they don't have armour and (b) the weapons (grenades) are real, and (c) so is LARP death.)

    (1) Some of them aren't entirely hapless, but none of them have much hap left.



  • @cvi I would consider that code to be incorrect. Though, observer_ptr is a much clearer name, whereas * could be owning or observing or an array, etc. - so you've still improved the code's readability despite making it more incorrect.



  • @lb_ said in Shuffling off this mortal coil:

    @cvi I would consider that code to be incorrect. Though, observer_ptr is a much clearer name, whereas * could be owning or observing or an array, etc. - so you've still improved the code's readability despite making it more incorrect.

    FWIW, I do kinda like observer_ptr, at least the idea (whether it survives in actual "production" use remains to be seen). As you say, a raw pointer has more ambiguity, whereas an observer_ptr (unless misused) is pretty clear on the intent.

    Regarding the code - I'm a bit curious why you think that that code is incorrect. I don't think there's sufficient context to say either way whether those are simply observing pointers or actually owning ones. I would say the latter is rather unlikely, though, and guess that the instances are owned by some more central game state.



  • @cvi said in Shuffling off this mortal coil:

    Regarding the code - I'm a bit curious why you think that that code is incorrect. I don't think there's sufficient context to say either way whether those are simply observing pointers or actually owning ones. I would say the latter is rather unlikely, though, and guess that the instances are owned by some more central game state.

    I hadn't considered that - perhaps you're right after all. I just have an aversion to that scenario because then I have to worry about object lifetimes.



  • @groaner said in Shuffling off this mortal coil:

    Careful. If you were to post this on StackOverflow, someone would immediately complain, "Raw pointers inside STL containers?! My eyes!"

    As long as they're not owning raw pointers, it's fine. Unfortunately you can't make vectors of references, so when making containers you have to use pointers. You could use gsl's "not_null" pointers to make sure no nullptrs are sneaking in, but it's a small detail.

    @cvi said in Shuffling off this mortal coil:

    I have this one code base, where its main developer decided that shared_ptr is a perfectly good replacement for (essentially) all pointers everywhere, because you don't need to worry about leaking memory.

    Ugh. Shared pointers are only really supposed to be used for sharing ownership across threads, where it's impossible to know who is the last user of a resource and so you have to share the ownership. A correctly placed unique_ptr should fit 99.9% of all use cases. Or even just having a local object. Only use unique_ptr if the object is too big for the stack, or needs to be nullable (at least until optional becomes standard) or some other similar reason. And even when you do need shared_ptr, a single copy should be enough. You then pass the object to functions that use it by reference (reference to the object, not reference to the pointer).

    @tsaukpaetra said in Shuffling off this mortal coil:

    Wait. WTF?!?!?! How moronic do you need to be to base your version name off of the current 2-digit year????!!!

    It's the informal name. The full name is the name of the specification: ISO/IEC 14882:2014. But that's a mouthful. I find it funny that the table says "to be determined" for the 2017 and 2020 versions, since the versioning scheme seems pretty clear.

    @cvi said in Shuffling off this mortal coil:

    As you say, a raw pointer has more ambiguity, whereas an observer_ptr (unless misused) is pretty clear on the intent.

    No ambiguity: make it a rule that you don't use raw pointers for owning resources, which is easily enforced by running a search for "new" in the code base and flagging any occurrences, and then every raw pointer in the codebase is a non-owning one.
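
    Concretely, the pattern being advocated looks something like this (made-up Widget and function names):

    #include <memory>
    #include <vector>

    struct Widget { int value = 0; };

    // Functions that merely use the object take a reference, not a smart pointer.
    void frobnicate(Widget& w) { ++w.value; }

    int main() {
        auto owner = std::make_unique<Widget>();     // the one and only owner
        std::vector<Widget*> watchers{owner.get()};  // non-owning views into it

        frobnicate(*owner);                          // pass the object, not the pointer
        for (Widget* w : watchers) frobnicate(*w);
    }   // owner goes out of scope and deletes the Widget; the watchers never owned it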


  • Notification Spam Recipient

    @kian said in Shuffling off this mortal coil:

    @tsaukpaetra said in Shuffling off this mortal coil:

    Wait. WTF?!?!?! How moronic do you need to be to base your version name off of the current 2-digit year????!!!

    It's the informal name. The full name is the name of the specification: ISO/IEC 14882:2014. But that's a mouthful. I find it funny that the table says "to be determined" for the 2017 and 2020 versions, since the versioning scheme seems pretty clear.

    I can't wait until the 2098 release of C++...



  • @tsaukpaetra said in Shuffling off this mortal coil:

    I can't wait until the 2098 release of C++...

    I'd feel bad for anyone who speaks of C++98 in the 2100s and is faced with the ambiguity of referring to the recent spec or the over-a-century-old one they still use at their workplace. I hope it's not a common enough problem that people will need to disambiguate.

