Versioning every class separately



  • Background: This is a medium to largish (about 100kloc) C++/Qt project that uses static libraries throughout.

    While I was on vacation, our technical lead (by seniority) decided that he needed to be able to query the versions of the libraries used in our software from inside the installed software. So far, so normal; there are a lot of ways to do that.

    His solution: every single class in our application gets two new public static methods: version() and versions().

    version() returns a version string generated by the build system at compile time. versions() recursively calls version() on every class that is referenced by the current class and returns a version string containing all of them. It is to be written manually for every single class.
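
    To give an idea of the shape of it, this is roughly what he wants every class to look like. To be clear, the class names, the BUILD_VERSION define and the qmake line in the comment are all made up for illustration; this is a sketch of the scheme, not our actual code:

        // BUILD_VERSION would be injected per target by the build system,
        // e.g. something like DEFINES += BUILD_VERSION=\\\"1.4.2.r1234\\\" in the .pro file (made-up value).
        #ifndef BUILD_VERSION
        #define BUILD_VERSION "unknown"
        #endif

        #include <QString>

        class DatabaseConnection {
        public:
            static QString version()  { return QString::fromLatin1(BUILD_VERSION); }
            static QString versions() { return QString("DatabaseConnection ") + version() + "\n"; }
        };

        class OrderModel {
        public:
            static QString version()  { return QString::fromLatin1(BUILD_VERSION); }

            // Hand-written and hand-maintained: it has to name every class this one references.
            // Forget to update this list after a change and the output silently goes stale.
            static QString versions()
            {
                return QString("OrderModel ") + version() + "\n"
                     + DatabaseConnection::versions();
            }
        };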

    I offered to add some code to the build system to generate version information for each library as it is built, plus some aggregation so executables can output the versions of the libraries they use. That was not good enough: he does not trust the people working here to use our build system properly and wants to guard against the case where some object file somewhere is not the same version as the rest of the library. Also, he had already started implementing his scheme and writing tasks for everyone else to do the same.
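
    For comparison, what I offered was roughly along these lines (library names and version strings are invented here; the real file would be generated by the build system for each library):

        // One small generated file per static library (e.g. libfoo_version.cpp), so the version
        // string lives in exactly one object file per library instead of in every class:
        #include <QString>

        namespace libfoo { QString version() { return QString::fromLatin1("1.4.2.r1234"); } }
        namespace libbar { QString version() { return QString::fromLatin1("1.4.2.r1234"); } }

        // Aggregation in each executable: report the versions of the libraries it links against,
        // e.g. behind a --version switch.
        QString libraryVersions()
        {
            return QString("libfoo ") + libfoo::version() + "\n"
                 + QString("libbar ") + libbar::version() + "\n";
        }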

    This is a place where people were already reluctant to break 2000+ line classes into multiple smaller classes because creating a new class seemed like too much work. With this added I expect to see even more ridiculously oversized classes. And I fully expect this scheme to become outdated very quickly as people forget to update the versions() method when they make changes.

    To be scrupulously fair: His distrust isn't completely unwarranted. We have people here who will compile some code they haven't committed yet on their system in the IDE without doing a full build and install single executables created that way (the whole project has 20) directly on a production or staging system from their dev machine. And some dev machines do not have the same environment as other dev machines. But this is not the way to solve that problem.



  • @witchdoctor said:

    But this is not the way to solve that problem.
    Yes.

    Shooting people solves the problem.



  • @El_Heffe said:

    @witchdoctor said:

    But this is not the way to solve that problem.
    Yes.

    Shooting people solves the problem.

    I'm going with the less murderous option and started looking for something else. Better for my sanity that way.



  • @witchdoctor said:

    I'm going with the less murderous option

    Pussy.


  • Discourse touched me in a no-no place

    @witchdoctor said:

    To be scrupulously fair: His distrust isn't completely unwarranted. We have people here who will compile some code they haven't committed yet on their system in the IDE without doing a full build and install single executables created that way (the whole project has 20) directly on a production or staging system from their dev machine. And some dev machines do not have the same environment as other dev machines. But this is not the way to solve that problem.
    The way to fix that problem is a CI server that emails round when someone breaks the build (from a clean checkout) and a policy of never installing anything anywhere from a developer's local build. Sounds over-simplistic, but works quite well.



  • @dkf said:

    @witchdoctor said:
    To be scrupulously fair: His distrust isn't completely unwarranted. We have people here who will compile some code they haven't committed yet on their system in the IDE without doing a full build and install single executables created that way (the whole project has 20) directly on a production or staging system from their dev machine. And some dev machines do not have the same environment as other dev machines. But this is not the way to solve that problem.
    The way to fix that problem is a CI server that emails round when someone breaks the build (from a clean checkout) and a policy of never installing anything anywhere from a developer's local build. Sounds over-simplistic, but works quite well.

    That's what I told him. We even already started setting up a CI server (already builds binaries for one of our three target environments, the others need their own servers). First CI build was on Monday, I noticed this version madness today. The problem we have is that policies are never enforced properly. Our team lead will start writing scripts directly on the staging system and not archive them anywhere for example. Our tech lead (different from the team lead) said to my face that he doesn't trust people to follow policy and that he doesn't trust management to enforce policy.

    Also, getting approval to buy just the one CI server (just the hardware) took 3 months. Plus 2 weeks to actually make a mostly sane build system first (still missing some things but a lot better than the shell script with the hardcoded build order).


  • Discourse touched me in a no-no place

    @witchdoctor said:

    That's what I told him. We even already started setting up a CI server (already builds binaries for one of our three target environments, the others need their own servers). First CI build was on Monday, I noticed this version madness today. The problem we have is that policies are never enforced properly. Our team lead will start writing scripts directly on the staging system and not archive them anywhere for example. Our tech lead (different from the team lead) said to my face that he doesn't trust people to follow policy and that he doesn't trust management to enforce policy.
    That's OK, you just enforce them anyway without telling anyone. Just wipe the test system and reinstall from a standard image and those clean builds from the CI server. Configure things so that this happens regularly (once a day, assuming that the current daily build is successful). Yes, the team lead's work will just evaporate, but that's the only way to make some people learn. When he complains, just say “but it's only a test system and anyway all significant changes are committed to the repository” with your absolutely best poker face on. As a plus side, this also stops “productionization creep” as nobody sane will want to persist data on the system when it is going to be rebuilt from scratch within the next 24 hours. (If you're feeling nice, take a backup before the first time you do this.)

    There's nothing quite like a computer for rigidly enforcing a policy. Claim it's for ISO 9000 compliance or something. Management like that sort of thing.

    But don't do this immediately before a critical demonstration. You should seek to teach a lesson, not cause heart failure.



  • @dkf said:

    @witchdoctor said:
    That's what I told him. We even already started setting up a CI server (already builds binaries for one of our three target environments, the others need their own servers). First CI build was on Monday, I noticed this version madness today. The problem we have is that policies are never enforced properly. Our team lead will start writing scripts directly on the staging system and not archive them anywhere for example. Our tech lead (different from the team lead) said to my face that he doesn't trust people to follow policy and that he doesn't trust management to enforce policy.
    That's OK, you just enforce them anyway without telling anyone. Just wipe the test system and reinstall from a standard image and those clean builds from the CI server. Configure things so that this happens regularly (once a day, assuming that the current daily build is successful). Yes, the team lead's work will just evaporate, but that's the only way to make some people learn. When he complains, just say “but it's only a test system and anyway all significant changes are committed to the repository” with your absolutely best poker face on. As a plus side, this also stops “productionization creep” as nobody sane will want to persist data on the system when it is going to be rebuilt from scratch within the next 24 hours. (If you're feeling nice, take a backup before the first time you do this.)

    There's nothing quite like a computer for rigidly enforcing a policy. Claim it's for ISO 9000 compliance or something. Management like that sort of thing.

    But don't do this immediately before a critical demonstration. You should seek to teach a lesson, not cause heart failure.

    I would love to do this, but the staging system contains unarchived scripts and components that are required for the software to work at all. And no documentation so I don't know what I need to add to the build to make this solution work.

    Also, our team lead has final say on what happens with the project. I am certain that he would tell us to turn that off and write up a reprimand for doing it.


  • Discourse touched me in a no-no place

    @witchdoctor said:

    I would love to do this, but the staging system contains unarchived scripts and components that are required for the software to work at all. And no documentation so I don't know what I need to add to the build to make this solution work.
    That's why I suggested you take a backup. Even better, do a dry run in a VM (yeah, won't scale worth shit, but at least you can make sure that you're not doing something dumb). The key is that while things are in a sorry state now, not doing anything about it won't make it get better magically. Take action.

    It will also greatly improve things when it comes to rolling into production/sending to customers, as that's when you require a documented installation process and deployment configuration. If you're never going to roll into production, you've got an expensive but useless toy. Nothing more.



  • @witchdoctor said:

    We have people here who will compile some code they haven't committed yet on their system in the IDE without doing a full build and install single executables created that way (the whole project has 20) directly on a production or staging system from their dev machine.

    Fire these people right now. Why? Unless it is an emergency, the changes should go through a code review, a QA cycle or two, and a PLANNED deployment to production. Anything else risks taking down production, and the costs of that happening are probably much higher than the cost of finding a new software engineer who knows what they're doing.

    Someone told me that about 25% of code you write is run for the very first time by an end user. If these cowboys are allowed to continue, that number might go to 50% on your production system. Scary.



  • @dkf said:

    @witchdoctor said:
    I would love to do this, but the staging system contains unarchived scripts and components that are required for the software to work at all. And no documentation so I don't know what I need to add to the build to make this solution work.
    That's why I suggested you take a backup. Even better, do a dry run in a VM (yeah, won't scale worth shit, but at least you can make sure that you're not doing something dumb). The key is that while things are in a sorry state now, not doing anything about it won't make it get better magically. Take action.

    It will also greatly improve things when it comes to rolling into production/sending to customers, as that's when you require a documented installation process and deployment configuration. If you're never going to roll into production, you've got an expensive but useless toy. Nothing more.

    I don't know that I agree with your strategy here. Doing a "See, I told you so" kind of demonstration is just going to piss them off. Here's what I do when I come into a new place and need to stop this shit (and I can't fire the developers, which is always my first move for idiots who won't use version control or who develop in staging, and I can't simply say "This is the build system, use it" which is my second move, and I can't quit which is my third move):

    Build the deploy system, then start migrating projects into it. Come up with a simple workflow your coworkers can use. As part of the deploy, have it make a backup of the files, then overwrite them. At first, this will break shit. People will come bitching to you. Point them to where the backups are stored. Show them how to get it into the build system so it will be deployed.

    Now this is where it's important that you always backup the existing files to their own, separate directory each time you deploy. Because people will continue to make changes in production and staging, so you'll always want a full history of what those changes were. However, slowly, they will adapt to using the system.

    The important part is to do it in stages. Don't do it for every project at once. Start with the project it will be least-disruptive to; if your deploy system loses a $100k contract, you're going to be in hot water, no matter how correct your ideas are. You want to get people used to the idea of your workflow. Let them give feedback, and make an effort to incorporate their suggestions so long as they aren't idiotic. You want them to feel like they are part-owners of the process. This even works with very stubborn coworkers; they may refuse to use your process, but they will loudly trumpet the benefits of your process plus some minor step they added.

    When selling it to new people, mention these user-contributed improvements. First, it exploits their innate desire to follow the herd, so hearing that others have contributed and are using it will drive their interest. Second, it links the people who are using it with the process--both in the minds of people outside the process, but also to the people mentioned themselves. The best way to get someone to defend your process is to make them believe it's their process, and the surest way to do that is to get other people to think that. So Bob might start out hating the process, but if Mark starts telling people that Bob contributed to the process and uses it sometimes, then Bob will literally engage in a knife fight to defend the honor of the process.

    Ultimately, the goal is to manipulate people's egos. You want people arguing over the process itself, and not whether it should exist in the first place. It's like asking your religious parents "Should we play The Bee Gees or The Who at my gay wedding?" You want to direct them to the argument you want them to have, not the one they want to have.



  • @DrPepper said:

    Fire these people right now.

    I must have missed the part where the OP said he was a manager who had the ability to fire engineers.



  •  If you are in an area where people commute to work by car and leave them in a company lot...then a valve stem puller is a wonderful incentive device....Just leave the stem on their windshield wiper.  If it is a minor transgression, only do one...they can use the spare...otherwise 2-4 as appropriate...



  •  @TheCPUWizard said:

     If you are in an area where people commute to work by car and leave them in a company lot...then a valve stem puller is a wonderful incentive device....Just leave the stem on their windshield wiper.  If it is a minor transgression, only do one...they can use the spare...otherwise 2-4 as appropriate...

    The trick is to take the valve cores you removed and tape them on the windshield directly in front of the driver. Use a big 'X' of duct tape.

    Oh, and wear gloves. 



  • @witchdoctor said:

    This is a place where people were already reluctant to break 2000+ line classes into multiple smaller classes because creating a new class seemed like too much work. With this added I expect to see even more ridiculously oversized classes.

    Another reason why C# is so much better: partial classes. 2000 lines is too much? Then you split that in 2 x 1000 lines and put all the boring stuff in the same half so you don't see it. It's like having two kids but one of them you don't like that much so you put his room in the basement "to foster his autonomy".



  • @witchdoctor said:

    @El_Heffe said:
    Yes.

    Shooting people solves the problem.

    I'm going with the less murderous option and started looking for something else. Better for my sanity that way.
    You could always outsource it. There are probably some areas in your town/city where this service is offered for a reasonable fee. But don't offshore it, because then the, um, let's call it "project", would get delayed considerably.

     



  • @Ronald said:

    It's like having two kids but one of them you don't like that much so you put his room in the basement "to foster his autonomy".

    My sister had a room upstairs, but my room was in the basem--HEY!


  • Discourse touched me in a no-no place

    @witchdoctor said:

    he does not trust the people working here to use our build system properly
    There are two solutions to that particular problem. One of them is to change the programmers that insist on fucking up. It seems your lead chose the more expensive path by trying to code for it instead.



  • @morbiuswilters said:

    <Snip calm description of reasonable approach which acknowledges and (gasp!) accommodates human foibles...>

    Hey, what have you done with Morbs?!?


  • Considered Harmful

    @Ronald said:

    Another reason why C# is so much better: partial classes. 2000 lines is too much? Then you split that in 2 x 1000 lines and put all the boring stuff in the same half so you don't see it. It's like having two kids but one of them you don't like that much so you put his room in the basement "to foster his autonomy".

    This is one of my favorite features as well. It also allows code generation tools (used in ASP.NET, LINQ-to-SQL, Entity Framework, et al) to keep their code separate, so you can add class members without the tool blowing away your changes on the next codegen.



  • @morbiuswilters said:

    @dkf said:
    @witchdoctor said:
    I would love to do this, but the staging system contains unarchived scripts and components that are required for the software to work at all. And no documentation so I don't know what I need to add to the build to make this solution work.
    That's why I suggested you take a backup. Even better, do a dry run in a VM (yeah, won't scale worth shit, but at least you can make sure that you're not doing something dumb). The key is that while things are in a sorry state now, not doing anything about it won't make it get better magically. Take action.

    It will also greatly improve things when it comes to rolling into production/sending to customers, as that's when you require a documented installation process and deployment configuration. If you're never going to roll into production, you've got an expensive but useless toy. Nothing more.

    I don't know that I agree with your strategy here. Doing a "See, I told you so" kind of demonstration is just going to piss them off. Here's what I do when I come into a new place and need to stop this shit (and I can't fire the developers, which is always my first move for idiots who won't use version control or who develop in staging, and I can't simply say "This is the build system, use it" which is my second move, and I can't quit which is my third move):

    Build the deploy system, then start migrating projects into it. Come up with a simple workflow your coworkers can use. As part of the deploy, have it make a backup of the files, then overwrite them. At first, this will break shit. People will come bitching to you. Point them to where the backups are stored. Show them how to get it into the build system so it will be deployed.

    Now this is where it's important that you always backup the existing files to their own, separate directory each time you deploy. Because people will continue to make changes in production and staging, so you'll always want a full history of what those changes were. However, slowly, they will adapt to using the system.

    The important part is to do it in stages. Don't do it for every project at once. Start with the project it will be least-disruptive to; if your deploy system loses a $100k contract, you're going to be in hot water, no matter how correct your ideas are. You want to get people used to the idea of your workflow. Let them give feedback, and make an effort to incorporate their suggestions so long as they aren't idiotic. You want them to feel like they are part-owners of the process. This even works with very stubborn coworkers; they may refuse to use your process, but they will loudly trumpet the benefits of your process plus some minor step they added.

    When selling it to new people, mention these user-contributed improvements. First, it exploits their innate desire to follow the herd, so hearing that others have contributed and are using it will drive their interest. Second, it links the people who are using it with the process--both in the minds of people outside the process, but also to the people mentioned themselves. The best way to get someone to defend your process is to make them believe it's their process, and the surest way to do that is to get other people to think that. So Bob might start out hating the process, but if Mark starts telling people that Bob contributed to the process and uses it sometimes, then Bob will literally engage in a knife fight to defend the honor of the process.

    Ultimately, the goal is to manipulate people's egos. You want people arguing over the process itself, and not whether it should exist in the first place. It's like asking your religious parents "Should we play The Bee Gees or The Who at my gay wedding?" You want to direct them to the argument you want them to have, not the one they want to have.

    Sadly, this wouldn't work here. The person coding/scripting/configuring directly in test or production would just write workarounds which would hunt through the backups for his important bits and copy them to where they need to go.



  • @witchdoctor said:

    And some dev machines do not have the same environment as other dev machines
     

    Shouldn't all dev environments match production?

    @witchdoctor said:

    The problem we have is that policies are never enforced properly.

    Percussive Management. Works wonders.

    @witchdoctor said:

    Our team lead will start writing scripts directly on the staging system and not archive them anywhere for example.

    So when they're lost because they can't be found in the archive, he'll need to be fetched out of bed at 3am to rewrite them.

    @witchdoctor said:

    ...he doesn't trust people to follow policy and that he doesn't trust management to enforce policy.

    If policy isn't enforced, there's no incentive to follow it. Yeah, there are other reasons for following outside of management - but laws are useless without police and jails.

     



  • @witchdoctor said:

    the staging system contains unarchived scripts and components that are required for the software to work at all.
     

    Then find who's in charge of the staging system and club them with a cluebat whilst screaming "CONFIGURATION MANAGEMENT, BITCH!".

    @witchdoctor said:

    And no documentation so I don't know what I need to add to the build to make this solution work.

    Two options:

    1. don't add anything until the documentation is forthcoming for fear of breaking something
    2. add something and document it, but don't worry about other things breaking in the meantime. When something breaks, shrugging with "no documentation warned me not to do it" is a cop-out. Then pass over your documentation so you've got something that needs to be added once the next prod->staging refresh has occurred.



  • @zelmak said:

    Sadly, this wouldn't work here. The person coding/scripting/configuring directly in test or production would just write workarounds which would hunt through the backups for his important bits and copy them to where they need to go.

    So remove their write permissions so they can't copy them back. Get management buy-in for doing this. Run a short series of 30 minute meetings on "Why configuration management and CI/CD are great and you'll love it". Start hitting people with the clue bat.



    If you continue to do nothing then it's a self-fulfilling prophecy; it won't get better, because no one is trying to make it better.


  • Trolleybus Mechanic

    @Cassidy said:

    @witchdoctor said:

    And some dev machines do not have the same environment as other dev machines
     

    Shouldn't all dev environments match production?

     

    If you do your dev directly on production, they'll always match.

     



  • @Lorne Kates said:

    @Cassidy said:

    @witchdoctor said:

    And some dev machines do not have the same environment as other dev machines
     

    Shouldn't all dev environments match production?

     

    If you do your dev directly on production, they'll always match.

     

    What if your production environment doesn't match your production environment?


  • Considered Harmful

    @Ben L. said:

    @Lorne Kates said:

    @Cassidy said:

    @witchdoctor said:

    And some dev machines do not have the same environment as other dev machines
     

    Shouldn't all dev environments match production?

     

    If you do your dev directly on production, they'll always match.

     

    What if your production environment doesn't match your production environment?

    Then you must be using too many threads... I mean, goroutines.



  • @Vanders said:

    If you continue to do nothing then it's a self-fulfilling prophecy; it won't get better, because no one is trying to make it better.

    That's not a self-fulfilling prophecy. You are like this guy on Storage Wars who keeps saying "this is the wow factor" in the wrong context.



  • @zelmak said:

    Sadly, this wouldn't work here. The person coding/scripting/configuring directly in test or production would just write workarounds which would hunt through the backups for his important bits and copy them to where they need to go.

    That seems like more work than just using the system. You could always hide the backups, or encrypt them or something so he can't get to them.



  • @Cassidy said:

    Shouldn't all dev environments match production?

    Staging environments should, but dev environments don't usually need to. For example, your production environment might use dozens of servers but dev will just run all of those services on one box. It will also usually be configured to have more verbose logging and possibly newer versions of particular software.



  • @morbiuswilters said:

    @Cassidy said:
    Shouldn't all dev environments match production?

    Staging environments should, but dev environments don't usually need to. For example, your production environment might use dozens of servers but dev will just run all of those services on one box. It will also usually be configured to have more verbose logging and possibly newer versions of particular software.

    But what about companies that sell development environments?





  • @Vanders said:

    @zelmak said:
    Sadly, this wouldn't work here. The person coding/scripting/configuring directly in test or production would just write workarounds which would hunt through the backups for his important bits and copy them to where they need to go.

    So remove their write permissions so they can't copy them back. Get management buy-in for doing this. Run a short series of 30 minute meetings on "Why configuration management and CI/CD are great and you'll love it". Start hitting people with the clue bat.



    If you continue to do nothing then it's a self-fulfilling prophecy; it won't get better, because no one is trying to make it better.

    (1) I'm a contractor. I can suggest, cajole, imply, nudge, etc., but I can't make or enforce policy.

    (2) The perpetrator(s) ARE "management".



  • @Ronald said:

    @morbiuswilters said:
    @Cassidy said:
    Shouldn't all dev environments match production?

    Staging environments should, but dev environments don't usually need to. For example, your production environment might use dozens of servers but dev will just run all of those services on one box. It will also usually be configured to have more verbose logging and possibly newer versions of particular software.

    But what about companies that sell development environments?

    /bitchslap



  • There is an additional layer of WTF at work here.

    Our team lead is effectively management for this project. That is, this is a product that belongs to this team and he has full control over the project and authority to reprimand and fire. And he is the one who is not doing configuration management.



    The last improvement on this scale was fixing our version control: going from 150 git repositories to 14 and replacing the build script with its hardcoded build order with rake. It took three 1-hour meetings and one week of work for me to set up.

    It took me almost a year to get approval for that, and only after a horribly botched deployment (we had to travel to the customer three separate times with 3 engineers and bugfixes until the specified functionality actually worked). To add to that, improvements of this type are discouraged as gold plating and instantly shot down. Then I get to argue from that position, with the attitude being "get back to doing work", as if that wasn't also work, and important work at that.

    And that improvement did not make it any easier to push for further ones. It's as if our technical lead has given up on improving things and instead just fires off knee-jerk objections to every suggestion.

    Sorry for ranting like this but this is fucking annoying.



  • @witchdoctor said:

    There is an additional layer of WTF at work here.

    Our team lead is effectively management for this project. That is, this is a product that belongs to this team and he has full control over the project and authority to reprimand and fire. And he is the one who is not doing configuration management.

    Last year I was working for a client who has a bunch of morons managing the infrastructure. That team has the final say for hardware and operating systems configuration and every "architecture" decision they make is always motivated either by convenience or by a sudden pet project of one of the sysadmins. All virtual machines are the same: Windows 2008 server with 4GB RAM and a 75GB C: drive. All virtual machines go on the same ESX host until enough people complain about performance, then a new host is added to the cluster and the new virtual machines are added on that one (this is called an "eventually load balanced" system as the new host will soon be as slow as the other ones). You need a file server? One vanilla 4GB/75GB virtual machine for you. You have a huge ERP java app that requires Weblogic, Oracle 11g and a TIBCO queue? One vanilla 4GB/75GB virtual machine for you. Etc.



    That's what you get when you combine too much power and too little accountability.


  • Discourse touched me in a no-no place

    @Lorne Kates said:

    If you do your dev directly on production, they'll always match.
    That's called “no real customers”.



  • @dkf said:

    @Lorne Kates said:
    If you do your dev directly on production, they'll always match.
    That's called “no real customers”.

    I used to work for a company that was providing billing and logging software for telcos. On the first day on the job I had to update a report to fix a typo on a live server, and as I was nervous doing it, my boss told me: "you know here we work directly on production systems all the time so it's normal to make a mistake and cause a lot of damage once in a while, just do your best and don't worry too much". I never knew how much money the company was making but my salary was twice what I made at my previous job and they were giving a Christmas bonus that was the equivalent of 2 months of salary every year, so they must have known what they were doing...

