Well-reasoned anti-Git article



  • @powerlord said:

    Each of those 3 is a link that, when clicked, updates the repository url above it with the appropriate url type for the project.

    Fucking protocols. How do they work?


  • FoxDev

    @tar said:

    Fucking protocols. How do they work?

    Normally, alcohol is involved.


  • ♿ (Parody)

    @RaceProUK said:

    Normally, alcohol is involved.

    RAPE CULTURE



  • @Jaime said:

    The workflow you are describing assumes that all useful work happens on the trunk and branches are just tools to isolate changes until they are stable. Some companies will use a branch to collect many changes that will go through QA, then get merged and re-QA'ed before being released. Others create a branch when they are ready to release and QA the branch, backporting any fixes to main as they are found, then releasing code built from the branch.

    I have seen workflows (in AccuRev, btw) which used a good half dozen to a dozen layers of hierarchy between AccuRev's equivalent of the trunk and the branches people worked in -- but it worked, and it had a fair bit of automation surrounding it, so it wasn't all that onerous in practice.



  • None of those elements involve having the full history while offline… like you really need the individual changes that were made a decade ago.


  • FoxDev

    @TheCPUWizard said:

    like you really need the individual changes that were made a decade ago.

    Never say never…


  • FoxDev

    Don't.

    Tempt.

    Murphy.



  • @RaceProUK said:

    Never say never…

    He didn't say you'll never need them, he said you'll never need them while offline. In 2015, when you can just spend the extra ten minutes to connect to the office and put up with the reduced speed, it's not a big deal.


  • Discourse touched me in a no-no place

    @Jaime said:

    He didn't say you'll never need them, he said you'll never need them while offline. In 2015, when you can just spend the extra ten minutes to connect to the office and put up with the reduced speed, it's not a big deal.

    You don't want to know how much software I've written in airports (where networking is almost always miserable) and flying across the Atlantic (where there is nothing at all unless you've got an acoustic coupler and God's own expense account).



  • @dkf said:

    You don't want to know how much software I've written in airports (where networking is almost always miserable) and flying across the Atlantic (where there is nothing at all unless you've got an acoustic coupler and God's own expense account).

    Congratulations for being one of the seven people for whom a DVCS is ideally suited. Please stop suggesting it's the right thing for the rest of us. The point is that centralized systems are more appropriate for most non-open-source developers.


  • Discourse touched me in a no-no place

    @Jaime said:

    seven people

    Hyperbole much?



  • @dkf said:

    Hyperbole much?

    Never... wait... always



  • @Jaime said:

    In 2015, when you can just spend the extra ten minutes to connect to the office and put up with the reduced speed, it's not a big deal.

    All having the history on your PC costs you is hard drive space. Hard drive space is cheaper than time. So it makes more sense to sacrifice hard drive space and save the ten minutes.



  • @Kian said:

    All having the history on your PC costs you is hard drive space.

    That's not all it costs. It also costs the following:

    • I can't fetch a partial repository. If I want a small part of a big repo, then that is a huge time difference.
    • I can't allow a user to work on code without exposing all of the code that has ever been in the repository to him.
    • The user has the impression that he committed something when all he did was make a watermark on his private repo. Corporate environments want commits to go to their repo, not yours. Even all the ugly intermediate work that doesn't quite work yet.
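
On the first bullet: later Git releases do offer a partial workaround in the form of shallow clones, though they truncate history rather than letting you pick a subtree, so this is only a sketch of one mitigation, not a full answer to the partial-checkout complaint. The repo below is a throwaway stand-in, and `git init -b` assumes a reasonably recent Git (2.28+):

```shell
# Build a throwaway "big" repo with two commits, then clone it shallowly.
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q -b main big
git -C big -c user.email=a@b -c user.name=a commit -q --allow-empty -m one
git -C big -c user.email=a@b -c user.name=a commit -q --allow-empty -m two

# --depth 1 fetches only the most recent commit's history
# (the file:// URL is needed for --depth to apply to a local clone)
git clone -q --depth 1 "file://$tmp/big" shallow
echo "commits fetched: $(git -C shallow rev-list --count HEAD)"
```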

  • FoxDev

    @Jaime said:

    I can't fetch a partial repository. If I want a small part of a big repo, then that is a huge time difference.

    If you're doing that regularly, you're Doing It Wrong™.
    @Jaime said:
    The user has the impression that he committed something when all he did was make a watermark on his private repo.

    Then said user needs training.


  • ♿ (Parody)

    @RaceProUK said:

    Then said user needs training.

    It's not really different than the user thinking he's properly committed something by clicking the save button on his editor.


  • Discourse touched me in a no-no place

    @Jaime said:

    I can't fetch a partial repository. If I want a small part of a big repo, then that is a huge time difference.

    You do that once. And then you've got it.

    (Factoid: the biggest DVCS repository I've got is about 220MB. That's freaking huge, but it is a history going back the best part of 20 years of active development; the product is older, but the older history got lost during a transfer from SCCS to CVS, and that can't be helped.)



  • @RaceProUK said:

    If you're doing that regularly, you're Doing It Wrong™.

    In all my years of using source control (about twenty), I never fetched an entire repo until I found that it was the only choice with git.

    Why would I need all of the products that I'm not currently working on? Before you say "each product should have its own repo", note that security is defined at the repository level. I don't want to keep defining that stuff over and over again. Of course git doesn't have this problem because git doesn't have security. Fixing the security problem (which is a prerequisite of me using it) will introduce the configuration problem.


  • FoxDev

    @Jaime said:

    Before you say "each product should have its own repo", note that security is defined at the repository level. I don't want to keep defining that stuff over and over again.

    So… you don't want to do your job properly? :wtf:



  • @Jaime said:

    @Jaime said:
    The user has the impression that he committed something when all he did was make a watermark on his private repo.
    Then said user needs training.

    After you have successfully trained the user, you now have a two-step commit - commit then push. That's another piece of kindling on the roaring fire of git's poor usability.



  • @RaceProUK said:

    So… you don't want to do your job properly?

    I don't want to use a product that requires me to do the same thing over and over again. That's yet another piece of kindling on the roaring fire of git's poor usability.



  • @dkf said:

    You do that once. And then you've got it.

    You've never worked in the same corporate America I have. Every project has around ten new contractors; they only have project-length contracts, then they're gone.


  • FoxDev

    @Jaime said:

    you now have a two-step commit - commit then push. That's another piece of kindling on the roaring fire of git's poor usability.

    Name a DVCS that doesn't have those two steps.
    @Jaime said:
    I don't want to use a product that requires me to do the same thing over and over again.

    Like committing code to the repo?
    @Jaime said:
    That's yet another piece of kindling on the roaring fire of git's poor usability.

    I hear an echo…



  • @RaceProUK said:

    Like committing code to the repo?

    Are you going to have a sensible conversation, or just be contrary?



  • @Jaime said:

    Corporate environments want commits to go to their repo, not yours. Even all the ugly intermediate work that doesn't quite work yet.

    It still goes to the central repo eventually in a push/pull model -- but there are cases when, even online, you want to commit without pushing -- this allows you to save commits and keep working without making your half-baked work immediately visible to everyone else. Once you have it all working, all the commits get merged with any other work pulled down and pushed back up in one big lump.
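
A minimal sketch of that commit-then-push split, using a throwaway bare repo as the stand-in for the central one (all names and paths here are made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q --bare -b main central.git           # stand-in for the shared repo
git clone -q central.git work
cd work
git config user.email dev@example.com
git config user.name Dev

echo 'draft' > notes.txt
git add notes.txt
git commit -q -m "WIP: half-baked, stays local"  # recorded, but still private

# nothing has reached the central repo yet
echo "upstream commits: $(git -C ../central.git rev-list --count --all)"

git push -q origin HEAD:main                     # now everyone can see it
echo "upstream commits: $(git -C ../central.git rev-list --count --all)"
```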


  • Discourse touched me in a no-no place

    @Jaime said:

    Why would I need all of the products that I'm not currently working on? Before you say "each product should have its own repo", note that security is defined at the repository level.

    That's massively wrong when you're doing stuff with any of the modern DVCS solutions. They're not designed to work that way at all, and many of their assumptions will cause you grief.
    @Jaime said:
    I don't want to keep defining that stuff over and over again. Of course git doesn't have this problem because git doesn't have security. Fixing the security problem (which is a prerequisite of me using it) will introduce the configuration problem.

    Are we talking about authentication, authorization, or something else? Using a common authentication solution makes sense, but common authorization is more often a sign of someone busy pounding in some threaded nails with an old shoe.

    Git doesn't handle it because it delegates the entire problem to a lower level of the system (e.g., to ssh…)


  • FoxDev

    @Jaime said:

    Are you going to have a sensible conversation, or just be contrary?

    Answer my questions, and you'll find out.



  • @Jaime said:

    Why would I need all of the products that I'm not currently working on? Before you say "each product should have its own repo", note that security is defined at the repository level. I don't want to keep defining that stuff over and over again. Of course git doesn't have this problem because git doesn't have security. Fixing the security problem (which is a prerequisite of me using it) will introduce the configuration problem.

    @dkf said:

    Git doesn't handle it because it delegates the entire problem to a lower level of the system (e.g., to ssh…)

    Is there something wrong with delegating authentication and authorization to your system's facilities for that task instead of reinventing the wheel?



  • @tarunik said:

    Is there something wrong with delegating authentication and authorization to your system's facilities for that task instead of reinventing the wheel?

    You delegate authentication, not authorization. The infrastructure has no idea what you should have access to, only who you are.



  • @RaceProUK said:

    Answer my questions, and you'll find out.

    That wasn't really a question.


  • Discourse touched me in a no-no place

    @Jaime said:

    That wasn't really a question.

    That wasn't really an answer.


  • FoxDev

    @Jaime said:

    That wasn't really a question.

    It ended with a ❓



  • @RaceProUK said:

    It ended with a

    So does this…

    Are you an idiot?

    However, I assure you that every time it's been uttered, it wasn't a question.


  • FoxDev

    Y'know, you could just admit you don't know what you're talking about…


  • Discourse touched me in a no-no place

    He appears to be working somewhere where they sort of half trust the developers. They don't trust them to remember to push the changes upstream, but they do trust them when they say that the commit has been done. Which seems to be just an odd combination; nobody cares what gets committed on the developer's own machine, only what gets pushed upstream, and nothing counts until it has been pushed.


  • Grade A Premium Asshole

    @Jaime said:

    You delegate authentication, not authorization. The infrastructure has no idea what you should have access to, only who you are.

    :wtf:?

    Edit: Seriously, am I missing something? Only allow people to authenticate for a resource if you want them to be authorized for it.

    TDEMSYR



  • @dkf said:

    He appears to be working somewhere where they sort of half trust the developers. They don't trust them to remember to push the changes upstream, but they do trust them when they say that the commit has been done. Which seems to be just an odd combination; nobody cares what gets committed on the developer's own machine, only what gets pushed upstream, and nothing counts until it has been pushed.

    Why would I care what's committed on the developer's machine? That code is in constant peril of being lost in a hard drive failure. As soon as it's on the well-maintained server, it's protected. It's not only for my protection - if one of my staff developers has an equipment problem, they just log on to a virtual desktop and check out their code and pick up where they left off.



  • @Polygeekery said:

    Edit: Seriously, am I missing something? Only allow people to authenticate for a resource if you want them to be authorized for it.

    You are missing something - authorization isn't black and white. Some people have full access to the entire repo and some have access to parts of it. A typical access case is to have write access to the project you are working on and read access to the collection of shared libraries. We don't want random contractors changing code that 35 applications rely on. They can create a branch and work on that, but they can't directly change the trunk or merge into it.
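
Git itself won't enforce that split, but a server-side hook on the central repo can. A minimal sketch, assuming the pusher's identity arrives in a `GIT_USER` environment variable (e.g. set by a forced SSH command) -- the variable name and user names are invented for illustration:

```shell
# Write a hypothetical pre-receive hook to a temp file, then exercise it.
hook=$(mktemp)
cat > "$hook" <<'EOF'
#!/bin/sh
# Only users listed in TRUNK_WRITERS may update main; anyone may push branches.
TRUNK_WRITERS="alice bob"
while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/main" ]; then
        case " $TRUNK_WRITERS " in
            *" $GIT_USER "*) ;;   # allowed
            *) echo "denied: $GIT_USER cannot update main" >&2; exit 1 ;;
        esac
    fi
done
exit 0
EOF
chmod +x "$hook"

# a contractor can push a branch...
printf 'o n refs/heads/feature-x\n' | GIT_USER=mallory "$hook" && echo "branch push: allowed"
# ...but not the trunk
printf 'o n refs/heads/main\n' | GIT_USER=mallory "$hook" || echo "main push: denied"
```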



  • @Jaime said:

    So does this..

    Are you an idiot?

    However, I assure you that every time it's been uttered, it wasn't a question.

    Albeit a rhetorical one, it is a question nonetheless.


  • Grade A Premium Asshole

    Is your Git repo directly linked to production?


  • Discourse touched me in a no-no place

    @Jaime said:

    You are missing something - authorization isn't black and white. Some people have full access to the entire repo and some have access to parts of it. A typical access case is to have write access to the project you are working on and read access to the collection of shared libraries. We don't want random contractors changing code that 35 applications rely on. They can create a branch and work on that, but they can't directly change the trunk or merge into it.

    The more you tell us, the more I think you guys have dug yourselves into a massive hole. The rest of us really don't want to jump down there after you!

    The sane thing to do is to share the authN problem and use simple authZ on each repo (i.e., everyone is either no access, read access or read-write access) where the repos correspond to the smallest domains you need such granularity for. It also means that it should give you sane units to version, and sane units to tag, and sane units to understand in one go. Lots of people do that, and it works really well.



  • @Jaime said:

    you now have a two-step commit - commit then push.

    There are options in the command, and tools like SourceTree simplify it into a checkbox, that let you do both in a single step.
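
For instance, a git alias can collapse the two steps into one command. The alias name `cap` and all paths below are invented; this uses the same throwaway-repo setup as elsewhere in the thread:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare -b main central.git
git clone -q central.git work
cd work
git config user.email dev@example.com
git config user.name Dev

# repo-local alias: commit tracked changes and push in one command
git config alias.cap '!f() { git commit -am "$1" && git push origin HEAD:main; }; f'

echo 'hello' > file.txt
git add file.txt
git cap "one-step commit and push"
echo "upstream commits: $(git -C ../central.git rev-list --count --all)"
```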



  • @Jaime said:

    A typical access case is to have write access to the project you are working on and read access to the collection of shared libraries.

    Submodules

    You know, you could just use the tools designed to do the job.
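
A sketch of that layout: the shared library lives in its own repo and is pinned into the application as a submodule. All repo names are made up, and `protocol.file.allow` is only needed because this demo uses local `file://` URLs (recent Git blocks those for submodules by default):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# stand-in for the shared-library repo contractors get read access to
git init -q -b main shared-lib
git -C shared-lib -c user.email=a@b -c user.name=a commit -q --allow-empty -m "lib v1"

# the application repo pins the library at a known commit
git init -q -b main app
cd app
git config user.email dev@example.com
git config user.name Dev
git commit -q --allow-empty -m "app skeleton"
git -c protocol.file.allow=always submodule add "file://$tmp/shared-lib" libs/shared
git commit -qm "pin shared-lib"

git submodule status   # shows the exact commit libs/shared is pinned to
```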



  • @RaceProUK said:

    If you're doing that regularly, you're Doing It Wrong™.

    Explain.

    If our company runs 2 different APIs (say, one internal and one external) which all use the same business logic (and thus are all in the same repository), why should I be forced to check out the project I'm never going to need to touch? Just to waste my time waiting for already-slow Git to manage yet more files?

    I'll also note that this is one of those "every goddamned VCS before Git had the damned feature, so how the fuck does Git justify removing it!?" things. (The answer is: Git was coded by dumbshits who didn't care if they had feature-parity with competing products.)



  • @tarunik said:

    It still goes to the central repo eventually in a push/pull model -- but there are cases when even online, you want to commit without pushing-- this allows you to save commits and keep working without making your half-baked work immediately visible to everyone else.

    Right; that's why corporations use systems like Stash, which work around this misfeature by creating a branch for each Jira ticket.

    Then you come here and talk about how dumb a workaround it is, and Git fans tell you you're doing it wrong Atwood-style.



  • @dkf said:

    The sane thing to do is to share the authN problem and use simple authZ on each repo (i.e., everyone is either no access, read access or read-write access) where the repos correspond to the smallest domains you need such granularity for. It also means that it should give you sane units to version, and sane units to tag, and sane units to understand in one go. Lots of people do that, and it works really well.

    Except all the projects have to be in the same repo because they all refer to the same business logic layer, implemented as a C# library which has to be included in each project's solution to ensure each is using identical business logic.

    Your "solution" here ignores a lot of the reality of software development.

    Either that, or you're proposing using something like Git Submodules, which don't fucking work.


  • Grade A Premium Asshole

    @blakeyrat said:

    Then you come here and talk about how dumb a workaround it is, and Git fans tell you you're doing it wrong Atwood-style.

    At least you're not bitter...


  • FoxDev

    @blakeyrat said:

    Explain.

    You fetch the big repo once; every pull after that is just changesets.
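
That claim is easy to see against a throwaway repo: the clone pays the full cost once, and a later `git pull` transfers only the new commits (repo names below are made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

git init -q -b main origin-repo
git -C origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m first

git clone -q "file://$tmp/origin-repo" clone     # the one big initial fetch

# someone else lands a new commit upstream
git -C origin-repo -c user.email=a@b -c user.name=a commit -q --allow-empty -m second

cd clone
git pull -q                                      # only the new changeset comes down
echo "local commits: $(git rev-list --count HEAD)"
```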


  • Discourse touched me in a no-no place

    @blakeyrat said:

    Except all the projects have to be in the same repo because they all refer to the same business logic layer, implemented as a C# library which has to be included in each project's solution to ensure each is using identical business logic.

    You're putting build artefacts in the repo? I hope not, because that'd be loopy.

    In the Java world, you'd just have the CI service (e.g., Jenkins) trigger on the repository changes and push verified builds straight to the build repository (e.g., Artifactory) and then developers can just pull everything they're not working on from the shared store (which itself can be rebuilt because it's all built from known versions of everything) by the power of Maven. While it's a little tricky to set up to start out with, it makes for really effective working after that (reliable too) and it's a technique that's been used for years now. Release builds are no more difficult either (though they often have more complex packaging steps).



  • @dkf said:

    That's massively wrong when you're doing stuff with any of the modern DVCS solutions. They're not designed to work that way at all, and many of their assumptions will cause you grief.

    That's kind of @Jaime's point, and as big of a fan of Git as I am, this is a big complaint I have about it (that applies to other DVCSs as well, as people have said).

    At the risk of making a bunch of people go "your code base is TRWTF", we use Subversion at my place of employ, and everything goes into one big repository. We check out the parts we need for whatever part of the code we're working on. In my opinion, for a company scenario, this is the right way to go. Why? Because even though different projects use different parts of the code, there's still a lot of overlap.

    What's the typical DVCS answer? Split things up into multiple repositories and pull down what you need. In my case, that'd be at least four or five, and possibly as many as a couple dozen depending on how fine-grained you wanted them to be. Now I'll admit to not using submodules a lot, but in my experience with both submodules and svn:externals, that sort of splitting up of repositories sucks. You can potentially argue it sucks less than a monolithic repository, but it still sucks.

    At least in my opinion, the "break things up into multiple repositories" is a workaround, not a solution for the partial checkout thing.

