Boost::Fuck! (the git command)


  • Winner of the 2016 Presidential Election

    @Gaska said:

    No, the syntax is hell because people write compile-time code generators using type declaration syntax only. There's no way for this to not go wrong.

    QFT. If you need to generate code, write a fucking script and include that in your build process. Much faster and a thousand times more readable than template metaprogramming.
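
    A minimal sketch of what that looks like as a pre-build step (gen_tables.py and tables.h are made-up names, not anything from this thread):

        python gen_tables.py > tables.h    # an ordinary script emits the boilerplate
        g++ -c main.cpp                    # main.cpp just #includes the generated tables.h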



  • @Magus said:

    Shelfsets

    Shelfsets are just branches that are somehow tagged as temporary.
    Cute but unnecessary: just branch and delete the branch when you're done with it - or keep it if it turns out to be useful.

    Blakey's actual problem is that he's trying to comply with nonsense rules, and for some reason is using a GUI that has even fewer features than the barebones one that ships with git, and is possibly broken as well.
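
    For the record, the branch-and-delete version is only a handful of commands; a sketch with a made-up branch name:

        git checkout -b wip/parked-experiment     # temporary branch, name is illustrative
        git commit -am "WIP: parking this"
        git checkout master
        git branch -D wip/parked-experiment       # delete it when you're done, or keep it around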



  • @Jaime said:

    I've never had a VCS operation slow enough that it impeded my work. You can't sell me solutions to problems I don't have.

    The VS2010 Perforce plugin is OMGSLOW in its default configuration. It makes VS completely unusable - you can't even scroll.


  • Discourse touched me in a no-no place

    @Groaner said:

    How did this turn into a C++ flamewar? Have the requisite 26.37 days passed since the last one?

    Those are discodays, so their relation to real ones is extremely hazy at best…


  • Banned

    Discourse uses NTC timezone internally.



  • @tarunik said:

    The problem is someone has to take those pitfalls for you, then. Fortunately, those folks are usually specialists at it, but they too can get them wrong...

    Oh by all means, that stuff is necessary for sure. Just keep it away from as many people as possible :)



  • @Kian said:

    I've toyed with the idea of creating a tool that will append every source file in a project, then compile the whole thing as one (maybe split it into a few, to make use of multiple cores). Most projects would probably get huge gains from that.

    I'm sure such tools are readily available. sqlite3 does that:
    http://opensource.apple.com/source/Heimdal/Heimdal-323.12/lib/sqlite/sqlite3.c
    and they state it produces up to 5% better performance.
    Obviously you don't need to keep making changes and recompiling, unlike what you'd do while in active development of your own project (which is why I split files once they grow too large or contain things they weren't originally intended for).
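
    The crude version of such a tool is basically a one-liner; a sketch, assuming everything lives under src/ and tolerates being concatenated (real amalgamations like sqlite3.c are generated far more carefully):

        cat src/*.cpp > amalgamation.cpp   # one giant translation unit
        g++ -O2 -c amalgamation.cpp        # one compiler invocation, headers parsed once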


  • ♿ (Parody)

    @blakeyrat said:

    If Linux assholes pulled their heads out of their asses for a few minutes and stopped giving a shit about being so 1337, they might realize: "hey wait, that's a really good idea."

    If I told you again that they've already done that, would you ignore it again?


  • ♿ (Parody)

    @blakeyrat said:

    That's because Linux is powered by Stockholm Syndrome. Everybody using it has basically gone through "wow this CLI is so great!" brainwashing.

    Yes, we're all terribly plagued by the troubles you imagine we have. This is certainly the simplest explanation.


  • Discourse touched me in a no-no place

    @Nprz said:

    Obviously you don't need to keep making changes and recompiling, unlike what you'd do while in active development of your own project (which is why I split files once they grow too large or contain things they weren't originally intended for).

    There are also linkers that can do cross-module optimisation. They're not particularly common yet, but I expect that will change. (The key is retaining sufficient type information and annotated metadata so that the optimiser can figure out WTF is going on. That's where bytecoded systems like Java and C# have an advantage: they've always retained the metadata.)
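
    For what it's worth, GCC and Clang already expose this as -flto; a rough sketch of the usual invocation (file names are made up):

        g++ -O2 -flto -c a.cpp
        g++ -O2 -flto -c b.cpp
        g++ -O2 -flto a.o b.o -o app   # the optimiser sees across both translation units at link time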



  • @Kian said:

    I find it useful as a "save point". I can try a bunch of things, easily return to one of many stages of development if something doesn't work quite right, etc. Maybe I'm blocked on one task and want to switch to another in the meantime. Centralized solutions always make that workflow more painful

    No they don't. Just create a developer branch and do exactly the same thing. When you use git commits as a "save point" without pushing, you get the following disadvantages:

    1. Your work is still on your workstation.
    2. The team (and your employer) isn't aware of what you are doing.
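
    A sketch of that same developer-branch "save point" workflow on a centralized system, using Subversion as the example (repository layout and branch name are purely illustrative):

        svn copy ^/trunk ^/branches/dev-yourname -m "personal dev branch"
        svn switch ^/branches/dev-yourname
        # hack away, then commit as often as you like; every save point
        # lands on the server where the team can see it
        svn commit -m "WIP: save point"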

    @Kian said:

    Also, they're more of a pain to set up. To date, I haven't worked in one where everything worked correctly. There's always some functionality that the team has essentially given up because the server doesn't work.

    The only reason git seems easier is that it grew up in the "age of the cloud". Git is easy to set up because GitHub does it for you for free.

    @Kian said:

    Also, I work "overseas" (yay cheap labor!) and servers are often back in the US, which means connection times are awful for even trivial operations.

    That makes a DVCS ideal for you. It doesn't make DVCS better than CVCS for people with decent network connectivity.


  • Discourse touched me in a no-no place

    @Jaime said:

    The team (and your employer) isn't aware of what you are doing.

    You can always have a separate repository for telling other people about what you're doing without needing to push to the truly shared central repo. 😈 🚎

    I don't really recommend this. Most tooling doesn't cope very well with having large numbers of remote repositories, leading to very confusing personal workflows.
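
    For what it's worth, the 😈 version is just a second remote (name and URL are placeholders):

        git remote add scratch <some-other-url>   # a second, "look what I'm doing" repo
        git push scratch my-wip-branch            # push there without touching origin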



  • @Jaime said:

    No they don't. Just create a developer branch and do exactly the same thing. When you use git commits as a "save point" without pushing, you get the following disadvantages:

    1. Your work is still on your workstation.
    2. The team (and your employer) isn't aware of what you are doing.

    The way the central source control systems I've used were set up, I either wasn't allowed to create developer branches or it was too much of a hassle. And I couldn't upload anything until I had the whole feature complete, so they couldn't see what I was working on anyway.

    You might argue that the problem is the source control was set up wrong or misused, and I would agree, but that doesn't really help me. With git, even if the central repo is set up wrong, my local instance gives me access to everything I want to do.

    @Jaime said:

    The only reason git seems easier is that it grew up in the "age of the cloud". Git is easy to set up because GitHub does it for you for free.

    Forget about the server. All the server needs to let me do is clone, push and pull. Git is easier to set up because once I install it on my machine, I'm up and running.
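
    The "up and running" bit really is this short (the remote URL is a placeholder for whatever host you eventually pick, if any):

        git init
        git add .
        git commit -m "initial import"
        # a server only enters the picture when you want to share:
        git remote add origin <url>
        git push -u origin master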


  • BINNED

    @sloosecannon said:

    At least jquery is (usually) the right tool for the job

    From "Stupid shit I saw on SO, Vol.1".

    Yes, the proper answer has been provided. TRWTF is even providing the jQuery "alternative".


    Filed under: INB4: TRWTF is @Onyx looking for this, I forget shit I don't use often, OK?


  • FoxDev

    StackOverflow - We Guarantee You'll Always Get The Fourth-Best Solution



  • @Kian said:

    You might argue that the problem is the source control was set up wrong or misused, and I would agree, but that doesn't really help me. With git, even if the central repo is set up wrong, my local instance gives me access to everything I want to do.

    Of course it was set up wrong. Any VCS would have met your needs just fine. Installing the Subversion binaries and creating a local repo is just as trivial as it is in git. So this doesn't indicate a benefit of git or DVCSs in general; rather, it's a necessary workaround because the people you work for are dysfunctional.
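
    For comparison, the serverless Subversion equivalent is a sketch like this (no daemon involved; paths are made up):

        svnadmin create ~/repos/myproject
        svn checkout file://$HOME/repos/myproject ~/work/myproject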

    @Kian said:

    Forget about the server. All the server needs to let me do is clone, push and pull. Git is easier to set up because once I install it in my machine, I'm up and running.

    The server is all that matters. The fact that one remote worker can locally version his files is irrelevant in the grand scheme of writing software. The only reason you need to care about this is because you work for incompetents.

    You are not the ideal case, and none of your experiences reflect the good or bad features of any product. All you have shared is how screwed up the people you work with are and how you have managed to cope. No one should ever replicate what is happening to you. Therefore, I could conclude that since you are in a horrible situation and you are using git, git has contributed to your horrible situation. Of course, this isn't a correct conclusion, but it is just as valid as concluding that git is a useful product based on your experiences.



  • @Jaime said:

    The server is all that matters. The fact that one remote worker can locally version his files is irrelevant in the grand scheme of writing software. The only reason you need to care about this is because you work for incompetents.

    I dunno about that -- not having to muck about with a server daemon to version-control a solo operation is rather nice...

    Of course, there are always situations on the other end of the scale, where you have what basically amounts to N-tier version control.



  • @tarunik said:

    I dunno about that -- not having to muck about with a server daemon to version-control a solo operation is rather nice...

    Sure, for a solo operation. However...

    @Kian said:

    Also, I work "overseas" (yay cheap labor!) and servers are often back in the US, which means connection times are awful for even trivial operations.

    ... he isn't a solo developer.



  • He falls into one of the other DVCS use cases (namely, unreliable server access)...

    All said and done, though, I'd rather have a DVCS in almost any environment, simply for the additional flexibility and scalability a good one provides; while I can put up with SVN, I'd expect it to be problematic once you start getting into massively branchy project structures. (Merging is one of the things Git really gets right, FWIH -- it's something other tools *cough* ClearCase *hack* could learn from.)



  • @tarunik said:

    He falls into one of the other DVCS use cases (namely, unreliable server access)...

    I already mentioned that. That's why his opinion of whether a DVCS can be good in a CVCS workflow is irrelevant. Due to his connectivity issues, he will never be happy with a centralized solution. He should stick with a DVCS product and a DVCS workflow.

    Remember, he chimed in on the topic of "Git is a better CVCS than any CVCS product". How could someone with poor connectivity have relevant experience with a CVCS workflow?



  • @Jaime said:

    That's why his opinion of whether a DVCS can be good in a CVCS workflow is irrelevant. Due to his connectivity issues, he will never be happy with a centralized solution. He should stick with a DVCS product and a DVCS workflow.

    I agree; however, you made it sound like DVCSes were irrelevant...which clearly isn't the case.


  • :belt_onion:

    @Onyx said:

    http://stackoverflow.com/questions/2076284/scaling-images-proportionally-in-css-with-max-width

    From "Stupid shit I saw on SO, Vol.1".

    Yes, the proper answer has been provided. TRWTF is even providing the jQuery "alternative".


    Filed under: INB4: TRWTF is @Onyx looking for this, I forget shit I don't use often, OK?

    Hence the "usually" 😄

    To be fair though, that's more of a community WTF - the answer-poster did say that was for a specific case. The community should've picked the non-jQuery one.



  • @Jaime said:

    None of this overcomes the primary obstacle that I can't checkout a small portion of a big repository.

    Yeah you can

    Filed under: thank you dicksource for notifying me about my own post



  • @gravejunker said:

    None of this overcomes the primary obstacle that I can't checkout a small portion of a big repository.

    Yeah you can

    Not in the way that Jaime means; in git terminology, that would be "you can't clone a small portion of a big repository".

    You could get into the same general ballpark, kind of, by doing a shallow clone and then a sparse checkout (in Git terms), but there is still nothing that rivals what you get with a traditional CVCS on this front.
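
    For reference, the "general ballpark" recipe looks roughly like this (URL and path are placeholders):

        git clone --depth 1 <url> repo            # shallow: only recent history
        cd repo
        git config core.sparseCheckout true       # sparse: only materialise some paths
        echo "some/subdir/" >> .git/info/sparse-checkout
        git read-tree -mu HEAD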


  • Discourse touched me in a no-no place

    @EvanED said:

    Not in the way that Jaime means; in git terminology, that would be you can't clone a small portion of a big repository.

    The key is that the principal unit of versioning is the entire checked out source tree. It might be possible to avoid this by making appropriate assumptions about what to do with the rest of the (now partially virtual) checkout… but disk's cheap so there's really not that much point in going to all the bother.


  • ♿ (Parody)

    @dkf said:

    but disk's cheap so there's really not that much point in going to all the bother.

    Right, but the complaint isn't about disk but network. Well, it could be about disk, but it's more about the time it takes even for a local clone than about the ultimate footprint.


  • Discourse touched me in a no-no place

    @boomzilla said:

    Right, but the complaint isn't about disk but network.

    You don't do the initial clone on a slow network (except in Milwaukee). Not a big deal really.


  • ♿ (Parody)

    @dkf said:

    You don't do the initial clone on a slow network (except in Milwaukee)

    Bullshit. I mean, ideally, yes.

    @dkf said:

    Not a big deal really.

    To you, perhaps.


  • Discourse touched me in a no-no place

    @boomzilla said:

    To you, perhaps.

    Look, the only times when git has real problems with speed come when you're stuffing binaries in the repository. Don't do that. (I've done it a few times; really, don't do that!)



  • Only if said binaries are both incompressible and can't be deduplicated.
    In which case, any storage system will struggle!

    One of my git repositories contains a 70MB png image stream which adds and changes some content each commit.
    The complete repo is 80MB.

    It handles it because while the stream is incompressible, large sections are unchanged between versions so it dedupes really well.

    Without GCing, it'd be multi-GB.
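
    The GC point is worth underlining: the dedup only really shows up once git repacks, which you can force and then measure; a generic sketch:

        git gc --aggressive      # repack so similar blobs get delta-compressed against each other
        git count-objects -vH    # size-pack shows the resulting on-disk footprint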


  • Discourse touched me in a no-no place

    @lightsoff said:

    Only if said binaries are both incompressible and can't be deduplicated.

    We were putting DOCX files and PDFs in. Both are compressed already (so incompressible) and were not meaningfully deduplicatable. So really just dead binary weight that had to be transferred directly every time. Don't do that.

    (I know how a smarter algorithm could have improved on it, by treating the ZIP archive as basically a directory full of stuff that the other algorithms can work on, but that's not what was happening. The world of possible smartness is infinitely larger than the world of actual smartness.)


  • FoxDev

    With DOCX, it is possible to do proper diffs; Word's OneDrive integration does it to minimise the amount of data transferred.


  • Discourse touched me in a no-no place

    @RaceProUK said:

    With DOCX, it is possible to do proper diffs

    As I parenthetically noted, it's quite possible to do diffs. It just doesn't actually happen when you stuff the document into git (with things as they currently stand) so the possibility wasn't much f'ing use to me.



  • @dkf said:

    We were putting DOCX files and PDFs in. Both are compressed already (so incompressible) and were not meaningfully deduplicatable. So really just a dead binary weight that had to be just transferred directly every time. Don't do that.

    The old-ish Word 2003 XML format might be more suitable for your purposes...(if you were in an ODF environment, I'd recommend the LibreOffice "flat ODF" option, but Word/... apparently don't support that).


  • Discourse touched me in a no-no place

    @tarunik said:

    The old-ish Word 2003 XML format might be more suitable for your purposes...

    I don't know how it handles embedded images, of which there's a fair number. The images don't change between versions, so any properly-aware diff tool would take them in its stride.

    The PDFs are just miserable.

