I could spend hours and hours and hours just ranting about what an awful experience dealing with git is



  • @blakeyrat said:

    Have you used TFS recently? I admit it was shit 5 years ago. Now it's pretty good-- it's just annoying about locking files.

    Our company tried TFS before moving to Mercurial, and let me tell you, EVERYONE resented the file-locking "feature"; in particular, I hated that you could bypass the lock and edit the file on your local machine, but then TFS wouldn't pick it up as changed. It's a mechanism to prevent merges, which is commendable, but ultimately futile, because really, who knows exactly what files they're going to be working on BEFORE they start? And if you're "in the groove", having to go back to check out another file(s) to work on just kills your workflow. And of course, if someone else is editing the file you need to edit and they've gone home or are sick or otherwise unavailable, you're SOL.

    But back to Git, or more importantly, the people bitching that the CLI is good enough: it isn't. You lost that argument before it started. I bet you also think usability is something that should be thought about at the end of a software project, if at all.


  • BINNED

    @blakeyrat said:

    @PedanticCurmudgeon said:
    Which means that you shouldn't have to learn anything new to use git.

    How do you get from what I said to that? WTF?

    No, it means git should be discoverable-- if there's something I need to learn to use git, it should guide me in the right direction to ensure I learn it. Right now there's no guidance. Like I posted on Twitter yesterday, the error message would have been just as useful if it read, "didn't work, fuck you".

    That's reasonable, but not what you said. This is what you said:
    What I hate is diving into a something and hitting a brick wall it's impossible to pass without hours and hours and hours of research. Learning things because they make me more productive (like new program features or shortcuts)? That's fun. Learning things because I literally can't progress without the knowledge? That's not fun.
    Note the last two sentences.
    @PedanticCurmudgeon said:
    I haven't used git extensively, but from what I've read here, you obviously have no idea what you're doing with it and are suffering as a result.

    I know. You don't consider that a bad thing?

    I don't. What I consider to be a bad thing is that you've spent more time bitching about git than it would have taken you to learn to use it properly (see boomzilla's example).

    And because idiots like you don't fucking read what I typed anyway, as evidenced in the post I'm replying to.

    And because you don't think it's hypocritical to falsely accuse others of not reading your posts while at the same time accusing them of making up shit about you.


  •  @Vanders said:

    What's your usage model where Git is so awesome across three computers and one developer?

    One computer is "web/production" and suffers a lot of hotfixes, both from me and from "powerusers" who are a little disorganized. Usually it is in the web part - typos, better-formulated sentences, better names for columns, better colors in CSS and such "text-only enhancements" which they need done "now or even sooner". So if it works, it needs to be collected and back-ported to the central repository (the second computer). Sometimes somebody also adds something there from his computer, or downloads a new version for testing/enhancing/studying... And I have "my computer", where I work on improvements, new features and such. It is a little chaotic, but they pay for the right to be chaotic, so I go along with it. (And they have no bad intent, so problems are rare and simply solvable.) Something similar goes for some more projects too, not all on the same central repository and the same "web/production". But the projects are separated, so no extra complexity there.
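    Roughly, that wiring can look like the following in git (the remote names, URLs and paths here are placeholders for illustration, not the real setup):

    # on "my computer": register the other two machines as remotes
    git remote add central ssh://user@central.example.com/srv/git/project.git
    git remote add production ssh://user@web.example.com/var/www/project
    # collect the powerusers' hotfixes from the web box and fold them into central
    git fetch production
    git merge production/master     # while on the local master branch
    git push central master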

    I sometimes work on more than one "new way/feature" at a time, so sometimes I have 3, or even 10, branches active until they converge again. Sadly, many times it is better to pick something from one branch into another (or others) in the middle of the work. But with git it is simple to manage.
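    For example, taking a single commit from one work-in-progress branch into another is one command (the branch names and the hash here are made up):

    git checkout feature-b      # the branch that should receive the change
    git cherry-pick a1b2c3d     # a commit that so far exists only on feature-a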

     

    Sometimes something goes wrong and there is a need to go back in history and use an older version until it is solved. And of course it is good to see who made a change and when (and maybe even why).
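    All of that is cheap in git; for instance (the file name is just an example):

    git log --oneline -- index.html       # who changed this file, and when
    git blame index.html                  # which commit last touched each line
    git checkout HEAD~3 -- index.html     # bring back the version from 3 commits ago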

     

    It is not rocket science, nothing like the kernel, but it is also not the simplest case. Git is a good tool to manage the mess. It is overkill, but it does not eat many resources and it is simple to use, so I do not care.

    @Vanders said:

    Assuming you're smart and are actually using the decentralised nature of Git, then if it were me I'd personally use Mercurial.

    I met git first and learned it fast. It solves everything I need, so I have no urge to learn other tools. I know that Hg exists and is something like git, but I have not tried it yet. I am not saying which is better, only that git suits me well and I am too lazy to learn other tools just now, when there is so much paid work.

    @Vanders said:

    I'm not even going to bother with your perception that Git is more logical than Subversion (good grief no), and I've never once seen Subversion "destroy all your work". Oh and branching & merging in Subversion was never a huge problem, and they made it even better in what, Subversion 1.7? That's so long ago it's not even worth arguing about.

    I worked a couple of years with Subversion before git and there were still some problems, difficulties and glitches. Maybe it was also affected by the administrator and the processes at the company where I worked with it, but I did not find a way to use it as fluently and effortlessly as git. Years with Subversion gave me less power than weeks with git, so I switched to git when I had the chance. Never looked back.

    @Vanders said:


    Like I said, unless you're managing hundreds of branches within branches, and dealing with potential commits from thousands of developers, you do not need all the fancy bells and whistles that Git provides. You won't ever even use them.

    That is true. I use only a sliver of git's total potential. But that sliver solves all my needs. And git is cheap, easy to install (just "emerge git" and I am done), easy to use, and does not eat many resources.

     

    @Vanders said:

    Simple branching and merging, which is what the vast majority of developers do (even teams of developers working on commercial software) does not require the thermonuclear Git warhead. Use the right tool for the job.

    I would rather use a thermonuclear warhead that is easy and cheap to use than a stone hand-axe that needs a lot of effort to reach just part of what I need. Even if it means I do not use it to its full potential. I am paid for results, not for getting the maximum out of my tools.

     

    @Vanders said:

    Unless you're Linus, Git probably isn't designed to solve the problems you have, so it's the wrong tool.

    Git is not designed to solve my problems. But it is designed to solve a category of problems which contains all of my problems, so it solves my problems too. And it solves many more problems that I do not have, but there is no extra price for me and I am not forced to use those parts, so no problem for me. So it is a good tool for me. I can afford a few MB of disc space wasted on features I do not need, if it means I have everything I need in one convenient package.

    @Vanders said:

    The Git command line interface is hateful. I'll say it again: it operates on the principle of most surprise. It is not discoverable. Its output is ugly and confusing. Error reporting is so laughable it'd be funny if it weren't so dangerous.

    Maybe for you, but not for me. For me the git CLI is easy to use and intuitive. When I need to know more, there is an easy way to find it. Many times I told myself, "in a good system there should be a solution for this new problem. Git is a good system, so it surely has it, and considering what I already know, it should look something like this" - and then I looked in the documentation at that specific place and found that the solution was already there and looked like I thought it would. Maybe it used some other name for that function or slightly different (but better) syntax. But basically I needed something, I imagined where it should be and how it should look, and then I went and found it at that place, looking just like I imagined. So I am very happy with it - it thinks the same way as I do :)

     

    And the output looks logical to me too - at least for the commands I regularly use. And the commands even suggest what you should do next, or how to correct the usual mistakes you made.



  • @blakeyrat said:

    @Renan said:
    I suggest using some good GUI, like Tower.

    I'd love to. Know of one for the platform 95% of the people on Earth use? (Github for Windows? Not good. Before you suggest that.)

    Git Extensions is ugly as hell and somewhat clumsy but functional. No need to ever use a CLI. The error and warning messages remain Git's and therefore incomprehensible but the UI's visual cues are helpful enough that you can safely ignore Git core's babbling. Don't think you would have run into your troubles using this UI. Been using it for three years without ever reading any fucking manual and quite happy about it.



  • @blakeyrat said:

    @PedanticCurmudgeon said:
    I haven't used git extensively, but from what I've read here, you obviously have no idea what you're doing with it and are suffering as a result.

    I know. You don't consider that a bad thing?

     

     

    Somehow I see it as a good thing :D

     


  • Considered Harmful

    @JvdL said:

    Been using it for three years without ever reading any fucking manual

    There's a manual for that?



  •  @joe.edwards said:

    @JvdL said:
    Been using it for three years without ever reading any fucking manual

    There's a manual for that?

    man git

    git <command> --help

     

    and a lot of online manuals, like http://git-scm.com/book

     


  • Considered Harmful

    @gilhad said:

     @joe.edwards said:

    @JvdL said:
    Been using it for three years without ever reading any fucking manual

    There's a manual for that?

    man git

    git <command> --help

     

    and a lot of online manuals, like http://git-scm.com/book

    I couldn't find the section on fucking, which version do you have?



  • @joe.edwards said:

    @gilhad said:

     @joe.edwards said:

    @JvdL said:
    Been using it for three years without ever reading any fucking manual

    There's a manual for that?

    man git

    git <command> --help

     

    and a lot of online manuals, like http://git-scm.com/book

    I couldn't find the section on fucking, which version do you have?

     

    I do not know. Some fucking version ...

     



  • Protip: If you're going to use Git, whether by choice or by force, you might as well do yourself a favor and learn what it really does.

    I found Git pretty mystifying until I learned how it really works. It's simpler than you might think, it's just not obvious. There are some good articles and videos that explain it:

    http://sitaramc.github.com/gcs/index.html

    https://www.youtube.com/watch?v=ZDR433b0HJY

    (Or go find others that are more your style.)

    Once you get the gist of these, it becomes obvious how to solve problems with Git, or even other DVCSs.

    Maybe someday Git will have good error messages or an awesome GUI, but until then, go read / watch / learn / de-stress.
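    If you want a two-minute taste before the articles, you can also poke at the object model directly from inside any clone:

    git cat-file -p HEAD              # a commit: a tree id, parent(s), author, message
    git cat-file -p HEAD^{tree}       # that tree: just a list of blobs (files) and sub-trees
    git log --oneline --graph --all   # history as a graph of those commits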



  • Jumping in without reading the rest of the thread because I want to leave my opinion.

    Git was designed for individuals to have a source control system without depending on a central server. It was not designed for collaboration, and the aspies who set up Github didn't know that. Now everyone and their dog uses Github because of "hurr durr free code hosting" (Google Code and Sourceforge are still things).

    The way it works is that you don't "check out" a Git repository, because that implies the repo and working copy are on separate machines. Instead, you "clone" a repository, the SVN equivalent of copying the repository folder (with conf, dav, db, format, hooks, and locks). You "push" and "pull", which in Git terms is "replace the original copy with your local copy" and "replace the local copy with the original copy". God help you if someone replaces the original copy with a different local copy after you made a commit; the only way to fix that is to delete the entire local copy and clone a new one.

    In closing, centralized servers GOOD, different copies of the repository BAD.



  • @MiffTheFox said:

    Git was designed for individuals to have a source control system without depending on a central server. It was not designed for collaboration, and the aspies who set up Github didn't know that.

    I've never used git before (we use SVN and other than Xcode's built-in SVN client being broken beyond all reason we rarely have any issues) so I have to ask: if git was designed to operate without a central server, what exactly is github?



  • @superjer said:

    It's simpler than you might think, it's just not obvious.

    Thanks for the link, the description there helped to explain how one would actually use Git.  Though it does sound like something where a company using it on a big project may need someone to manage it.


  • ♿ (Parody)

    @MiffTheFox said:

    [Git] was not designed for collaboration, and the aspies who set up Github didn't know that.

    I can't talk for any "aspies" with github accounts (I'm neither an aspie nor an owner of such an account, no matter how much blakey wants either to be so), but in fact, git was created expressly to make collaboration on a massive scale easy to do. Perhaps their aspie-hood knows something you don't.

    @MiffTheFox said:

    God help you if someone replaces the original copy with a different local copy after you made a commit; the only way to fix that is to delete the entire local copy and clone a new one.

    You mean, if they start over completely with an entirely new repository? Does this actually happen?

    @MiffTheFox said:

    In closing, centralized servers GOOD, different copies of the repository BAD.

    It's good to have a canonical, centralized server for a particular project, true, but your last assertion is...bazaar. You clearly haven't drunk the DVCS Kool-Aid. You should. Join us.



  • @MiffTheFox said:

    You "push" and "pull", which in Git terms is "replace the original copy with your local copy" and "replace the local copy with the original copy". God help you if someone replaces the original copy with a different local copy after you made a commit; the only way to fix that is to delete the entire local copy and clone a new one.

    What the hell are you on about? "push" and "pull" never replace anything ever. All they do is add new history. You don't know what the hell you're talking about.

     


  • ♿ (Parody)

    @mott555 said:

    I've never used git before (we use SVN and other than Xcode's built-in SVN client being broken beyond all reason we rarely have any issues) so I have to ask: if git was designed to operate without a central server, what exactly is github?

    Basically, instead of checking out a working copy (svn), you clone the entire repository locally. So everyone has a full copy of the repo, so you can commit, branch, merge, etc locally. A "centralized" repository is just a convention that a particular project can establish as the official repository. Typically, it will have fairly restrictive permissions, and you may end up with a gatekeeper who reviews submissions from external sources before they are put into the central, canonical repository (e.g., Linus Torvalds for the Linux kernel).

    Github provides a convenient place for people to host a centralized repository where people can collaborate. You could set something similar up on your own server, just like you could with, say svn. Alternatively, everyone working on a project could just email patches around and keep their own repositories.

    One of the obvious benefits of DVCS is that you don't need network access to be able to work normally. Obviously, you can't push your changes out to others in this situation, but you can commit changes as you go, branch, merge, etc. Of course, the longer you work in isolation without syncing up with other people's work, the more merging you'll have to do when you eventually do. But that's a normal part of DVCS workflow, and not a major problem, unless you're a spaz who can't figure out how to merge changes.
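    To make that concrete, a typical offline stretch looks something like this (the URL and branch name are just examples):

    git clone https://example.com/project.git    # grab the whole repository, history and all
    cd project
    git checkout -b fix-typo       # branch locally; no server involved
    # ...edit files, with or without a network connection...
    git commit -a -m "Fix typo"    # commit locally; still no server
    git push origin fix-typo       # share it whenever you're ready / back online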



  • @superjer said:

    @MiffTheFox said:

    You "push" and "pull", which in Git terms is "replace the original copy with your local copy" and "replace the local copy with the original copy". God help you if someone replaces the original copy with a different local copy after you made a commit; the only way to fix that is to delete the entire local copy and clone a new one.

    What the hell are you on about? "push" and "pull" never replace anything ever. All they do is add new history. You don't know what the hell you're talking about.

     

    Okay, fine. What's in the repository besides history? Code? Push and pull add that too.

    Although they don't replace, that's true. If there's a breaking change (they pull, you pull, they commit and push, you can't push anymore because the other person already decided the "canonical" history), congratulations, you made a fork that's now completely separate from the "canonical" project!



  • @MiffTheFox said:

    Okay, fine. What's in the repository besides history? Code? Push and pull add that too.

    No. The history is the history OF the code. Push shares history and has no effect on any working copy (what I assume you mean by "code"). Fetch is the opposite of Push, and pull is just for convenience: it does a fetch, then a merge. Working copies are irrelevant. You can change your working copy to match any point in history at any time. And Git will not screw with your working copy if it has uncommitted changes.
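    In other words, these are (roughly) interchangeable:

    git pull origin master
    # is approximately
    git fetch origin
    git merge origin/master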

    @MiffTheFox said:

    Although they don't replace, that's true. If there's a breaking change (they pull, you pull, they commit and push, you can't push anymore because the other person already decided the "canonical" history), congratulations, you made a fork that's now completely separate from the "canonical" project!

    Only people with permission can decide the "canonical" history, so that "problem" is by your own design. It's just one possible workflow anyway. And if you end up in that situation, all you have to do is pull, so you are up to date, and then you can push just fine. I don't know what else you would even expect.
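    Concretely, the "rejected push" dance is just:

    git push origin master    # rejected: the remote has commits you don't have yet
    git pull origin master    # fetch their history and merge it into yours
    git push origin master    # now it goes through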

     


  • ♿ (Parody)

    @MiffTheFox said:

    Although they don't replace, that's true. If there's a breaking change (they pull, you pull, they commit and push, you can't push anymore because the other person already decided the "canonical" history), congratulations, you made a fork that's now completely separate from the "canonical" project!

    What are you talking about? Why can't you push any more? I think maybe you should ask blakeyrat how this all works. He seems to have a better handle on it all.



  • @boomzilla said:

    @MiffTheFox said:
    Although they don't replace, that's true. If there's a breaking change (they pull, you pull, they commit and push, you can't push anymore because the other person already decided the "canonical" history), congratulations, you made a fork that's now completely separate from the "canonical" project!

    What are you talking about? Why can't you push any more? I think maybe you should ask blakeyrat how this all works. He seems to have a better handle on it all.

    Well you do have to pull again

    AND ALSO IF SOMEONE ELSE PUSHES AGAIN THE WORLD WILL EXPLODE DEAR GOD DON'T DO IT



  •  @MiffTheFox said:

    Although they don't replace, that's true. If there's a breaking change (they pull, you pull, they commit and push, you can't push anymore because the other person already decided the "canonical" history), congratulations, you made a fork that's now completely separate from the "canonical" project!

    What the crap is this?

    You pull, git performs a merge, and then you push the merged version. How bloody difficult was that?



  • @smariot said:

     @MiffTheFox said:

    Although they don't replace, that's true. If there's a breaking change (they pull, you pull, they commit and push, you can't push anymore because the other person already decided the "canonical" history), congratulations, you made a fork that's now completely separate from the "canonical" project!

    What the crap is this?

    You pull, git performs a merge, and then you push the merged version. How bloody difficult was that?

    Nope. I pull, Git shits its pants because it can't figure out how to merge a binary file, and nothing of use can be done.



  • @MiffTheFox said:

    Nope. I pull, Git shits its pants because it can't figure out how to merge a binary file, and nothing of use can be done.

    "Nothing of use can be done," huh?  Did you put 3 seconds of effort in, even?

    You could just do exactly what it says and "fix the conflicts and then commit the result."

    Do you expect a VCS to be able to merge arbitrary binary files?

    If you like, you can configure a custom merge driver to have Git run the appropriate automatic or manual tool based on file type. But I have no idea why you'd expect Git to just "guess" how to merge your binary files. That would be insane. That's why it stops and asks you to do it, and let it know when you're done. If doing the only sane thing counts as "shitting its pants" then I'm not sure what to say.
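    For the record, the custom-driver hookup is two small pieces of config (the driver name and the merge command below are placeholders; see gitattributes(5)):

    # .gitattributes - route *.bin files to a named merge driver
    *.bin merge=bindiff

    # .git/config (or ~/.gitconfig) - define that driver
    [merge "bindiff"]
        name = custom binary merge
        driver = my-binary-merge-tool %O %A %B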

     



  • @superjer said:

    @MiffTheFox said:

    Nope. I pull, Git shits its pants because it can't figure out how to merge a binary file, and nothing of use can be done.

    "Nothing of use can be done," huh?  Did you put 3 seconds of effort in, even?

    You could just do exactly what it says and "fix the conflicts and then commit the result."

    Do you expect a VCS to be able to merge arbitrary binary files?

    If you like, you can configure a custom merge driver to have Git run the appropriate automatic or manual tool based on file type. But I have no idea why you'd expect Git to just "guess" how to merge your binary files. That would be insane. That's why it stops and asks you to do it, and let it know when you're done. If doing the only sane thing counts as "shitting its pants" then I'm not sure what to say.

     

    Once Git entered its borked state, I couldn't view the changed or the original files. I expect it to stop and ask me to do it and let me know when it's done, but I also expect to have a way to tell it "ignore my changes and pull the latest one from the canonical branch" and a way to tell it "alright, I'm done, push away!"



  • @MiffTheFox said:

    Once Git entered its borked state, I couldn't view the changed or the original files. I expect it to stop and ask me to do it and let me know when it's done, but I also expect to have a way to tell it "ignore my changes and pull the latest one from the canonical branch" and a way to tell it "alright, I'm done, push away!"

    For your reference, consider the likes of, oh... what would it be for a merge... "git merge --abort" I think? If it still thinks you're merging things, anyway ... then "git fetch; git reset --hard origin/master" (or whatever the branch you want is named. or its sha.) Use "git reflog" if you've had a good checked-in state recently and lost it and need to find it.

    Then if you need to push *exactly that ref* and clobber any changes which are not on your branch, you can push with --force. If that binary's the only difference, and you don't mind the implications of lost history on that branch, then that'll work. If not, you should check out the right binary from history (I don't recall how to do that off the top of my head but assume you can figure it out), commit THAT as a change and push the change.
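    Spelled out, that recovery path is something like this (assuming the remote is called origin and the branch is master):

    git merge --abort                # bail out of the half-finished merge
    git fetch origin
    git reset --hard origin/master   # make the local branch exactly match the remote
    git reflog                       # find a recent good state if you've lost one
    git push --force origin master   # only if you really mean to clobber remote history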



  • @MiffTheFox said:

    Once Git entered its borked state, I couldn't view the changed or the original files. I expect it to stop and ask me to do it and let me know when it's done, but I also expect to have a way to tell it "ignore my changes and pull the latest one from the canonical branch" and a way to tell it "alright, I'm done, push away!"

    If you want it to ignore ALL your changes, you can git reset --hard origin/master

    (assuming origin is the remote repo and master is the branch)

    If you want to complete the merge by ignoring your changes in just the binary file then git checkout --theirs yourfile.bin and commit.
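    Put together, resolving the conflict by taking their copy of the binary looks like this (the file name is only an example):

    git checkout --theirs assets/logo.png   # keep the incoming version of the conflicted file
    git add assets/logo.png                 # mark the conflict as resolved
    git commit                              # finish the merge
    git push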

    Googling for "git binary conflict" or somesuch leads you to these and many other possible resolutions.

    Your situation sounds unusual because you want to both throw away (some of) your changes AND push them...

     



    I wanted to do the git equivalent of `rm changed-file; svn up; svn commit`



  •  @MiffTheFox said:

    I wanted to do the git equivalent of `rm changed-file; svn up; svn commit`

    If you have not committed your changes to changed-file yet, you can do almost the same thing: rm changed-file; git pull; git push

    If you had already added changed-file, you should un-add it first with git reset HEAD changed-file

    (As suggested by `git status`.) Git will not automatically throw away any work you've added.

    If you've already committed your changes to changed-file, you'll need to either revert the change in a new commit, or delete/amend the offending commit from your history. If it is a large or embarrassing file, you should definitely do the latter. You can edit your commit history with git rebase --interactive <parent-of-bad-commit> and follow the instructions.
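    As a sketch of that last case (a1b2c3d stands in for the bad commit):

    git rebase --interactive a1b2c3d^   # start the rebase at the bad commit's parent
    # in the todo list, change "pick a1b2c3d ..." to "edit a1b2c3d ..."
    git rm --cached changed-file        # drop the file from that commit (it stays on disk)
    git commit --amend --no-edit
    git rebase --continue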

     

     



  • Dare I venture a guess that the only source control blakeyrat likes is the first one he learned and that it happens to be SourceSafe?


  • Discourse touched me in a no-no place

    @boomzilla said:

    This is the equivalent of opening up a GUI program and looking at what shows up on the screen.
    <whine>but typing's hard, and I shouldn't have to do it...</whine>



  • @mott555 said:

    if git was designed to operate without a central server, what exactly is github?
    Cloud storage.

    Distributed VCS means you don't need a central server. But any "local" copy can act like a central repository if people agree on it.



  • @boomzilla said:

    I think maybe you should ask blakeyrat how this all works. He seems to have a better handle on it all.
     

    ACHIEVEMENT GET: DUAL WIELDING BURN



  • @veggen said:

    Dare I venture a guess that the only source control blakeyrat likes is the first one he learned and that it happens to be SourceSafe?
     

    No, that's me.



  • @The_Assimilator said:

    Our company tried TFS before moving to Mercurial, and let me tell you, EVERYONE resented the file-locking "feature"; in particular, I hated that you could bypass the lock and edit the file on your local machine, but then TFS wouldn't pick it up as changed. It's a mechanism to prevent merges, which is commendable, but ultimately futile, because really, who knows exactly what files they're going to be working on BEFORE they start? And if you're "in the groove", having to go back to check out another file(s) to work on just kills your workflow. And of course, if someone else is editing the file you need to edit and they've gone home or are sick or otherwise unavailable, you're SOL.

    But back to Git, or more importantly, the people bitching that the CLI is good enough: it isn't. You lost that argument before it started. I bet you also think usability is something that should be thought about at the end of a software project, if at all.

    1) With ANY VCS you can always bypass the locking mechanism and directly edit the file [the methodology will vary]. Specifically for TFS with Server Workspaces, the only time a file is "locked" (with the default settings) is if it is a binary file - which is usually non-mergeable, so this is a good thing.

    2) You should always know what you are working on before you start - an assigned task (even if self assigned) with a specific set of goals/requirements.

    3) Making *ANY* change that is not directly related to #2 is poor practice [aka Drive-By Coding]


  • ♿ (Parody)

    @TheCPUWizard said:

    2) You should always know what you are working on before you start - an assigned task (even if self assigned) with a specific set of goals/requirements.

    3) Making *ANY* change that is not directly related to #2 is poor practice [aka Drive-By Coding]

    Yeah, sort of. But it seems pretty common to make some changes and in the process discover that it broke something you hadn't noticed before, requiring that you change something else that you weren't originally planning on changing. So, like all tragedies of the commons, the solution is to get really aggressive in checking out files, and everybody loses. Even if you are properly parsimonious with your check outs, that's a poor reason to block other people from working on some of the same files.

    The check out - check in paradigm of version control should die a deserved fiery death.



  • @veggen said:

    Dare I venture a guess that the only source control blakeyrat likes is the first one he learned and that it happens to be SourceSafe?

    The first source control I used was CVS. Unless you count "Track Changes" in Office.


  • :belt_onion:

    @boomzilla said:

    @bjolling said:
    Git really is painful to learn when coming from a source control system like TFS. Branching, merging, conflict solving are very intuitive in TFS.

    Could you go into more detail about this? I agree that git's commands seem like weird ass choices, but could you describe the differences between merging in TFS and git?

    That was not the point. I didn't have to investigate anything in order to start using branching and merging on TFS. I just right clicked on the main branch and then selected 'create branch'. When I started on Git I was far more experienced with the whole branching and merging philosophy than when I started on TFS, and still I had a big learning curve to go through. Even though (or maybe because) I chose Github thinking it would be closer to TFS by being a central code repository. The documentation of Git of course emphasizes its distributed nature.


  • ♿ (Parody)

    @bjolling said:

    @boomzilla said:
    @bjolling said:
    Git really is painful to learn when coming from a source control system like TFS. Branching, merging, conflict solving are very intuitive in TFS.

    Could you go into more detail about this? I agree that git's commands seem like weird ass choices, but could you describe the differences between merging in TFS and git?

    That was not the point.

    Well, it's definitely related. I was just curious.

    @bjolling said:

    I didn't have to investigate anything in order to start using branching and merging on TFS. I just right clicked on the main branch and then selected 'create branch'.

    That seems pretty much the same as git (accounting for the CLI vs GUI aspect). I guess I was more interested in the merging and resolving conflicts in TFS, especially because git's way of dealing with merge conflicts seemed to be such a surprise to blakeyrat, even though it seems like a pretty standard convention with SCM tools.



  • @boomzilla said:

    Yeah, sort of. But it seems pretty common to make some changes and in the process discover that it broke something you hadn't noticed before, requiring that you change something else that you weren't originally planning on changing. So, like all tragedies of the commons, the solution is to get really aggressive in checking out files, and everybody loses. Even if you are properly parsimonious with your check outs, that's a poor reason to block other people from working on some of the same files.

    The check out - check in paradigm of version control should die a deserved fiery death.

     1) "Checkout" does not mean EXCLUSIVE. It simply meens that the repository (and therefore other users) are AWARE. Multiple people can checkout the same file (quite common), unless one has ALSO added an Exclusive lock.
      2) Editing a file in the IDE automatically does a checkout (non exclusive, unless a binary file).

    Given these two facts, I can not make any sense of your assertion that "the solution is to get really aggressive in checking out files".



  • @TheCPUWizard said:

    1) "Checkout" does not mean EXCLUSIVE. It simply meens that the repository (and therefore other users) are AWARE. Multiple people can checkout the same file (quite common), unless one has ALSO added an Exclusive lock.
    2) Editing a file in the IDE automatically does a checkout (non exclusive, unless a binary file).

    Yeah that was throwing me too, but I can't reply directly to Boomzilla because he's a horrible person and if he knows I'm reading his posts, he'll start asking me a series of retarded questions he already knows the answers to.

    Regardless of the "theory" of how it works, the day-to-day reality is that you double-click a file to edit it, it silently gets non-exclusively checked-out for you, and when you commit it silently merges the changes together without bothering you about them. 99.99% of the time. At least as well as SVN does.

    I'm guessing Boomzilla's either never used it, or used one of the old shitty pre-2008-or-so versions that sucked. I know open source people will be amazed at this, but it turns out that software can improve over time!

    Edit: or it just occurred to me, maybe he was using it with Vim or Emacs or some other shitty broken editor that doesn't work.


  • ♿ (Parody)

    @TheCPUWizard said:

    1) "Checkout" does not mean EXCLUSIVE. It simply meens that the repository (and therefore other users) are AWARE. Multiple people can checkout the same file (quite common), unless one has ALSO added an Exclusive lock.

    2) Editing a file in the IDE automatically does a checkout (non exclusive, unless a binary file).

    Given these two facts, I can not make any sense of your assertion that "the solution is to get really aggressive in checking out files".

    Indeed. Those were facts I did not have. I was drawing conclusions from blakeyrat's statement about being annoyed by locked files and subsequent discussion. The non-exclusive check out doesn't seem to have much purpose (is making others aware of your check out terribly valuable?). How does it deal with multiple people with nonexclusive check outs and modifications?


  • ♿ (Parody)

    @blakeyrat said:

    I'm guessing Boomzilla's either never used it, or used one of the old shitty pre-2008-or-so versions that sucked.

    I've never used it, which is why I was asking about it.

    @blakeyrat said:

    Regardless of the "theory" of how it works, the day-to-day reality is that you double-click a file to edit it, it silently gets non-exclusively checked-out for you, and when you commit it silently merges the changes together without bothering you about them. 99.99% of the time. At least as well as SVN does.

    This is a major reason why I avoid svn whenever possible. I don't want any sort of silent merging going on like that. Let me make my changes, commit them, and then I'll worry about merging. In a completely separate action / changeset / revision.

    @blakeyrat said:

    Edit: or it just occurred to me, maybe he was using it with Vim or Emacs or some other shitty broken editor that doesn't work.

    TDEMSYR



  • @boomzilla said:

    Indeed. Those were facts I did not have. I was drawing conclusions from blakeyrat's statement about being annoyed by locked files and subsequent discussion. The non-exclusive check out doesn't seem to have much purpose (is making others aware of your check out terribly valuable?). How does it deal with multiple people with nonexclusive check outs and modifications?

    Jesus shit you dumb fuck. Ok I'm breaking my own rule.

    Locking files is annoying because it has to do a server round-trip, and when you're "in-the-zone" coding away and your source control server happens to be a Pentium 200 in Chicago, like mine was at my last job, it's like a 2.5-3 second wait between you opening the file and being able to edit it. You didn't even fucking bother to ask what about it was annoying, you just assumed you knew EVERYTHING about TFS from reading one sentence in one forum post and went off charging down the race track you monumental douche. Goddamned. Are you the reincarnation of Hitler? I can't imagine any other way you could be such a horrible person.

    In the future, could we have the forum just pre-pend all of Boomzilla's posts with a boldface warning:

    WARNING: this poster has a history of not having any fucking clue what he's talking about. You're more likely to get intelligent conversation from your pet hamster, and he communicates primarily by shitting on things.


  • Considered Harmful

    @blakeyrat said:

    Are you the reincarnation of Hitler?


  • ♿ (Parody)

    @blakeyrat said:

    Locking files is annoying because it has to do a server round-trip, and when you're "in-the-zone" coding away and your source control server happens to be a Pentium 200 in Chicago, like mine was at my last job, it's like a 2.5-3 second wait between you opening the file and being able to edit it.

    OK. That sucks, too. I guess I shouldn't bother reading what you wrote without getting extra clarification. And then you could get mad about how I can't read your mind or discern "painfully obvious" details. Sometimes I forget how poorly you communicate.

    @blakeyrat said:

    You didn't even fucking bother to ask what about it was annoying, you just assumed you knew EVERYTHING about TFS from reading one sentence in one forum post and went off charging down the race track you monumental douche.

    Once again, you're supremely unfamiliar with what you wrote. But that's OK. We expect it. You said that locking files was annoying. You didn't say anything at all about server latency, which is a completely non-generic issue, though it's certainly valid. I get why you are so mad. It's embarrassing to realize how poor your writing ability is.

    @blakeyrat said:

    Are you the reincarnation of Hitler? I can't imagine any other way you could be such a horrible person.

    I know. You've established over and over and over and over and over that you don't have much of an imagination, aren't careful about or particularly interested in details or discussions (video games and ponies excepted, of course!).

    @blakeyrat said:

    WARNING: this poster has a history of not having any fucking clue what he's talking about. You're more likely to get intelligent conversation from your pet hamster, and he communicates primarily by shitting on things.

    That would make an excellent sig for you. Or maybe we can get Alex to make an exception for your profile bio?



  • @TheCPUWizard said:

    2) You should always know what you are working on before you start - an assigned task (even if self assigned) with a specific set of goals/requirements.

    3) Making *ANY* change that is not directly related to #2 is poor practice [aka Drive-By Coding]

     

     

    Yes, typical example: "The client sent us a misspelled company name, please fix it everywhere in the web pages. Also please check all their products against their official site and fix everything misspelled."

    The changed files are any *.html and any *.inc in any subdir which contains some misspelled name (from an as-yet-unknown range). But not all files contain such errors.

     

    Yes, I could search for them all, then check them all out, then edit, then merge it back, but it is much faster to do it along the way - find the first misspelling, edit all occurrences, look for other products, edit those, and so on, maybe over the course of some days. Commit the changes as I make them, so they are not lost and they are revocable. Then, when everything is completed, just merge it with the current state of the project. Meanwhile others can work on the same files for other reasons and not be bothered by my editing them too. Then they merge their changes, and the bet is this will go smoothly as long as they do not edit the same words as me.
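    In git terms that is something like the following (the branch name and the search string are only examples):

    git checkout -b fix-client-typos                              # my own branch for the cleanup
    grep -rl "Olde Name" --include='*.html' --include='*.inc' .   # find offending files as I go
    # ...edit, then commit in small, revocable steps...
    git commit -a -m "Fix misspelled client name"
    # when the whole task is done, fold it back into the main line
    git checkout master
    git merge fix-client-typos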

     



  • @gilhad said:

    Yes, I could search for them all, then check them all out, then edit, then merge it back, but it is much faster to do it along the way - find the first misspelling, edit all occurrences, look for other products, edit those, and so on, maybe over the course of some days. Commit the changes as I make them, so they are not lost and they are revocable. Then, when everything is completed, just merge it with the current state of the project. Meanwhile others can work on the same files for other reasons and not be bothered by my editing them too. Then they merge their changes, and the bet is this will go smoothly as long as they do not edit the same words as me.

    Exactly. As you edit each file it is checked out (not exclusive, does not block anyone, merely flags to other developers that you are working on the file). When you get to a convenient point, you commit your changes to the feature or developer branch (depending on the model you are using). When you are done with the assigned task you merge your branch with the next higher level in the hierarchy (possibly main if the team is small).

    I do this every day (often many times in a single day) with TFS and have absolutely no problems.



  • @blakeyrat said:

    Locking files is annoying because it has to do a server round-trip, and when you're "in-the-zone" coding away and your source control server happens to be a Pentium 200 in Chicago, like mine was at my last job, it's like a 2.5-3 second wait between you opening the file and being able to edit it.
    The presumption is that you were not living in Chicago and you were having to work over a WAN link.  Please clarify since you don't want us to assume anything.

    If you were working over a WAN link, 2.5 - 3 seconds is probably pretty good.  The latency was likely the bigger factor than the Pentium 200 acting as a server.  What were you going to do with those 2.5 - 3 seconds anyway?  If it was really that bad, they should have considered a different tool instead.

    If you were on a LAN link, I still want to know what you would've done with that extra 2.5 - 3 seconds anyway.  If you opened 100 files a day, that's still only 5 minutes.

    Finally, you're always bitching about complaints that "Windows 95 did this" or "Office 97 did that" or "Windows XP broke this" -- "Why the HELL are you living in the PAST??!?!?!?!  THIS IS 2012?!?!??!  WINDOWS 7!!!!   ALL THOSE PROBLEMS ARE SOLVED!!!!!!"  But you're bitching about a previous job.  You no longer have to deal with it.  It was probably a suboptimal setup using the wrong tool.  So I ask:  WHY ARE YOU LIVING IN THE PAST?!?!?!?!?!?!? 



  • Goddamned you're stupid.



  • @nonpartisan said:

    What were you going to do with those 2.5 - 3 seconds anyway? [tantrum] [tantrum]

    I can't speak for everyone, but I would have spent the 2.5-3 seconds NOT losing my train of thought.

     

