50 Versions of Shed Control Wars


  • Discourse touched me in a no-no place

    @RaceProUK said:

    Isn't that every episode?

    Well, yes, and it's almost like every blakey post ever, too, not coincidentally.


  • Java Dev

    I've had a huge argument with a colleague once. He was advocating git, and as his main supporting argument cited being able to have private branches, such that your development copy could work significantly differently from the master copy.

    My point was that, in a corporate environment, allowing such a set-up, far from being a desirable feature, is more like a bug.

    I'm not sure if I ever managed to convince him.



  • I used to work on a game where the rendering team would routinely work with audio disabled, and then wonder why they kept blowing the game's memory budget with, like, every third checkin they made... :/


  • Discourse touched me in a no-no place

    @blakeyrat said:

    Why isn't there one?

    Because there's no good way to make it work. You can't have a lock without enforcement of that lock, and the enforcement messages need to propagate in a way that doesn't work with the distributed repositories model.

    In a classical Centralised VCS (e.g., SVN) you've got one repository that is the store of Truth. It can ensure that if a lock is set, nothing can get written to the repository in the locked part except by the lock owner (or an admin, if necessary).

    In a Distributed VCS (e.g., Git) there are many repositories that are all peers (including one per user). One might be nominated by management as Truth store, but at the software level there's no such thing. Thus, to enforce a lock, you've got to propagate a message to all those peers that the lock exists. This is a Known Hard Problem because the peers don't communicate all that much. It's not a problem for the content of the repositories since they used a clever mechanism so that files are addressed by the cryptographic hash of the content, and an eventual consistency model for synchronization. But that doesn't work for locks, where you need to distribute the lock policy immediately in order to prevent the users of the peers from writing.
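
    (To make the content-addressing bit concrete, here's roughly what it means; the second command just re-derives by hand what git hash-object computes:)

      $ echo 'hello' > greeting.txt
      $ git hash-object greeting.txt
      ce013625030ba8dba906f756967f9e9ca394464a
      $ printf 'blob 6\0hello\n' | sha1sum      # object id = SHA-1 of "blob <size>\0<content>"
      ce013625030ba8dba906f756967f9e9ca394464a  -

    Two repositories that have never spoken to each other will still agree on that id, which is why content can be synchronized lazily. A lock has no content to hash; it's a claim about the future, so there's nothing for the peers to passively agree on.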

    Now, if you were able to guarantee that all the repositories are online (hint: not true in reality) then you could use something like the Paxos Consensus protocol for locking (or rather for agreeing that a commit does not violate the current locking policies) but that's a pretty high grade algorithm and we don't have the online guarantee anyway. Developers really do work offline (and the ability to do so is one of the good things about a DVCS). It turns out to be easier to use different working practices instead, with everyone working on their own branch (hence no chance of immediate collisions) and then only needing to resolve things at the point the branch is merged. There's plenty of industrial experience that suggests that for most programming, the pain of merging isn't too bad most of the time.

    It helps that for normal source code, git is better at auto-merging than svn ever was. Doing merging in svn is a painful process, so people put it off and that makes it even more painful when finally they have to do it anyway. Which is deeply stupid.

    @tar said:

    In this particular scenario, it seems like your admins are TRWTF.

    Overworked too, I believe. I can't go back and ask them; it was a good long time ago (and at least some of them are now dead. [spoiler]At least one heart attack, and another due to cancer. Smoking really fucks your health over.[/spoiler])


  • Garbage Person

    I mean, for people doing high end experimental refactor/rebase type work, a private branch makes some semblance of sense.

    But, well, you can also accomplish that with... A branch.



  • @dkf said:

    But that doesn't work for locks, where you need to distribute the lock policy immediately in order to prevent the users of the peers from writing.
    I don't necessarily think locks belong in Git, but to play devil's advocate: SVN has this same problem. When I svn lock foo.txt, there's nothing to stop other users from editing foo.txt; even though Subversion is "online", there's nothing that actually propagates to the clients.

    (Nitpicker's corner: svn:needs-lock doesn't really contradict what I said; that just tells Subversion to tell the file system to mark the file read-only. There's nothing to stop the user from chmod +w foo.txt and continuing, and, of course, Git could do the same thing.)

    What "enforces" the lock is a protocol that says that, whenever anyone needs to edit foo.txt, they will first svn lock foo.txt. At that point there's a message to the server.

    Git could do the same thing. Each repository keeps track of what files are locked and a token for each existing lock. When you want to work on foo.txt, you have to contact your upstream repository, or ignore the lock protocol at your own risk. If you're offline, so be it, you have to work at your own risk. If you're online, things work the same as Subversion.

    (One difference is that with Git, you might want to have it support locking multiple repositories -- e.g. when you say git lock foo.txt it contacts your upstream repository, and the upstream repo says "well it looks like foo.txt isn't locked to me, but you also need to lock with my upstream repo https://...whatever... to be sure.")
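
    A sketch of what the client half could look like as a wrapper script (none of this is a real git command or config key; "git-lock", lock.serverUrl and the /lock endpoint are made up, the point is just how thin the protocol is):

      #!/bin/sh
      # git-lock <path>: ask the configured upstream whether <path> is free,
      # and if so remember the lock token it hands back. Hypothetical plumbing.
      path="$1"
      upstream=$(git config lock.serverUrl) || {
          echo "no lock server configured; carrying on offline AT YOUR OWN RISK" >&2
          exit 0
      }
      if token=$(curl -sf "$upstream/lock?path=$path&owner=$(git config user.email)"); then
          mkdir -p .git/locks
          echo "$token" > ".git/locks/$(echo "$path" | tr '/' '_')"   # our proof of ownership
          echo "locked $path"
      else
          echo "$path is already locked upstream (or the server is unreachable)" >&2
          exit 1
      fi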



  • You are WAY over-thinking this.

    Git needs some way of marking a file as "locked". Before it does that, it needs some way of asking a server whether the file is already locked. That's... that's pretty much the entire feature.

    Yes, I know the ability to work offline is a good feature of Git. I GET IT. Shut up about that. But there's a difference between "being able to work offline" and "being forced to work offline", and Git only does the latter.


  • Java Dev

    Another thing you could do is only enforce the lock upstream - developers are still allowed to work on foo.txt, but they cannot push upstream because the file (or the entire repository) is locked there.
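
    That kind of upstream-only enforcement is basically a short server-side pre-receive hook, something like this sketch (where the locked list lives, a plain file next to the hook here, is just an assumption):

      #!/bin/sh
      # pre-receive hook on the upstream repo: reject any push that touches a locked path.
      locks="$(dirname "$0")/locked-files.txt"   # one locked path per line
      [ -f "$locks" ] || exit 0
      zero=0000000000000000000000000000000000000000
      while read old new ref; do
          # branch creations/deletions have an all-zero sha on one side; skip them for brevity
          [ "$old" = "$zero" ] && continue
          [ "$new" = "$zero" ] && continue
          for f in $(git diff --name-only "$old" "$new"); do
              if grep -qxF "$f" "$locks"; then
                  echo "push rejected: '$f' is locked upstream" >&2
                  exit 1
              fi
          done
      done
      exit 0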

    In our inhouse system, all work is on branches. File-level locks are used to serialize merges to master - prevent the problem where someone changed the file while you were busy merging it. Repository-level locks are used in the larger products when large projects need to be merged, often referred to as 'stack landings'. This typically involves the entire product being locked down on Friday afternoon, and unlocked on Sunday/Monday/... after the merge is completed and tests have passed post-merge.

    However, I don't see much of a point in willy-nilly 'you cannot edit this file because I want to have sole rights on it right now'. Try talking to your colleagues instead.



  • @PleegWat said:

    Another thing you could do is only enforce the lock upstream - developers are still allowed to work on foo.txt, but they cannot push upstream because the file (or the entire repository) is locked there.

    That's fine AS LONG AS THEY ARE NOTIFIED OF THE LOCK WHEN THEY BEGIN WORK ON THE FILE.



  • I actually agree with Blakeyrat to some extent here: a lock that doesn't prevent you from pushing is barely more useful than no lock at all. (I could make an argument it's less useful.) That's just flipping around who has to deal with the conflict. The point of the lock is to avoid the conflict in the first place, which means you need to be able to request the lock before beginning work.

    Edit: you'd also want to block pushes that include a locked file, of course. It should not be surprising that this is what Subversion does.



  • Yeah, I guess my line of thinking was that there isn't actually a good tool to do this specific job (collaborating on non-diff-able files), which is interesting, because it doesn't seem that difficult to solve: 95% of the work is done anyway, just by having a good vcs. If this thread is anything to go by, the problem is not technical, but ideological. And that's bullshit.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    asking a server whether the file is already locked.

    Well, which one? The whole point of a dvcs is there's no authoritative server that could say that.

    I'm just sitting here, literally eating popcorn, wondering how long it's going to take for everyone to realize you know this and quit wasting their own time arguing with you.



  • @FrostCat said:

    Well, which one? The whole point of a dvcs is there's no authoritative server that could say that.

    That is the part that would have to be added, you fucking piece of shit. We've been over this like twice already. Of course I know you read at like a 3rd grade level, so you probably just didn't comprehend any of it.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    you fucking piece of shit.

    Awww, you're so grump! Here's a video to cheer you up.
    https://www.youtube.com/watch?v=mXtD3gPJgJU#t=222



  • @FrostCat said:

    Well, which one? The whole point of a dvcs is there's no authoritative server that could say that.

    That's OK; there just wouldn't be an authoritative notion of who has locked a file. If you try to sync with or take out a lock from an upstream repo where the file is locked, it'll fail. If you do the same with an upstream repo where the file is unlocked, it'll work.

    The same thing is true of normal files. The fact that there's no authoritative notion of what the contents of garble.c are doesn't prevent me from editing garble.c. If I sync with a repository with which my changes are compatible, then my changes work. If I sync with a repo with which my changes are incompatible, then I get a conflict or have to back them out.

    If you're in an organization where you have a by-fiat authoritative repository, then solutions to both of these fall out exactly the same.


  • Discourse touched me in a no-no place

    @EvanED said:

    If you try to sync with or take out a lock from an upstream repo where the file is locked, it'll fail. If you do the same with an upstream repo where the file is unlocked, it'll work.

    Great, it's Schrodinger's repository. No thanks.



  • @FrostCat said:

    Great, it's Schrodinger's repository. No thanks.
    When you don't have a by-fiat central repository (and from some points of view, sometimes even when you do), git is already Schrodinger's repo. How would locks be any different?


  • Discourse touched me in a no-no place

    @EvanED said:

    How would locks be any different?

    Conceptually, it just seems worse.

    I don't mind the idea of not being able to lock, but I'm used to working on projects that have multiple branches anyway, so it's not often even an issue: the odds are that if someone else wants to edit the same file, he's going to be on a different branch, and someone will wind up merging later regardless.



  • It's so fucking simple, ok:

    Binary files are not stored in the main repo. Instead there is a placeholder file, specifying the location of the current owner of the file, as well as information about any active intents to edit, according to a specified protocol. There is a tool that can convert this placeholder into a usable file, either via an arcane sequence of commands (for nerds), or automatically and transparently (for plebs), by adding an intent to edit to the placeholder file (ideally with a reason and an expiry time), syncing with the file owner's repo, and giving you the file to edit, if available. Users would be free to subvert the process if they wanted, but if anyone decides to be a dickhead about it, it'll be right there in the diff for anyone to see.
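
    (Concretely, the placeholder might look something like this; every file name and field here is invented on the spot:)

      $ cat art/title_screen.psd.pointer
      current-owner:   git@buddys-workstation.example:art-assets.git
      intent-to-edit:  judy@example.com   reason="new logo"      expires=48h
      intent-to-edit:  fred@example.com   reason="colour pass"   expires=24h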

    I mean, I'm sure this isn't the best way to do it, but the point is: why isn't there something better already? Surely it would be better to have an established way to do this common thing than having to house-rule it on to every project, and force humans to do a bunch of pointless accounting that would be better left to computers? Or to waste pointless hours telling people why the best currently available vcs just isn't for them?



  • @Buddy said:

    I'm sure this isn't the best way to do it

    QFT?



  • Prove It



  • Sure. I am as sure as you are.
    (Why are we conversing in bold?)



  • I will be a whole lot surer when I have seen evidence of a better way to do it.



  • Here's an idea: track the binary files under version control like (I think) a sane person. Open up wiki.example.com/LockedFiles and add your name with the file you want to lock.

    (As far as I'm concerned, not tracking isn't a solution.)



    Take two developers working on source (textual) code. The first one does a ReSharper pass to re-org the file into one format and then begins making changes. The second does the same thing, but with a different re-org style... Now, without a lock, things are hosed.

    Compare to a VCS (like TFS) that does NOT put exclusive locks on files by default, but does flag them as being edited by someone else (allowing the merge paradigm).

    Either developer would upgrade their lock to full before doing the ReSharper thing. If someone else has already started an edit, then this lock would fail and stop the developer from doing something that would be hard to merge.



  • Binaries should be under version control, except 1) Not stored as deltas. 2) Previous and intermediate versions shouldn't be downloaded unless specifically requested.



  • Blakeyrat is right that it is dumb that there's no way to set a repo as "main". It may be very nice and socialist to declare every repo an equal peer of every other repo, but in 90% (statistic I made up because it gives heft to my argument) of use cases there is a central repo everyone else syncs against. The design of git however doesn't seamlessly integrate the most common use case. You can fake it, which solves most issues, but passing the effort of doing something common onto the users is poor design.

    That aside, the solution is simple, which makes its absence an even greater design fault. It's absent not because it's hard to implement, but because it goes against the designers' ideology.

    As things are now, you just need a "locks" file checked in the central repo (keep it in a separate branch, not on master, so you don't pollute its history with a bunch of commits about locks). Then you just add the path to the file you want to lock there, push with a message describing how long you intend to hold the lock or whatever, and when you are done you remove the path. Anyone coming after you who'd want to lock the file would have to pull the lock file first, would see your lock, and could then email you to hurry up.

    You could write a tool to handle this automatically with a single command, so you don't have to do things by hand.
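
    Rough sketch of that single command (the side clone in ~/.repo-locks, the branch name "locks" and the LOCKS file are all just illustration; only the git commands are real):

      #!/bin/sh
      # lock-file.sh <path>
      path="$1"
      cd ~/.repo-locks || exit 1                 # a small clone kept on the locks branch
      git pull -q --ff-only origin locks         # get the current lock list
      if grep -qxF "$path" LOCKS; then
          echo "'$path' is already locked by:" >&2
          git log -1 -S"$path" --format='%an <%ae>, %ar' -- LOCKS >&2
          exit 1
      fi
      echo "$path" >> LOCKS
      git commit -qam "lock $path"
      # the push is the arbitration point: if someone grabbed a lock in the meantime,
      # this is rejected as a non-fast-forward and you pull and retry
      git push -q origin HEAD:locks || { echo "raced with someone else; pull and retry" >&2; exit 1; }
      # unlocking is the same dance with the line removed instead of added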

    You'd still have to rely on your devs not forgetting to lock files before editing them, but you could add checks into the system that don't allow them to push files that should have been locked. You could add policies about what kinds of files have to be locked or not, etc, but that's beside the point.

    Yes, this means that you need to have access to the main repo to lock a file, and have to have the discipline not to mess with files you don't have a lock on since the computer won't stop you from editing them. I imagine this is also true in centralized systems, though. And really, what are the chances that you don't have access to the main repo in the office, or even in your house? Most people can't code without access to SO to copy code from.

    Also, text files are binary files too. The only difference is you have automatic diff and merge tools already on hand.


  • Discourse touched me in a no-no place

    @Buddy said:

    Binaries should be under version control, except 1) Not stored as deltas. 2) Previous and intermediate versions shouldn't be downloaded unless specifically requested.

    Binaries can be under version control just fine (e.g., the icons for your application). What shouldn't be under version control (normally) are derived artefacts. Most binaries are derived, so shouldn't be there. The only exception to this rule I've got in my repositories is for documentation where the tool for generating the distribution version of the docs isn't something that I've figured out how to script in our build environment. I could figure it out with a lot of work if it was critical, but keeping the built version is easier.


  • Discourse touched me in a no-no place

    @Kian said:

    Blakeyrat is right that it is dumb that there's no way to set a repo as "main". It may be very nice and socialist to declare every repo an equal peer of every other repo, but in 90% (statistic I made up because it gives heft to my argument) of use cases there is a central repo everyone else syncs against.

    Yes, but that doesn't mean that the main repo would accept your messages about locking in the first place, much less do so in a timely fashion. The “distributed” is really important; it means that “main” is a different concept to what you're used to.

    However, the real key is that commits do not conflict. They just form separate branches. In git, the branches are basically unlabelled — it's particular commits that carry the labels — but with the right options you can see the branch history as it actually is, in all its gory detail. Other DVCSs bake the branch labels directly into the nodes, but can end up with multiple branches with the same label at once, which you have to then resolve. “Have to resolve” is a general feature of a DVCS. There isn't an option to completely prevent it: you can get divergence if there are two people working at once even if they're both online (if they happen to commit at the same time). However, true conflicts only happen when you're merging (git's rebasing is a kind of merging) and not at any other time, i.e., when you expect and might have time to deal with it.

    The locks branch won't work well. What happens if two people push their updates to it at once? Well, in a DVCS you get a fork in the branch, so now you've got to deal with the problem you had in the first place, except in a proxy for the real thing. Added complexity, solved nothing: are we winning yet?

    @blakeyrat said:

    You are WAY over-thinking this.

    No. The reasons why things don't work are rather deep. The point about having multiple peer repositories is that this allows other ways of working. For example, git allows integration of peer review with the repositories: you can have a staging repository that you can push changes to, and then have the transfer of the changes to the main repository gated by a service like gerrit. If you do that, the repository that people pull from is the main one, yet the one which people push to is the staging one: this works.

    Where would the locks go in that scenario? ;)

    DVCSs aren't well suited to people who are used to using locking as part of their workflow. People who are highly productive with a DVCS (many examples about) don't use locking anyway, and their lives seem to still go on fine.



  • @dkf said:

    Binaries can be under version control just fine (e.g., the icons for your application). What shouldn't be under version control (normally) are derived artefacts. Most binaries are derived, so shouldn't be there.

    That's true, though I still think that files that don't generate meaningful diffs don't need to be delta-encoded.

    @dkf said:

    Yes, but that doesn't mean that the main repo would accept your messages about locking in the first place, much less do so in a timely fashion.

    This other post is just pure trolling though, right? Have you heard of GitHub?

    @dkf said:

    What happens if two people push their updates to it at once? Well, in a DVCS you get a fork in the branch

    Um, in my opinion, what would happen if two people tried to push to the same repo at once is that one push would be accepted and the other would be rejected.
    The person whose push was rejected could choose to create and push a whole new branch in which they are the one true holder of the lock, not that other pretender, but why would they do that?

    @dkf said:

    >You are WAY over-thinking this.

    No. The reasons why things don't work are rather deep.

    :trollface:

    @dkf said:

    The point about having multiple peer repositories is that this allows other ways of working.

    I feel like I should leave answering this to blakey. He really loves having to point out blatant stupidity like this over and over and over again. “The reason we can't allow a different way of working is because we already allow different ways of working”.

    @dkf said:

    If you do that, the repository that people pull from is the main one, yet the one which people push to is the staging one: this works.

    Where would the locks go in that scenario?

    So, you're saying that optional functionality cannot be added to a program because we haven't yet considered how it would integrate with some other optional behavior? Top shelf blakeybait. High five.

    @dkf said:

    Git DVCSs aren't well suited to people who are used to using locking as part of their workflow. People who are highly productive with a DVCS (many examples about) don't use locking anyway, and their lives seem to still go on fine.

    I feel like you're using file locking as a straw- and/or bogeyman. Nobody here wants to put files under lock and key. Preventing people from accessing files isn't the only way to solve this problem, nor is it a good way. A good way to solve this problem would result in a minimum of hassle for users wanting to edit files that don't generate meaningful diffs.



  • @Buddy said:

    Binaries should be under version control, except 1) Not stored as deltas. 2) Previous and intermediate versions shouldn't be downloaded unless specifically requested.
    Git (mostly) has you covered with (1): objects are stored whole, and delta compression only happens behind the scenes when they're packed. (2), no. And that would be bad if you have lots of very large assets; Git wouldn't be a good choice then. But binary != big, and for smaller binary files (like most Word docs or whatever) (2) isn't a practical impediment.

    @Kian said:

    Blakeyrat is right that it is dumb that there's no way to set a repo as "main". It may be very nice and socialist to declare every repo an equal peer of every other repo, but in 90% (statistic I made up because it gives heft to my argument) of use cases there is a central repo everyone else syncs against. The design of git however doesn't seamlessly integrate the most common use case. You can fake it, which solves most issues, but passing the effort of doing something common onto the users is poor design.
    This is an objection I only barely understand. At least from my perspective, git does let you set a repo as main; it's origin, i.e. whatever you cloned from. You can push and pull from that repo with just git pull/git push, with no need to specify the repository. Now, from another viewpoint this isn't really setting the repository as main, because if you clone a clone then the first clone will (until you change it) appear as origin, but this seems like a minor quibble.
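
    (Concretely, straight after a clone; hostname and repo name made up:)

      $ git clone git@gitserver.example.com:widgets.git && cd widgets
      $ git remote -v
      origin  git@gitserver.example.com:widgets.git (fetch)
      origin  git@gitserver.example.com:widgets.git (push)
      $ git pull    # defaults to origin and the branch you're on
      $ git push    # ditto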

    Said another way, what actual behavior do you want git to have that it doesn't? "Set a repo as main" isn't it... that's not a useful task on its own. That setting has to affect something that is actually useful. So what is it?

    @dkf said:

    What happens if two people push their updates to it at once? Well, in a DVCS you get a fork in the branch, so now you've got to deal with the problem you had in the first place, except in a proxy for the real thing.
    No. Merging these two changes

      foo.png -- Fred
     +bar.png -- Judy
      baz.png -- George
    

    and

     foo.png -- Fred
    +norf.png -- Amy
     baz.png -- George
    

    is a 20 second task. If norf.png were also bar.png, then Amy seeing that "oh, I can't take out a lock on bar.png after all" is also a 20 second task. (Forget that .png probably isn't your source image format. Whatever.)

    Amy doing two hours of work on bar.png, pulling, and then seeing the conflict is two hours down the drain, because there's probably no way to merge them.

    This is the entire point of locks in the first place for that type of file.

    @dkf said:

    Where would the locks go in that scenario?
    There are multiple options.

    • Grab locks all the way up the chain. (You'd need commit rights for this of course, but there are scenarios where this is realistic.)
    • Grab locks from a couple "likely hits" repos, e.g. whatever lieutenant is most connected to the component you want to change.
    • Don't use locks in that scenario.

    Seriously, it is not hard to envision very realistic scenarios where Git could provide pretty good locking support, and others where it could provide okayish locking support. Sure, it's not for everyone. But I was a daily git user for a few years, never using git format-patch. I don't go around declaring format-patch is useless.

    (Now, all that said, I'm not convinced that putting the locks in the repo is the right place for it instead of an outside channel. I just think that it would work reasonably well.)



  • I blame @accalia for making me read the most stupid flame war in history. People defending locks, binary files in code repos, atomic checkouts... So much stupidity. No wonder many struggle with git/hg/bzr


  • FoxDev

    @Eldelshell said:

    binary files in code repos

    There are a few excusable exceptions though, like program icons and other small resources that rarely if ever change.



  • Yes, we can agree with that just because there are no better alternatives.



  • We have binary inputs for test cases in the repository. Why? Because that's where they would go ideally, and they are small enough that the drawbacks don't nearly outweigh the benefits from it.

    There are probably a lot more than those, too; those are just what come to mind.

    @Eldelshell said:

    Yes, we can agree with that just because there are no better alternatives.

    I don't even see it as a bad thing. You don't get to take advantage of automatic merging or diff two versions, but that is only one item on a pretty long list of things you get from version control, and the rest of the list that I can think of applies.



  • @EvanED said:

    We have binary inputs for test cases

    I don't know what those are, but there's probably a better place/way. VCSs are for code.

    @EvanED said:

    I don't even see it as a bad thing.

    Until every remote operation takes 10 minutes of downloading/uploading/comparing stuff.



  • @EvanED said:

    if you have lots of very large assets, Git wouldn't be a good choice

    What would be a good choice?



  • @dkf said:

    Yes, but that doesn't mean that the main repo would accept your messages about locking in the first place, much less do so in a timely fashion.

    I know you can't always make that guarantee. The point is that the most common use case is that you can. Good design makes common uses easy. Bad design tells users to implement common functionality themselves.

    @dkf said:

    However, the real key is that commits do not conflict. They just form separate branches. In git, the branches are basically unlabelled — it's particular commits that carry the labels — but with the right options you can see the branch history as it actually is, in all its gory detail.

    I don't care about implementation details. "It can't be that way because that's not how it is" is not an argument. Reminds me of a conversation I had when I noticed some unexpected behavior in an app. "That's a bug," I said. "No, the program is doing what it was coded to do," I was told.

    In any case, I don't know the internal details of git, but I know that if one of my changes conflicts, I'm not allowed to move the head of a branch until I fix it. In the case I described, if your lock request conflicts when you try to merge, you don't get the lock.

    @dkf said:

    The locks branch won't work well. What happens if two people push their updates to it at once? Well, in a DVCS you get a fork in the branch

    You resolve it the same way every other conflict is resolved. One commit is the head and the other isn't. Whoever is the head wins. As far as I know, operations on git are atomic. I imagine a parallelized design solved that as the first order of business.

    @dkf said:

    Where would the locks go in that scenario?

    You don't sound like you know a lot about git. Are you sure you've used it? Obviously, the locks bypass peer review and go directly to the main repo. Alternatively, they go through peer review and you don't get the lock you asked for until it's past the staging server.

    @dkf said:

    DVCSs aren't well suited to people who are used to using locking as part of their workflow. People who are highly productive with a DVCS (many examples about) don't use locking anyway, and their lives seem to still go on fine.

    "Our design hasn't killed our users! Huzzah!" That's high praise. The problem is that yes, git is very flexible and you can use different mechanisms to effectively reproduce the effects you want. But common use cases should already be considered. Defending git's design is like arguing that assembly should be good enough for everyone. Why do you need function parameters? You can push and load!



  • @Kian said:

    Blakeyrat is right that it is dumb that there's no way to set a repo as "main".

    There is another way to look at it.... For 10% of the cases, there is no reason to have a "main", and Git works very well here.

    It is a subset of my normal comparison of CVCS vs. DVCS (regardless of actual tool). If you get benefit from having a central point (which typically implies having reliable connectivity) then go with a CVCS. I find this to be "the truth" for most business/corporate environments.



  • @dkf said:

    What shouldn't be under version control (normally) are derived artefacts. Most binaries are derived, so shouldn't be there.

    Not here. We use a lot of specification documents (Excel) that are parsed automatically to generate source and/or test code (Verilog, SystemVerilog or, for some test code, C). The released versions of these documents are in our document control system — checked by doc control to make sure all the i's are dotted and t's crossed, multiple layers of management approval, etc. — and the working copies are in Perforce, configured so that all edits are locked.

    The generated code files, as well as hand-written code, are stored in a different VCS, which is really a wrapper around a Perforce back-end. In this system, you can't check in a file you haven't told the repository you're editing, but the file isn't locked. Other developers are advised that you're editing it when they try to do so, but they aren't blocked — "Joe's editing this file, too, but my change will only take a couple of minutes. I'll try to check in before he does; then the burden of merging will be on him. Bwahaha!"



  • @TheCPUWizard said:

    It is a subset of my normal comparison of CVCS vs. DVCS (regardless of actual tool). If you get benefit from having a central point (which typically implies having reliable connectivity) then go with a CVCS.

    It's not so clear cut. The distributed approach has benefits, and you might decide that the pain when you need to perform synchronous operations is a worthwhile tradeoff. It's like multithreaded code. Lock-free algorithms and atomics are great, but sometimes you need to use mutexes. Making a language that doesn't include a mutex because you can roll one yourself is dumb. Saying "if you need a mutex then atomics aren't for you" is oversimplifying. Sometimes I need one, sometimes I need the other.


  • ♿ (Parody)

    @Buddy said:

    What would be a good choice [for large assets]?

    I can't speak for @EvanED, but there are lots of "document management" programs out there for tracking and versioning things that aren't text files. Seems like you could link them with something in your repo (tags? changeset hash?).

    Of course, then you have the issue of reintegrating the documents back with your code when you need to do that.

    @Eldelshell said:

    I don't know what [binary inputs for test cases] are, but there's probably a better place/way. VCS are for code.

    If your tests are written as code (and they probably are), it makes sense to keep the inputs there. Even if they aren't, it makes sense to keep them with the code they're meant to test in that it simplifies your configuration management.

    Your objection to keeping this sort of thing sounds like a rigid ideological constraint without a lot of thought behind it. (Maybe there's thought, but it's not evident here, I mean.)



  • Kian - I almost agree.... but (as previously pointed out) a robust locking pattern cannot be achieved for a distributed environment. So if you need one (as opposed to wanting something that might sort of work sometimes), then you need something that has a centralized enforcement mechanism.


  • ♿ (Parody)

    @TheCPUWizard said:

    a robust locking pattern

    Seems like there's no good way to do this if people have a working copy of stuff sitting in a local place. But if you can only have what you've checked out, then you have something that's fairly useless. It's all down to conventions and where they're exposed and people following them.



  • @boomzilla... I said Robust, not Infallible.... If someone removes a read only flag, then they are deliberately sabotaging the system. Once you open that door, then any system will start to fall apart....

    Give me direct access to the file system hosting a Git Repository and I can destroy it. Same for SQL access to the DB backing TFVC. Same for....



  • @TheCPUWizard said:

    a robust locking pattern can not be achieved for a distributed environment.

    It cannot be achieved for the general case. But if you are willing to establish certain constraints, you can. And those constraints are generally already in place in most cases.


  • Discourse touched me in a no-no place

    @TheCPUWizard said:

    Give me direct access to the file system hosting a Git Repository and I can destroy it. Same for SQL access to the DB backing TFVC. Same for....

    Give you the GPS coordinates and you can blow it up with a missile launched from a hidden base deep beneath the Sierra Nevada, all while twirling your moustache and throwing underlings into tanks of piranhas! Which is a bit off topic, but a wonderful mental image!

    An advantage of a DVCS is that everyone with a checkout also has a backup. ;)


  • ♿ (Parody)

    @TheCPUWizard said:

    @boomzilla... I said Robust, not Infallible.

    Robust is pretty subjective though.

    @TheCPUWizard said:

    If someone removes a read only flag, then they are deliberately sabotaging the system.

    Fair enough. Though if you're going to make something that requires this for every change, then you're throwing out the baby with the bathwater, IMO, when you're talking about SCM.

    But if you're only selectively locking things, you have more failure modes. Not to mention things going on in different branches.



  • Can you be more specific? Remember "lock" and "checkout" do NOT imply exclusivity - unless it is an exclusive checkout lock <grin>.

    Tell the VCS that you are going to start working on a file; it removes the lock and makes a central notation that you (possibly along with others) are working on the file....

    Branches are another topic altogether. In most cases I have abandoned having multiple active branches. Set up development practices so that everything (meaning 99%+) can be safely done directly on "main".


  • FoxDev

    because 300

