In which Mott555 gets confused by GitHub


  • Grade A Premium Asshole

    @FrostCat said:

    ...this is exactly Blakey's use case.

    Blakeyrat makes a billion checkpoint commits throughout the day? You'd figure that he'd HATE SVN with a burning passion. Those server roundtrips are brutal.



  • Don't believe anybody who says stuff on my behalf, especially FrostCat. They're always wrong.


  • Discourse touched me in a no-no place

    You don't want to make a bunch of commits during the day, and then try to merge them all down to one at the end so your boss won't be upset with how many you make? It's funny, because that's what I remember you saying.



  • You mean this topic?

    https://what.thedailywtf.com/t/git-atlassian-stash-how-do-combine-commits-before-making-a-pr/47902

    Note it was originally in the "don't be a dick, be helpful" category, which others failed to follow even if we expect blakey to ignore the first bit. A few posts in:

    @blakeyrat said:

    @IngenieurLogiciel said:
    You can, maybe... but you shouldn't.

    Ok well I think I should. So does my boss. Thus this thread.


  • ♿ (Parody)

    @FrostCat said:

    You don't want to make a bunch of commits during the day, and then try to merge them all down to one at the end so your boss won't be upset with how many you make? It's funny, because that's what I remember you saying.

    It was more about him not knowing how to make the "stash" part of git work and forgetting that git was version control software, not backup software.



  • @mott555 said:

    I whooshed.

    I don't think so. You just proved the point of that website!


  • Discourse touched me in a no-no place

    @boomzilla said:

    It was more about him not knowing how to make the "stash" part of git work and forgetting that git was version control software, not backup software.

    But the way he was using it isn't all that dissimilar to the way @bugmenot (or whoever, I CBA to scroll that far up) works.

    We'll know by whether blakey yells some more or stops responding to what I said: the latter is his tacit way of admitting he was wrong.



  • @FrostCat said:

    his tacit way of admitting he was wrong.

    But he's never wrong. Just ask him.


  • Discourse touched me in a no-no place

    Yes, well, if you look up the word "tacit" you'll see how it applies to what I wrote.



  • I know what "tacit" means, and I do see how it applies. I was simply saying (sarcastically) that it can't be an admission, tacit or otherwise, of wrongness if he's never wrong.


  • Discourse touched me in a no-no place

    @HardwareGeek said:

    he's never wrong.

    Well, obviously, his words notwithstanding, he's not correct in that assertion.



  • @FrostCat said:

    he's not correct in that assertion.

    Thus the sarcasm.


  • Discourse touched me in a no-no place

    @bugmenot said:

    You want to move your repos to a new third party. What do you have to do, and how much time does it take to do it?

    [...]

    With SVN, not so much. Not at all, in fact. You pretty much have to either have shell access [to the old server] of some kind to do a dump of the repo in order to move it,

    Nope. svnsync can quite easily do it, though naturally it will take its time depending on the size of the repo.

    # Prerequisite: the mirror repo must already exist and accept revprop changes
    svnadmin create /var/svn/repos-mirror
    echo '#!/bin/sh' > /var/svn/repos-mirror/hooks/pre-revprop-change
    chmod +x /var/svn/repos-mirror/hooks/pre-revprop-change
    
    # One off
    svnsync initialize file:///var/svn/repos-mirror svn+ssh://random-wtf-name.really.example.com/svn/root_of_tree
    
    # regularly so you pick up any new stuff
    svnsync sync file:///var/svn/repos-mirror
    

    In fact I had to do exactly that last year: get a local-server copy of a remote server since that (remote) server was supposed to be moved physically from DEU to GBR and there were a few :wtf:s associated with it (the above was the workaround):

    • Why? Reasons™
    • When? Dunno.
    • How long offline? Yes.
    • Backups? LOL!
    • and a few more...

    I may enumerate them in detail sometime.



  • @bugmenot said:

    sysadmin had to do something or other to rescue an SVN repo that would refuse [something]

    That sounds like the older db repo format. Subversion started with a Berkeley-DB-based repository format that had botched atomicity and could get into a "wedged" state that needed a manual fix. They later created a new format (FSFS) that stores revisions in plain files and doesn't have that problem.

    @bugmenot said:

    Windows

    The most common problem I've seen was spurious reverts. In Windows a file can't be deleted when it's open, so an update sometimes fails when the IDE notices a file has changed and starts rescanning the sources while other files are still being updated. I suspect this is what occasionally causes a file to be spuriously reverted. This problem unfortunately affects both Subversion and Git, but it is somewhat easier to spot in Git when it happens.

    @Maciejasjmj said:

    The point is, in Torvalds's utopia, there's no third party, and people throw patches around to each other.

    Torvalds argues that they can do that, not that they necessarily should. Most work will still be done in a centralized way. But having support for other workflows comes in handy now and then, often when contractors are involved, to reduce the hassle with permissions and VPNs. I've seen some pretty complex setups with ClearCase or CM/Synergy in my earlier job and they were slow as molasses. Subversion would have struggled with the same setups, while in Git they would have been simple and fast.


  • Discourse touched me in a no-no place

    @Bulb said:

    But having support for other workflows comes in handy now and then.

    The Gerrit model of code reviewing is one of those alternate workflows. Another is where you have one repository that contains your core code (which might or might not be open source) and another that contains the customisations for particular customers, some of which shouldn't be shared nearly so widely (e.g., stamping the customer's logo into the software). A third workflow is where you don't actually have a single centre of truth, but rather a cluster of systems that all synchronise with each other every hour: if one goes down, it's not a big deal because you can just switch DNS around to get everything back up and going; it's a fix of a few moments.

    All of these are easy to set up with a DVCS, and they rely on precisely the feature that there is no central source of truth. You can make a CVCS support the general scenarios, but it tends to take a lot more work and creative hackery. I prefer to save that sort of thing for actual work. ;)
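
    A minimal sketch of that last mirror-cluster workflow, assuming bare repositories and a cron job (the host name and paths here are made up for illustration):

    # One-off: create a bare mirror of a peer node
    git clone --mirror ssh://peer1.example.com/srv/git/project.git /srv/git/project.git

    # Hourly (e.g. from cron): fetch everything the peer has, prune refs it deleted
    git --git-dir=/srv/git/project.git remote update --prune

    Each node in the cluster can mirror its peers the same way, so switching DNS to a surviving node is all the failover that's needed.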


  • Discourse touched me in a no-no place

    @Bulb said:

    In Windows a file can't be deleted when it's open

    I hate that feature of Windows. It's the source of all sorts of problems in lots of places. For example, it's why Windows has got the reputation of needing a lot of reboots (whether that's a fair rep or not).



    The case I've seen was a code base on which many projects were running, done by separate teams in faraway places and under separate companies (they were initially in the same holding, and later some of them were sold off and became completely separate companies). So the work was done in isolation and then integrated by a separate integration team.


  • kills Dumbledore

    @Bulb said:

    ClearCase

    @Bulb said:

    slow as molasses

    Nothing to do with any weird setup; ClearCase is doing well when it's only as slow as mole asses.



    Well, the weird setup was to make it at least as fast as mole asses. Because that is what it was when we used a local replica (i.e. same building, 100 Mb/s net) and snapshot views. Before that, colleagues sometimes worked with a remote server (in another city) and creating a branch was a 2-hour task...


  • Discourse touched me in a no-no place

    @Jaloopa said:

    as slow as mole asses

    According to this helpful website, the top speed of a mole (and presumably its ass) is 6 km/h.


  • Winner of the 2016 Presidential Election

    But isn't the point of that website to make people whoosh?

    Filed Under: Whooshception!


  • Grade A Premium Asshole

    @Bulb said:

    That sounds like the older db repo format.

    It might have been, yeah. I have vague memories of pushing really hard to move to the sharded repo backend store, but I cannot remember if that change ever happened. I guess if it happened in the 1.6.x era, was easy to do online, or wasn't too time-consuming offline, it happened over a weekend some time.

    @Bulb said:

    In Windows a file can't be deleted when it's open..

    True. But it can be renamed in almost every case. :wtf: Thanks, Microsoft!


  • Grade A Premium Asshole

    @PJH said:

    Nope. svnsync can quite easily do it...

    True. There are tools that will - if you have the access rights - remotely read out an entire repo and make a backup of it. When I used them, they were slow as - what is it now? - frozen mole asses?

    The one I used was unreasonably fiddly, and failed to handle errors (like dropped network connections, etc.) well. No idea why. Maybe it was someone's freshman CS project or something. 🤷

    Is there official tooling for this sort of backup, and is it substantially slower than one would expect for the task being performed?


  • Grade A Premium Asshole

    @Bulb said:

    Torvalds argues that they can do that, not that they necessarily should.

    Yeah. There's plenty of room in git-land (using software like gitolite) to use git almost exactly like one uses SVN. If you slapped a suitably restrictive frontend over git, you could make the user experience exactly like SVN. (Though, one would need to do something slightly clever to simulate and store SVN properties.)
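
    For the curious, a rough sketch of how far plain server-side Git config gets you toward SVN-like behaviour before you even reach for gitolite (the paths and URL below are made up):

    # One central bare repository, linear history only
    git init --bare /srv/git/central.git
    git --git-dir=/srv/git/central.git config receive.denyNonFastForwards true   # no history rewrites
    git --git-dir=/srv/git/central.git config receive.denyDeletes true           # no deleting branches

    # Clients then work roughly SVN-style: clone once, commit, push
    git clone ssh://git.example.com/srv/git/central.git

    gitolite adds per-branch and per-user permissions on top of this; SVN properties would still need the "something slightly clever" mentioned above.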


  • Discourse touched me in a no-no place

    @bugmenot said:

    Is there official tooling for this sort of backup, and is it substantially slower than one would expect for the task being performed?

    http://tortoisesvn.net/docs/nightly/TortoiseSVN_en/tsvn-repository-backup.html

    The simplest (but not recommended) way is just to copy the repository folder onto the backup medium.

    Then goes on to talk about svnadmin hotcopy.

    Not having looked previously (I knew we did them, just not how), it appears we do the former every Sunday, with incrementals the rest of the week.

    The full backup takes about 70 minutes to local storage (which then gets moved offsite) [the last gzip was 19 GB]; incrementals appear to take <1 minute (the gzips are on the order of 15 MB).

    No idea how long a hotcopy would take compared to that.
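
    For reference, a rough sketch of the two approaches mentioned above (the paths and the LAST_REV bookkeeping are made up; both commands can be run against a live repository):

    # Full backup: a consistent "hot" copy of the whole repository
    svnadmin hotcopy /var/svn/repos /backup/repos-$(date +%F)

    # Incremental: dump only the revisions added since the last backup
    YOUNGEST=$(svnlook youngest /var/svn/repos)
    svnadmin dump /var/svn/repos -r "$LAST_REV":"$YOUNGEST" --incremental | gzip > /backup/inc-$(date +%F).svndump.gz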



  • @Kuro said:

    But isn't the point of that website to make people whoosh?

    I prefer to think :headdesk: (but close enough)



  • @dkf said:

    I hate that feature of Windows. It's the source of all sorts of problems in lots of places. For example, it's why Windows has got the reputation of needing a lot of reboots (whether that's a fair rep or not).

    On the other hand, it's impossible for a buggy DLL to get fixed on disk and remain unfixed in memory, unlike in the Linux/OS X world. So Windows dodges that particular landmine.



  • @bugmenot said:

    True. But it can be renamed in almost every case. Thanks, Microsoft!

    Files are supposed to be referenced by their ID, not their name. The name is just meta-data, like the icon or the last modified date.

    Mac Classic and its applications got this right. The Windows OS gets this right. Most Windows applications do not.


  • kills Dumbledore

    @blakeyrat said:

    On the other hand, it's impossible for a buggy DLL to get fixed on disk and remain unfixed in memory, unlike in the Linux/OS X world. So Windows dodges that particular landmine.

    Rename buggy.dll to buggy.dll.old, copy in new fixed version of buggy.dll



  • @Jaloopa said:

    Rename buggy.dll to buggy.dll.old, copy in new fixed version of buggy.dll

    Uh ok? I don't get your point.


  • kills Dumbledore

    Any new process that starts and references the DLL will get the new one, any existing process will continue to reference the old one. How is that any different to overwriting the version on disk in Linux?



  • @Jaloopa said:

    Any new process that starts and references the DLL will get the new one, any existing process will continue to reference the old one. How is that any different to overwriting the version on disk in Linux?

    But it's not at all what I said. Because you never updated the buggy one. You just renamed it and then put another version of it at the old path. And I'm pretty sure that's impossible anyway, but I'm too lazy to check right now.


  • kills Dumbledore

    There is a fixed version of the buggy dll on disk, and unpatched ones running in memory. It's exactly what you said, even if you'd have to be a moron to actually try and do updates like that



  • @Jaloopa said:

    There is a fixed version of the buggy dll on disk, and unpatched ones running in memory.

    Yes but I meant the same DLL.

    Jesus Christ you people. You're pedantic, but in the dumbest ways.


  • Grade A Premium Asshole

    @blakeyrat said:

    Because you never updated the buggy one. You just renamed it and then put another version of it at the old path.

    This is exactly what happens on Linux. Anyone who has the old version open keeps referencing it until they go to open it by name again. Anyone else who goes to open the replaced DLL will get the new one.

    This is shockingly similar to how unlink(2) (followed by a copy of the updated file) works on *nix. Some might go so far as to call it identical.
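
    A quick sketch of that behaviour in a *nix shell, with made-up file names, holding the "old" file open on a file descriptor to stand in for a running process:

    exec 3< /tmp/libbuggy.so                    # hold the old file open, like a loaded library
    ls -i /tmp/libbuggy.so                      # note the inode number
    cp /tmp/libfixed.so /tmp/libbuggy.so.new
    mv /tmp/libbuggy.so.new /tmp/libbuggy.so    # atomic rename over the old name
    ls -i /tmp/libbuggy.so                      # different inode: new opens see the fix
    cat <&3 > /dev/null                         # fd 3 still reads the old, unfixed contents
    exec 3<&-                                   # close it; only now is the old inode freed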


  • Grade A Premium Asshole

    @PJH said:

    Then goes on to talk about svnadmin hotcopy.

    I guess I was unclear. That's server-side backup which (when last I used it) requires shell access.

    Is there official SVN Cabal-maintained software for doing remote backups for when you have full access to a repo, but cannot acquire shell access to the server which contains the repo?



  • @bugmenot said:

    This is exactly what happens on Linux. Anyone who has the old version open keeps referencing it until they go to open it by name again. Anyone else who goes to open the replaced DLL will get the new one.

    Duh, but on Linux, you do not know it's happening since the filesystem gives no warning or error that the .DLL you replaced was in use by a piece of code at the time. So you could easily think you've patched a security hole when you have not. That's what I mean by "landmine".

    Am I saying that happens frequently? No. Am I saying that's the hugest problem in computers ever and we're all going to die? No. But it is true that this is a problem that does not exist in Windows. (Or Mac Classic, for the record, since you people love to hear me talk about that.)

    (EDIT: and it's even more critical in Windows with its far superior file caching, because the second app wouldn't get the copy from disk; it'd get the same in-memory copy as the first app.)
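
    For reference, those stale in-memory copies are discoverable on Linux if you go looking; a small sketch (no particular distro assumed, needs a reasonably modern kernel for /proc/<pid>/comm):

    # List processes still mapping files that were replaced/unlinked on disk;
    # the old inode shows up with a "(deleted)" suffix in /proc/<pid>/maps.
    for pid in /proc/[0-9]*; do
        if grep -q ' (deleted)' "$pid/maps" 2>/dev/null; then
            echo "${pid#/proc/} $(cat "$pid/comm" 2>/dev/null) maps deleted files"
        fi
    done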


  • Grade A Premium Asshole

    It's clear that NTFS uses inodes-in-everything-but-name to refer to files behind the scenes. The fact that you can rename an exclusively-write-locked file is a good demonstration of this fact. It's clear that you're aware of the first of these facts.

    @blakeyrat said:

    Duh, but on Linux, you do not know it's happening...

    With the proliferation of hand-rolled installer software on Windows, you as a user also have no way of knowing whether an old DLL has been renamed out of the way and replaced with an upgraded one. Is this Doing It Wrong(TM)? Maybe. But Windows Makes It Possible(TM), so some shitlord has already done it.

    @blakeyrat said:

    ...in Windows with its far superior file caching...

    Speaking as a five-year Windows dev and a more-than-five-year Linux dev, I have observed that -as a developer and a user- Windows's file caching is seriously inferior to Linux's.

    Maybe you have some benchmarks or something that show MSFT's implementation kicking the shit out of the default Linux implementation. As a user, I have to say that I don't care. Windows's file caching is objectively inferior.



  • @bugmenot said:

    Speaking as a five-year Windows dev and a more-than-five-year Linux dev, I have observed that -as a developer and a user- Windows's file caching is seriously inferior to Linux's.

    You are an idiot.

    At best, Linux is back to being on par with Windows after the HUGE improvements made in Vista.


  • Grade A Premium Asshole

    Whatever, dude.



  • @bugmenot said:

    Whatever, dude.

    OH SNAP!

    Look, I fully admit my Linux information was out of date. But as of the last I knew, Linux had no predictive cache (or one existed but it wasn't installed by default on the most popular distros), and Linux had no ability for multiple instances of the same application to share code pages.



    Even in Linux kernel 1.0 (released Mar 13, 1994), an exec() mmap()s the loader into memory, then jumps to the _start symbol. The loader then reads all the symbol tables, mmap()s the executable into memory, mmap()s all of the libraries into memory, then fills the UND* symbols with their addresses from the symbol tables in the libraries. Any pages marked RO,SHARED in either the executable or a library will reuse the same memory segment on any future mmap() from any other process.

    Linux has /always/ supported shared pages, even of non-executable data.
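
    Easy enough to see from a shell, too; a small sketch (the choice of libc and sleep is arbitrary):

    # Start two unrelated processes that both load libc
    sleep 300 & P1=$!
    sleep 300 & P2=$!

    # The read-only/executable libc mapping shows the same device:inode in both processes,
    # i.e. the text pages are backed by the same page-cache pages.
    grep 'r-xp .*libc' /proc/$P1/maps
    grep 'r-xp .*libc' /proc/$P2/maps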



  • @blakeyrat said:

    you do not know it's happening

    Yes you do because you're the one who updated the system, so you know you need to reboot :wtf:



    The main fly in the ointment in the olden days was relocations that required a write into the program code -- aka "text relocations" -- because these completely broke COW and code page sharing (a text reloc on Windows would do the same thing, if your toolchain was too dumb to avoid them, that is). Fortunately, these are gone now due to W^X security requirements and improved PIC support (including x86-64 PC-rel).
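
    If you want to check a particular shared object for them, one way (assuming GNU binutils and a made-up library path):

    # A TEXTREL entry in the dynamic section means the object still needs text relocations.
    readelf -d /usr/lib/libfoo.so | grep TEXTREL   # no output means none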

    @bugmenot said:

    It's clear that NTFS uses inodes-in-everything-but-name to refer to files behind the scenes. The fact that you can rename an exclusively-write-locked file is a good demonstration of this fact. It's clear that you're aware of the first of these facts.

    The underlying native layer likely supports blowing away an in-use file as well a la unlink() -- otherwise, SFU would...have issues with POSIX compliance. 😛

    Wild idea: what would it take to make a loader that can "hot reload" a shared object into a running process after it's been changed on disk?



  • Not... completely gone.

    At least I'm assuming that's why srcds_linux (Valve's Source Dedicated Server for Linux) needs the texrel_shlib_t security context on libtier0.so / libtier0_srv.so to run on a Linux machine running SELinux.



  • @JazzyJosh said:

    Yes you do because you're the one who updated the system, so you know you need to reboot

    Except Linux users are constantly bragging about keeping their systems online for months at a time. Which means either they're running insecure in-memory copies of libraries, or they had to shut down every service using the library and restart them (equivalent to rebooting).



  • @tarunik said:

    Wild idea: what would it take to make a loader that can "hot reload" a shared object into a running process after it's been changed on disk?

    That seems like it would just cause instant crashing everywhere.



  • @blakeyrat said:

    @JazzyJosh said:
    Yes you do because you're the one who updated the system, so you know you need to reboot

    Except Linux users are constantly bragging about keeping their systems online for months at a time. Which means either they're running insecure in-memory copies of libraries, or they had to shut down every service using the library and restart them (equivalent to rebooting).

    One of the benefits of using a package manager (standard on Linux) is that it knows which packages depend on a library and can restart them accordingly.

    Granted, this doesn't affect anything you've installed manually.
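
    On Debian-ish systems, the usual way to find the stragglers is tooling like this (assuming the debian-goodies package for checkrestart, or needrestart installed separately):

    # List processes/services still running code from libraries that were upgraded on disk
    sudo checkrestart
    # Or, interactively offer to restart the affected services
    sudo needrestart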


  • Discourse touched me in a no-no place

    @blakeyrat said:

    Am I saying that's the hugest problem in computers ever and we're all going to die? No.

    Damn, you're awfully mellow today.



  • @blakeyrat said:

    That seems like it would just cause instant crashing everywhere.

    I hotdeploy code changes all the time and it works fine :wtf:

