Unix Haters' Club



  • I had forgotten the inanity of the BBC keyboards on that particular score. Thanks for reminding me.



  • @JBert said:

    While I hate playing the "blame the user" game, this looks like you're doing it wrong. Why are you cleaning (deleting stuff) before committing your work using git?

    FTFY.



  • @JBert said:

    While I hate playing the "blame the user" game, this looks like you're doing it wrong. Why are you cleaning (deleting stuff) before committing your work?

    Hey, isn't it possible that the user's doing something dumb and Git is a gigantic pimple full of shit so when you pop it shit comes out all over your bathroom mirror?

    I don't care how dumb the user is, nothing excuses -x and -X doing different shit. That only reflects on how dumb the developer is.
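
    For reference, the two flags in question, per git-clean's documented behavior (a minimal sketch; -f is included because git refuses to clean anything without it by default):

    git clean -f -x    # removes untracked files, INCLUDING the ignored ones
    git clean -f -X    # removes ONLY the files git is ignoring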


  • Fake News

    @mott555 said:

    @JBert said:
    While I hate playing the "blame the user" game, this looks like you're doing it wrong. Why are you cleaning (deleting stuff) before committing your work using git?

    FTFY.

    Should he then print his work, put it on a wooden table, take a picture of it and then save it inside of a Word document?

    Maybe using version control for what it is meant to be used for is slightly less painful than that.

    @blakeyrat said:

    Hey, isn't it possible that the user's doing something dumb and Git is a gigantic pimple full of shit so when you pop it shit comes out all over your bathroom mirror?

    Oh no, it's a fact. But do you really have to explain it with pimples?



  • @blakeyrat said:

    Hey, isn't it possible that the user's doing something dumb and Git is a gigantic pimple full of shit so when you pop it shit comes out all over your bathroom mirror?

    I don't care how dumb the user is, nothing excuses -x and -X doing different shit. That only reflects on how dumb the developer is.

    Linus Torvalds has many, many faults.


  • ♿ (Parody)

    @blakeyrat said:

    I don't care how dumb the user is, nothing excuses -x and -X doing different shit.

    Not even backwards compatibility? That seems like the Taxing Power of development excuses.



  • From reddit: https://bugs.launchpad.net/ubuntu/+source/cupsys/+bug/255161/comments/28

    Copied here to save you a click (emphasis mine):

    What a fascinating bug!! My wife has complained that open office will never print on Tuesdays!?! Then she demonstrated it. Sure enough, won't print on Tuesday. Other applications print. I think this is the same bug. Here is my guess:

    Print to a postscript file. Observe the line:

    %%CreationDate: (Tue Mar  3 19:47:42 2009)
    

    Change "Tue" to anything else:

    %%CreationDate: (XTue Mar  3 19:47:42 2009)
    

    Save the file and it prints. Tools like evince work because they simply omit the "CreationDate" tag to begin with.

    Now something odd happens when my cups script (I am using the Brother MFC420CN) copies the file to a temp file. Some of the code is rearranged, not sure how or why, but it uses a command called "file" to identify the file as "PostScript". This check would work on the original file you printed, but by the time it runs the check on the temp file, it misidentifies. Normally it would return:

    PostScript document text conforming at level 3.0
    

    But there is another check that happens before the PostScript check. If it finds "Tue" at the fourth byte of the file, it identifies it as:

         Jan 22 14:32:44 MET 1991\011Erlang JAM file - version 4.2
    

    So it's not a problem w/ openoffice.org, cups, or the brother printer drivers. It is a bug in the file utility, and documented at https://bugs.launchpad.net/ubuntu/+source/file/+bug/248619
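
    Judging from the output quoted above, the offending magic(5) entry was presumably something like the reconstruction below: Erlang JAM files carry that fixed timestamp at offset 4, but with the spaces in the match string unescaped, the test ends at "Tue" and the rest of the line becomes the description that file prints.

    # hypothetical reconstruction of the buggy entry
    4   string  Tue Jan 22 14:32:44 MET 1991\011Erlang JAM file - version 4.2
    # intended: match the whole literal timestamp at offset 4
    # actual:   match just "Tue" at offset 4 and print everything after it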


  • ♿ (Parody)

    Remind me to try printing something tomorrow.



  • Reddit my ass, you stealing shit from my Twitter. Where, BTW, I posted this like a week ago.


  • BINNED

    Is there a youtube channel about retweeting reddit posts from facebook I can subscribe to?


    Filed under: Might as well be, We're getting so social we have no time to get out of the house. I also Discoliked the post!


  • BINNED

    @dkf said:

    But then again, real speed comes with not moving (or doing other actions) by one character or line at a time…

    7/10. Not your best work.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    I don't care how dumb the user is, nothing excuses -x and -X doing different shit. That only reflects on how dumb the developer is.

    It's not even the worst part of git. Having to understand a vastly complex model to do something simple is significantly more of a problem, as that impacts everyone, even those who know to be careful with case.

    (I mostly use git via a GUI. Then it's just not very nice. I prefer other DVCSes with good reason; their authors don't hate humanity…)



  • @Captain said:

    Emacs. Now there's a contemptible user interface



  • @HardwareGeek said:

    @Captain said:
    Emacs. Now there's a contemptible user interface


    M-x discourse-mode
    C-L 213
    C-x C-s



  • @dkf said:

    Having to understand a vastly complex model to do something simple is significantly more of a problem,

    Not sure about that. Version control, and particularly distributed version control, is a complex problem, and what might, on the surface, look like a simple thing to do is often not; the fact that people are using a full-fat DVCS for non-distributed porpoises and then running up against complexity where, in a single-user-single-machine-remote-backup scenario, there shouldn't be any complexity is not the fault of git. git is a very poor fit for the majority of people's use-cases except for the fact that github kinda requires it.



  • and what might, on the surface, look like a simple thing to do is often not;

    Good observation. Except that other DVCSes are easy.

    the fact that people are using a full-fat DVCS for non-distributed porpoises and then running up against complexity where, in a single-user-single-machine-remote-backup scenario, there shouldn't be any complexity is not the fault of git

    Git makes using git as a distributed version control system a lot harder than it has to be.

    Consider this very simple case:

    You have repository A. You clone A into repository A.B. You clone A into repository A.C. You make local changes in A.B. You make local changes in A.C (that don't conflict with A.B). You commit and push both.

    OOPS.
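
    A sketch of that sequence, with hypothetical paths (assume /srv/A is a bare repository); the second push gets rejected:

    git clone /srv/A A.B
    git clone /srv/A A.C
    # commit non-conflicting changes in each clone, then:
    (cd A.B && git push)   # succeeds
    (cd A.C && git push)   # ! [rejected]  master -> master (non-fast-forward)
    # and git's hint tells you to fetch and integrate the remote changes first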



  • @anonymous234 said:

    But there is another check that happens before the PostScript check. If it finds "Tue" at the fourth byte of the file, it identifies it as:
    Jan 22 14:32:44 MET 1991\011Erlang JAM file - version 4.2

    And yet, people will still insist that extensions are doing it wrong.



  • @Captain said:

    You have repository A. You clone A into repository A.B. You clone A into repository A.C. You make local changes in A.B. You make local changes in A.C (that don't conflict with A.B). You commit and push both.

    OOPS.

    Won't it just ask you to pull and merge the changes from A? What's wrong with that?



  • Sometimes you don't want to merge upstream changes. That's what's distributed about it...

    And no, it won't ask. It will tell you you need to rebase head on A.


  • ♿ (Parody)

    @Captain said:

    And no, it won't ask. It will tell you you need to rebase head on A.

    So, obvious follow-up question. What do other DVCSes do that you believe to be better?


    Filed Under: I know what hg does



  • In the same situation (no conflicts between A.B and A.C), Darcs just sends a patch upstream. It gets applied and it just works. If there's conflicts, it tells me that there's conflicts and lets me resolve them -- either by pulling from upstream, or editing the patches so they no longer conflict. (This means that I don't have to change the local version if I don't want to)

    This is nice, because I often find myself working on feature A.C while A.B builds in its own branch/directory. I can't do that with git, because git insists on re-writing a single file system directory with the branch contents. So if I start a build, I can't change the git branch I'm in, because it would change the source out from under the compiler. And then git makes it hard-to-impossible to have parallel directories that represent different branches.



  • @Captain said:

    In the same situation (no conflicts between A.B and A.C), Darcs just sends a patch upstream. It gets applied and it just works. If there's conflicts, it tells me that there's conflicts and lets me resolve them -- either by pulling from upstream, or editing the patches so they no longer conflict. (This means that I don't have to change the local version if I don't want to)

    Darcs looks nice. Unfortunately, I never heard of anyone using it. So, due to poor adoption, we only get the positive pitch from the authors and enthusiasts. No rants about its downsides from regular users, like we get with git and svn.

    @Captain said:

    This is nice, because I often find myself working on feature A.C while A.B builds in its own branch/directory. I can't do that with git, because git insists on re-writing a single file system directory with the branch contents. So if I start a build, I can't change the git branch I'm in, because it would change the source out from under the compiler. And then git makes it hard-to-impossible to have parallel directories that represent different branches.

    I'm using multiple folders for different branches. You just have to keep syncing them through origin.

    Frankly, since I'm not using git for compiled languages, single folder is a boon for me - just need to set up one scripting environment / web server / whatever.
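
    A sketch of that setup, with a hypothetical remote; one folder per branch, everything syncing through origin:

    git clone git@example.com:proj.git proj-master
    git clone git@example.com:proj.git proj-feature
    (cd proj-feature && git checkout feature)       # each folder sits on its own branch
    (cd proj-feature && git push origin feature)    # publish work from one folder...
    (cd proj-master && git fetch origin)            # ...and pick it up in the other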


  • ♿ (Parody)

    @Captain said:

    In the same situation (no conflicts between A.B and A.C), Darcs just sends a patch upstream. It gets applied and it just works. If there's conflicts, it tells me that there's conflicts and lets me resolve them -- either by pulling from upstream, or editing the patches so they no longer conflict. (This means that I don't have to change the local version if I don't want to)

    So...it acts kind of like svn? Yuck. Not a fan of automatic merging.

    With mercurial (hg), the default behavior is to warn you when you end up with multiple heads in a branch. You can go ahead and push, leaving multiple heads on the remote repo's branch, too. The multiple heads are treated as anonymous branches, and the user can decide to merge them or close one of them, or keep living in ambiguity.

    It's generally considered good form to pull down the new changes and merge them with your local changes.
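
    In hg terms, roughly (the hash and messages are illustrative):

    hg push
    # abort: push creates new remote head 1e108cc5548c!
    hg push --force    # allowed: the remote branch now has two anonymous heads
    # or the "good form" route:
    hg pull && hg merge && hg commit -m "merge" && hg push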



  • @boomzilla said:
    So...it acts kind of like svn? Yuck. Not a fan of automatic merging.

    There's nothing to merge if there's no conflicts. If there's a conflict, you will hear about it before things get changed.

    It's generally considered good form to pull down the new changes and merge them with your local changes.

    But then you're not really using distributed version control. Consider having a "branch" for your development environment. Another one for your production environment. Branch specific files (say, database configuration, hosts configuration, etc). If you have to sync, you're going to have to leave those configuration files un-managed and un-versioned.

    @cartman82 said:
    Darcs looks nice. Unfortunately, I never heard of anyone using it. So, due to poor adoption, we only get the positive pitch from the authors and enthusiasts. No rants about its downsides from regular users, like we get with git and svn.

    The biggest problem is that it's slow compared to git. And uses more RAM. The GHC team ended up having to switch from Darcs to Git, because it became way too slow.

    The UI is actually good. Download it and try it for a to-do list or something like that. You'll be surprised.


  • ♿ (Parody)

    @Captain said:

    There's nothing to merge if there's no conflicts. If there's a conflict, you will hear about it before things get changed.

    Then you're changing the parent changeset. Same difference.

    @Captain said:

    But then you're not really using distributed version control. Consider having a "branch" for your development environment. Another one for your production environment. Branch specific files (say, database configuration, hosts configuration, etc). If you have to sync, you're going to have to leave those configuration files un-managed and un-versioned.

    I don't think you understood, because there's nothing wrong with having however many named branches. I'm talking about multiple people working on a particular branch pushing back to a centralized repo (or to each other, I guess).

    I don't understand how you got to having things unmanaged and unversioned.



  • I don't understand how you got to having things unmanaged and unversioned.

    I think we misunderstood each other because I am coming from the assumption that I don't have "built-in" branches, like Git does, for the reasons I explained.

    In that case, a file is either tracked or it's not. If you have to pull from upstream, the local branch changes get overwritten (or you constantly have to "merge").


  • ♿ (Parody)

    @Captain said:

    I think we misunderstood each other because I am coming from the assumption that I don't have "built-in" branches, like Git does, for the reasons I explained.

    Is there a DVCS that doesn't have "built-in" branches? I'm still really lost.

    @Captain said:

    In that case, a file is either tracked or it's not. If you have to pull from upstream, the local branch changes get overwritten (or you constantly have to "merge").

    Yes, in hg, you would end up merging, because you and someone else have changesets that have the same parent. Not forcing you to merge those is doing it wrong (NB: lowercase diw).



  • Is there a DVCS that doesn't have "built-in" branches? I'm still really lost.

    Yes.

    Git has the 'branch' command. And when you switch to a branch, it overwrites all of the files in the repository directory. I'll take this as an example of "having built-in branches".

    Darcs does not have a branch command. You just clone the entire repository[1] into another directory. This is more usable for, say, compiled languages.

    Not forcing you to merge those is doing it wrong (NB: lowercase diw).

    I think a case can be made for not merging branch-specific configurations/keys/etc. For example, I never want to pull in another developer's keys. I don't even want to see them (for any legitimate reason, anyway)

    The point is, Darcs can keep my keys and database configuration versioned (locally) without having to share that versioning information with another repository. My keys never leave my branch. My database configuration never leaves my branch. Different sets of keys never get pulled in. I never have to decide whose keys to use (since it will always be mine)

    [1] Well, the clone can be lazy, so it doesn't take up much disk space.
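
    A sketch of that workflow (paths, file names, and patch names hypothetical; darcs prompts for every patch, so the local-only ones simply never get sent):

    darcs get /srv/upstream work            # lazy clone; newer releases spell it "darcs clone"
    cd work
    darcs add keys.cfg parser.hs            # hypothetical files
    darcs record -am "keys and db config" keys.cfg   # stays in my branch forever
    darcs record -am "fix the parser" parser.hs      # meant for upstream
    darcs push    # interactive: answer y to the parser fix, n to the config patch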



  • Extensions are doing it wrong. They are wrong, wrong, wrong. There is nothing right about them. File type is immutable metadata, file name is mutable metadata, don't mix them up or pain. Extensions are historically limited to three characters which makes them pretty opaque and painful. When they get re-used as they inevitably do, pain. There is no standard list, just a bunch of conventions, what a pain. When the user (accidentally or deliberately, whatever) changes them, pain. Browsers think they know something about content they're downloading from what they guess (How? What do they know?) is a filename extension in the URL and then pain! Extensions cause nothing but pain.

    Heuristic file type analysis is a response to this pain. It's also badly wrong.

    What is needed is metadata stored in the filesystem. It's not hard to do, it's just not widely used. But the world persists in causing itself pain.



  • @another_sam said:

    What is needed is metadata stored in the filesystem. It's not hard to do, it's just not widely used. But the world persists in causing itself pain.

    Yeah, it being "not widely used" is what makes it painful. If support is inconsistent and crappy, suddenly things like network transfers and removable storage become far more painful than they need to be.

    As terrible as file extensions are, they actually work.



  • Except when they don't, for all the reasons I explained and more.



  • So file extensions are good because Unix systems which originally ran the Internet had shitty-ass filesystems designed by shitty dumbasses, so us poor Mac Classic users had to give up our resource forks because fuck you.

    Yes I have been drinking. BUT STILL!



  • Unix inode based filesystems are awesome, but how good or bad ancient filesystems were isn't relevant here. I don't know much about Mac resource forks but what I do know, I like. Modern systems have varying levels of support for metadata, all seem more than sufficient for the task. There's no excuse in 2014 for not storing Content-Type as a minimum when downloading a file.



  • @blakeyrat said:

    So file extensions are good because Unix systems …

    Nope. Unix systems have pretty much always had the "file" command[1], which examines file contents to see what they actually contain. Yes, it gets it wrong sometimes, but mostly it works. Determining file type by extension is pretty much verboten under Unix; if nothing else, one file could have many hard links with different names.

    The reason we're stuck with fucking file extensions is, again, that fucking pox on the software industry, fucking Microsoft software; designed, as you say, by "shitty dumbasses". What's most ironic about this is that, from the introduction of NTFS, MS had a file system that was capable of multi-fork files[2], so they had no reason to stick with their fucking noddy DOS[3]-based file handling disaster area. But technological progress has never stopped MS from holding back the industry, so why the fuck should we have expected them to take the opportunity of getting shot of iloveyou.txt.exe?

    [1] From man file on OSX

    There has been a file command in every UNIX since at least Research
    Version 4 (man page dated November, 1973).

    [2] As far as I can tell, this was so they could turn round to the regulators and say "no, we're not anti-trust, look at the way we've added code to explicitly handle storing Mac files".

    [3] … and thus, CP/M based
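
    For the unfamiliar, content-based identification looks like this (output abridged; it varies by platform and magic database):

    $ file /bin/ls /etc/hosts
    /bin/ls:    ELF 64-bit LSB executable, x86-64 ...
    /etc/hosts: ASCII text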



  • @tufty said:

    Nope. Unix systems have pretty much always had the "file" command[1], which examines file contents to see what they actually contain. Yes, it gets it wrong sometimes, but mostly it works. Determining file type by extension is pretty much verboten under Unix; if nothing else, one file could have many hard links with different names.

    The reason we're stuck with fucking file extensions is, again, that fucking pox on the software industry, fucking Microsoft software; designed, as you say, by "shitty dumbasses". What's most ironic about this is that, from the introduction of NTFS, MS had a file system that was capable of multi-fork files[2], so they had no reason to stick with their fucking noddy DOS[3]-based file handling disaster area.

    Wow, someone who thinks "file" is better than extensions. You are a fucking joke and a diseased mind.



  • Hey, I didn't say they were good, but it's the best option we have right now.

    Seriously, look at the idiots saying that trying to guess the file type with a heuristic is somehow better than an unambiguous bit of metadata.

    Sorry, dude, but no matter how awesome Mac OS Classic was, hardly any of it persisted until the present day. So we're stuck with file extensions or scanning the file to try to guess the type. It should be obvious which is better.



  • @another_sam said:

    There's no excuse in 2014 for not storing Content-Type as a minimum when downloading a file.

    Yeah, except, it needs to be one or the other. Using Content-Type metadata AND file extension is a mess. What if they don't agree? So either everything has to maintain Content-Type (removable storage? FTP?) or we end up with multiple sources of truth. That would be great.



  • @tufty said:

    1 : From man file on OSX

    Ohhh.. an OSX user.

    The retard corral and mandatory sterilization booth is over there. Also, we're going to need your children and a burlap drowning sack.



  • @morbiuswilters said:

    Wow, someone who thinks "file" is better than extensions. You are a fucking joke and a diseased mind.

    $ cat > file.exe <<EOF              # PDF-looking content, .exe name
    > %PDF-1.3
    > you're a cunt
    > EOF
    $ ln file.exe file.jpg              # hard link: same bytes, .jpg name
    $ ln -s file.exe file.tar.gz        # symlink: same bytes, .tar.gz name
    

    *due to discourse fuckups, you may need to view the raw post to see what I'm getting at here.

    Now then, what file types are file.exe, file.jpg, and file.tar.gz?

    Filed under: Days since last bug in discourse text editor: 0



  • So you deliberately do stupid shit to make things not work.. Wow, it's not like you can trick a heuristic that tries to guess the type of a file!

    Oh, if you actually dealt with large-scale software, you would have encountered millions of files with usable file extensions but which the file utility just reads as "application/octet-stream".

    File type guessing is bullshit.



  • If you knew what the fuck you were talking about, you'd have noticed that file gets it wrong as well.

    @morbiuswilters said:

    File type guessing is bullshit.

    Indeed. But that's all file extensions allow you to do. After all, you can't trust randomly editable metadata.



  • @tufty said:

    Now then, what file types are file.exe, file.jpg, and file.tar.gz?

    It doesn't really matter which type the file actually is. Hell, "file type" is not really a file attribute - just a hint to the OS/shell as to which program should open the file. If you treat it as a binary, it will try to open it as binary. If you treat it as a jpg file, it will try to open it as a jpg file. Simple, and most of all, user-configurable - what if

    int main()
    {
        return 0;
    }
    

    is not actually a C file, but a plaintext file containing an early draft of my programming book? And what if I want to treat this file as such, and it's not intended to be compiled?

    With file extensions - simple, you just tell the OS "for all your intents and purposes, it's a text file, so treat it as such". With content metadata - less simple, but still possible. With heuristics - NOPE IT'S A C FILE I KNOW BETTER THAN YOU STUPID USER.
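
    To make that concrete (file's exact wording may vary, but the name you chose never enters into it):

    $ printf '#include <stdio.h>\nint main(void) { return 0; }\n' > draft.txt
    $ file draft.txt
    draft.txt: C source, ASCII text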



  • @tufty said:

    If you knew what the fuck you were talking about, you'd have noticed that file gets it wrong as well.

    Didn't try; too lazy

    @tufty said:

    After all, you can't trust randomly editable metadata.

    And.. metadata in the filesystem wouldn't be editable? Somehow?



  • @Maciejasjmj said:

    With file extensions - simple, you just tell the OS "for all your intents and purposes, it's a text file, so treat it as such". With content metadata - less simple, but still possible. With heuristics - NOPE IT'S A C FILE I KNOW BETTER THAN YOU STUPID USER.

    The best part is when heuristics can't guess the type, and instead see the file as something else entirely.



  • @morbiuswilters said:

    And.. metadata in the filesystem wouldn't be editable? Somehow?

    Of course it is. Duh. That's the point. You can't trust file extensions, you can't trust file contents, you can't trust metadata. Content scanning will give you a potentially more useful / consistently wrong view for deliberately broken data, but as @Maciejasjmj points out, it might go against user intention.

    The only one that makes any sense, in the presence of file systems that allow multiple names for files, is proper metadata, not arbitrary naming schemes.



  • @tufty said:

    Content scanning will give you a more useful / consistently wrong view, but as @Maciejasjmj points out, it might go against user intention.

    Or it might just be fuck-nuts wrong, classifying tens of thousands of .mp4 files as "application/octet-stream".

    So, maybe instead of trying in vain to guess the type, like the hapless carnie working the "Guess Your Weight" booth at a feminist conference, we should just rely on some highly-visible piece of metadata that is user-controlled.



  • @tufty said:

    The only one that makes any sense, in the presence of file systems that allow multiple names for files, is proper metadata, not arbitrary naming schemes.

    So we agree that it should be encoded somewhere along with the file, not wild-guessed. And whether it's in a separate space, or just "in the last part after a dot" is not really that much of a difference.

    And with encoding it in filenames, you get an added bonus of having everything that works with file names also being able to work with file types. "Delete all header files"? Simple, just write rm *.h *.hpp and that's all it takes. With file, you would probably have to do a conditional on its output, and even then you're in for a world of pain (I don't give a shit if my executables are statically or dynamically linked, just move them all to the fucking /bin folder!)
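
    The two approaches, sketched; note the content-based loop can't even tell a header from a .c file, since file calls both "C source":

    rm *.h *.hpp    # extension-based: done
    # content-based, roughly:
    for f in *; do
        case "$(file -b "$f")" in
            *"C source"*) rm -- "$f" ;;    # whoops, the .c files go too
        esac
    done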



  • @Maciejasjmj said:

    So we agree that it should be encoded somewhere along the file, not wild-guessed. And whether it's in a separate space, or just "in the last part after a dot" is not really that much of a difference.

    But, but.. what if I create a symlink with a deliberately misleading extension?? The whole system falls apart!! It's not as if most of the world has successfully used file extensions for the last two decades!!



  • Also, last time I checked, file names had to be unique per folder. So, say, you're working on something in LaTeX, and you save your thesis source in a file named "Thesis". Then you want to render it to PDF - where do you save that? In another folder - that's too much hassle, and if you have more such cases (C's header, source, output, for example), you need to separate them all by folders, instead of keeping them all together.

    No, what you do is just name the file after the type. And it's the most obvious place where the OS should look for type data, because well, you told it "it's a PDF", so it obeys.



  • Well.. the OS could allow multiple files of the same name with different Content-Type metadata. But wotta mess.

