The Most Absurd Thing You've Ever Coded/Built


  • :belt_onion:

    @Mason_Wheeler That's the optimal solution (not reinventing the wheel), but I didn't have access to GitHub, unfortunately.



  • @heterodox :sideways_owl:

    Is this a "IT is being a bunch of derps and cranking the firewall up to 11" situation, or... ❓ (Because if so, at that point I would start taking a very serious look at the practice known as "change your organization or change your organization"...)



  • @heterodox said in The Most Absurd Thing You've Ever Coded/Built:

    @Mason_Wheeler That's the optimal solution (not reinventing the wheel), but I didn't have access to GitHub, unfortunately.

    GitHub :barrier:


  • :belt_onion:

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    @heterodox :sideways_owl:

    Is this a "IT is being a bunch of derps and cranking the firewall up to 11" situation, or... ❓ (Because if so, at that point I would start taking a very serious look at the practice known as "change your organization or change your organization"...)

    It is a "working on an air-gapped network" situation.

    There actually is a GitHub-equivalent where I can request repositories (that'll then automatically be mirrored from the Internet every two weeks or so, which is pretty cool) but again, it would have taken time to get that request approved. It only took an hour or so to write a basic .tar parser, as ugly as it surely was.
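
    (For the curious: the core of a minimal reader really is just walking 512-byte header blocks. Here's a hypothetical C# sketch -- not my actual code -- assuming classic headers with octal size fields and none of the GNU/pax extensions:)

        using System;
        using System.IO;
        using System.Text;

        class TarList
        {
            static void Main()
            {
                using var tar = File.OpenRead("archive.tar");
                var header = new byte[512];

                // An all-zero header block marks the end of the archive.
                while (tar.Read(header, 0, 512) == 512 && header[0] != 0)
                {
                    string name = Encoding.ASCII.GetString(header, 0, 100).TrimEnd('\0');

                    // The size is a NUL/space-terminated octal string at offset 124.
                    long size = Convert.ToInt64(
                        Encoding.ASCII.GetString(header, 124, 12).Trim('\0', ' '), 8);
                    Console.WriteLine($"{name} ({size} bytes)");

                    // Entry data follows, zero-padded to the next 512-byte boundary.
                    tar.Seek((size + 511) / 512 * 512, SeekOrigin.Current);
                }
            }
        }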


  • Considered Harmful

    This might also be a candidate (a PowerShell script that prompts for variable values and replaces the variables in every file in the directory). I think it qualifies as 'absurd' because it is designed to overcome a shortcoming of GitHub's template repository feature - namely, that the repository cannot be customized per project in any way, and it's difficult to distinguish between stuff you should change because you're not example.developer writing Example Project and stuff you should change because you've outgrown the defaults. How they made a feature called template repositories (as opposed to example repositories) and didn't go any further than "it's just forking, except the commits are squashed into one 'initial commit'" is absolutely beyond me.


  • kills Dumbledore

    @heterodox said in The Most Absurd Thing You've Ever Coded/Built:

    Note that your comment doesn't have anything to do with the format itself but with the command-line utility. I linked to the format specification above. It's actually fairly comprehensible

    I've never looked into it but isn't it basically a header saying how big each file is then sticking files together sequentially?



  • @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    Today, ZIP archives indisputably rule the world and have for at least a quarter-century now.

    You might want to tell the Linux world about that, because .tar and .tgz (and occasionally .tbz2) files are still very prevalent there. My last job, we regularly used .tgz to transfer huge, sometimes multi-GB, codebases between organizations. Even in very Windows-oriented companies, chip design runs exclusively on Linux, and .tgz still rules the Linux world.


  • Java Dev

    @heterodox said in The Most Absurd Thing You've Ever Coded/Built:

    Note that your comment doesn't have anything to do with the format itself but with the command-line utility. I linked to the format specification above. It's actually fairly comprehensible

    The main problem with (compressed) tarballs is that you cannot access only part of the file. Any operation (including adding files, contrary to what you might expect) requires reading and decompressing the entire archive first.

    Other properties, like how all numeric header fields use an octal digit string, or how the file name is split over two header fields, are archaic but not really a source of issues.
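
    For illustration, the name-splitting looks worse than it is: in the ustar layout the name lives in a 100-byte field at offset 0 plus a 155-byte "prefix" field at offset 345, joined with a slash. A hypothetical C# sketch:

        using System.Text;

        static class TarNames
        {
            // Reassemble a ustar file name from the "name" (offset 0,
            // 100 bytes) and "prefix" (offset 345, 155 bytes) fields.
            public static string Read(byte[] header)
            {
                string Field(int offset, int length) =>
                    Encoding.ASCII.GetString(header, offset, length).TrimEnd('\0');

                string name = Field(0, 100);
                string prefix = Field(345, 155);
                return prefix.Length == 0 ? name : prefix + "/" + name;
            }
        }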


  • BINNED

    @PleegWat said in The Most Absurd Thing You've Ever Coded/Built:

    numeric header fields use an octal digit string

    :wtf_owl:


  • Java Dev

    @topspin Unless, of course, the high bit of the first byte is set. In that case, by GNU extension, it is an unsigned 64-bit integer instead. I don't know offhand (and can't be bothered to check) where in the 12-byte field that number is stored.
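
    If I had to guess (still not checking): the whole field is read as a big-endian integer with the flag bit masked off, which in C# would be roughly:

        static class Gnu
        {
            // Hypothetical sketch of the GNU base-256 size encoding: flag
            // bit in the first byte, the rest a big-endian binary number.
            public static long ReadBase256(byte[] field) // the 12-byte size field
            {
                long value = field[0] & 0x7F; // mask off the flag bit
                for (int i = 1; i < field.Length; i++)
                    value = (value << 8) | field[i];
                return value;
            }
        }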


  • Discourse touched me in a no-no place

    @Jaloopa said in The Most Absurd Thing You've Ever Coded/Built:

    I've never looked into it but isn't it basically a header saying how big each file is then sticking files together sequentially?

    The index data is spread through the file (which was intended as an on-tape format first and foremost, so both readable and writable with absolutely no rewinding/seeking allowed, and originated at a time when constructing a list of all the files within would have been unacceptably expensive in terms of memory) and contains a little more metadata than just the size. In fact, it's pretty much everything you'd find in a classic Unix filesystem directory entry and inode (except for the bits that make no sense at all, such as the inode number or the link count). The tar program usually ignores quite a bit of the metadata on extraction.

    And actual file offsets are always padded (with zero bytes) to start at a 512-byte block boundary. That must've been particularly advantageous with the hardware of the time when this was all designed. (It was not usually a problem in practice over the past few decades: empty space compresses very well.)

    By contrast, the ZIP format was designed initially for floppy disks and not tapes, and was designed with the assumption that the index would fit in memory. Common computers of that time were quite a bit larger relative to the amount of data that they handled…


  • Java Dev

    @dkf Zip was also designed with the assumption that the archive would not fit on a single disk - multi-part archives are part of the base spec, though I don't know if support is mandatory. That plus the fact it can be preceded by any amount of arbitrary data makes for an interesting file format in its own right. Everyone has seen self-extracting zip archives (which were a zip extractor and a zip file in the same container), but I've actually also seen zip archives with a GIF image at the front, enabling upload of a zip file to a site that only accepted images.
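
    Making one of those is literally just concatenation, since zip readers locate the index from the end of the file. A hypothetical C# sketch (some tools want the internal offsets fixed up afterwards, e.g. Info-ZIP's zip -A):

        using System.IO;

        // A GIF/ZIP polyglot is the two files back to back: image viewers
        // read the GIF header at the front; zip readers scan backwards from
        // the end for the central directory.
        using var output = File.Create("picture.gif");
        foreach (var part in new[] { "image.gif", "archive.zip" })
        {
            using var input = File.OpenRead(part);
            input.CopyTo(output);
        }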


  • Java Dev

    Back to confessions: I once hand-rolled my own archive format, with the defining characteristic that the file index and data were in separate holding files.


  • Discourse touched me in a no-no place

    @PleegWat said in The Most Absurd Thing You've Ever Coded/Built:

    I've actually also seen zip archives with a GIF image at the front

    I've seen source files with a binary ZIP blob on the end.



  • @PleegWat said in The Most Absurd Thing You've Ever Coded/Built:

    Back to confessions: I once hand-rolled my own archive format, with the defining characteristic that the file index and data were in separate holding files.

    Got more of those than I can count. Baking rendering data (multiple meshes with multiple data streams, plus potentially textures and other stuff) tends to result in some ad-hoc archive-y format. Trying to squeeze the data into an existing container (zip, tar or whatever) is possible, but has a bunch of disadvantages either way. Having the index and some metadata outside of the main archive can be useful.

    glTF is a standardized format that somewhat follows that idea. Quite a few years ago, I created something similar (and, in keeping with the theme of this thread, the index was actually a Lua script, since I was too lazy to write a parser myself). Before that, my group was experimenting with a few different ad-hoc formats too.

    (Now, recently, I rolled my own "archive format" for transmitting a bunch of compressed+signed files. But that's a different wtf, for a different time.)

    @dkf said in The Most Absurd Thing You've Ever Coded/Built:

    I've seen source files with a binary ZIP blob on the end.

    :wtf_owl:


  • Java Dev

    @cvi said in The Most Absurd Thing You've Ever Coded/Built:

    Having the index and some metadata outside of the main archive can be useful.

    It allowed me to have fixed-size index records adjacent in one file, so I could easily read the index back into memory, without having to keep either the data or the index in memory while I was generating stuff. Originally we'd kept all files separate on disk, but that was a performance bottleneck.

    One version later I had cause to write it again and finally got rid of the requirement to materialise things on disk in the first place, so I kept the index in memory and generated the payload on demand. This was accompanied by things getting faster again.



    A colleague from our service team just told me about a problem encountered at a customer site. He had already organized the log files, and I started to scrutinize them. Well, an exception had happened and was logged. OK. So what's the problem?

    Here it is:

        private void Loop()
        {
            try
            {
                while (m_IsRunning)
                {
                    bool isActive = m_Input.IsActive;
                    m_Output.PerformOnOffAction(isActive);
                    Thread.Sleep(m_Interval);
                }
            }
            catch (Exception ex)
            {
                Logger.LogException(Name, ex);
            }
        }
    

    You see?
    Exactly.
    The catch is outside the while loop...

    Shame on me!
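
    For the record, the fix is as dumb as the bug: move the try/catch inside the loop, so one failed iteration gets logged instead of silently killing the whole thread. A sketch:

        private void Loop()
        {
            while (m_IsRunning)
            {
                try
                {
                    bool isActive = m_Input.IsActive;
                    m_Output.PerformOnOffAction(isActive);
                }
                catch (Exception ex)
                {
                    // One bad iteration no longer ends the loop for good.
                    Logger.LogException(Name, ex);
                }
                Thread.Sleep(m_Interval);
            }
        }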



  • @heterodox said in The Most Absurd Thing You've Ever Coded/Built:

    Turns out .tar is a pretty simple file format though

    Well, yes and no, depending on which features you care about:

    (Warning: Long blog post. I linked the relevant section.)


  • Discourse touched me in a no-no place

    @cvi said in The Most Absurd Thing You've Ever Coded/Built:

    @dkf said in The Most Absurd Thing You've Ever Coded/Built:

    I've seen source files with a binary ZIP blob on the end.

    :wtf_owl:

    It was after the EOF character (Ctrl+Z) and meant that binary attachments such as support DLLs and icons and stuff could all be bolted in there, making the source file actually a redistributable application in itself. Which is both dead weird and highly awesome hackery.


  • Java Dev

    @dkf said in The Most Absurd Thing You've Ever Coded/Built:

    EOF character

    Why was that even a thing? Did early file systems not support byte-level granularity on file sizes?



    @PleegWat That character was probably intended for data streams such as teletype/serial transmissions and only got repurposed there.


  • Discourse touched me in a no-no place

    @PleegWat said in The Most Absurd Thing You've Ever Coded/Built:

    Why was that even a thing. Did early file systems not support byte-level granularity on file sizes?

    🤷🏻‍♂️ There's all sorts of weird stuff in the bottom of ASCII (and, by extension, the various ISO encodings and Unicode; the C1 block is even odder).


  • BINNED

    I've built a number of things ranging from stupid to absurd, with varying levels of success, but can't think of any good recent examples. Not sure if that means I'm not doing stupid things anymore or, more likely, not getting around to actually coding enough.

    Some things off the top of my head:
    Back in my DOS days and with very limited internet access, I tried to write a simplistic sheet music composer (with ASCII line drawing and stuff) and convert its output to MIDI files with what little I could understand from the file format description on wotsit. That didn't work out well at all.

    On Windows, I wrote a tiny DSL to synthesize mouse movement/clicking, keyboard entry, and some interactions with other windows to automate GUI interaction with programs that lacked certain features (integrated with my program to send messages, I think). I don't remember specific examples of what for, but a theoretical example would be to assign a global keyboard shortcut to tell a media player to skip to the next track (theoretical, as media players probably do have such shortcuts). That worked pretty well for a variety of things.

    And as the topic has focused on archives: I didn't invent any archive formats, but I found the source code for unrar online. Since back then neither full disk encryption nor TrueCrypt was widely available, I wrote a browser for encrypted archives; if the selected file was an image, it decrypted it in memory and displayed a preview (in the browser view) or showed it full-screen, with some caching to decrypt a few files ahead of the currently selected one so that browsing through the list stayed fast. That also worked quite well, as the components were there already (unrar, file browser widgets, etc.). Not quite sure what I wrote that in. It was definitely before I learned Java in college, and I doubt I would've pulled that off in C, so I think it was in... Delphi?! Hmm, I wouldn't even have remembered ever learning that if not for writing this post.
    I leave it to anyone's guess what pictures a teenager would want to keep hidden from his parents.



  • @HardwareGeek said in The Most Absurd Thing You've Ever Coded/Built:

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    Today, ZIP archives indisputably rule the world and have for at least a quarter-century now.

    You might want to tell the Linux world about that, because .tar and .tgz (and occasionally .tbz2) files are still very prevalent there.

    Yeah, that's the problem. The rest of us have been trying to tell the *nix world about new developments in the real world for 35 years now, and they have consistently refused to listen! (And then they wonder why no one wants to use their products.)


  • BINNED

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    The rest of us have been trying to tell the *nix world about new developments in the real world for 35 years now

    XZ compresses much better than zip.



  • @PleegWat said in The Most Absurd Thing You've Ever Coded/Built:

    Did early file systems not support byte-level granularity on file sizes?

    I think there were indeed early file systems like that, maybe during the CP/M era?
    EDIT: looks like it was indeed the case for CP/M 2.2, if I understand correctly.

    The DOS type command -- basically similar to cat on UNIX -- stops reading the file on Ctrl-Z. For that reason, some DOS-era file formats included a piece of readable text at the beginning of the file, followed by a Ctrl-Z character and the actual binary data.
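
    Writing such a hybrid file is trivial -- a hypothetical C# sketch (the format name and payload are made up):

        using System.IO;
        using System.Text;

        // DOS-era hybrid file: readable banner, then Ctrl-Z (0x1A) where
        // `type` stops, then the actual binary payload.
        byte[] payload = { 0xDE, 0xAD, 0xBE, 0xEF }; // stand-in binary data

        using var f = File.Create("data.foo");
        byte[] banner = Encoding.ASCII.GetBytes(
            "FooFormat v1 data file. Use FOOEDIT.EXE to modify.\r\n");
        f.Write(banner, 0, banner.Length);
        f.WriteByte(0x1A); // DOS `type` stops reading here
        f.Write(payload, 0, payload.Length);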



  • @topspin said in The Most Absurd Thing You've Ever Coded/Built:

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    The rest of us have been trying to tell the *nix world about new developments in the real world for 35 years now

    XZ compresses much better than zip.

    That's a pretty meaningless statement without the necessary context:

    How much better?

    With what type of data?

    How does the compression speed compare?

    How does the decompression speed compare?



  • @heterodox said in The Most Absurd Thing You've Ever Coded/Built:

    I'm sure almost all absurd things I've written have been due to constraints/just wanting to get things done in a timely manner rather than wait for approval processes/other teams.

    This reminded me of another absurd one I had on my first programming job.

    My boss wanted a program to send out emails to customers on a particular list. The system admins/network guys wouldn't give me the permission on the mail server to allow me to send out emails programmatically. Now, they knew what I was doing, and they had no problem with me doing it, they just weren't willing to give me the permission. Sort of like, "We're not going to unlock the front door for you, but you're welcome to climb in the back window."

    So I made an AutoIt script that sent the emails out through Outlook. So that if you looked at my monitor when the program was running, it looked like a ghost was sending the emails.



  • @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    @heterodox said in The Most Absurd Thing You've Ever Coded/Built:

    The gzip part can be handled by the .NET Framework, the .tar part can't

    That's because... seriously, who even uses those today?

    TAR is an antique (and largely incomprehensible) format from the bad old days when CLIs and other bizarre dinosaurs ruled the earth, when concatenating multiple files into a linear Tape ARchive was something a user might be reasonably expected to do on a regular basis.

    Today, ZIP archives indisputably rule the world and have for at least a quarter-century now.

    Gzip has found some niche uses in graphics and HTTP, which makes it important to support, but for multi-file archives, anyone using anything but ZIP in this day and age is :trwtf: and everyone knows it, so why should the .NET Framework act as their enabler?

    I assume you're trolling here. In the Linux world, tar files are the norm. Handling zip files in Linux is a concession to those hopeless Windows folks.


  • Discourse touched me in a no-no place

    @jinpa said in The Most Absurd Thing You've Ever Coded/Built:

    I assume you're trolling here.

    He might just be vastly more ignorant than he believes.



  • @jinpa said in The Most Absurd Thing You've Ever Coded/Built:

    Handling zip files in Linux is a concession to those hopeless Windows folks.

    not so much a concession to the hopeless Windows folk, but admitting that the hopeless Windows folks are so hopeless they can't get a real OS with a proper archiver, and conceding the point that while it would be more architecturally pure to ghost them and not accept communication, it wouldn't be the nice thing to do. So the Unix and Linux folks support Zip files as an act of compassion for those poor lost souls that will never, nay can never, know the beauty and purity that is the TAR format.



  • @jinpa said in The Most Absurd Thing You've Ever Coded/Built:

    In the Linux world

    I assume you're trolling here.

    As I said above, the rest of us have been trying to entice those hopeless *nix folks into the modern world for 35 years now and they simply will not listen, and then they can't seem to wrap their heads around why nobody wants to use their "obviously superior" software.

    For heaven's sake, the *nix world is still dominated by people who actually think a command line is a user interface! :headdesk:


  • Discourse touched me in a no-no place

    @Vixen ZIP has the nice property that you can extract individual files (and the index) from it cheaply. Compressed tar archives get better compression ratios precisely because they compress the whole of the data rather than each file individually. Which is best depends on what your use cases are.
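
    In .NET terms, a minimal sketch of why the cheap part is cheap (uses System.IO.Compression; the archive and entry names are made up):

        using System;
        using System.IO;
        using System.IO.Compression;

        // Opening the archive reads only the central directory -- the index
        // at the end of the file -- not the compressed data itself.
        using var archive = ZipFile.OpenRead("data.zip");

        foreach (var entry in archive.Entries) // cheap: index only
            Console.WriteLine($"{entry.FullName} ({entry.Length} bytes)");

        // Extracting one entry seeks to its local header and inflates just
        // that entry; the rest of the archive is never touched.
        var wanted = archive.GetEntry("docs/readme.txt");
        if (wanted != null)
        {
            using var reader = new StreamReader(wanted.Open());
            Console.WriteLine(reader.ReadToEnd());
        }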


  • ♿ (Parody)

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    @HardwareGeek said in The Most Absurd Thing You've Ever Coded/Built:

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    Today, ZIP archives indisputably rule the world and have for at least a quarter-century now.

    You might want to tell the Linux world about that, because .tar and .tgz (and occasionally .tbz2) files are still very prevalent there.

    Yeah, that's the problem. The rest of us have been trying to tell the *nix world about new developments in the real world for 35 years now, and they have consistently refused to listen! (And then they wonder why no one wants to use their products.)

    Ignorant people gonna ig.

    Don't be so mad that we don't share your desire for new shiny over stuff that works just fine. And very few things are worth having to put up with Windows.



    TIL 35-year-old, well-tested, mature technology is "shiny new stuff" to the :belt_onion: crowd...

    What'cha gonna do? Ignorant people gonna ig.



  • @dkf said in The Most Absurd Thing You've Ever Coded/Built:

    @Vixen ZIP has the nice property that you can extract individual files (and the index) from it cheaply. Compressed tar archives get better compression ratios precisely because they compress the whole of the data rather than each file individually. Which is best depends on what your use cases are.

    True, but solid compression is also supported by more modern formats like RAR and 7z, so there's no reason to stick with TAR (pun intended) except for backward compatibility or :kneeling_warthog: purposes.



  • @Zerosquare said in The Most Absurd Thing You've Ever Coded/Built:

    reason to stick with TAR

    I can think of one reason to stick with it.

    it's practically GUARANTEED to be supported and installed on every Unix derived machine in the world (including BSD, All Flavors of Linux, OSX, and of course actual UNIX)

    I can drop a *.tar.gz or a *.tar.bz2 file on any of those machines and know that it will be supported (so long as i avoid vendor specific extensions to the format, which is easy enough to do)

    I can't do that with *.zip (although at this point it's likely to be supported too), *.7z, nor *.rar

    So if I want to distribute something *nix-y then the *.tar.gz or *.tar.bz2 format is still the way to go.



  • @topspin said in The Most Absurd Thing You've Ever Coded/Built:

    @PleegWat said in The Most Absurd Thing You've Ever Coded/Built:

    numeric header fields use an octal digit string

    :wtf_owl:

    Using a digit string had the advantage of being independent of endianness, and I guess using octal allowed using bitshifts to read it? I can see plenty of computer-side arguments for not using decimal, but most of the time my answer to those would be hexadecimal, not octal.

    The only argument I can see for octal over hexadecimal is that the C standard mandates that all digits be encoded such that charValue-'0' == digitValue, whereas there is no such guarantee for letters. Both ASCII and EBCDIC are kind enough to offer it for A~F and a~f -- meaning you can get the value with toupper(charValue)-'A'+10 -- but a valid C program could be compiled and run on a platform whose encoding, say, puts each lowercase letter immediately after the uppercase one.
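
    Transliterated to C# for illustration (where char values are Unicode code points, so both halves are actually guaranteed; the comments mark what C does and doesn't promise):

        using System;

        static class Digits
        {
            public static int HexValue(char c)
            {
                if (c >= '0' && c <= '9')
                    return c - '0';          // C mandates contiguous digits
                char u = char.ToUpperInvariant(c);
                if (u >= 'A' && u <= 'F')
                    return u - 'A' + 10;     // NOT guaranteed by C, though
                                             // both ASCII and EBCDIC oblige
                throw new ArgumentOutOfRangeException(nameof(c));
            }
        }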



  • @Vixen said in The Most Absurd Thing You've Ever Coded/Built:

    @Zerosquare said in The Most Absurd Thing You've Ever Coded/Built:

    reason to stick with TAR

    I can think of one reason to stick with it.

    it's practically GUARANTEED to be supported and installed on every Unix derived machine in the world (including BSD, All Flavors of Linux, OSX, and of course actual UNIX)

    Yeah. All 1% of them. :rolleyes:

    And ZIP is guaranteed to be installed on the other 99% of the market share. If I wanted to distribute something for people to actually use, I know which format I'd rather package it in based on that alone!


  • BINNED

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    35-year-old technology

    @Mason_Wheeler didn't say:

    The rest of us have been trying to tell the *nix Windows world about new developments

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    @topspin said in The Most Absurd Thing You've Ever Coded/Built:

    XZ compresses much better than zip.

    That's a pretty meaningless statement without the necessary context:

    How much better?

    With what type of data?

    Varies, but with commonly used data types it's >30% better than gzip (which itself is better than or on par with zip). You're not going to compress a JPEG file much, obviously.

    How does the compression speed compare?

    From fast enough to pretty darn slow depending on settings.

    How does the decompression speed compare?

    Fast enough.



  • @topspin ...and yet nobody is using it. I wonder why. Let's look at that Wikipedia article.

    Hmm...

    Oh. Oh wow.

    No complete natural language specification of the compressed format seems to exist, other than the one attempted in the following text.

    Followed by an extensive technical documentation attempt in a Wikipedia article. It's actually pretty impressive, in a train-wreck sort of way. (The section is tagged with a standard Wikipedia "original research" banner, because of course it is.)

    Even so, that's not enough to keep people from using it; it's just amusing. Let's dig a little deeper.

    Code is licensed under the GPL, which, being toxic, would drive people away... but it's dual-licensed under the CPL, which is a lot more palatable, and was later placed in the public domain by the author. So difficulty attracting developers doesn't seem like a serious factor.

    Hmm... wait. When was this made? No History section... ah, there we go, right at the top of the article.

    It has been under development since either 1996 or 1998 by Igor Pavlov

    Well, that explains it. It's suffering from (ironically enough!) the same problem that's plagued Microsoft for over a decade now: too little, too late. Trying to bring a new entrant into a saturated marketplace when it does basically the same thing as existing offerings just doesn't work. It's why Silverlight failed. It's why Windows Phone failed. (Repeatedly.) And it's most likely why 7z has failed.

    By 1996, ZIP already ruled the world. ZIP was able to break in even though TGZ was an established format by offering something decisively new and better: as @dkf pointed out above, it was designed as a disk archive format that worked better on modern hardware (allowing for random access rather than linear access). But 7z is... basically just like a ZIP archive, with a different compression algorithm that compresses a bit better. That alone is simply not a big enough incentive to justify the necessary ocean-boiling.



  • @dkf said in The Most Absurd Thing You've Ever Coded/Built:

    @Vixen ZIP has the nice property that you can extract individual files (and the index) from it cheaply. Compressed tar archives get better compression ratios precisely because they compress the whole of the data rather than each file individually. Which is best depends on what your use cases are.

    QFT.

    Besides, .zip files aren't exactly free of legacy cruft either. Decoding a .zip requires a bunch of seeking, which is somewhat annoying. And, although it supports different compression schemes in theory, in practice only methods zero and eight tend to be supported (uncompressed and deflate, respectively). There are better options these days.



  • @cvi said in The Most Absurd Thing You've Ever Coded/Built:

    Decoding a .zip requires a bunch of seeking, which is

    FTFY. How exactly are you going to implement random access without it?



  • @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    FTFY. How exactly are you going to implement random access without it?

    You need to seek even if you're not interested in random access but are just linearly loading the contents of the archive. That makes it rather unsuitable for streaming. It's perfectly possible to create a format that you can randomly access if you need/want, but that doesn't require you to seek if you're just going over it all linearly. Perhaps a bit more annoying to write in the first place, but not too bad.
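
    Concretely, something like: length-prefixed entries up front, the index at the back, and a fixed-size trailer pointing at the index. A streaming reader just consumes entries in order; a random-access reader seeks to the trailer first. A hypothetical sketch, not a real format:

        using System.Collections.Generic;
        using System.IO;
        using System.Text;

        // Layout: [entries][index][8-byte index offset]. Streaming readers
        // never seek; random-access readers read the trailer, then the
        // index, then jump straight to the entry they want.
        var entries = new (string Name, byte[] Data)[]
        {
            ("a.txt", Encoding.UTF8.GetBytes("hello")),
            ("b.txt", Encoding.UTF8.GetBytes("world")),
        };

        using var f = File.Create("demo.arc");
        using var w = new BinaryWriter(f);
        var index = new List<(string Name, long Offset, int Length)>();

        foreach (var (name, data) in entries)
        {
            index.Add((name, f.Position, data.Length));
            w.Write(name);         // BinaryWriter length-prefixes strings
            w.Write(data.Length);
            w.Write(data);
        }

        long indexStart = f.Position;
        w.Write(index.Count);
        foreach (var (name, offset, length) in index)
        {
            w.Write(name);
            w.Write(offset);
            w.Write(length);
        }
        w.Write(indexStart);       // fixed trailer: where the index begins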



  • @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    @Vixen said in The Most Absurd Thing You've Ever Coded/Built:

    @Zerosquare said in The Most Absurd Thing You've Ever Coded/Built:

    reason to stick with TAR

    I can think of one reason to stick with it.

    it's practically GUARANTEED to be supported and installed on every Unix derived machine in the world (including BSD, All Flavors of Linux, OSX, and of course actual UNIX)

    Yeah. All 1% of them. :rolleyes:

    And ZIP is guaranteed to be installed on the other 99% of the market share. If I wanted to distribute something for people to actually use, I know which format I'd rather package it in based on that alone!

    You are in a very weird bubble if you think the Linux/UNIX world is only 1% of the market. Probably 90 - 95% of the machines I use are Linux. In fact, I only have two Windows machines I regularly use. The rest are some version of Linux or another. And quite a few are also CLI-only server installs because that keeps the resource consumption down--very important on an overpacked VMware box--and reduces the attack surface.



  • @cvi Hmm... maybe. How would you implement that?

    AIUI, the seeking requirement comes primarily from putting the index at the end of the file.

    Now, an index has to exist, in order to make the archive random-accessible. The only other reasonable place to put the index would be at the beginning, but how would you implement that in such a way that adding a new file to an existing archive (and thus making the index larger) doesn't require rewriting the entire archive because you're bumping everything down by several bytes? That's not just "a bit more annoying to write in the first place"; that means that addition of a new file becomes an O(N) operation! For a large archive, that can mean an unacceptable performance degradation.

    Can you think of any way around this problem?


  • Java Dev

    @Vixen said in The Most Absurd Thing You've Ever Coded/Built:

    I can't do that with *.zip (although at this point it's likely to be supported too), *.7z, nor *.rar

    So if I want to distribute something *nix-y then the *.tar.gz or *.tar.bz2 format is still the way to go.

    I'd hazard anything that doesn't have .zip doesn't have .bz2 either.


  • BINNED

    @Mason_Wheeler said in The Most Absurd Thing You've Ever Coded/Built:

    too little too late.

    Well, to paraphrase you again, "The rest of us have been trying to tell the Windows world about new developments".



  • @mott555 Google "linux market share" sometime. Depending on which source you come up with, you'll get figures anywhere between about 0.75% and 2.5%. This seems like an example of selection bias on your end; if you're a person where

    Probably 90 - 95% of the machines I use are Linux. In fact, I only have two Windows machines I regularly use. The rest are some version of Linux or another.

    then you're the one in a very weird bubble.



    @Mason_Wheeler I know I'm in a very weird bubble; it's called aerospace. But even the non-weird bubbles I've worked in had far greater than 1% Linux. And I'd bet that 99.9999% of the websites you visit are hosted on Linux boxes. For that matter, literally 100% of smartphones are Linux/UNIX based.

    The closest thing I can find to your claim is that Linux has about 2% market share for consumer PCs, which is a very weird metric to use for the users of this site. We're all (mostly) IT professionals here. Factor in "smart" devices, cell phones, servers, tablets, vehicle control systems, network infrastructure, and all the other technology you deal with, and the percentage is certainly greater than 80% and probably in the high 90s.

