More Go Chum In The WTF Ocean



  • @joe.edwards said:

    @drurowin said:
    DLL hell.

    I think .NET handles this problem nicely with Strong Name Keys. There's a simple, human-readable, unique identifier for each distinct version of each distinct assembly, and the OS (rather than the program) can search the current user directory, GAC, and system library directories without any real worry of loading the wrong library.

    They really should have made the signing process mandatory and more automagical in Visual Studio though.

    Dynamic linking is good and it usually works fine in Linux, although sometimes you run into DLL hell. The issue isn't dynamic linking, but the fact that the FOSStards who develop the Linux ecosystem refuse to support stable ABIs. Now, some larger distros like RHEL will make an effort to maintain stable ABIs, but the attitudes of people like the glibc team are "The source is the interface, motherfucker." So, yeah, it sucks ass. It's also why running a Linux distro is so much of a pain, because the maker of libfoo can't just release a binary version of his library that will work with the stable, known versions of libbar and libqux. So you end up needing distro maintainers to keep all the patches in check and make sure they don't break the ABIs of the distro.
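
    To make the ABI complaint concrete, here's a minimal sketch (a hypothetical libfoo, not any real library) of how a layout change breaks already-compiled binaries without breaking the source:

        /* The app was compiled against the v1 layout, but at run time a
           newer library hands back v2's layout. Same field name, same API,
           different offsets: the old binary reads garbage with no warning.
           (memcpy stands in for "the library wrote the struct, the app
           reads it with its stale view".) */
        #include <stdio.h>
        #include <string.h>

        struct foo_v1 { int a; };             /* layout the app was built against */
        struct foo_v2 { int flags; int a; };  /* v2 added a field in front        */

        int main(void) {
            struct foo_v2 from_lib = { .flags = 0xDEAD, .a = 42 };
            struct foo_v1 as_seen;
            memcpy(&as_seen, &from_lib, sizeof as_seen);
            printf("library set a=42, old binary reads a=%d\n", as_seen.a);
            return 0;                         /* prints 57005 (0xDEAD), not 42 */
        }

    Recompiling against the new header "fixes" it, which is exactly the "source is the interface" attitude in action.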



  • @joe.edwards said:

    @drurowin said:
    DLL hell.

    I think .NET handles this problem nicely with Strong Name Keys. There's a simple, human-readable, unique identifier for each distinct version of each distinct assembly, and the OS (rather than the program) can search the current user directory, GAC, and system library directories without any real worry of loading the wrong library.

    They really should have made the signing process mandatory and more automagical in Visual Studio though.

    Hidden secrets of the IT industry: Microsoft fixed DLL hell in 1999. Linux still doesn't have any good way to keep multiple versions of something. I once had to fix a set of scripts by changing every #!/usr/bin/env python to #!/usr/bin/env python2.6.

    The "solution" to this: Only get software from the distro's package manager. The original app store!



  • @morbiuswilters said:

    @drurowin said:
    Dynamic linking is what's wrong with Linux.  Either you end up with a distro like Debian Stable, or you end up in a Linux version of DLL hell.

    There's plenty wrong with Linux, but dynamic linking ain't it. If your system used static linking it would waste gobs of memory. Besides, on a modern Linux distro you aren't supposed to statically link anything.

    Citation needed.

    @morbiuswilters said:

    @drurowin said:
    Static linking fixes that by building in your dependencies at compile time so that you don't have to shit around with having the right versions of libraries on your system.

    At the cost of bloated binaries and tons of wasted memory.

    When an end user desktop system has 8 GB RAM, no one will notice a few hundred MB missing.

    @morbiuswilters said:

    @drurowin said:
    And yes, you can statically link MySQL with no modification to the existing source, making it more useful on a wider range of platforms.

    Not reliably or correctly, you can't. There are tools that try to glom the shared libraries into a new, static executable, but they're pretty shitty. Expect random segfaults and difficult-to-debug errors. And say goodbye to ASLR (although if you're advocating static linking I can't imagine you'd know enough to use it.) Oh, and they're still PIC, so you don't even gain any advantage in performance.

    To the best of my knowledge, no current Linux distro has ASLR enabled by default anyhow.  It is a security bandaid at best, and a false sense of security at worst.  If you take care and don't fucking set up your build environment like a re-re, Ben L., or Indrora, you won't have "difficult-to-debug errors", either.

    @morbiuswilters said:

    Besides, statically linking libc has been discouraged for a very long time. You want shit like NSS or locales to work? Don't use static linking. It's basically only reliable to produce a program that can be run on exactly the same system it was built on.

    And then, I will once again mention the nightmare of trying to keep all of these applications up-to-date. If you're dynamically linking against libc and a bug is fixed, you only have to update libc. If you are statically linking, now you have dozens (or hundreds) of programs which need to be patched. Have fun sorting that one out, asshole. This is why static linking is dumb. I can't think of a mainstream OS which doesn't use dynamic linking because static linking is fucking wrong.

     

    Many of Solaris' utilities, including the shell, are statically linked to provide a failsafe in case something goes wrong with your libraries.  A statically linked build of Firefox, for example, would be easy to deploy to a number of heterogeneous client systems without having to build a binary tailored to each specific system.  My httpd on drurowin.org is statically linked, because performance testing showed it performed 11% faster than the same version of apache dynamically linked.  There shouldn't be any bugs left in libc by this point, and I feel confident in statically linking against it in production.



  • @morbiuswilters said:

    [...]but the attitudes of people like the glibc team are "The source is the interface, motherfucker."
     

    Interfacing to that source is generally done by static linking.  Dynamic linking ensures that you're chasing down ABI changes like that and having to suddenly recode your app or roll back an 'upgrade' to get back into production.

     



  • @morbiuswilters said:

    the attitudes of people like the glibc team are "The source is the interface, motherfucker." So, yeah, it sucks ass.

    And the only way to interface with the source directly is by adding it to the executable... so according to your post glibc is supposed to be statically linked.

    Edit: Ninja'd. Also double posted this because CS is blowing even more goats than usual recently.



  • @drurowin said:

    Citation needed.

    No. Fuck you, you can Google your own shit. It's been considered bad practice to statically link for nearly two decades now. If you don't know that, then you don't know anything about linking.

    @drurowin said:

    When an end user desktop system has 8 GB RAM, no one will notice a few hundred MB missing.

    I'm going to tell Eclipse you stole their trademarked slogan.

    @drurowin said:

    To the best of my knowledge, no current Linux distro has ASLR enabled by default anyhow.

    So what? Most Linux distros require configuration to do anything useful or correct.

    @drurowin said:

    It is a security bandaid at best, and a false sense of security at worst.

    Bullshit. It's good security practice and it's part of defense-in-depth. You really are clueless, aren't you?

    @drurowin said:

    If you take care and don't fucking set up your build environment like a re-re, Ben L., or Indrora, you won't have "difficult-to-debug errors", either.

    On a hacked static binary cobbled together from shared objects? Yeah, right. And you still haven't shown me it can be done reliably. I want you to build me a static MySQL and send it to me, and I will see if it runs in my environment and then laugh at you when it fails miserably because you don't understand how Unix works.

    @drurowin said:

    Many of Solaris' utilities, including the shell, are statically linked to provide a failsafe in case something goes wrong with your libraries.

    Oh yeah, I forgot that Solaris was a Linux distro. WTF? Sure, Sun did this. They also had total control over the fucking platform.

    @drurowin said:

    A statically linked build of Firefox, for example, would be easy to deploy to a number of heterogeneous client systems without having to build a binary tailored to each specific system.

    I'm pretty sure FF is statically linked.

    @drurowin said:

    My httpd on drurowin.org is statically linked, because performance testing showed it performed 11% faster than the same version of apache dynamically linked.

    I call bullshit.

    @drurowin said:

    There shouldn't be any bugs left in libc by this point, and I feel confident in statically linking against it in production.

    Dude, there are security patches for system libraries on Linux all the fucking time.



  • @drurowin said:

    Interfacing to that source is generally done by static linking.

    What the hell are you talking about? No distro I know of statically links libc. My point was that dynamic linking would be even easier on Linux if the FOSS community wasn't populated by fanatical, crybaby retards.

    @drurowin said:

    Dynamic linking ensures that you're chasing down ABI changes like that and having to suddenly recode your app or roll back an 'upgrade' to get back into production.

    I'm starting to wonder if you've ever actually done programming for a living or know what you're talking about. The big Linux distros are pretty damn stable when it comes to ABI changes. Of course, this requires a lot of work because the upstream packages are run by people who care nothing about anyone who might be cursed to use their software, but still, I don't think I've ever done a security update for libc and had an application break.

    The bigger problem is that to maintain this stability it's impossible to find newer versions of software. So you end up having to compile shit by-hand, which is why I end up preferring source distros (which are still fucking dynamically linked because they're not morons..)



  • @MiffTheFox said:

    And the only way to interface with the source directly is by adding it to the executable... so according to your post glibc is supposed to be statically linked.

    No, it's not. It's explicitly not. glibc is just maintained by jackasses who think that it isn't their job to make distributing binary executables easy.



  • @drurowin said:

    static linking.

    And let me say, I am just stunned we are having this argument in twenty-fucking-thirteen. This is the kind of argument I would expect to have had 25 years ago on some BBS. "I'll be damned if I adopt this dynamic linking voodoo. And I still refuse to work on anything except a teletype, because what happens to a CRT if the power goes out? That's right: you lose your work! No, no, hard copy is the way to go. Also, I poop in an outhouse because I want to be ready for when the Toilet Grid goes down.."

    Then Ben L. would jump in and tell us that Captain Kirk was the best captain and we'd all kick his ass.



  • @morbiuswilters said:

    @drurowin said:
    static linking.

    And let me say, I am just stunned we are having this argument in twenty-fucking-thirteen.

     

    Yeah, dynamic linking was created to solve a problem of systems not having sufficient RAM by sharing copies of library objects in memory.  Guess what?  When a PC you buy at Walmart for $388 has 8 GB RAM, and anything serious professionals like you and me would use has 24-48 GB RAM, there's no fucking need for it!  Dynamic linking was a good idea 25 years ago, but many of the solutions it offered to the problems of that era are themselves problems today, like library hell and non-portable binaries.  I'll have you a statically linked x64 MySQL in about an hour; I just need to finish this last shot of 99 Cocoanuts before I fire up gcc.
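
    For anyone following along at home, the whole argument comes down to one compiler flag. A minimal sketch, assuming a typical gcc toolchain with the static libc archives installed:

        /* hello.c -- build it both ways and compare:
         *
         *   gcc hello.c -o hello-dyn              (default: dynamically linked)
         *   gcc -static hello.c -o hello-static   (libc baked into the binary)
         *
         * The dynamic binary is a few KB and needs libc.so at run time; the
         * static one is typically close to a megabyte and needs nothing.
         * (glibc's own documentation warns that NSS and locale support can
         * misbehave in fully static builds, per the earlier post.)
         */
        #include <stdio.h>

        int main(void) {
            puts("linked, one way or the other");
            return 0;
        }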



  • @drurowin said:

    Yeah, dynamic linking was created to solve a problem of systems not having sufficient RAM by sharing copies of library objects in memory.

    That is one of several things it solves. Also, as I've now pointed out a dozen times or more, it makes patching much simpler (and therefore more likely to happen).

    @drurowin said:

    When a PC you buy at Walmart for $388 has 8 GB RAM, and anything serious professionals like you and me would use has 24-48 GB RAM, there's no fucking need for it!

    I don't know where you get this idea. First off, wasting memory for no good reason is just plain stupid. Second, my laptop only has 16 GB of RAM.. I don't think there's a thin-and-light out there that offers any more. Third, you realize that desktop applications are not the only systems in existence, right? Try running a statically-linked httpd and watch how fast your memory gets chewed up OH GOD YOU DID.

    @drurowin said:

    ...last shot of 99 Cocoanuts...

    -_-   If there's a just and loving God, I just won this argument.




  • @drurowin said:

    There shouldn't be any bugs left in libc by this point, and I feel confident in statically linking against it in production.

     @morbiuswilters said:

    Dude, there are security patches for system libraries on Linux all the fucking time.
     

    Not only in Linux. There are security patches for system libraries on Windows all the time. You just hardly notice from a programmer's point of view because you have a stable interface. There are security patches to system libraries fucking everywhere.

     "There shouldn't be any bugs anywhere in OpenSSL anymore", is something a guy would say before driving in reverse into ongoing traffic, because surely there is no need to return those botched Toyota-brakes in your car. After all we figured out brakes, right?

    And the next person that expects me to just have hundreds of megabytes of RAM lying around because "dynamic linking sure is hard, y'all" should probably join them in their quest to become windshield splatter on a Ford Fiesta. Yes, sure I have lots of RAM. No, that is still limited by a system bus, a hard drive (from which I have to load 500 instances of MySQL in your world) and fucking common sense. Also your PC will get taken over by a Cambodian extortion racket/botnet, because for some reason you statically depended on a 2-year-old build of Python, which statically depended on every library in the world, only without security fixes for 2 years. Have you ever used an unpatched system that is two years old (and without virus protection, since you're on Unix)? Shit gets taken over by Russians pretty fast..



  • @morbiuswilters said:

    In other news I actually give a shit about, the forum appears to be dying.. it won't let me save tags and it errors out after every post I make (although the post eventually shows up..) Happening to anyone else?
    It happened to me yesterday, but I thought it was a one-off (but I only posted a single message). OTOH, I stopped receiving e-mail notifications. EDIT: happened when posting this message, too.
    @drurowin said:
    Many of Solaris' utilities, including the shell, are statically linked to provide a failsafe in case something goes wrong with your libraries.
    My distro has a statically linked busybox in /bin, as a failsafe if you somehow manage to hose glibc. So far I've never had to use it.
    @drurowin said:
    Yeah, dynamic linking was created to solve a problem of systems not having sufficient RAM by sharing copies of library objects in memory.  Guess what?  When a PC you buy at Walmart for $388 has 8 GB RAM, and anything serious professionals like you and me would use has 24-48 GB RAM, there's no fucking need for it!
    You really have no idea about the security implications of static linking, do you? About 10 years ago a security problem was found in a library released by Microsoft - it was a DLL, but the recommended way to deploy it was to bundle it in the application's directory. It was a fairly popular library, and they ended up having to write a tool that scanned the entire hard drive for vulnerable copies of the DLL and replaced them with a fixed version. Now imagine if this was a statically linked library instead - users would've been left with a bunch of programs that could be exploited, because you can be certain that not everybody would've updated their programs (if the vendors even provided such updated versions).



  • @ender said:

    About 10 years ago a security problem was found in a library released by Microsoft - it was a DLL, but the recommended way to deploy it was to bundle it in the application's directory. It was a fairly popular library, and they ended up having to write a tool that scanned the entire hard drive for vulnerable copies of the DLL and replaced them with a fixed version. Now imagine if this was a statically linked library instead - users would've been left with a bunch of programs that could be exploited, because you can be certain that not everybody would've updated their programs (if the vendors even provided such updated versions).

    The MS wtf is that they couldn't bundle shared libraries with Windows because they're afraid they'd face monopoly charges. That's the only explanation I have for why every app comes with MSVC++, .NET Framework, and/or DirectX redistributables.

    For the record, I am in favor of dynamically linked libraries, I just think that the way Linuxes handle them, to use the technical term, blows goats. Hence what I said in another Go-bashing thread: "you can't take a [dynamically linked] Linux binary across distros, let alone cross-compile from another OS".



  • @MiffTheFox said:

    The MS wtf is that they couldn't bundle shared libraries with Windows because they're afraid they'd face monopoly charges.
    Actually, the reason is that once an OS is released, it only receives bugfixes. Windows 7 ships with .NET 3.5, and Windows 8 ships with .NET 4.5. They also ship with MSVC runtimes. The reason so many programs still bundle them is because those programs also support XP, which at this point is 12 years old and by default doesn't contain any of those libraries.



  • @alphadogg said:

    http://blog.disqus.com/post/51155103801/trying-out-this-go-thing

    Disqus, an admittedly decently big online service, is going Go! Since you all hate on Go, I thought I'd chum the waters with this, stand back, and watch the ensuing bitchfest.

    <grabs popcorn>

     

    TRWTF is that instead of just getting some more servers (they said they had 4), like any sane company would, they rewrote core parts of their product in a new programming language, which most of the team aren't familiar with.

     


  • Discourse touched me in a no-no place

    @morbiuswilters said:

    glibc is just maintained by jackasses
    You speak the truth.



  • @lucas said:

    (they said they had 4)

    What?? Holy shit, I guess Disqus is really tiny. I assumed this was a real service people use.

    @lucas said:

    they rewrote core parts of their product in a new programming language, which most of the team aren't familiar with.

    Yeah, even without Go, that's just insane. If you ported your entire app from PHP to Java solely for performance instead of investing in a couple of extra servers, that would be dumb. Porting to Go is just shockingly inept. These people should be out of a job.



  • @drurowin said:

    I'll have you a statically linked x64 MySQL in about an hour...

    Your hour has passed. Although, I'm starting to have second thoughts about running a binary given to me by some random stranger over the Internet.



  • @morbiuswilters said:

    Although, I'm starting to have second thoughts about running a binary given to me by some random stranger over the Internet.
    That's what virtual machines are for.



  • @lucas said:

    TRWTF is that instead of just getting some more servers (they said they had 4), like any sane company would, they rewrote core parts of their product in a new programming language, which most of the team aren't familiar with.

    This is how you can tell a development company is full of open source idiot hipsters. Save a $15,000 capital cost by spending $500,000 in labor costs.

    Because in the open source mindset, even in companies that hire open source developers, labor is free. Sure, the $0.00 amount-paid line on your MySQL receipt looks pretty good-- up until the first time you have to repair a table, at which point you pay 20 copies' worth of MS SQL Server in labor to fix the fucking buggy piece of shit.



  • @blakeyrat said:

    This is how you can tell a development company is full of open source idiot hipsters. Save a $15,000 capital cost by spending $500,000 in labor costs.
    It's not just development companies - we've got several clients that pay us a lot to keep ancient machines running, even though the cost of new machines would be offset by reduced support calls in a few months (not that it bothers us that much - services is what we make money on, sales are just an extra).



  • @ender said:

    It's not just development companies - we've got several clients that pay us a lot to keep ancient machines running, even though the cost of new machines would be offset by reduced support calls in a few months (not that it bothers us that much - services is what we make money on, sales are just an extra).

    I'm a capitalist, I got no problem with that. All I'd say is it's your responsibility to bring up the possibility during the next renewal/sales cycle, but if they say no, hell, rake in the dough. Why not.



  • @blakeyrat said:

    I'm a capitalist, I got no problem with that. All I'd say is it's your responsibility to bring up the possibility during the next renewal/sales cycle, but if they say no, hell, rake in the dough. Why not.
    I bring it up often, and we even offer paying off computers in installments to our clients, but there's never any budget for new computers, only for support costs.



  • @ender said:

    It's not just development companies - we've got several clients that pay us a lot to keep ancient machines running, even though the cost of new machines would be offset by reduced support calls in a few months (not that it bothers us that much - services is what we make money on, sales are just an extra).
     

    I can understand why a non-tech customer of a software company would take the "if it ain't broke, don't fix it" approach to a lot of stuff.

    The big difference in my mind is that at a development shop, things like this should be obvious: the easiest and best solution is to add hardware (especially when there are only 4 servers, and I can see their comments ... edit: I can't, they went down this afternoon at 4pm BST). What happened at Disqus is that 2 or 3 of the programmers wanted to learn Go, so they redeveloped a core part of it because they had some clout.

    I am currently trying to rein in some developers at my place who want to redevelop from scratch a website that has 4 million users a month, because they don't like WebForms and like MVC. I am sure many would agree that MVC is the better framework, but we have a lot of WebForms components that have gone through testing, UAT, and are in production and working every day. Throwing away that code would be losing hundreds of man-days of effort, with no benefit to the business.


  • Discourse touched me in a no-no place

    @drurowin said:

    Yeah, dynamic linking was created to solve a problem of systems not having sufficient RAM by sharing copies of library objects in memory.
    Not just that. It also makes it far easier to manage updating the system when a problem is found. Provided the ABI stays the same, it's just a slot-in replacement and nothing else really needs to care.

    The problem on Linux is that some developers think that providing a stable ABI is a bad idea. I've heard it argued that ABI stability is morally wrong because it allows commercial applications to be written against the lib when in fact everything ought to be rebuilt from source every time and then distributed from a central fount of Free Software goodness. What's worse, there's at least the suspicion that some of the worst offenders this way are maintainers of key system libraries (though to be fair, if none of them were then nobody would really give a shit).

    FWIW, the libraries I maintain have a long-term stable ABI.
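
    For the curious, one common way to keep an ABI stable over the long term is to never publish struct layouts at all, only an opaque handle. A minimal sketch (a hypothetical libbar, not any real library):

        #include <stdio.h>
        #include <stdlib.h>

        /* Public surface: callers only ever see a pointer plus functions,
           so they can never depend on field offsets. */
        typedef struct bar bar;
        bar  *bar_new(void);
        int   bar_count(const bar *b);
        void  bar_free(bar *b);

        /* Private layout: fields can be added or reordered in a later
           release without breaking binaries linked against the old one. */
        struct bar { int count; };

        bar  *bar_new(void)           { return calloc(1, sizeof(bar)); }
        int   bar_count(const bar *b) { return b->count; }
        void  bar_free(bar *b)        { free(b); }

        int main(void) {
            bar *b = bar_new();
            if (!b) return 1;
            printf("count=%d\n", bar_count(b));
            bar_free(b);
            return 0;
        }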



  • @ender said:

    @morbiuswilters said:
    Although, I'm starting to have second thoughts about running a binary given to me by some random stranger over the Internet.
    That's what virtual machines are for.

    I don't really want to go to the effort of setting up a VM just to test this out. Besides, I wanted to test it on my machine, because I'm pretty sure it will break in my environment.



  • @blakeyrat said:

    Save a $15,000 capital cost by spending $500,000 in labor costs.

    You have no fucking clue. You have no idea how many times I've heard "Isn't Java/PHP/Python slow? Shouldn't you be developing in C++?" I've had lots of people ask me that when they're interviewing for a position. That's usually a clue not to hire them.

    CPU time is cheap; developer time is expensive. What's more, CPU time is very easy to quantify. If Option B runs 30% faster than Option A, but will take "some development work", it's much easier to see that Option A is only going to cost me 30% more in hardware costs. It's easy to calculate, easy to depreciate. "Some development time" is nearly impossible to quantify. And then I have to look at opportunity costs: I lose nothing with a capex for more hardware, but if I have to divert a few good engineers for a few months (obviously somewhat open-ended) there's no telling what opportunities we will miss out on.

    The really sad part is I've worked with business people (C-levels, even) who don't understand this. I can kinda see why devs don't understand basic economics, but you'd think business people would get that spending 100 dev hours to save $5000 in IT costs is fucking stupid.



  • @lucas said:

    I can see their comments ... edit: I can't, they went down this afternoon at 4pm BST

    Did their comments run in Disqus? Because that would be sweet, sweet justice.

    @lucas said:

    What happened at Disqus is that 2 or 3 of the programmers wanted to learn Go, so they redeveloped a core part of it because they had some clout.

    This is why I despise the "programmers are special little snowflake wizards" mentality that companies like Google promote. It gets so you end up with a team of developers who think their job is to entertain themselves playing with useless technologies on company time (although who in their right mind would think playing with Go is fun?) I fire people like that, but it makes it hard to replace them because this attitude is so prevalent.

    @lucas said:

    I am currently trying to rein in some developers at my place who want to redevelop from scratch a website that has 4 million users a month, because they don't like WebForms and like MVC. I am sure many would agree that MVC is the better framework, but we have a lot of WebForms components that have gone through testing, UAT, and are in production and working every day. Throwing away that code would be losing hundreds of man-days of effort, with no benefit to the business.

    The "Great Rewrite" is never as easy as even your most pessimistic engineer thinks it will be. Believe me, I've been through a few. I've even pushed for it in the past, when I was green and didn't know any better.



  • @dkf said:

    It also makes it far easier to manage updating the system when a problem is found. Provided the ABI stays the same, it's just a slot-in replacement and nothing else really needs to care.

    I've already pointed this out to these guys several times. They just don't give a shit about the real world.

    @dkf said:

    I've heard it argued that ABI stability is morally wrong because it allows commercial applications to be written against the lib when in fact everything ought to be rebuilt from source every time and then distributed from a central fount of Free Software goodness. What's worse, there's at least the suspicion that some of the worst offenders this way are maintainers of key system libraries (though to be fair, if none of them were then nobody would really give a shit).

    FOSS projects are run by idiot man-children. Thankfully, most distros go out of their way to maintain stable ABIs so that security patches can be applied without fucking everything up. And it usually works, although it consumes a lot of developer time to maintain all of those distros. Time that could be spent improving FOSS, but instead is pissed away on silly ideological grounds. Of course, if they weren't trying to work around the hostility of core FOSS library developers, they'd just be spending their time writing another fucking mediocre windowing system.

    FOSS projects tend to lack the ability to proceed past the infancy stage of development, probably because they're just being done for shits-and-giggles by people in their spare time.



  • @morbiuswilters said:

    @blakeyrat said:
    Save a $15,000 capital cost by spending $500,000 in labor costs.

    You have no fucking clue. You have no idea how many times I've heard "Isn't Java/PHP/Python slow? Shouldn't you be developing in C++?" I've had lots of people ask me that when they're interviewing for a position. That's usually a clue not to hire them.

    CPU time is cheap; developer time is expensive. What's more, CPU time is very easy to quantify. If Option B runs 30% faster than Option A, but will take "some development work", it's much easier to see that Option A is only going to cost me 30% more in hardware costs. It's easy to calculate, easy to depreciate. "Some development time" is nearly impossible to quantify. And then I have to look at opportunity costs: I lose nothing with a capex for more hardware, but if I have to divert a few good engineers for a few months (obviously somewhat open-ended) there's no telling what opportunities we will miss out on.

    The really sad part is I've worked with business people (C-levels, even) who don't understand this. I can kinda see why devs don't understand basic economics, but you'd think business people would get that spending 100 dev hours to save $5000 in IT costs is fucking stupid.

    So you'd be willing to run a brainfuck-based HTTP server if you had a team of brainfuck experts over, say, something like IIS or nginx?



  •  @Ben L. said:

    So you'd be willing to run a brainfuck-based HTTP server if you had a team of brainfuck experts over, say, something like IIS or nginx?

     

    @morbiuswilters said:

    You have no fucking clue. 




  • @Ben L. said:

    @morbiuswilters said:
    @blakeyrat said:
    Save a $15,000 capital cost by spending $500,000 in labor costs.

    You have no fucking clue. You have no idea how many times I've heard "Isn't Java/PHP/Python slow? Shouldn't you be developing in C++?" I've had lots of people ask me that when they're interviewing for a position. That's usually a clue not to hire them.

    CPU time is cheap; developer time is expensive. What's more, CPU time is very easy to quantify. If Option B runs 30% faster than Option A, but will take "some development work", it's much easier to see that Option A is only going to cost me 30% more in hardware costs. It's easy to calculate, easy to depreciate. "Some development time" is nearly impossible to quantify. And then I have to look at opportunity costs: I lose nothing with a capex for more hardware, but if I have to divert a few good engineers for a few months (obviously somewhat open-ended) there's no telling what opportunities we will miss out on.

    The really sad part is I've worked with business people (C-levels, even) who don't understand this. I can kinda see why devs don't understand basic economics, but you'd think business people would get that spending 100 dev hours to save $5000 in IT costs is fucking stupid.

    So you'd be willing to run a brainfuck-based HTTP server if you had a team of brainfuck experts over, say, something like IIS or nginx?

    How in the fuck did you manage to get that out of what I said?



  • @morbiuswilters said:

    How in the fuck did you manage to get that out of what I said?

    That's Ben L's superpower.



  • @blakeyrat said:

    @morbiuswilters said:
    How in the fuck did you manage to get that out of what I said?

    That's Ben L's superpower.

    im spesial



  • @Ben L. said:

    @blakeyrat said:
    @morbiuswilters said:
    How in the fuck did you manage to get that out of what I said?

    That's Ben L's superpower.

    im spesial

     

    I thought you were artistic.



  • @morbiuswilters said:

    @drurowin said:
    I'll have you a statically linked x64 MySQL in about an hour...

    Your hour has passed. Although, I'm starting to have second thoughts about running a binary given to me by some random stranger over the Internet.

     

    Yeah, and I don't want to get blamed when you somehow fuck your system over.  Build MySQL on a VM, statically linked, and then post back when it runs perfectly on your special flower setup.

    Asshole.



  • @Ben L. said:

    That's a nice example of:

    • clever code
    • using preexisting libraries
    • the opposite of what you're trying to prove

    No, it's a nice example of somebody who clearly wasn't a high elf wizard with 152 points in charisma screwing up a standard library in a way that compromised security for literally millions of users by causing code assumed cryptographically secure to be no longer so.

    The high elf wizards who wrote it were using the contents of a slab of RAM their code had never written to as a source of randomness. The drone who screwed it up saw Valgrind complaining about the use of non-initialized data, assumed that anything Valgrind complains about must be a bug, and "fixed" it.

    The point is that cryptography is subtle and easy to get wrong, and that a default posture of not being inclined to trust anything that's both new and cryptography-related is perfectly sane and appropriate. Crypto code absolutely needs to earn respect, and that can only happen with extensive testing over very long periods.
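
    To make the mechanism concrete, here's a self-contained sketch of the pattern (illustrative only; the names and the pool-mixing are toys, not the actual OpenSSL code):

        #include <stddef.h>
        #include <stdio.h>

        static unsigned char pool[16];

        /* Toy stand-in for the real entropy-pool mixing routine. */
        static void pool_mix(const unsigned char *buf, size_t n) {
            for (size_t i = 0; i < n; i++)
                pool[i % sizeof pool] ^= buf[i];
        }

        int main(void) {
            unsigned char junk[32];   /* deliberately NOT initialized; reading it
                                         is what Valgrind flags (and is undefined
                                         behaviour in C, hence the need for a loud
                                         TOUCH-THIS-AND-DIE comment rather than a
                                         silent "fix") */
            unsigned char real[32] = {1, 2, 3};  /* pretend genuine entropy */

            pool_mix(junk, sizeof junk);   /* the harmless-to-remove call     */
            pool_mix(real, sizeof real);   /* the one the "fix" also removed  */
            printf("pool[0]=%u\n", pool[0]);
            return 0;
        }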



  • @morbiuswilters said:

    although who in their right mind would think playing with Go is fun?

    I think it's a little bit like how a cat will gleefully throw a mouse around while the mouse is trying to write its will out using its own intestines.
    Much like Go, that mouse never discovered that pens are far better for writing.


  • Discourse touched me in a no-no place

    @Ben L. said:

    So you'd be willing to run a brainfuck-based HTTP server if you had a team of brainfuck experts over, say, something like IIS or nginx?
    You can't do that. BF doesn't have a foreign function interface or a way to call arbitrary system calls, so writing a real application in it is wholly impractical.



  • @drurowin said:

    Yeah, and I don't want to get blamed when you somehow fuck your system over.

    Oh, it will probably be fine.

    @drurowin said:

    Build MySQL on a VM, statically linked, and then post back when it runs perfectly on your special flower setup.

    Wait, I thought the whole point is that you were going to build me a statically-linked MySQL in "about an hour" to prove your point? Why would I waste time trying to get MySQL to statically link to help prove your point? Do you not know how a debate works?



  • @flabdablet said:

    The point is that cryptography is subtle and easy to get wrong, and that a default posture of not being inclined to trust anything that's both new and cryptography-related is perfectly sane and appropriate. Crypto code absolutely needs to earn respect, and that can only happen with extensive testing over very long periods.

    Bam! You said it all.


    And you did it without calling anybody names... How.. how did you do that?



  • @dkf said:

    @Ben L. said:
    So you'd be willing to run a brainfuck-based HTTP server if you had a team of brainfuck experts over, say, something like IIS or nginx?
    You can't do that. BF doesn't have a foreign function interface or a way to call arbitrary system calls, so writing a real application in it is wholly impractical.

    Does BF not have the ability to modify registers or execute arbitrary instructions?



  • @morbiuswilters said:

    @dkf said:
    @Ben L. said:
    So you'd be willing to run a brainfuck-based HTTP server if you had a team of brainfuck experts over, say, something like IIS or nginx?
    You can't do that. BF doesn't have a foreign function interface or a way to call arbitrary system calls, so writing a real application in it is wholly impractical.

    Does BF not have the ability to modify registers or execute arbitrary instructions?

     

    Not really: "standard" brainfuck runs in a VM with just an output, an input, and a tape. It also has plenty of compatibility problems due to incomplete standardization: what range do the tape cells have? How is EOF or newline represented? And so on. However, there are plenty of derivative languages. It wouldn't be hard to add an interface to allow system calls.

    For more complex stuff though, Funge-98 is probably a better language. It's based on Befunge, which is kind of a 2-dimensional brainfuck, but it adds support for code in 1 or 3 dimensions (and could easily be extended to 4 or more), multithreading, command execution, file I/O and an extension mechanism to add arbitrary features. Pretty neat stuff. Unfortunately implementations are going to be slower too (BF has very sophisticated compilers and even specialised hardware).
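
    To show how little is actually in that VM: a minimal sketch of a classic interpreter in C, with the usual assumptions spelled out (8-bit wrapping cells, a 30,000-cell tape, EOF leaves the cell untouched, well-formed bracket nesting), since the spec nails none of them down:

        #include <stdio.h>

        #define TAPE 30000

        static void bf(const char *prog) {
            unsigned char tape[TAPE] = {0};
            size_t ptr = 0;
            for (const char *pc = prog; *pc; pc++) {
                switch (*pc) {
                case '>': ptr++; break;      /* no bounds checks: assumes a */
                case '<': ptr--; break;      /* well-behaved program        */
                case '+': tape[ptr]++; break;
                case '-': tape[ptr]--; break;
                case '.': putchar(tape[ptr]); break;
                case ',': { int c = getchar();
                            if (c != EOF) tape[ptr] = (unsigned char)c; } break;
                case '[': if (!tape[ptr])    /* skip forward to matching ]  */
                              for (int d = 1; d; )
                                  { pc++; if (*pc == '[') d++; if (*pc == ']') d--; }
                          break;
                case ']': if (tape[ptr])     /* loop back to matching [     */
                              for (int d = 1; d; )
                                  { pc--; if (*pc == ']') d++; if (*pc == '[') d--; }
                          break;
                }
            }
        }

        int main(void) {
            /* The classic "Hello World!" test program. */
            bf("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
               ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.");
            return 0;
        }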



  • @morbiuswilters said:

    @drurowin said:
    Yeah, dynamic linking was created to solve a problem of systems not having sufficient RAM by sharing copies of library objects in memory.

    That is one of several things it solves.

    Almost 100 comments, and no one has yet mentioned that shared objects are required to make the most of your memory cache.

    Statically linking everything kills memory performance in addition to just hogging more RAM, making your computer a lot slower.



  • @boh said:

    Almost 100 comments, and no one has yet mentioned that shared objects are required to make the most of your memory cache.

    Statically linking everything kills memory performance in addition to just hogging more RAM, making your computer a lot slower.

    I think I mentioned in some other thread where we had this same stupid argument that it makes program startup slower. But I don't think it "kills" memory performance, unless you are spawning and terminating new processes like crazy (so I guess for something like Postgres, which spawns a process per connection, it probably would kill memory performance..)



  • @flabdablet said:

    The high elf wizards who wrote it were using the contents of a slab of RAM their code had never written to as a source of randomness. The drone who screwed it up saw Valgrind complaining about the use of non-initialized data, assumed that anything Valgrind complains about must be a bug, and "fixed" it.

    This is where the high elf wizards probably should have put big comments around that code saying // TOUCH THIS AND DIE MORTAL!!!!!

    Or at least something to that effect.



  • @boh said:

    Almost 100 comments, and no one has yet mentioned that shared objects are required to make the most of your memory cache.

    ... didn't Drurowin say that in one of the posts you're quoting?



  • @boh said:

    @morbiuswilters said:

    @drurowin said:
    Yeah, dynamic linking was created to solve a problem of systems not having sufficient RAM by sharing copies of library objects in memory.

    That is one of several things it solves.

    Almost 100 comments, and no one has yet mentioned that shared objects are required to make the most of your memory cache.

    Statically linking everything kills memory performance in addition to just hogging more RAM, making your computer a lot slower.

     

    The overall performance gains are well worth it.  A statically-linked build of p7zip is a whopping 275% faster at compression on my machine, and so is the statically-linked MySQL that I mentioned, along with almost every other program I've ever rebuilt with static linking.  (My system is a dual quad-core Opteron 4130 running Solaris 11 with 32 GB RAM and eight 147 GB SAS hard drives.)  My eventual goal is to have a system free of shared libraries.

     



  • @drurowin said:

    A statically-linked build of p7zip is a whopping 275% faster at compression on my machine...

    I call bullshit. There's no way static linking is going to give performance gains like that. If you're not: 1) lying; or 2) unable to do a simple benchmark accurately, then I'd say it's a result of something else you did.

    In fact, this illustrates just how clueless you are. Anybody who understands compilers, linking and PIC would never claim a 275% increase in performance from static linking. Even a knowledgeable person in favor of static linking (if such a thing exists) would instantly realize that that large of a performance gain is ridiculous and something must be wrong. This just shows that, once again, you have no fucking clue what you are talking about.

