Containers for Windows


  • ♿ (Parody)

    So, Containers for Windows this has been all the rage in the "I wish I actually developed on Linux, cause those graybeards and/or hipsters are so much cooler than me, but my boss makes me use WINBLOWS and also, I don't really know how to install a Linux, but think of all the stickers I could put on my laptop if I could" community.

    I mean, just look at how excited Docker's marketing team (sorry, "developer evangelists") are about it:

    https://twitter.com/frazelledazzell/status/631260003836391424

    Yippie.

    So as it turns out, I’m quite familiar with Docker (specifically the architecture, but more importantly why it’s needed in the Linux world). Our ProGet product is actually shipped as a Docker container.

    I'm also intimately familiar with Windows NT architecture. Quite frankly, the two are fundamentally incompatible (like apples and orange paint), and this is a giant WTF perpetuated solely for marketing purposes: namely, to retreat on the “machine” front of the virtualization war (where VMWare has already kicked their asses, sadly) and hope to win on this emerging “container” front.

    “Containers” exist in Linux solely because of the dependency hell that is Linux, whereas Windows never had that problem thanks to a registry, COM, and a tremendous number of other things that NT got right thanks to starting in ’93 instead of ’73 (and obviously, b/c Dave Cutler was familiar with the idea of "design", as opposed to "hacking shit until it works on my machine").

    I mean, you really have to work hard to get Windows apps to not work from one server to another. OTOH, it’s nearly impossible to get Linux apps working on anything except for a single server, with all the right files in all the right places… which is where Docker sorta helps (but you still need the right host kernel, and several other dependencies on the host server to get working).

    Thus, I see no value-add for Windows Containers, and moreover I feel it might even encourage developers to create convoluted, Linux-style fuckpiles of “applications” that comprise a dozen barely functional components that will be impossible to maintain. I feel like I’m the only one shouting that the emperor has no clothes.

    My hope is that Containers on Windows are Microsoft’s secret strategy to get people to notice that Hyper-V is actually pretty decent, and maybe they should consider that instead of VMWare.

    .... that being said, our products will (and already do) support building them. I'm just not happy about it.



  • @apapadimoulis said:

    I wish I actually developed on Linux, cause those graybeards and/or hipsters are so much cooler than me...

    This is me. I can't even grow a beard without it being ginger, which is not grey, so that sucks. I have to install Linux instead.

    @apapadimoulis said:

    ... I don't really know how to install a Linux...

    Only on my Windows 8 laptop, which I don't dare touch in this regard. It's a functional laptop which I use for everything because I'm hardly ever at home. Meanwhile my dev (as of recently an Ubuntu dual boot) and gaming (Windows 7) desktops are barely used.

    @apapadimoulis said:

    I mean, you really have to work hard to get Windows apps to not work from one server to another. OTOH, it’s nearly impossible to get Linux apps working on anything except for a single server, with all the right files in all the right places…

    My experience has been precisely the opposite. Then again, my experience has been with the likes of legacy horrors such as SugarCRM and ProcessMaker. That story usually goes as follows: spent hours trying to make it work on Windows, without success; tried Ubuntu; spent hours trying to make it work, with success.

    Regardless, I wanted to start a docker project (to learn about docker, and in unrelated news, python/postgres) and run it on my laptop so I can develop while out and/or about.

    I assume I need a VM, which is probably handled with docker-machine, or boot2docker or whatever.


  • ♿ (Parody)

    @Shoreline said:

    my experience has been with the likes of legacy horrors such as SugarCRM and ProcessMaker

    I'll note, both of those are Linux/LAMP products. That's why they are a complete fucking nightmare to get working on Windows (as opposed to just a pain in the ass on Linux). If they were built with proper, modern (i.e. 2000 and later) approaches for Windows, they should be installable in minutes using an installer.


  • BINNED

    We use VMWare Client virtualization to push our thick client apps. Instead of having the customer's IT going around and installing framework x (not .NET) and our app, we deliver 1 executable. It starts a sandbox that contains the app + all expected things like the framework. This solves a lot of our installation and update issues with freaky, old, fat LoB applications. Granted, part of the issue is our own fault for using shitty framework x. Starting the app up is a bit slower, but that is something the users typically only do when they start their work day.
    For the servers there is an installation doc. Just follow the steps, install the right frameworks, updates & do the config and it should be fine. On the server side I don't see why you'd need something like that.


  • ♿ (Parody)

    @Luhmann said:

    We use VMWare Client virtualization to push our thick client apps

    So you'd think that they'd design containers for windows to compete with the VMWare Client rubbish... but nope. It's just a shitty, stripped-down server core with a bunch of Linux-looking commands to make it work like shitty docker containers (which, by the way, just wget random shit from the internet, and install it)


  • Discourse touched me in a no-no place

    @apapadimoulis said:

    So you'd think that they'd design

    Woah there, buddy!



  • Hear hear.

    I was looking at using Laravel as a framework for an app (shelved currently due to other matters) and wanted to use the ready-to-go image they provide.

    Getting Vagrant going on Windows is really not the most fun you've ever had.



  • @apapadimoulis said:

    I'll note, both of those are Linux/LAMP products.

    Good point. I suspect PHP was TRWTF. Seriously, it always seems to take hours to days to set up an Apache server project. The very first time I started a Node server, it took 30 minutes of downloading, coding and watching shit TV to make it do something server-ish. The part that simply astounded me, coming from PHP, was how it actually fucking told me what I'd done wrong.


  • BINNED

    @Shoreline said:

    Then again, my experience has been with the likes of legacy horrors such as SugarCRM and ProcessMaker.

    I don't know about ProcessMaker, but SugarCRM is freaking easy to get running on Linux. What were you using, Arch? :trollface:



  • @apapadimoulis said:

    If they were built with proper, modern (i.e. 2000 and later) approaches for Windows, they should be installable in minutes using an installer.

    Hmm, an installer often exists, but as soon as you need more than one program installed and talking to each other, the default configuration often fails or is insecure.

    Ever tried to install some J2EE apps and an Oracle database on a Windows server? Yes, there are installers, but you'll spend a few hours - if you know what you are doing (if you don't you probably need a week) - to configure these beasts so that they work together and the database will not shut down once the redo logs (which are not backed up or deleted by default) reach 32GB.

    Or, if you prefer Microsoft software, ever tried to set up a Windows domain using Windows 2012, and the latest Exchange and Lync servers, so that afterwards your mobile devices (outside your NATted company network) can chat and watch shared screens from your external consultants' laptops (also outside the NATted network)? It is of course possible, but if you've never done it, you'll probably spend a few days to a week to set it up (not talking about firewall rules or configuring external DNS yet, just installing 2 DCs, one Exchange server, one Lync server and one Lync Edge server, configuring them so that they know each other, and creating accounts for a few people to test it).

    If I needed that more often, it would probably be easier to use System Center, but that is another huge learning cliff to climb (if I can believe the people who tell me it is the alternative to Puppet or Chef in the Windows world).

    [When I compare that to setting up a SVN server, a LDAP server, an OpenFire chat server and a MediaWiki on a Debian Linux server (without containers or Puppet/Chef, but manually using the deb packages), I'd say the Windows way is harder. And installing MediaWiki did not break any of the other services; Installing the Edge Server for Lync first broke company internal Lync screen sharing too...]



  • I'm waiting for this "make everything a container" trend to go away so people will focus on actually working systems again, like packaging their applications properly.



  • Hours to days? WampServer installs in like 10 minutes! 🚎

    The version we have at work installs in less time than that and is completely reusable as an installer between projects.


  • ♿ (Parody)

    @Shoreline said:

    I can't even grow a beard without it being ginger, which is not grey, so that sucks.

    Most of the ginger in my beard has turned grey, which is a little sad, but I've learned to cope and keep the kids off the lawn.

    @apapadimoulis said:

    If they were built with proper, modern (i.e. 2000 and later) approaches for Windows, they should be installable in minutes using an installer.

    If they were built with proper, modern approaches for Linux, they should be installable in minutes using an installer, too.


  • ♿ (Parody)

    @mihi said:

    ever tried to set up a Windows domain using Windows 2012, and latest Exchange and Lync servers

    Domain controllers and Exchange are definitely pre-2000 technologies; Lync, it wouldn't surprise me if they leveraged the exchange rubbishes. SQL Server, the same. That's why they're a pain in the ass to get working. It wouldn't surprise me if the exchange team lost knowledge of what is actually required to be installed, and thus their installer does everything just to be sure.

    But..... there's no way these techs will work in containers; you need a full VM to get these things working.

    @mihi said:

    If I needed that more often, it would probably be easier to use System Center, but that is another huge learning cliff to climb (if I can believe the people who tell me it is the alternative to Puppet or Chef in the Windows world).

    Check out Otter!


  • ♿ (Parody)

    @boomzilla said:

    If they were built with proper, modern approaches for Linux, they should be installable in minutes using an installer, too.

    Really? I thought there was no central registry / component store, meaning applications just have to guess/hope that a particular component is installed. For example, there's no equivalent of HTTP.SYS on Linux, so you have to rely on separate packages with slightly different implementations, and the lack of a central store = clusterfuck?



  • @Onyx said:

    Arch

    No idea.

    @Onyx said:

    freaking easy

    Love to hear what your definition of that term is. My definition of that term is as follows:

    • Checkout files into suitable location.
    • Download whatever software to run the build script, server, etc.
    • Run build script
    • Start server
    • View at localhost:[some_port]
    • Modify code
    • See changes

    What actually happened was closer to this:

    • Download (not checkout) files into a suitable location. Wait. Time passes. Thorin sings songs about gold. Why no source control? Apparently SugarCRM doesn't support source control questions. Eventually myself and another developer worked out we could do it if we were careful about what we source-controlled.
    • My machine already had PHP, which was working, so I didn't need to download it. That part was easy by definition, but I can't credit SugarCRM with it being 'easy', let alone 'freaking easy'.
    • I didn't need to run the build script - at least all the files were present after waiting an age for the download.
    • Apache is already running. Put in a vhost. Modify and restart. Repeat ad nauseam until:
    • View at localhost. 500.
    • Turn to drink.

    So with regards to this:

    @Onyx said:

    freaking easy

    Wrong.

    Also, fuck you. 'freaking easy'. Fuck you.

    Hey, you wanted a reaction.

    @Arantor said:

    Hours to days? WampServer installs in like 10 minutes!

    Where the fuck are you guys getting your magical Apache software from that it actually works? Is there some black market? Racist.



  • Largely it depends if you're doing the 'install everything by hand' clusterfuck or not. If you are:

    http://www.apachelounge.com/download/ to download the Apache binary.

    http://windows.php.net/download/ to download the PHP binary. Make sure that the version of VC is the same; currently you want the VC11 TS binary to match your Apache, for x86 or x64.

    The config files are mostly annoying though the actual bit of getting Apache to talk to PHP is fairly straightforward.
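    For what it's worth, the Apache-to-PHP wiring is only a few lines of httpd.conf once both binaries are unpacked. The paths and PHP version below are examples for a VC11-era PHP 5.x TS build, not gospel — adjust to wherever you actually unzipped things:

```apache
# httpd.conf fragment -- paths and module/DLL names are examples only
LoadModule php5_module "C:/php/php5apache2_4.dll"
AddHandler application/x-httpd-php .php
PHPIniDir "C:/php"
```

    After that, restart Apache and any .php file under the DocumentRoot goes through the PHP module.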

    Or, again, you could use something like WampServer, but it's really not designed for internet-facing production use as it does weird things like come bundled with XDebug, because it's designed for dev use.

    As far as the whole 'getting a 500' goes, you know there are error logs, right?

    SugarCRM, it has to be said, is one of the worst of the worst when it comes to PHP though, so you're fighting an uphill battle both ways on that one.


  • ♿ (Parody)

    @apapadimoulis said:

    Really? I thought there was no central registry / component store, meaning applications just have to guess/hope that a particular component is installed.

    A proper modern Linux distro tracks all that stuff.

    @apapadimoulis said:

    For example, there's no equivalent of HTTP.SYS on Linux, so you have to rely on separate packages with slightly different implementations, and the lack of a central store = clusterfuck?

    I have no idea what HTTP.SYS is (I'd guess it's something about serving http), but yeah, you'd rely on packages for whatever it was you needed. I'm sure that trying to come up with exact analogs going either way will lead to frustration.



  • Yeah, Linux doesn't have IIS therefore it sucks?


  • BINNED

    @boomzilla said:

    I have no idea what HTTP.SYS is

    Isn't that the thing that hooks directly into the kernel (yes, the kernel has HTTP capabilities) and is a massive DONOTUSE because it's full of holes? Might not be, but I know such a thing exists, it was mentioned here recently.


  • ♿ (Parody)

    @Onyx said:

    Might not be, but I know such a thing exists, it was mentioned here recently.

    Yeah, I remember it being mentioned, too. All I know is that it makes me want to edit autoexec.bat.


  • ♿ (Parody)

    @Onyx said:

    Isn't that the thing that hooks directly into the kernel (yes, the kernel has HTTP capabilities)

    Yes; that's why I figured everyone would know about it 😄

    It sounds WTF-y, but only when you consider it like the Linux Kernel. NT is very different.

    This is one of the types of services that makes sense there, I think. The biggest advantage to having it in the kernel is that you can easily forward requests with different hostnames to different user contexts. So, my application can listen on "whatever.localhost:1000" and yours can listen on "hdars.localhost:1000". Otherwise that would be impossible without an application server that listened on port 1000 and then marshaled requests to another process, with context switching, etc.

    Aaaanyway, to my point, this is why you have fewer dependencies on Windows. There's no good reason to implement an HTTP stack, since it's already implemented.
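    As a rough illustration — the hostnames, ports, and function names here are invented, and this is nothing like the real HTTP.SYS API — the reservation table it keeps in the kernel amounts to something like this if you had to do it in userland:

```python
# Toy userland analogue of HTTP.SYS's URL reservation table: one front-end
# process owns the port and picks a backend per Host header. Everything
# here (ROUTES, pick_backend, the ports) is illustrative, not a real API.

ROUTES = {
    "whatever.localhost:1000": ("127.0.0.1", 5001),  # my app's private port
    "hdars.localhost:1000":    ("127.0.0.1", 5002),  # your app's private port
}

def pick_backend(host_header):
    """Map an incoming Host header to the (ip, port) to proxy to, or None."""
    return ROUTES.get(host_header.lower())
```

    Doing that lookup in the kernel means no extra userland proxy process copying every request and paying a context switch per hop.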


  • Fake News

    @boomzilla said:

    keep the kids off the lawn



  • Because having a server that listens on the port and marshals requests, in userland is so much worse than doing it in kernel space. What the hell was I thinking?


  • ♿ (Parody)

    @Arantor said:

    What the hell was I thinking?

    "Linux"

    @Arantor said:

    a server that listens on the port and marshals requests

    Is that even a thing? It's been a long while since I used apache, but will that take a request, and pass it to a process that's running under a different user context?

    I thought it all ran under a single user, and you just spun up a new apache if you wanted a different user.

    Does nginx do that?


  • ♿ (Parody)

    @apapadimoulis said:

    There's no good reason to implement an HTTP stack, since it's already implemented.

    What a coincidence, I have this same situation in Linux! I can honestly say I've never had a need to implement an HTTP stack.

    It's fine with me if you like what Windows does. Different systems are different and everything is a trade off.


  • FoxDev

    @apapadimoulis said:

    I thought it all ran under a single user,

    It can, and I believe that is the default configuration in most distros.

    IIRC, Apache gets really shirty if you don't have it drop root privs after it binds to :80

    @apapadimoulis said:

    and you just spun up a new apache if you wanted a different user.

    You can do this too; it's actually fairly easy, so long as you don't get too complicated.

    We actually have it run as different users for each of our virtualhosts that run our public facing websites.... that was interesting to set up

    @apapadimoulis said:

    Does nginx do that?

    It can work either way, although the common use for nginx is to handle the static files and hand off any dynamic things to a different daemon process that can be running as any user you want (usually via something like uWSGI or FastCGI, or even via an HTTP(S) socket connection)



  • You can tell Apache exactly how you want to route the requests. You don't have to have them mapped to physical paths if you don't want.

    You configure a virtual host in Apache to listen on a given physical port for one or more named virtual hosts. I, for example, on my Linux VPS have about a dozen different websites all served from the one instance of Apache, and each file is owned by the user account in which its site lives (i.e. one account per site).
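    A minimal sketch of that setup — hostnames and paths invented — is just two name-based vhosts in the one config:

```apache
# One Apache instance serving two sites; hostnames and paths are examples
<VirtualHost *:80>
    ServerName site-one.example.com
    DocumentRoot /home/siteone/htdocs
</VirtualHost>

<VirtualHost *:80>
    ServerName site-two.example.com
    DocumentRoot /home/sitetwo/htdocs
</VirtualHost>
```

    Apache picks the block whose ServerName matches the request's Host header, all on the same physical port.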

    We do this at work to a point, we have two different URLs that hit the application, one is mapped to /htdocs inside our app structure, a second that is mapped to /htdocs/special, all in the one configuration. For bonus points, we also configure it so /server-status points to somewhere different if you hit it from a certain IP address and that path doesn't even exist. Oh and we do this on Apache on Windows too.



  • @apapadimoulis said:

    “Containers” exist in Linux solely because of the dependency hell that is Linux, whereas Windows never had that problem thanks to a registry, COM, and tremendous number of other things that NT got right thanks to starting in ’93 instead of ’73 (and obviously, b/c Dave Cutler was familiar with the idea of "design", as opposed to "hacking shit until it works on my machine").

    Look, I know DLL Hell caused a lot of trauma, but you've completely blocked out all memory of it?

    The fix for it was side-by-side assemblies (WinSxS), which arrived in Visual C++ 2005 / Windows XP SP2 (?).



  • @apapadimoulis said:

    Lync, it wouldn't surprise me if they leveraged the exchange rubbishes.

    Skype for Business, Lync, Office Communications Server, or Live Communications Server, depending on how many onions are on your belt, emerged as part of the Office 2003 lineup. It has deep tentacles into Exchange.

    @apapadimoulis said:

    Really? I thought there was no central registry / component store, meaning applications just have to guess/hope that a particular component is installed.

    Yes and no. Most systems have some sort of package manager (dpkg or RPM) which keeps a list of what's installed on the system through them, from the OS itself to ancillary applications and all the libraries in between. Each package has a list of dependencies, a list of conflicts, and a list of packages it can substitute for. If you try to install a package, the package manager will ensure that everything is available, and install anything else missing / remove anything conflicting. The package itself then needs to interrogate the package manager for further information (what web server is installed? where is the Apache config directory?). Usually the user doesn't need to touch this, as the package builders have done it all for you.

    However, if you step outside the package manager for anything, you're on your own. You have to make sure to put everything in the right place, you have to figure out where all your dependencies are installed, and if you later install a package that relies on what you rolled by hand, the package manager's not going to know about it, and will then bitch and moan when it can't install its thing because what you made is in the way.

    @boomzilla said:

    I have no idea what HTTP.SYS is

    A Windows kernel-mode driver that binds to port 80 and either serves static files itself or hands off to IIS, DCOM, .NET Remoting, or whatever else wants to listen on port 80, based on the hostname and path. Yes, it's exactly as stupid as it sounds.

    @Arantor said:

    Because having a server that listens on the port and marshals requests, in userland is so much worse than doing it in kernel space. What the hell was I thinking?

    So, you want to write something to take I/O from the kernel, figure out where to marshal to, push stuff back (more kernel I/O) so the target receives the entire request, find your target, somehow notify them they have a socket pending (more kernel I/O for IPC), wait for them to accept it (yet another kernel callback), and then remember to close it on your end (another syscall) before finally being done (a final syscall)?

    Screw that. Let's make that all live in the kernel. No syscalls needed because you're already there, and it's a lot easier to lie to the userland applications.

    Of course, we have to make sure it's completely secure, but that's not hard... 🚎


  • Discourse touched me in a no-no place

    @TwelveBaud said:

    completely secure

    We can do that. We've got the concrete and the oceanic trench.



  • @Arantor said:

    error logs

    I've noticed it doesn't seem to matter, because...

    @Arantor said:

    SugarCRM, it has to be said, is one of the worst of the worst when it comes to PHP though, so you're fighting an uphill battle both ways on that one.

    I guess you covered that.

    The fact that if you want a module with two different 'relate' fields to reference separate records on another module, you have to manually fix the code so that the field ids don't interfere in the database, shows that it was underdeveloped across versions 5 and 6.

    That said, its use before I got to it was in most cases so abusive that SugarCRM wasn't entirely to blame for the WTFs I saw. Cloning modules by copying code is not a good idea, but then the devs don't have many options beyond creating the module inside the admin section.

    Basically, the number of times I thought about how I would write an instruction manual, and came across cases which amounted to "X is Y, except when it isn't", brought me to conclude SugarCRM was the issue and not the devs.

    Anyway, tangents are fun.


  • Garbage Person

    @apapadimoulis said:

    I feel like I’m the only one shouting that the emperor has no clothes.

    Nope, I have the same viewpoint. Containers solve an inherently Linux problem that Windows devs largely do not have.

    Windows developers tend to be working in a totally different problem space, too. I've met very few Linux guys working on line of business apps and very few Windows guys working on stuff that will ever be deployed outside their organization.


  • ♿ (Parody)

    @Weng said:

    I've met very few Linux guys working on line of business apps

    Probably lots of Java where it happens. That's my experience, at least.



  • @apapadimoulis said:

    Is that even a thing? It's been a long while since I used apache, but will that take a request, and pass it to a process that's running under a different user context?

    Yes, FastCGI, as has been said before me. It isn't really limited to Apache but is used with every sane HTTP server. It's common practice to run a single front-facing HTTP server that passes CGI processing to FastCGI processes running as the intended user. This setup is resource-efficient and easy to configure if you know what you're doing.

    However, if you have only one HTTP server instance running, it needs read access to all static files of all websites in order to serve them, so you need to set up users/groups such that the server can read all of them.
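    A hedged sketch of that common setup — nginx in front, a PHP-FPM pool running as each site's user; the hostname, paths, and socket name here are invented:

```nginx
# nginx serves the static files itself; .php requests go to a php-fpm
# pool configured to run as user "alice".
server {
    listen 80;
    server_name alice.example.com;
    root /home/alice/www;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm/alice.sock;
    }
}
```

    One front-end, one pool per user, and the dynamic code never runs as the web server's own account.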

    @apapadimoulis said:

    I thought it all ran under a single user, and you just spun up a new apache if you wanted a different user.

    Does nginx do that?

    I don't think running a server per user is really worth the trouble. If the server software itself is leaky, it will leak for every user separately, so it's a lose-lose situation where you need the resources to run separate servers plus reverse-proxying madness from :80 to each individual smaller server. There are legitimate uses for reverse proxies though.

    Also, for what it's worth, Apache is the worst HTTP server resource-wise. Lighttpd was a thing for a while but everyone seems to use nginx now. There's really no difference to me which one is used, as both of them are efficient enough for any reasonable workload.


  • BINNED

    So the problem was developing shit for Sugar, not installing the base version? That's different.

    Also, yes, it's something I gave up on 1 hour into researching it. Luckily, I was just tasked to check how viable it is, not actually forced to do it, so I got off lightly.



  • Like.


  • ♿ (Parody)

    Totally.


  • FoxDev

    @boomzilla said:

    Totally.

    Have you ever really looked at your hands, man? they like touch everything, but they can't touch themselves... that's like totally deep.....



  • @mihi said:

    Ever tried to install some J2EE apps and an Oracle database on a Windows server?

    Right; but again, you're talking about shitty software that doesn't do things the Windows way. Oracle and Java don't have to be piles of shit to install, they are because Oracle (the company) doesn't know what the fuck they're doing when it comes to writing software.



  • @powerlord said:

    Look, I know DLL Hell caused a lot of trauma, but you've completely blocked out all memory of it?

    He didn't say he "erased all memory of it", he said it was fixed a long time ago, which is true.



    Apache used to have a multi-processing module called mpm-peruser that ran different sites as different users even before requests hit the script layer.

    It died a horrible death (until someone resurrected it as an independent project).

    For those people who don't know what Apache MPMs are, they are basically Apache's core. For most OSes, you just have one (i.e. mpm_winnt for Windows, which is threaded). Unix/Linux/BSD have 3: prefork, worker, and event.

    Because they're lazy, Linux distributions tend to stick with prefork and compile things like mod_php under that assumption.

    @hifi said:

    Also for what it's worth, Apache is the worst HTTP server resource wise. Lighttpd was a thing for a while but everyone seem to use nginx now. There's really no difference to me which one is used as both of them are efficient enough for any reasonable workload.

    From memory, Lighttpd ran into a slew of security problems. nginx arrived around the same time, basically stole Lighttpd's user base, and has continued to grow as people become frustrated with Apache's "heaviness."


  • FoxDev

    @blakeyrat said:

    they are because Oracle (the company) doesn't know what the fuck they're doing when it comes to writing software.

    To be fair, at least with Java, Sun Microsystems wasn't any better in their install experience


  • Discourse touched me in a no-no place

    I can't imagine installing Oracle database on Windows is any harder than installing it on Solaris. If it is, Oracle have definitely done something wrong.



  • @accalia said:

    To be fair, at least with Java, Sun Microsystems wasn't any better in their install experience

    Here's two statements:

    • Oracle sucks

    • Sun Microsystems is really awesome

    Guess the relationship between these two statements!






    Give up?

    Ding! Time's up!

    The answer is: fucking NOTHING.



  • @blakeyrat said:

    @powerlord said:

    Look, I know DLL Hell caused a lot of trauma, but you've completely blocked out all memory of it?

    He didn't say he "erased all memory of it", he said it was fixed a long time ago, which is true.

    Here, I'll quote the relevant section again and bold the important part.

    “Containers” exist in Linux solely because of the dependency hell that is Linux, whereas **Windows never had that problem** thanks to a registry, COM, and a tremendous number of other things that NT got right thanks to starting in ’93 instead of ’73 (and obviously, b/c Dave Cutler was familiar with the idea of "design", as opposed to "hacking shit until it works on my machine").

    It got fixed in 2005, 12 years after NT was released.



  • @apapadimoulis said:

    “Containers” exist in Linux solely because of the dependency hell that is Linux

    Dependencies are hell everywhere. But the operating systems that deal with them best are mostly based on Linux.

    The problem is that there is more than one such system and they use two common (dpkg and rpm) and several less common (pacman, emerge, etc.) tools for the purpose. So if somebody wants to provide a package for “Linux”, they have to deal with several systems.

    Now it is not that hard, really. But it is something the developers would have to learn and many choose not to bother.
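    To make the bookkeeping concrete, here's a toy sketch of the dependency/conflict resolution dpkg and rpm perform. The package names and repository layout are invented, and real metadata is far richer (versions, provides/substitutes, maintainer scripts):

```python
# Toy model of package-manager dependency resolution -- illustrative only.
REPO = {
    "webapp": {"depends": ["httpd", "php"], "conflicts": []},
    "php":    {"depends": ["httpd"],        "conflicts": []},
    "httpd":  {"depends": [],               "conflicts": ["nginx"]},
    "nginx":  {"depends": [],               "conflicts": ["httpd"]},
}

def install_order(pkg, repo, installed=frozenset()):
    """Return the packages to install, dependencies first; raise on conflict."""
    order = []

    def visit(name):
        if name in installed or name in order:
            return  # already present or already scheduled
        meta = repo[name]
        for rival in meta["conflicts"]:
            if rival in installed:
                raise RuntimeError(f"{name} conflicts with installed {rival}")
        for dep in meta["depends"]:
            visit(dep)  # depth-first: dependencies come before the package
        order.append(name)

    visit(pkg)
    return order
```

    So asking for "webapp" schedules "httpd" and "php" first, while asking for "nginx" on a box that already has "httpd" is refused outright — which is exactly the class of decision a hand-rolled install on Windows has to get right by itself.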

    @apapadimoulis said:

    Windows never had that problem thanks to a registry, COM, and tremendous number of other things

    Dependency hell exists in Windows. In fact, most developers don't bother trying to solve it and just bundle everything and are done with it.

    The things you mention only solve finding installed dependencies and only do so if the developers know how to use them properly. The same problem is solved in Linux by having standard places for things, which again only works if the developers know where to properly place things.

    What Windows doesn't have, at all, is a tool for actually managing those dependencies. Linux-based systems do. But it still all boils down to application developers not wanting to bother.

    @apapadimoulis said:

    I mean, you really have to work hard to get Windows apps to not work from one server to another.

    Well, if you bundle everything, like most people do, then it's likely going to work.

    On Linux, everybody is aware that bundling everything is not the right way—because it isn't, on either system—but developers trained on bundling everything often don't know how to properly specify their dependencies.

    Yes, the fact that there are several separate systems for it does not help.

    @apapadimoulis said:

    which is where Docker sorta helps

    Docker is absolutely horrible from a security point of view, as it makes it harder to keep the important dependencies up-to-date, since each application comes with its own copy. Which was always the case on Windows anyway.

    It does have one security advantage though—the applications are separated, so a breach in one is less likely to compromise anything else.

    @apapadimoulis said:

    Thus, I see no value-add for Windows Containers

    It keeps the clusterfucks of applications (which is what they already are) compartmentalized, and therefore less likely to endanger the rest of the system with their security vulnerabilities.

    @apapadimoulis said:

    fuckpiles of “applications” that comprise of a dozen barely functional components that will be impossible to maintain

    The developers who use Docker have already done that, on Linux and Windows alike. Those who didn't will, hopefully, stick with their packages.

    @apapadimoulis said:

    If they were built with proper, modern (i.e. 2000 and later) approaches for Windows, they should be installable in minutes using an installer.

    If they were built with proper, modern (i.e. 1993 and later) approaches for Linux, they should be installable in minutes using the package manager.
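    For instance, on a typical Debian-based server, installing a properly packaged application together with all of its dependencies is one command:

    ```
    $ sudo apt-get install postgresql
    ```

    and on rpm-based distributions the equivalent is `dnf install postgresql` (or `yum` on older releases).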

    @apapadimoulis said:

    Really? I thought there was no central registry / component store, meaning applications just have to guess/hope that a particular component is installed.

    Linux does not have a big, fancy system for locating components, but most components either have a standard location or have their own discovery mechanism, which can be adjusted by the system administrator if they choose a non-standard location.

    And there are package managers that allow declaring that a component has to be installed, which Windows lacks altogether. They've introduced the Windows Store in Win8, but as far as I can tell, applications from the store can't depend on each other and must bundle everything.

    @apapadimoulis said:

    Aaaanyway, to my point, this is why you have less dependencies in Windows. There's no good reason to implement an HTTP stack, since it's already implemented.

    Some things that come bundled with Windows are certainly useful. But Windows has nothing to handle the things that don't come bundled with it. MSI is a joke compared to any Linux package manager.

    @apapadimoulis said:

    Is that even a thing? It's been a long while since I used apache, but will that take a request, and pass it to a process that's running under a different user context?

    At least the FCGI and WSGI interfaces do that, as does using Apache as a reverse proxy for Tomcat/JBoss/any other Java servlet container.
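    As a sketch of the reverse-proxy case (the path and backend port are illustrative, and mod_proxy must be enabled), Apache just marshals the HTTP request across to a backend running as its own user:

    ```
    # Forward everything under /app to a servlet container
    # listening on port 8080 under a different user account.
    ProxyPass        /app http://localhost:8080/app
    ProxyPassReverse /app http://localhost:8080/app
    ```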

    @apapadimoulis said:

    I thought it all ran under a single user, and you just spun up a new apache if you wanted a different user.

    You can't just spin up a new Apache, because it wouldn't be able to listen on the same port. The simplest mechanism is that Apache forks a worker which switches to the correct context. But remember that forking is very fast on Linux, while on Windows it is not; Windows often needs to use services and components for performance reasons where Linux gets away with forking helpers.
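    As a minimal sanity check (not a benchmark) of how cheap forking is on Linux, the following forks a couple hundred short-lived children and finishes almost instantly:

    ```shell
    # Fork 200 short-lived children via subshells; on Linux this
    # completes almost instantly, which is why a fork-per-request
    # worker model is affordable there.
    count=0
    for i in $(seq 1 200); do
      ( : ) &             # fork a child that exits immediately
      count=$((count+1))
    done
    wait                  # reap all the children
    echo "forked $count children"
    ```
    
    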

    Other mechanisms do indeed forward the request to a separate process.

    @apapadimoulis said:

    Does ngix do that?

    AFAICT nginx can use the same set of mechanisms as Apache. And it uses marshalling more often than Apache, because e.g. PHP has a module built into Apache, but nginx always calls it via FCGI (marshalling to a dedicated process) or CGI (spawning a helper), even when not switching contexts.
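    For instance, a typical nginx-to-PHP handoff always goes through FastCGI to a php-fpm pool in a separate process (the socket path below is illustrative and varies by distribution):

    ```
    location ~ \.php$ {
        # Hand the request to the php-fpm pool, which runs as
        # its own user in its own process, set in the pool config.
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
    ```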

    @Weng said:

    Nope, I have the same viewpoint. Containers solve an inherently Linux problem that Windows devs largely do not have.

    It is not inherently a Linux problem nor inherently a Windows one. It is simply that dependency management is hard and some developers prefer to avoid it. Arguably, the fact that many library developers chose to give up on maintaining backward compatibility does not help, and the fact that C++ makes maintaining backward compatibility a pain in the arse contributed to those choices.

    @blakeyrat said:

    Right; but again, you're talking about shitty software that doesn't do things the Windows way.

    Which is the same thing as talking about shitty software that doesn't do things the Linux way. Which is the whole point. There is a proper way of dealing with dependencies on each system and there are developers that don't bother with it on any of them.


  • FoxDev

    @blakeyrat said:

    Sun Microsystems is really awesome

    that's also a statement i have never said, ever.


  • FoxDev

    @Bulb said:

    It does have one security advantage though—the applications are separated, so a breach in one is less likely to compromise anything else.

    it also makes anyone who can run the docker command effectively root: http://reventlov.com/advisories/using-the-docker-command-to-root-the-host



  • @blakeyrat said:

    @mihi said:
    Ever tried to install some J2EE apps and an Oracle database on a Windows server?

    Right; but again, you're talking about shitty software that doesn't do things the Windows way. Oracle and Java don't have to be piles of shit to install, they are because Oracle (the company) doesn't know what the fuck they're doing when it comes to writing software.

    J2EE / JavaEE apps mostly aren't written by Oracle either.

    Then again, for a lot of JavaEE apps you just use a control panel in your appserver to install them (or drop them into the appserver's apps directory). You might have to set up a database connection pool for it too, but usually there's a GUI in the appserver's control panel for that.

    (Well... you might have to edit persistence.xml as well, depends on the app.)

