Containers for Windows



  • @apapadimoulis said:

    Oh for fucks sake. This is why we (I'm pointing to @blakeyrat) hate the linuxautomobile ecosystem. WindowsFord, you have one servermodel: IISFiesta. And it's great, and runs circles around all of these rubbish serversmodels.

    CATFY


  • Discourse touched me in a no-no place

    @TwelveBaud said:

    With a Microsoft add-on package, even their shitty shell scripts work.

    There are some really good implementations of some of the classic Unix tools out there for Windows. They're not technically the same, but rather reimplementations that use the native Windows APIs, which is a lot more efficient on Windows. I used to use them a lot back when I had a laptop running XP (originally before SP1, so we're talking a while back).

    From a user programming interface perspective, Windows and Unix are not hugely different — they mostly do really similar stuff — but there's just enough difference in how things are factored to make writing portable stuff very hard.



  • @Onyx said:

    So the problem was developing shit for Sugar, not installing the base version?

    Are you talking community (free) version? That could well be very different from the badly-extended (by the SugarCRM team, no less) version I ended up developing from.


  • BINNED

    @Shoreline said:

    Are you talking community (free) version?

    Yes.

    @Shoreline said:

    That could well be very different from the badly-extended (by the SugarCRM team, no less) version I ended up developing from.

    Possibly. All I know is that what little docs I found were crap (Might be better on the paid side? No idea.) and that the higher-ups didn't like the workflow anyway, and tying in our stuff would be a pain, so I dodged that bullet.


  • Grade A Premium Asshole

    @Bulb said:

    You can't spin a new apache, because it wouldn't be able to listen on the same port.

    Not that this does what APap seemed to want to do, but you can (theoretically) start up a new apache on the same port if you're running on Linux 3.9 or later, but you can't run the second apache instance as a different user than the first. Grep for SO_REUSEPORT here: http://man7.org/linux/man-pages/man7/socket.7.html and maybe look at the LWN article on the option: https://lwn.net/Articles/542629/

    SO_REUSEPORT is designed for fair distribution of initial TCP or UDP connections between some number of applications.


  • Grade A Premium Asshole

    @dkf said:

    ...there's just enough difference in how things are factored to make writing portable stuff very hard.

    Remember that time MSFT went on a quest to discourage the use of the top N insecurely-used library functions in their libc, deprecated like three of the most commonly used libc functions used on Windows, Mac, or Linux, and replaced them with versions that, when you dug a little into their design, didn't actually fix the problems they set out to fix?

    Good times.



  • @Onyx said:

    All I know is that what little docs I found were crap (Might be better on the paid side? No idea.)

    Nope.

    @Onyx said:

    I dodged that bullet.

    You are the one. I feel more like Cypher, trying to bargain with the devil to avoid having anything to do with it. Fortunately I only needed to clean it from my CV rather than sell out Morpheus.



  • @apapadimoulis said:

    Well, sort of, except Windows/Microsoft implements most of the stuff your LOB applications need, so you really need to take few dependencies on anything, really at all.

    Yes, but only as long as you're content with limiting yourself to .NET/ASP, MSSQL for your database needs and with Microsoft release cycles which tend to be on the order of years.

    Well, the comparable thing to do in the Linux world is relying on stuff in the official archives of Debian Stable and/or Ubuntu LTS. It still gives you a bigger choice of tools and easier installation than the Microsoft stuff.

    But the Docker crowd is the kind of people who are not content with 2-year-old stable releases of anything. They want to use this hot new library downloaded off github revision 285cd35 and that even hotter library off bitbucket revision dead3376. And Windows doesn't offer anything to help you with that kind of stuff.

    In fact, it offers much less, because on Linux the package manager is there, able to express and handle dependencies, and making a basic package is no more than a couple of hours' work if you care enough to learn how. Creating a suitable repository for easy distribution is no more than a couple of hours' work either, and open-source projects can use Ubuntu's PPA service on Launchpad. On Windows, meanwhile, none of the provided methods for distributing 3rd-party stuff really works, as you admitted yourself.

    So that's why Docker for Windows. The people not content with the standard tools provided by Debian, Ubuntu and Red Hat are not content with the standard tools provided by Microsoft either. No, I don't like Docker. I think it is the wrong way. But there is nothing that makes it make more sense on Linux than on Windows.

    @apapadimoulis said:

    Oh for fucks sake. This is why we (I'm pointing to @blakeyrat) hate the linux ecosystem. Windows, you have one server: IIS. And it's great, and runs circles around all of these rubbish servers.

    Still, most heavy-duty content providers use Nginx on Linux for static data and often also something on Linux for dynamic content. Because Linux can run circles around Windows when it comes to disk access (so IIS may be able to run circles around something else on Windows, but not around some things on Linux). And Linux is cheaper, too.

    @FrostCat said:

    I wasn't going to bother, but since you mentioned it, IIRC you could always[1] avoid the problem, except for COM DLLs[2], by putting a copy of the version of the DLL you needed in your application's directory.

    Which essentially eliminates any advantage that DLLs may have. While not doing much with the downsides.


  • Discourse touched me in a no-no place

    @Bulb said:

    Which essentially eliminates any advantage that DLLs may have.

    Well, once we started having big hard drives, the "saving disk space" argument lost a lot of its urgency.

    @Bulb said:

    While not doing much with the downsides.

    ...except for the one we were talking about, DLL Hell. I'd consider the tradeoff totally[1] worth it.

    [1] well, mostly. But if you have redundancies of specific versions of a given DLL, there's things you could do if you really need that couple of megs of disk back. Some of them involve me giving you a nickel, kid, to buy a real computer.


  • ♿ (Parody)

    @apapadimoulis said:

    Several Microsoft teams have abandoned it (we did, long ago).

    What do you use?



  • @bugmenot said:

    SO_REUSEPORT is designed for fair distribution of initial TCP or UDP connections between some number of applications.
    But it's not context-aware, so it's useless in this context.

    @FrostCat said:

    But if you have redundancies of specific versions of a given DLL, there's things you could do if you really need that couple of megs of disk back.

    I saved a few dozen gig by hardlinking all the copies of the DirectX Setup, XNA Setup, and Mono together in my Steam folder. Steam's trying to do the same thing, by bundling them together and making the apps take a VMF dependency, but they're ... slow.


  • ♿ (Parody)

    @Bulb said:

    But the Docker crowd is the kind of people who are not content with 2-year-old stable releases of anything. They want to use this hot new library downloaded off github revision 285cd35 and that even hotter library off bitbucket revision dead3376. And Windows doesn't offer anything to help you with that kind of stuff.

    💡 ahh, ok, I'm starting to see this a little more clearly now ... the Docker defaults seem to be to grab the latest everything of everything (my experience with Discourse, and with our Docker ProGet build). You can work around that, but it felt like a big workaround.


  • Discourse touched me in a no-no place

    @apapadimoulis said:

    the Docker defaults seem to be grab the latest everything of everything

    I think it depends on whether you bind to the virtual name or the actual version identifier. The latter is bound to a very specific version of everything, so you can rely on it being something very exact. That's actually useful for certain types of audit tracking. (OTOH, no upgrades of anything at all that way.)


  • ♿ (Parody)

    @boomzilla said:

    @apapadimoulis said:
    Several Microsoft teams have abandoned it (we did, long ago).

    What do you use?

    Well turns out, MSI itself does absolutely nothing except copy files and register the files and installer itself in an obscure, often corrupted internal database (which can only be repaired through Mr. Fix it).

    If you want to actually "install things" like IIS websites or services, you either bundle an exe in your MSI and have the MSI call the exe, or (better) bundle the MSI in an exe, do all the actual install-y things, then run the MSI and pray nothing goes wrong.

    The "Remove Programs" thingy in Control Panel is a completely separate thing; it's just some basic metadata and a pointer to an uninstaller. If the uninstall fails or the uninstaller doesn't exist, the Remove Programs thingy asks if you'd like to remove it from the list.

    And that's just using MSI; building an MSI is an absolute nightmare.

    I don't even think the install tools (like InstallShield or whatever) build MSIs anymore.


  • ♿ (Parody)

    @apapadimoulis said:

    Well turns out, MSI itself does absolutely nothing except copy files and register the files and installer itself in an obscure, often corrupted internal database

    Ah...I only played with it a little bit once, so I assumed I just hadn't explored all that it could do.

    So...that was informative and I thank you for the information, but at the risk of repeating myself...

    @boomzilla said:

    What do you use?



  • @apapadimoulis said:

    Well turns out, MSI itself does absolutely nothing except copy files and register the files and installer itself in an obscure, often corrupted internal database (which can only be repaired through Mr. Fix it).

    It also supports per-user versus per-machine installs, advertised installs (Install-on-Demand), self-healing, Group Policy deployment, InstallCast (dedicated tab in Add/Remove Programs to grab installers from), and a whole bunch of other slicing and dicing, all at no extra effort over building the MSI itself.

    But no, no one uses any of those things. They're worthless.

    @apapadimoulis said:

    I don't even think the install tools (like InstallShield or whatever) build MSIs anymore.

    They do. Windows Update, ironically enough, does not; starting with Windows 7 it uses mysterious Windows-only "MSU" files instead.



  • Microsoft seems to agree with you about MSI, and is essentially building something dockerish into 10 at some point, so that you can, for instance, install Photoshop from the store, except it lives in a sandboxed environment with its own registry and doesn't have an installer at all.

    I don't know if it's the best idea, but I kind of like the idea that I'll be able to install software without having to hope I didn't accidentally install ten more Ask toolbars.


  • ♿ (Parody)

    @boomzilla said:

    @boomzilla said:
    What do you use?

    Just C# and WPF. We have a very lightweight component library (Inedo.Installer) that has some stuff shared between products. We had to write all those things when we had an MSI-based installer anyway, and the new way is simpler, does exactly what we need, and has full parity with the silent installation mode (by design). Then we bundle it with NSIS as a single exe and sign it.

    It seemed ridiculous to write your own installer (which is why we didn't at first), but after looking for more modern alternatives to the MSI/WiX rubbishpile (we had first looked at InstallAware/InstallShield), it was literally easier to write our own than to integrate with something else.

    @TwelveBaud said:

    It also supports per-user versus per-machine installs, advertised installs (Install-on-Demand), self-healing, Group Policy deployment, InstallCast (dedicated tab in Add/Remove Programs to grab installers from), and a whole bunch of other slicing and dicing, all at no extra effort over building the MSI itself.

    And all those just wrap well-defined Windows conventions. The cost of building the MSI itself is quite high, at least one that doesn't just copy files onto a computer. Anyway, we didn't need any of those things, so it didn't make sense for us.



  • @FrostCat said:

    Well, once we started having big hard drives, the "saving disk space" argument lost a lot of its urgency.

    Saving disk space is the least compelling argument. The most important advantage is that when a vulnerability is found in the library, you simply update the master copy and don't have to go hunting down everything that uses the vulnerable code. That is also the motivation for the LGPL stopping at the dynamic-link boundary.

    The other advantages are saving memory (because the mapped file is also shared) and CPU cache, which, though they have also grown, are not that huge compared to typical binary sizes.



  • @apapadimoulis said:

    I'll concede, the Live Store is probably bigger WTF than Docker for Windows, on so many levels.

    Well, the Live Store, and the WinRT jailed environment, is an altogether different kettle of fish. It was created because of a change in the threat model. On servers, and mostly on the desktop, security was about preventing remote attacks and preventing different users from interfering with each other, but it was generally assumed that the administrator only installs applications they trust and that that trust is mostly deserved.

    However, as more and more non-tech-savvy users get to computers, a new threat appears: untrustworthy applications. And this is getting worse with mobile, where the goal is to give everybody a smartphone. A smartphone usually has only one user, but they are expected to download many applications whose security they won't be able to judge. So now the applications need to be segregated to prevent them fucking with each other.

    I do feel that Microsoft overdid it somewhat, though. On Android you can't have a library package, but at least you can have packages that provide a service over IPC to be consumed by other applications, so at least some componentization is possible. Windows Phone doesn't allow anything remotely close to that.



  • @Bulb said:

    The most important advantage is that when a vulnerability is found in the library, you simply update the master copy and don't have to go hunting down everything that uses the vulnerable code.

    In theory, that sounds like a great idea.

    In practice, when you change a dependency, it changes things. Very frequently it changes more things than just the bugfix you care about, and so all too often these changes break things. That's what DLL Hell is all about. "Simply update the master copy and everything linked to it magically works" is as much a silly *nix myth as "everything is a text file" or "command lines are an acceptable form of user interface."

    This is why developers outside the *nix echo chamber have known for decades that bundling everything really is the best way (or at least the least bad way) to do it.



  • I want my desktop OS apps to be as sandboxed as mobile or browser apps. Why should a program inherit all my privileges by default instead of only getting access to files it created or I browsed to using an OS-provided file browser? That'd stop most viruses right then and there, since people who don't know how to use computers generally won't be able to navigate to whatever Windows system DLL the virus wants to infect.


  • Considered Harmful

    I want to write a program that finds files containing certain text.



  • We have that already. And yes, it works in the sandbox.



  • If you sandbox desktop apps, you'll get about the same functionality as you get on mobile. Pretty little dumb clients, with tentacles extending to their respective server side backends.

    Go to the Windows Store now, and enjoy your pick of shiny storefronts, freemium cowclickers and fart apps. The future has arrived.



  • @blakeyrat said:

    I have better things to do than talk to insane people.

    We all have. But talking is fun. Sometimes.



  • @ScholRLEA said:

    Anyone who isn't trolling on WTDWTF is in the wrong place.

    So I've been in the wrong place here for more than two hours when I was catching up reading the new posts?



  • @apapadimoulis said:

    And that's just using MSI; building an MSI is an absolute nightmare.

    Not with the right tools. (I use WiX.)

    @apapadimoulis said:

    I don't even think the install tools (like InstallShield or whatever) build MSIs anymore.

    InstallShield does. (Or did when I last used it a few years ago.)

    To build a decent MSI does mean you need to understand the MSI technology. You can't just click a couple buttons and expect it to work. Follow the rules, and things just work.

    @apapadimoulis said:

    Well turns out, MSI itself does absolutely nothing except copy files and register the files and installer itself in an obscure, often corrupted internal database

    It does a lot more than just copy files! :) And the only corruption I've seen is either caused by a bad installer or by interrupting the install process. Rebooting in the middle of an install is not a good idea.

    @apapadimoulis said:

    The cost of building the MSI itself is quite high

    Disagree. The cost of building isn't high. But the cost of learning MSI is.



  • @Mason_Wheeler said:

    In practice, when you change a dependency, it changes things. Very frequently it changes more things than just the bugfix you care about, and so all too often these changes break things.

    Then You're Doing It Wrong. Security updates don't change anything except the vulnerability. See Debian's security support for an example of how to do it right.

    @Mason_Wheeler said:

    DLL Hell

    isn't called .so hell for a reason. Shared libraries on unix-style systems work far better than they do on Windows and always have.

    @Mason_Wheeler said:

    "everything is a text file"

    You mean "everything is a file", not always text. But where is the file that represents my network interface? That's the one that has always stood out for me.

    @Mason_Wheeler said:

    bundling everything

    Ew, yuck, don't do it, etc.



  • @another_sam said:

    @Mason_Wheeler said:
    DLL Hell

    isn't called .so hell for a reason. Shared libraries on unix-style systems work far better than they do on Windows and always have.

    There's actually no technical reason why shared objects are handled better on Linux (or any other similar system). It's the simple convention of naming them uniquely between binary-compatibility-breaking changes (libfoo5.so) that actually does the trick of not having a hell in the first place. Those fancy assemblies on Windows seem to overcomplicate a simple DLL-naming issue by masking many versions behind the same name.

    Then there's something like ffmpeg, which creates its own hell by being API-incompatible every single commit, so everyone just bundles it because of that. Stupidity can break a working dependency system.


  • BINNED

    @another_sam said:

    where is the file that represents my network interface?

    You mean you never use netcat to look inside?

    @hifi said:

    There's actually no technical reason why shared objects are handled better on Linux

    There is: simple .so versioning, as well as symbol versioning (the latter mostly for core libraries, because it requires effort).
    DLL Hell is not fixed, contrary to what is claimed; whoever claims side-by-side assemblies have fixed it has never shipped multiple third-party libraries with conflicting dependencies to see the joy of closed source and DLL Hell.



  • @dse said:

    @hifi said:
    There's actually no technical reason why shared objects are handled better on Linux

    There is: simple .so versioning, as well as symbol versioning (the latter mostly for core libraries, because it requires effort).

    API versioning vs. symbol versioning: pretty much the same thing if you want to keep backwards compatibility in the same DLL/so. Symbol versioning makes it more transparent, while API versioning is something you need to plan for. Still not really groundbreaking.



  • @Mason_Wheeler said:

    In practice, when you change a dependency, it changes things. Very frequently it changes more things than just the bugfix you care about, and so all too often these changes break things.

    That's the job of the security team: to prepare updates for the stable releases that don't change anything except plugging the security holes. The security teams of major Linux distributions do a pretty good job of it.

    It also depends a lot on the culture of a given language community. C, and to an extent C++, library maintainers usually take pride in providing a stable ABI, and the more widely used C libraries have had stable ABIs for years. Plus, on Unix the shared libraries have version numbers, so as long as the maintainer cares, ABI updates can be handled safely too.

    On the other hand, the Java community never gave a damn, nor did the Ruby community; they just took to bundling everything.

    @hifi said:

    libfoo5.so

    It is libfoo.so.5.2.4, to which a symlink libfoo.so.5 points (and the libfoo.so symlink marks the default for linking). foo5.dll is what you are stuck with on Windows. It can mostly do the job (and does, for Cygwin and MSYS2, which use DLLs almost like .sos on Linux), but the difference is that nothing tells you you should do it that way. On Unix, it does.

    @dse said:

    DLL-hell is not fixed as opposed to what is claimed, whoever claims side-by-side assembly has fixed it has never shipped multiple third-party libraries with conflicting dependencies to see the joy of closed source and DLL-hell.

    Windows also overcomplicates DLLs by having mutually incompatible compiler options in general use. Windows has separate debug and release runtimes and separate dynamic and static runtimes, and it also used to have a non-threaded runtime. And code must be compiled differently to link against different (static versus dynamic) dependencies. Since an executable must be linked against consistent versions, you get a combinatorial explosion of library variants.

    On the other hand, Linux has just one shared version of libc, libstdc++ and the other system libraries, so everything else can also make do with just one variant. And even if you have static variants, they can be mixed and matched as needed when linking the final executable, because the compiler does not care about the distinction; only the linker does.


    @another_sam said:

    But where is the file that represents my network interface?

    Sockets are almost files. You still read them with read() and write them with write(); you only open them differently.

    Yes, it would have been more unixy to be able to simply open("/dev/comm/ip/stream/10.20.30.40/666", O_RDWR), but back then the BSD authors probably considered virtual paths like that too complicated to implement. There are other deviations from the files approach, like SysV IPC, though that one has been rectified with /dev/shm to a large extent.

    @dse said:

    You mean you never use netcat to look inside?

    Well, if they were true files, you could have used cat and wouldn't need special netcat.



  • I want somehow to explain to me how Linux treats something like a video capture card (with TV and FM tuner) as a "file".



  • @blakeyrat said:

    I want somehow to explain to me how Linux treats something like a video capture card (with TV and FM tuner) as a "file".

    Made-up pseudocode (the device path and ioctl constant are invented, not a real driver API):

    fd = open("/dev/fmtuner")
    ioctl(fd, SET_FREQUENCY, 101.5)
    read(fd) /* here comes an audio stream */


  • BINNED

    This is what I use for video4linux2

    fd = open("/dev/video0", O_RDWR);
    

    Then you can just memory-map it to some buffer. Frame buffers work perfectly with the file analogy.


  • Notification Spam Recipient

    @boomzilla said:

    I can't even grow a beard without it being ginger, which is not grey, so that sucks.

    Most of the ginger in my beard has turned grey, which is a little sad, but I've learned to cope and keep the kids off the lawn.

    Gray hair is the best. Makes you look like you've been in the wars.

    @Shoreline said:

    Where the fuck are you guys getting your magical apache software from that it actually works? Is there some black market?

    Racist.

    boxfuse! I will shill this company forever or until someone pwns my bullshit!


  • ♿ (Parody)

    @Bulb said:

    Well, Live Store, and WinRT jailed environment, is an altogether different kettle of fish. It was created because of change in the threat model.

    I don't know if the threat model was on Steven Sinofsky's mind, but yes, apps/WinRT are a nice sandbox. Just an awful desktop experience, which is more what I was getting at....


  • ♿ (Parody)

    @dcon said:

    And the only corruption I've seen is either caused by a bad installer or by interrupting the install process. Rebooting in the middle of an install is not a good idea.

    This. There were a few critical moments during installation that could take an awfully long time (like a SQL installation), and a handful of customers ignored all warnings and killed the process. Or they would try to manually uninstall it. Or who knows what.

    @dcon said:

    Disagree. The cost of building isn't high. But the cost of learning MSI is.

    It's been a few years since we abandoned it, but IIRC the commercial tools (InstallShield, etc.) weren't designed for non-interactive use (thus automating them on a build server was a pain), and WiX meant that just one guy on the team was "the installer guy", whereas now anyone can maintain it.


  • ♿ (Parody)

    @blakeyrat said:

    I want somehow to explain to me how Linux treats something like a video capture card (with TV and FM tuner) as a "file".

    You're thinking at the wrong abstraction layer.

    By using files for everything, you don't need to interpret the bytes that things like TV/FM tuners provide, you can just use BlockCopy and never worry about how those things are encoded.



  • @DogsB said:

    boxfuse

    Investigating...


  • ♿ (Parody)

    @blakeyrat said:

    I want somehow to explain to me...

    Why? It didn't make any difference the last few times.



  • @Mason_Wheeler said:

    "Simply update the master copy and everything linked to it magically works" is as much a silly *nix myth as "everything is a text file" or "command lines are an acceptable form of user interface."

    Maybe because in the Linux world, userspace API compatibility is taken rather seriously? As is making sure documentation doesn't lie to your face about what APIs do.

    Also, we mostly don't have the habit of using undocumented functions and APIs. I hear Windows programmers recently started to grow out of that habit too, and good for them, although it still looks a bit retarded.



  • @DogsB said:

    boxfuse

    I've started an account and done no other research so far, and I'd like your opinion/guidance/hand-holding/rainbows.

    What's your feelings on this in terms of these steps (yes, the question is that open):

    • download boxfuse
    • run against project environment config, and gain access to dev tools for the project (e.g. server, unit tests, linting)

    Does it automatically do this (based on a config file or equivalent for a specific project):

    • download dev packages, e.g. npm/bower
    • run stuff, e.g. npm install, bower install, grunt build

  • Discourse touched me in a no-no place

    @apapadimoulis said:

    now anyone can maintain it.

    ....still waitin' to hear what you're using. Unless Discourse read some posts for me.




  • Notification Spam Recipient

    @Shoreline said:

    @DogsB said:
    boxfuse

    I've started an account and done no other research so far, and I'd like your opinion/guidance/hand-holding/rainbows.

    What's your feelings on this in terms of these steps (yes, the question is that open):

    • download boxfuse
    • run against project environment config, and gain access to dev tools for the project (e.g. server, unit tests, linting)

    Does it automatically do this (based on a config file or equivalent for a specific project):

    • download dev packages, e.g. npm/bower
    • run stuff, e.g. npm install, bower install, grunt build

    I usually just pass it the war or jar as an argument and it pops out a VM image ready to go in about a minute. It will probably take longer the first time because it has to download everything, but the end product is usually 50 MB + the size of your war/jar. It's magical. I'm waiting for someone to burst my bubble though.


  • Containers for Windows have been mentioned before: http://ars.userfriendly.org/cartoons/?id=19980531



  • @dse said:

    Then you can just memorymap to some buffer. Frame buffers work perfectly with file analogy.

    ... oookay, but how do you tell it what channel you want? Whether you want it in FM or TV mode? Etc.

    You need 2-way communication. And it needs to send audio and video information simultaneously. And the video is likely compressed with MPEG2 or MPEG4 or something. And the audio's compressed with AAC or MP3 or something. And it's a file? How does that make sense?

    You're like "frame buffers work", but how does the application know what compression format the frame buffer's in? How does it get the audio? Is it muxed in the same file, or... a separate file? How does an application going in "blind" discover any of this stuff?


  • Considered Harmful

    One might need a whole tree of files, then...

