Explicating the survival of the Shell as default OSS UI


  • I survived the hour long Uno hand

    Exactly! So when people say "Meh, command-line is fine" because they're too lazy to work out how to make a usable GUI, I frown sternly in their direction.


  • Discourse touched me in a no-no place

    Depends on whether it is the best use of their time. Sometimes it might be. Sometimes not. Putting a snazzy GUI on a DB server won't make any difference if the thing randomly loses user data for a laugh. Prioritisation is a vital skill of a software engineer.


  • I survived the hour long Uno hand

    See, but you're changing the scenario. If someone says "I'd like to make a nice GUI but first I have to fix this high-priority bug" I have 0 problem with them.


  • BINNED

    @bp_ said:

    We're saying Linux should be usable.

    But most won't say it's usable unless it works like what they're used to, which means in practice that it needs to work the same way as Windows.


  • Discourse touched me in a no-no place

    @Yamikuronue said:

    See, but you're changing the scenario.

    No, because there are always competing priorities. If you've got time to do everything, you've got too much time.



  • @Yamikuronue said:

    I don't think anyone's saying the command-line should vanish.

    For the record, I think it's an interesting concept and also should stick around, but it does need to start changing and evolving, which it hasn't done in something like 20 years. It needs to be free to kill the cruft and replace it with something better, which it currently isn't able to do.



  • Thank you for making an incredibly long post that communicates nothing. Please jump into the nearest lava pit.


  • kills Dumbledore

    What are your thoughts on PowerShell? Perfect? Good start? Completely the wrong path?

    I haven't used it much, and find the syntax a bit confusing, but I do like being able to access properties and things.


  • I survived the hour long Uno hand

    My post was specifically calling out an attitude, not a behavior. You're changing the attitude while maintaining the behavior, and claiming it's the same scenario.



  • I'm not going to re-review the entire 300+ post topic, but to add an answer to the question posed by the title:

    • OSS OSes tend to be used more on servers than on desktops and GUIs take up memory that could be used by other services.

    Even Microsoft realizes this, which is one of the reasons why Windows Server 2008 and newer have a Server Core Installation Option which only has a command prompt.


  • Java Dev

    No, that just means nobody has managed to do remote GUI right yet. The only one I've found that actually works is web interfaces.



  • The OS being used primarily without GUIs is also why many OSS programs use the CLI as their primary interface.



  • @powerlord said:

    Even Microsoft realizes this, which is one of the reasons why Windows Server 2008 and newer have a Server Core Installation Option which only has a command prompt.

    Microsoft realizes idiot sysadmins are delusional about this.

    The reality is that Windows Server, even as far back as NT4, swapped-out the GUI when it wasn't being used, and the memory impact was as close to zero as is measurable.

    Problems and perceived problems are two different things.



  • @powerlord said:

    I'm not going to re-review the entire 300+ post topic, but to add an answer to the question posed by the title:

    OSS OSes tend to be used more on servers than on desktops and GUIs take up memory that could be used by other services.

    Even Microsoft realizes this, which is one of the reasons why Windows Server 2008 and newer have a Server Core Installation Option which only has a command prompt.

    @blakeyrat said:

    Microsoft realizes idiot sysadmins are delusional about this.

    The reality is that Windows Server, even as far back as NT4, swapped-out the GUI when it wasn't being used, and the memory impact was as close to zero as is measurable.

    Problems and perceived problems are two different things.

    Actually, RAM isn't the issue that I see with putting a GUI on a server. There are two very, very real issues with GUIs on servers, though:

    1. In this day and age, disk usage of GUI bits and bobs is actually a going concern, considering that many servers are VMs that boot off of SAN or NAS storage instead of physical servers with a presumably-ample local disk at hand, and thus have very limited disk space because network disk is far more expensive and precious.
    2. The more bits and bobs you have running, the more attack surface you expose. It shouldn't be possible to pwn your AD controller because the Windows font renderer had a bug in it. Ditching the GUI components keeps such things from happening, especially since low-level GUI code tends to be a complex affair that leans on manual memory handling for speed's sake. (I've had to deal with an app tickling a very old latent crash bug in the X11 Bresenham implementation, and I'm fairly sure that if you asked the GDI team, they'd have stories to tell of similar happenings on the Windows side of the ball.)

  • Discourse touched me in a no-no place

    @PleegWat said:

    No, that just means nobody has managed to do remote GUI right yet. The only one I've found that actually works is web interfaces.

    X actually works well remotely. You might not like the way the apps work and much of the current crop are crappy, but X actually works, and CDE was actually pretty well integrated way back when I was using it (late '90s). As a user, there was no need to drop to a CLI (except for the software I was developing at the time). Its insurmountable problems were that it was too closely associated with a failing platform (Solaris/SPARC); the PC was busy conquering everything at the time.

    The expectation of what is possible and what a platform should provide has changed quite a bit over the years.



  • @tarunik said:

    In this day and age, disk usage of GUI bits and bobs is actually a going concern, considering that many servers are VMs that boot off of SAN or NAS storage instead of physical servers with a presumably-ample local disk at hand, and thus have very limited disk space because network disk is far more expensive and precious.

    I can dig it, but I still think people are vastly over-exaggerating the impact of the GUI components. Is the Core Installation really that much smaller than the normal one? (I'm asking seriously; I honestly don't know.)

    @tarunik said:

    The more bits and bobs you have running, the more attack surface you expose.

    I can see this also, but I think it's over-exaggerated too. It's demonstrably not a problem in practice... when was the last time a server was pwned due to a problem with the GUI layer? Or even the Remote Desktop protocol? Drop in the bucket.

    You're more likely to be pwned by a bug in your network-enabled KVM switch.



  • @blakeyrat said:

    It's demonstrably not a problem in practice...

    Umm, http://secunia.com/advisories/product/42761/?task=advisories_2014

    Admittedly, most of the really nasty ones are from the "Microsoft Windows Flash Player", but there's a jpeg handling bug that enables system access, an SVG bug that enables system access, etc.


  • Java Dev

    @dkf said:

    X actually works well remotely.

    I've used X remotely. It works fine on a local connection (server in the server room in the same office), and gives an excellent user experience there. However, I've found, at least with my applications, that it's very latency-sensitive: VNC is preferable even for UK servers, and for US servers I avoid the GUI entirely.

    Part of the cause of this may be that, as I understand, remote X is a synchronous protocol, where each draw operation is sent to and confirmed by the server before the next draw operation starts.


  • Discourse touched me in a no-no place

    @PleegWat said:

    Part of the cause of this may be that, as I understand, remote X is a synchronous protocol, where each draw operation is sent to and confirmed by the server before the next draw operation starts.

    You can put it in that mode, but normally it isn't. Unless you've got applications that insist on synching all the time (or like to draw by scribbling on bitmaps by hand, which is how a lot of modern font renderers work, because the authors are the people who think that Wayland is the solution). Doing everything synchronously would suck over a network.

    Synch is forced by a few things, of course. Non-mouse grabs and clipboard manipulation are the cases I remember.
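
    To make the buffering concrete, here's a minimal Xlib sketch in C (the window size, colours, and the sleep are arbitrary choices of mine, and it assumes a reachable $DISPLAY). The XDrawLine calls only append requests to Xlib's client-side buffer; XFlush() pushes them without waiting, and only XSync() costs a full round trip to the server:

    ```c
    /* Build with: cc xdemo.c -lX11
     * Illustrative only: there is no Expose handling, so the lines may not
     * survive a redraw; the point is the request traffic, not the picture. */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);   /* NULL = $DISPLAY, possibly remote */
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0, 400, 300,
                                         1, BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        GC gc = XCreateGC(dpy, win, 0, NULL);
        XMapWindow(dpy, win);

        /* Each call just goes into the output buffer; nothing hits the wire yet. */
        for (int i = 0; i < 100; i++)
            XDrawLine(dpy, win, gc, 0, i, 399, 299 - i);

        XFlush(dpy);        /* send the buffered requests, don't wait for the server */
        XSync(dpy, False);  /* one explicit round trip: block until all are processed */

        sleep(2);
        XCloseDisplay(dpy);
        return 0;
    }
    ```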


  • BINNED

    @blakeyrat said:

    Is the Core Installation really that much smaller than the normal one? (I'm asking seriously; I honestly don't know.)

    Hmm. Am I reading this right?

    Be aware that 32 GB should be considered an absolute minimum value for successful installation. This minimum should allow you to install Windows Server 2012 R2 in Server Core mode, with the Web Services (IIS) server role. A server in Server Core mode is about 4 GB smaller than the same server in Server with a GUI mode.
    Source: http://technet.microsoft.com/en-us/library/dn303418.aspx

    and

    Reduced memory and disk requirements. A Server Core installation on x86 architecture, with no roles or optional components installed and running at idle, has a memory footprint of about 180 megabytes (MB), compared to about 310 MB for a similarly equipped Full installation of the same edition. Disk space needs differ even more—a base Server Core installation needs only about 1.6 gigabytes (GB) of disk space compared to 7.6 GB for an equivalent Full installation. Of course, that doesn't account for the paging files and disk space needed to archive old versions of binaries when software updates are applied. See Chapter 2 for more information concerning the hardware requirements for installing Server Core.
    Source: http://msdn.microsoft.com/en-us/library/dd184076.aspx



  • @PleegWat said:

    Part of the cause of this may be that, as I understand, remote X is a synchronous protocol, where each draw operation is sent to and confirmed by the server before the next draw operation starts.

    Also because most X applications do their own text and graphics rendering, and send the finished bitmap to the X server.


  • Java Dev

    Probably the one or two X apps I've used remotely are doing it wrong, then. The one I use frequently is HP Fortify, which is a Java application. The others are internal, but I wouldn't be surprised if they're Java too. Otherwise they're probably Perl.


  • Discourse touched me in a no-no place

    @VinDuv said:

    Also because most X applications do their own text and graphics rendering, and send the finished bitmap to the X server.

    Not also. Plain because. The issue is that nobody's actually gone and told the server (via appropriate extensions) how to do the drawing on behalf of the client, yet that is how the X protocol was designed to work. Heaving bitmaps back and forth all the time can't be an efficient or elegant solution.

    It's worse though, since using bitmaps all the time effectively forces synchronisation too, because you can't draw on the surface until you've copied it from the server, and the server can't actually put the surface on the screen until you've copied it back. Doing It Wrong Is Slow! Who knew?

    I blame the Xrender library, which is what's really doing the falling back to copy-copy. It's supposed to be paired with an extension which lets the server do the work, but I suspect it's either not widely deployed or only works where there's a shared memory transport available. Since it was all written by Keith Packard, I suspect the latter; Keith doesn't seem to really understand the need for having clients remote from servers.
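
    For contrast, here's a rough C sketch of the two styles being discussed, as a pair of helper functions (assuming a display `dpy`, a mapped window `win`, and a GC `gc` set up as in the earlier sketch; "fixed" is a core bitmap font most X servers still ship). The core-protocol text call is a tiny wire request and the server does the rendering; the XPutImage path ships a whole client-rendered pixel block across the wire, which is the copy-copy behaviour being complained about:

    ```c
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <stdlib.h>

    /* Style 1: server-side text. The wire request is a few dozen bytes;
     * the server picks the glyphs out of "fixed" and rasterises them itself. */
    static void draw_server_side(Display *dpy, Window win, GC gc)
    {
        Font f = XLoadFont(dpy, "fixed");
        XSetFont(dpy, gc, f);
        XDrawString(dpy, win, gc, 20, 20, "hello", 5);
        XUnloadFont(dpy, f);
    }

    /* Style 2: what most modern toolkits do: rasterise client-side (the buffer
     * here is just zeroed as a stand-in for, say, FreeType output) and heave
     * the finished bitmap to the server. Over a remote link that is the entire
     * pixel block, every time anything changes. */
    static void draw_client_side(Display *dpy, Window win, GC gc)
    {
        int scr = DefaultScreen(dpy);
        int w = 100, h = 20;
        char *pixels = calloc((size_t)w * h, 4);   /* pretend a font library filled this */
        XImage *img = XCreateImage(dpy, DefaultVisual(dpy, scr), DefaultDepth(dpy, scr),
                                   ZPixmap, 0, pixels, w, h, 32, 0);
        XPutImage(dpy, win, gc, img, 0, 0, 20, 40, w, h);
        XDestroyImage(img);                        /* also frees `pixels` */
    }
    ```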


  • Discourse touched me in a no-no place

    @blakeyrat said:

    I usually reply to each post individually just to spite Atwood.

    QFT.


  • I survived the hour long Uno hand

    @Luhmann said:

    Of course, that doesn't account for the paging files

    It uses a LOT of paging.


  • Discourse touched me in a no-no place

    @Yamikuronue said:

    It uses a LOT of paging.

    Switch to scrolls?


  • Discourse touched me in a no-no place

    I'm just trying to...uh, let's say "help." Yeah, that's it. "Help."



    This is why Wayland completely bypasses the X server. Of course, the problem with that is... Wayland can't be used over networks.



  • In Windows, the network-aware protocol isn't the same thing as the desktop protocol. That's why we can play video games.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    In Windows, the network-aware protocol isn't the same thing as the desktop protocol. That's why we can play video games.

    I doubt you'd want to play many games with the display rendering over a congested laggy network. Yes, it's an idea that occurs to people from time to time (I forget the name of the last group to try it; OnLive? something like that) and yes, it's a Bad Idea™.

    Your IDE might conceptually work fine over such a network though, or your word processor. In both cases, they mostly spend their time doing things other than pushing pixels to the screen.



  • @dkf said:

    I doubt you'd want to play many games with the display rendering over a congested laggy network.

    Yes that was exactly the point I just made in the post you are replying to. Thank you.



  • @blakeyrat said:

    In Windows, the network-aware protocol isn't the same thing as the desktop protocol. That's why we can play video games.

    This is not even wrong. *guffaws* 3D acceleration has had a bypass path around all the normal windowing facilities for as long as I have known modern operating systems, whether they are offspring of Torvalds or Cutler. Has it dawned on you that games reinvent the GUI wheel because they can't use the system UI toolkit? (AFAIK: it simply won't DTRT when drawing inside an area that's already claimed for 3D rendering duties. Something like Autodesk Inventor can get away with it because it has all the 3D rendering tucked away in its own viewports, and probably does something goofy for context menus.)

    Filed under: or am I wrong, and they simply refuse to use the system UI toolkit for reasons unknown? I really wish they would in that case, because then we wouldn't have bitter complaints from Israeli EVE players who wish they could send in-game mail in Hebrew to their buddies, but can't... (and EVE gets a lot more of Unicode right than many other games, especially ones of its era)



  • @tarunik said:

    Something like Autodesk Inventor can get away with it because it has all the 3D rendering tucked away in its own viewports, and probably does something goofy for context menus

    We provide cloud-based Autodesk solutions for a living, and I can tell you that Autodesk stuff doesn't get away with anything. We have to jump through a lot of hoops to get a good remote desktop experience, probably the same hoops that would be necessary to make a game work.

    What I can tell you is that even though Windows does a lot of bypassing of the window manager for 3D, it's all done in a way that pretty much always works, even remotely. We have our share of performance problems, but we rarely run into software that completely misbehaves.



  • @Jaime said:

    We have to jump through a lot of hoops to get a good remote desktop experience, probably the same hoops that would be necessary to make a game work.

    What I can tell you is that even though Windows does a lot of bypassing of the window manager for 3D, it's all done in a way that pretty much always works, even remotely. We have our share of performance problems, but we rarely run into software that completely misbehaves.

    Agreed that Windows does its darndest to make it all work. HOWEVER: the 'get away with it' I was referring to was using the system common controls, menus, et al. in conjunction with 3D-rendered viewports. (Which is impossible for games, as far as my impressions go.)



  • Isn't the entire Windows UI now drawn in 3D? Wasn't that the point in the Desktop Windows Manager / Aero switchover in Vista and newer?



  • @powerlord said:

    Isn't the entire Windows UI now drawn in 3D?

    No; it's drawn in 2D.

    It just uses the video card's memory and shaders to do the drawing, instead of wasting CPU on it.

    @powerlord said:

    Wasn't that the point in the Desktop Windows Manager / Aero switchover in Vista and newer?

    That and a few other things.


  • Banned

    A GUI isn't as portable, doesn't work with SSH, and isn't fun to write. I like everything being available as a CLI tool, so when I'm over SSH I can do anything.


  • Notification Spam Recipient

    @candlejack1 said in Explicating the survival of the Shell as default OSS UI:

    A GUI isn't as portable, doesn't work with SSH, and isn't fun to write. I like everything being available as a CLI tool, so when I'm over SSH I can do anything.

    Guys, I think I found an @fbmac alt!


  • area_can

    @blakeyrat said in Explicating the survival of the Shell as default OSS UI:

    I actually think a large part of the problem here is open source's reluctance to use any programming language more modern than C/C++ for application development

    And now here we are, with every application coming out these days being written in JS. How things change. Kinda funny how HTML/CSS has become the default UI toolkit.


  • BINNED

    @candlejack1 said in Explicating the survival of the Shell as default OSS UI:

    when I'm over SSH I can do anything.

    Even kidnapping random people? I was sure you would need a GUI for that. 🚎


  • Banned

    @antiquarian said in Explicating the survival of the Shell as default OSS UI:

    Even kidnapping random people? I was sure you would need a GUI for that.

    It needed to be very fast, as it had to happen before the user clicked the submit button. So it was over SSH, before I retired from that.

