Javascript is to Java as JScript is to J?



  • @blakeyrat said:

    When I run a single Java desktop program that doesn't suck shit, maybe I'll start changing my mind. If you want people to think your little pet language is good, you need to get your community to stop releasing shit software written in it.

     

    JDownloader?




  • @blakeyrat said:

    Yes, well, problem number one is deciding to write a CLI interface in the first fucking place. Thankfully, Amazon has finally gotten around to writing a web interface for all their services and their CLI tools sit around unused.

     

    What is your weird obsession with all things CLI? I fail to see the problem with command line interfaces (there's a wtf that you called it "CLI interface", +1000 stupid points for the redundant word) - they actually make some things a hell of a lot easier to do.




  • @dtech said:

    Actually quoting Facebook for choosing Java while a few simple googles could've told you that Facebook is written in PHP and compiles it to C++.
     

    Veggen was referring to Cassandra, not Facebook.



  • @ASheridan said:

    @blakeyrat said:

    Yes, well, problem number one is deciding to write a CLI interface in the first fucking place. Thankfully, Amazon has finally gotten around to writing a web interface for all their services and their CLI tools sit around unused.

     

    What is your weird obsession with all things CLI? I fail to see the problem with command line interfaces (there's a wtf that you called it "CLI interface", +1000 stupid points for the redundant word) - they actually make some things a hell of a lot easier to do.

     

    NOT THIS SHIT AGAIN!



  • @blakeyrat said:

    They do the same thing with .net, in case you haven't noticed.

    Nope, hadn't - but then I'm not a .net user, so ta for the info.

    @blakeyrat said:

    Supposedly it's a huge pain to make bleeding-edge games in OpenGL for exactly this reason...

    .. and once a much easier method is provided (DirectX), people find it hard to justify going down the more arduous route. Surely OpGL will die if it retains this mentality?

    @tdb said:

    And since Microsoft isn't collaborating with the Khronos group, they have to wait for Microsoft to release the new version of DirectX before they can update the OpenGL spec to match.

    I'm not convinced about that - browser vendors didn't wait for updates to IE6 so they could play catch-up; they went ahead and innovated anyway, and suddenly it was Microsoft that began implementing other browsers' functionality into its products.

    To make OpGL successful, two things need to happen:

    • give a reason to continue to use it - clear out cruft, make things better and easier for existing users
    • give a reason to begin using it - don't wait for Microsoft to dictate the next thing in DX so OpGL can clone it, start coming up with new and exciting ways of rendering graphics NOW that get the juices of the games industry flowing and give developers a reason to choose it over DX (or at least offer it as a parallel option).

    @ASheridan said:

    What is your weird obsession with all things CLI? ...they actually make some things a hell of a lot easier to do.

    Blakey's a UI bod, so prefers mouse-driven over keyboard-driven. I think some past (bad) experiences running a MUD on Linux have also traumatised him away from the command line. You've got to read those rants as reasons why he won't use the CLI, rather than reasons why it shouldn't be used.



  • @dtech said:

    I think the troll (veggen) is hilarious. Actually quoting Facebook for choosing Java while a few simple googles could've told you that Facebook is written in PHP and compiles it to C++.

    S/he said Cassandra, which is used by Facebook and written in Java (not the Facebook website).



  • @Cassidy said:

    Blakey's a UI bod

    ... what the hell does that mean?

    @Cassidy said:

    You've got to read those rants as reasons why he won't use the CLI, rather than reasons why it shouldn't be used.

    CLIs, at least all the ones I've been exposed to (with the possible exception of PowerShell -- although I don't have enough experience with that to really make the call), are shit UIs. That's the reason I come here and say they're shit UIs.

    I have no issues with the idea of a command line interface. It's definitely something that can be done well. The problems are:
    1) Much like the Java thing, I've never seen a CLI that wasn't shit
    2) CLIs are completely, 100% stagnant. The primary reason PowerShell is a possible exception is that it's not 30 years old

    Stagnation is death.

    But the idea that there's no such thing as a "friendly" CLI is stupid. You can make a friendly CLI if you actually gave a shit and tried to make one. It's probably harder than making a friendly GUI, but it's certainly not impossible-- it's just nobody's fucking tried.



  • @blakeyrat said:

    But the idea that there's no such thing as a "friendly" CLI is stupid. You can make a friendly CLI if you actually gave a shit and tried to make one. It's probably harder than making a friendly GUI, but it's certainly not impossible-- it's just nobody's fucking tried.

    Out of curiosity, what would be required of a good CLI? I've been using Linux for the past 10 years and gotten used to the way its CLI tools work, so I might be stuck inside the box. I'm interested to hear your ideas though, so that I could write better software in the future.



  • @blakeyrat said:

    ... what the hell does that mean?

    User-Interface person. I've read many of your rants about crap implementations of the meatspace<->cyberspace layer. They're a fascinating insight into how important presentation and data capture are.

    @blakeyrat said:

    But the idea that there's no such thing as a "friendly" CLI is stupid. You can make a friendly CLI if you actually gave a shit and tried to make one. It's probably harder than making a friendly GUI, but it's certainly not impossible-- it's just nobody's fucking tried.

    There doesn't seem to be a requirement for it: the Linux/Unix shell isn't user-hostile, it's more that it's noob-intolerant in exchange for speed and simplicity. Changing it to improve the user environment is largely a matter of subjective taste, so it's considered part of user-controlled customisation. Windows is aimed at the more novice computer user, for whom pointy-clicky methods are a better fit than typed commands.

    .. unless we factor in SpagettiSwamp. Obviously then all bets are off.

    @tdb said:

    I've been using Linux for the past 10 years and gotten used to the way its CLI tools work, so I might be stuck inside the box.

    If you've used HP-UX, Solaris, or SCO (*spit*) beforehand, you'll see the advances in Linux towards friendliness. But sometimes it amounts to nothing more than giving the cluebat a fresh lick of paint - the pain is identical, but you're impressed at the way it shines as it glances off your forehead.





  • @Cassidy said:


    I'm not convinced about that - browser vendors didn't wait for updates to IE6 so they could play catch-up; they went ahead and innovated anyway, and suddenly it was Microsoft that began implementing other browsers' functionality into its products.

    To make OpGL successful, two things need to happen:

    • give a reason to continue to use it - clear out cruft, make things better and easier for existing users
    • give a reason to begin using it - don't wait for Microsoft to dictate the next thing in DX so OpGL can clone it, start coming up with new and exciting ways of rendering graphics NOW that get the juices of the games industry flowing and give developers a reason to choose it over DX (or at least offer it as a parallel option).

     

    Completely different situation:

    That won't happen, because updates to the specification are decided by a committee which consists of:

    - major vendors (ATI, Nvidia, Intel), which naturally hate each other, don't want to share any knowledge and won't collaborate unless they're either forced to (DirectX, once the feature is no longer fresh) or they see an advantage in playing nice (OpenCL, CUDA). Why do you think the whole h.264-acceleration thing is such a hassle?

    - embedded systems vendors that actually really care about OpenGL ES, or WebGL, or other shiny new stuff (TI, Google with Android/ChromeOS, ARM)

    - operating system vendors that only vaguely profit from actual innovation. Mac never had a major focus on high-end gaming, and the Mesa/Linux people are still trying to catch up with OpenGL 3.0

    Please note that even if anybody had the actual manpower for your ideas (which was the case with Firefox), you can't just kick out an API and call it a day. You need hardware for it. Which has to sell. Which it won't, because there are neither vendors, nor games, nor infrastructure.




  • @Cassidy said:

    @blakeyrat said:

    ... what the hell does that mean?

    User-Interface person. I've read many of your rants about crap implementations of the meatspace<->cyberspace layer. They're a fascinating insight into how important presentation and data capture are.

    Which of these definitions of "bod" were you going for?

    @Cassidy said:

    There doesn't seem to be a requirement for it: the Linux/Unix shell isn't user-hostile, it's more that it's noob-intolerant in exchange for speed and simplicity.

    You're just replacing one stereotype with another. "CLI = hard to use" is turning into "easy to use = inefficient." Neither of those stereotypes are true. (Rather, they don't have to be true.)

    Mac Classic was, in its era, by far the most productive computer system while simultaneously being the easiest to use.

    @Cassidy said:

    Changing it to improve the user environment is largely a matter of subjective taste, so it's considered part of user-controlled customisation.

    That's exactly like saying "we can fix all our usability problems if we add theming to our application." Dead. Fucking. Wrong. Worse than wrong, because it gets people writing code in the wrong direction.



  • @tdb said:

    @blakeyrat said:

    But the idea that there's no such thing as a "friendly" CLI is stupid. You can make a friendly CLI if you actually gave a shit and tried to make one. It's probably harder than making a friendly GUI, but it's certainly not impossible-- it's just nobody's fucking tried.

    Out of curiosity, what would be required of a good CLI? I've been using Linux for the past 10 years and gotten used to the way its CLI tools work, so I might be stuck inside the box. I'm interested to hear your ideas though, so that I could write better software in the future.

    Thing is, I'm probably in the box as well. What you really need is to define a few tasks a CLI would be useful for (and this is where I am in the box), and then start from scratch and do frequent and regular user testing. The main point is to do testing with human beings, not uber-nerd "high priesthood of technology" idiots. There's no reason your grandma shouldn't be able to use the CLI to do a mail-merge for her knitting club.

    The cultural problem is that the only group with any interest in the CLI whatsoever is the uber-nerd "high priesthood of technology" idiots, which is exactly why it's completely stagnant and will never improve. As long as the CLI is difficult, their position at the top of their bullshit fake hierarchy where they can pretend to be better than everybody else is secure.



  • @Cassidy said:

    I'm not convinced about that - browser vendors didn't wait for updates to IE6 so they could play catch-up; they went ahead and innovated anyway, and suddenly it was Microsoft that began implementing other browsers' functionality into its products.

    I'm not convinced that this is a valid comparison. HTML was already an existing standard, and all the browsers were able to do the basic things. What you described would be more akin to different hardware vendors making their own extensions to OpenGL before waiting for Khronos to update the core spec - and this is exactly what's happening.

    @Cassidy said:

    To make OpGL successful, two things need to happen:

    • give a reason to continue to use it - clear out cruft, make things better and easier for existing users
    • give a reason to begin using it - don't wait for Microsoft to dictate the next thing in DX so OpGL can clone it, start coming up with new and exciting ways of rendering graphics NOW that get the juices of the games industry flowing and give developers a reason to choose it over DX (or at least offer it as a parallel option).

    I find it hard to imagine what new things OpenGL could offer that wouldn't have equivalents in DirectX by the time they're actually usable performance-wise. Consider that DirectX 10 was released in 2006, and there are still [url=http://en.wikipedia.org/wiki/Civilization_V]recently published games[/url] as well as [url=http://us.battle.net/support/en/article/diablo-iii-system-requirements]upcoming ones[/url] that use DirectX 9. One possible thing would be real-time radiosity, but I'm not sure if that needs new hardware/API features as much as new methods and algorithms for using what's already available. Maybe something to control really huge amounts of data and render trees with individual leaves?

    Besides the available features, there are other costs associated with switching APIs. If your developers are all of the DirectX variety, they're not going to learn OpenGL overnight. And you'll need new systems for handling audio and input as well, if you truly want to leverage OpenGL's greatest advantage which is its portability. Vendor loyalty should not be underestimated either. I've heard that Windows Phone is a good choice for users because it's "safe and familiar" - as far as I can see, the only familiar thing about it is the name "Windows". And I don't even know what kind of incentives Microsoft might be offering game developers to use its APIs.

    If you really want to build up pressure for an API switch, you'll need something revolutionary; preferably a complete paradigm switch. Affordable hardware that can do real-time ray-tracing could be such a thing. That'd need an entire new API and a new way of thinking about the 3D data, but it would provide huge advantages in fields like reflections, refractions and radiosity which are notoriously hard to do with a polygon-based rasterizer.



  • @blakeyrat said:

    But the idea that there's no such thing as a "friendly" CLI is stupid. You can make a friendly CLI if you actually gave a shit and tried to make one. It's probably harder than making a friendly GUI, but it's certainly not impossible-- it's just nobody's fucking tried.
    I disagree, but only because I go much further than you. CLIs are bad and will always be bad (as user interfaces), although they could be better. They're not really user interfaces so much as opportunities to enter a single line of code at a time, though.

    And GUI is a complete misnomer, in my book - the user interfacing is graphical only on the output side. I'm not just being pedantic about language there, by the way: I mean that until you're interacting directly with the displayed objects - which will require proper 3d displays and motion tracking - it's a temporary workaround at best.

    Although technically even a row of dip-switches is a user interface, in the truer meaning of the phrase there is very little which actually fits the bill. Noteworthy exceptions are the accelerometers in smartphones, and laptops which detect when they're closed. Laptop screens and smartphones are actual physical objects, but the point is that we only ever 'interface' with things using our actual physical bodies. A decent UI would allow us to interact with virtual objects as simply, easily and intuitively as with actual ones - which means that whilst they don't have to have mass or solidity, they do have to occupy volume and detect when they are 'touched'.

    Whilst keyboards aren't going to be beaten any time soon as a means of entering text, entering text is an inherently bad idea to base a user interface on.



  • @tdb said:

    I'm not convinced that this is a valid comparison. HTML was already an existing standard, and all the browsers were able to do the basic things. What you described would be more akin to different hardware vendors making their own extensions to OpenGL before waiting for Khronos to update the core spec - and this is exactly what's happening.

    Which is why we have one unified syntax for CSS. Oh wait, no we don't.

    Pretty much what's going on is what you're describing. Browser vendors are making their own extensions to CSS and JS (which is what most people mean when they talk about HTML5) without waiting for the W3C to finish the spec - and suddenly there are five different ways to create a gradient.



  • @blakeyrat said:

    There's no reason your grandma shouldn't be able to use the CLI to do a mail-merge for her knitting club.
    No, but isn't it always the case that it would be easier to teach her to do it with a GUI? Multiple-choice tests are always easier than those with essay questions.



  • @tdb said:

    One possible thing would be real-time radiosity, but I'm not sure if that needs new hardware/API features as much as new methods and algorithms for using what's already available. Maybe something to control really huge amounts of data and render trees with individual leaves?

    If you really want to build up pressure for an API switch, you'll need something revolutionary; preferably a complete paradigm switch. Affordable hardware that can do real-time ray-tracing could be such a thing. That'd need an entire new API and a new way of thinking about the 3D data, but it would provide huge advantages in fields like reflections, refractions and radiosity which are notoriously hard to do with a polygon-based rasterizer.

     

    This kind of talk made me hot, and also made me realise I've proficiated in the wrong part of the field, programming-wise.

    Time to jump ship.

    Any recommendations for starting out painting some pixels on the screen with room for evolving into letting my code have long conversations with the video card?

    ... I feel like I've asked this before.



  • @fterfi secure said:

    @blakeyrat said:
    There's no reason your grandma shouldn't be able to use the CLI to do a mail-merge for her knitting club.
    No, but isn't it always the case that it would be easier to teach her to do it with a GUI? Multiple-choice tests are always easier than those with essay questions.

    That's exactly why I'm in the box. As I mentioned. Because I don't see a need for the CLI existing at all. I would definitely be the wrong person to design the "usable CLI".



  • @blakeyrat said:

    @tdb said:
    @blakeyrat said:

    But the idea that there's no such thing as a "friendly" CLI is stupid. You can make a friendly CLI if you actually gave a shit and tried to make one. It's probably harder than making a friendly GUI, but it's certainly not impossible-- it's just nobody's fucking tried.

    Out of curiosity, what would be required of a good CLI? I've been using Linux for the past 10 years and gotten used to the way its CLI tools work, so I might be stuck inside the box. I'm interested to hear your ideas though, so that I could write better software in the future.

    Thing is, I'm probably in the box as well. What you really need is to define a few tasks a CLI would be useful for (and this is where I am in the box), and then start from scratch and do frequent and regular user testing. The main point is to do testing with human beings, not uber-nerd "high priesthood of technology" idiots. There's no reason your grandma shouldn't be able to use the CLI to do a mail-merge for her knitting club.

    The strongest selling point of CLI is probably scripting, which is useful for scheduled and otherwise automated or repeated tasks. In simple cases GUI programs could provide the same functionality, but things start getting complex once you want to chain multiple commands together (I'm running networked syslog and the server groups logs by host and date; I have a cron script that compresses log directories older than a week and deletes those older than a year). Many operations involving files are also faster using glob patterns than clicking on a list with a mouse.
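
    For illustration, that whole nightly job is only a handful of lines of shell. The paths and layout below are made up (it assumes one directory per host per day under the log root), but it's the shape of the thing:

        #!/bin/sh
        # nightly tidy-up for a networked syslog server (illustrative paths only)
        LOGROOT=/var/log/remote

        # compress per-day directories older than a week, then remove the originals
        find "$LOGROOT" -mindepth 2 -maxdepth 2 -type d -mtime +7 |
        while read -r dir; do
            tar czf "$dir.tar.gz" "$dir" && rm -rf "$dir"
        done

        # throw away compressed archives older than a year
        find "$LOGROOT" -name '*.tar.gz' -mtime +365 -exec rm -f {} \;

    Try expressing that as a series of clicks in a file manager, then scheduling it to run unattended at 3 am.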

    It could also be that the optimal solution is neither CLI nor GUI alone. [url=http://acko.net/blog/on-termkit/]This[/url] seems like an interesting idea. I don't agree on certain implementation details (why do they have to stick browser engines everywhere), but the concept is something I'd like to try myself and expand upon.


  • ♿ (Parody)

    @blakeyrat said:

    @fterfi secure said:
    @blakeyrat said:
    There's no reason your grandma shouldn't be able to use the CLI to do a mail-merge for her knitting club.

    No, but isn't it always the case that it would be easier to teach her to do it with a GUI? Multiple-choice tests are always easier than those with essay questions.

    That's exactly why I'm in the box. As I mentioned. Because I don't see a need for the CLI existing at all. I would definitely be the wrong person to design the "usable CLI".

    Good lord, this subject gets stupid and repetitive. Fortunately, in the real world, we all don't live in the same box. The CLI can compress the spatiality of the GUI at the expense of easy discoverability, etc. In some cases, this is a huge win. In others, not so much. There is room for improvement in both paradigms, and lots of examples of really dumb stuff in both. They need to coexist, regardless of whether certain individuals can be productive with one or the other.



  • @MiffTheFox said:

    @tdb said:

    I'm not convinced that this is a valid comparison. HTML was already an existing standard, and all the browsers were able to do the basic things. What you described would be more akin to different hardware vendors making their own extensions to OpenGL before waiting for Khronos to update the core spec - and this is exactly what's happening.

    Which is why we have one unified syntax for CSS. Oh wait, no we don't.

    Pretty much what's going on is what you're describing. Browser vendors are making their own extensions to CSS and JS (which is what most people mean when they talk about HTML5) without waiting for the W3C to finish the spec - and suddenly there are five different ways to create a gradient.

    I'm going to be pedantic here and note that the syntax is the same, but semantics are different. It's not like browser vendors are going completely solo either; the CSS spec defines a way for implementors to provide additional functionality, and it's guaranteed to not interfere with any future spec. So again, this is extending the existing system, not an entirely new and incompatible one.

    For a better analogue, imagine that Firefox had shipped with a proprietary stylesheet syntax based on Lisp, which had all kinds of cool features like nested styles and variables. Webmasters would face the choice of using the NSL (nested style list) syntax, which only works in Firefox, or expending more effort to do the same things with CSS but gaining cross-browser compatibility.



  • @blakeyrat said:

    @fterfi secure said:
    @blakeyrat said:
    There's no reason your grandma shouldn't be able to use the CLI to do a mail-merge for her knitting club.
    No, but isn't it always the case that it would be easier to teach her to do it with a GUI? Multiple-choice tests are always easier than those with essay questions.

    That's exactly why I'm in the box. As I mentioned. Because I don't see a need for the CLI existing at all. I would definitely be the wrong person to design the "usable CLI".

    I struggle to understand how a CLI can be 'good' whilst also not being the best way to do it.

    I think in this case thinking outside the box consists of recognising that command line entry is not really a user interface at all - it's a data entry method. Sometimes, at the moment, entering data relatively directly is the simplest way to control/program a computer, but the whole point of a user interface is to abstract that process and make it more intuitive.

    As I said, you could enter your commands with a row of dipswitches, one character at a time. All keyboards do is make that easier, and the same goes for things like autocomplete. Ultimately you still have to know what command it is you need to enter, whereas a GUI is presenting you with a list of options. Obviously a text interface can present a list of options in the same way, but in that case why not make them clickable?

    The only reason we're having this discussion is that currently GUIs are not sufficiently efficient to be optimal in all applications - the lack of ease of use of CLIs is sometimes outweighed by increased efficiency.



  • @tdb said:

    If you write games using DirectX, they will run on Windows and Xbox, period. OpenGL implementations can be found on Windows, Linux, Mac OS X, Android and iOS at least.
    Wasn't there some news a few years ago that Gallium3D implemented the DirectX 10 and 11 APIs?



  • @tdb said:

    The strongest selling point of CLI is probably scripting, which is useful for scheduled and otherwise automated or repeated tasks.

    That's the kind of thing people who never used AppleScript always say. Old hat. Apple had it figured out 10 years ago. God knows what kind of horrible abomination it's become with the NeXT guys in charge, though...

    @tdb said:

    It could also be that the optimal solution is neither CLI nor GUI alone. This seems like an interesting idea. I don't agree on certain implementation details (why do they have to stick browser engines everywhere), but the concept is something I'd like to try myself and expand upon.

    Welcome to AppleScript. Again.



  • @dhromed said:

    @tdb said:

    One possible thing would be real-time radiosity, but I'm not sure if that needs new hardware/API features as much as new methods and algorithms for using what's already available. Maybe something to control really huge amounts of data and render trees with individual leaves?

    If you really want to build up pressure for an API switch, you'll need something revolutionary; preferably a complete paradigm switch. Affordable hardware that can do real-time ray-tracing could be such a thing. That'd need an entire new API and a new way of thinking about the 3D data, but it would provide huge advantages in fields like reflections, refractions and radiosity which are notoriously hard to do with a polygon-based rasterizer.

     

    This kind of talk made me hot, and also made me realise I've proficiated in the wrong part of the field, programming-wise.

    Time to jump ship.

    Any recommendations for starting out painting some pixels on the screen with room for evolving into letting my code have long conversations with the video card?

    ... I feel like I've asked this before.

    I'm going to recommend OpenGL, although the reasons are more political than technical (see my last few posts here). There should be a tutorial for modern OpenGL here, but the server seems to be down at the moment. Google's cached version is only a few days old, so it's probably a temporary failure.

    Be prepared to learn a lot of theory; understanding how the stuff is supposed to work makes it orders of magnitude easier to write a working implementation. Wikipedia is your friend. Being able to visualize three-dimensional things and their relations in your head helps a lot.

    Have an idea of what you want to make. Start small; if you go for Crysis 2 on your first try, you'll only get frustrated and discouraged. Minecraft is a more realistic thing to start with (after you've learned the basics). Have ambitions, but realise your limits.

    I'll also have to warn you that regardless of which API you choose, to get something interesting on the screen (aside from certain procedural things like fractals), you'll need to get 3D models and textures from somewhere. Making them is hard and finding someone to make them for you is even harder (unless you're willing to pay).



  • @fterfi secure said:

    I struggle to understand how a CLI can be 'good' whilst also not being the best way to do it.

    Maybe it can't. We have no way of knowing in the current environment where CLI development is dead as a doornail.

    @fterfi secure said:

    The only reason we're having this discussion is that currently GUIs are not sufficiently efficient to be optimal in all applications - the lack of ease of use of CLIs is sometimes outweighed by increased efficiency.

    Bullshit. Stop spreading this bullshit.



  • @ender said:

    @tdb said:
    If you write games using DirectX, they will run on Windows and Xbox, period. OpenGL implementations can be found on Windows, Linux, Mac OS X, Android and iOS at least.
    Wasn't there some news a few years ago that Gallium3D implemented the DirectX 10 and 11 APIs?

    Yes. The driver infrastructure is still very much in development though, and probably based on Microsoft's API documentation rather than an actual specification. I'm also somewhat skeptical of whether Nvidia will pick it up, and ultimately Nvidia is the one 3D vendor that matters most in the Linux world.

    There's also Wine, which can translate DirectX calls to OpenGL, but in many cases it requires considerable tweaking to get newer games running.
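
    The tweaking usually looks something like the following - a dedicated prefix plus winetricks to pull in whichever runtime pieces the game expects (the prefix path and the verbs here are only examples; what's actually needed varies per game):

        # keep the game in its own prefix so experiments don't break other Wine apps
        export WINEPREFIX=$HOME/.wine-somegame
        wine setup.exe

        # pull in commonly needed runtimes; which ones depends entirely on the game
        winetricks d3dx9 vcrun2008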



  • @blakeyrat said:

    Welcome to AppleScript. Again.

    I'm not familiar with AppleScript (except that it's a way to script GUI applications), but wouldn't that have issues if the script kicked off while a user was trying to do something else? Or is it more a matter of hooking into things in the applications and suppressing them from popping up to the user, rather than a recorded series of clicks?



  • @blakeyrat said:

    @tdb said:
    The strongest selling point of CLI is probably scripting, which is useful for scheduled and otherwise automated or repeated tasks.

    That's the kind of thing people who never used AppleScript always say. Old hat. Apple had it figured out 10 years ago. God knows what kind of horrible abomination it's become with the NeXT guys in charge, though...

    Interesting. In some sense that amounts to having both a GUI and a CLI in the same application, with the commands being sent through the AppleScript interpreter rather than from a shell. Some Linux programs provide similar functionality through a conventional CLI. In one project I'm generating SVG files from a program and then using Inkscape to convert them into PNG files.
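
    The Inkscape step is just one more command in the script (the flag shown is the old 0.4x-style one; newer releases renamed it, so check your version), and it assumes the generated SVGs sit in the current directory:

        # batch-convert generated SVGs to PNGs without ever opening the GUI
        for f in *.svg; do
            inkscape --export-png="${f%.svg}.png" "$f"
        done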



  • @locallunatic said:

    @blakeyrat said:

    Welcome to AppleScript. Again.

    I'm not familiar with AppleScript (except that it's a way to script GUI applications), but wouldn't that have issues if the script kicked off while a user was trying to do something else? Or is it more a matter of hooking into things in the applications and suppressing them from popping up to the user, rather than a recorded series of clicks?

    According to Wikipedia, AppleScript works with special events, which applications need to support in order to be scriptable. Presumably it can work independently of what's on the screen; requiring the scripted application to be visible and active would seriously limit its use in scheduled tasks.



  • @fterfi secure said:

    I struggle to understand how a CLI can be 'good' whilst also not being the best way to do it.

    That's because you're dumb and I hate all arm-flailing pie-in-the-sky Minority Report-fellating UI futurists such as yourself.

    The word "good" by itself is nearly meaningless. It must always be qualified: good for WHAT? When it comes to nontrivial UI design, the constant source of tension is between expressiveness and discoverability. Expressive interfaces (like CLIs) are good for experts, while discoverable interfaces (GUIs, broadly speaking) are good for newbies. Creating a single UI that is both expressive and discoverable is generally considered to be really damn hard, if not impossible for many applications, so UI designers have to decide who their primary audience will be and tune the interface as appropriate. So that is how a CLI can be "good".

    @fterfi secure said:

    I think in this case thinking outside the box consists of recognising that command line entry is not really a user interface at all - it's a data entry method.

    This is dumb. One could argue just as well that pointing and clicking is just data entry too. UI is any arrangement where there is an explicit communication loop between the machine and the user, no matter whether it's typing or clicking or jumping around on a dance mat. If the machine is responding to your actions, it's an interface.




  • @tdb said:

    According to Wikipedia, AppleScript works with special events, which applications need to support in order to be scriptable. Presumably it can work independently of what's on the screen; requiring the scripted application to be visible and active would seriously limit its use in scheduled tasks.

    It does, but it varies depending on the level of support an application has. For example, if Word had implemented only the 4 required AppleScript events (IIRC: Open, Close, Print, Quit), you would need to write your script to open the document, then basically dink around in the application's menus. Which is definitely not a good way to do it. If the application has an AppleEvent for every function it's capable of (which is what they're supposed to be doing), then yes, AppleScript can just invisibly do its thing behind the scenes.
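
    (For anyone who wants to poke at this, you can fire those events straight from a terminal with osascript; TextEdit is just an arbitrary scriptable app here:)

        # send an AppleEvent to an application without touching its menus
        osascript -e 'tell application "TextEdit" to activate'
        # Quit is one of the required events, so even minimally scriptable apps honour it
        osascript -e 'tell application "TextEdit" to quit'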

    BTW, the same "scripting in a GUI" thing exists in Windows, with the VBScript and JScript environments. Windows applications don't have an equivalent to AppleEvents, so what they can do is much, much more limited.

    I guess the "first step" in making the "usable CLI" is to make sure everybody on the team has their history straight, so you're not starting from scratch! Hah.



  • @blakeyrat said:

    @fterfi secure said:
    I struggle to understand how a CLI can be 'good' whilst also not being the best way to do it.

    Maybe it can't. We have no way of knowing in the current environment where CLI development is dead as a doornail.

    I'm clearly not getting through to you. CLI development is a subset of interface development. Interface development is ongoing. The reason there is no CLI development is the same reason we don't work on better dip-switches: there are no improvements you can make without moving to a different model altogether.

    So there's no 'maybe' about it. The usability of CLIs cannot be improved. They will always be less usable than an equivalently well-designed graphical interface by their very nature. This does not make them 'bad' or 'wrong', but it does have a lot to do with which jobs they're the right tools for.

    @blakeyrat said:

    @fterfi secure said:
    The only reason we're having this discussion is that currently GUIs are not sufficiently efficient to be optimal in all applications - the lack of ease of use of CLIs is sometimes outweighed by increased efficiency.

    Bullshit. Stop spreading this bullshit.

    Which part of that are you saying is bullshit?



  • @fterfi secure said:

    The reason there is no CLI development is the same reason we don't work on better dip-switches: there are no improvements you can make without moving to a different model altogether.

    But how do you know? Nobody's even trying.

    @fterfi secure said:

    So there's no 'maybe' about it. The usability of CLIs cannot be improved.

    Now THAT is bullshit.

    @fterfi secure said:

    They will always be less usable than an equivalently well-designed graphical interface by their very nature.

    Probably true. But that's a far cry from "CLIs cannot be improved."

    @fterfi secure said:

    Which part of that are you saying is bullshit?

    Ease of use and efficiency/power are not mutually-exclusive. If you think they are, you should not be working with UIs of any type.

    And that's even assuming you have a reasonable definition for "efficient". I would say the UI 99% of the population can use without help is more efficient than the UI that completes the same task in a tenth the time.

    UI design is about research, it's about numbers, it's about statistics. It's about science. Saying things like "the CLI has increased efficiency" is stupid. Define your terms and provide concrete evidence, then come back and we'll talk about it.


  • ♿ (Parody)

    @fterfi secure said:

    So there's no 'maybe' about it. The usability of CLIs cannot be improved. They will always be less usable than an equivalently well-designed graphical interface by their very nature. This does not make them 'bad' or 'wrong', but it does have a lot to do with which jobs they're the right tools for.

    It's important to note that usability does not exist in a vacuum. Personally, the biggest example of an application with a more usable CLI than GUI is SCM. Now, I don't doubt that it's possible to improve the GUI versions of these; it just hasn't been done yet, and I'm skeptical that it will be.

    The most obvious advantage of a CLI in this case (and many similar cases) is that you only get a display of what you ask for, as opposed to the entire source tree. So finding, for instance, the few files that have been modified and are dispersed within the source tree is much simpler than with a GUI. I don't have to scroll to find the root to ask for changes. I don't have to scroll around and visually filter on different colored icons. I don't have to wait for my application to match up icons, etc. I don't have to expand or close directories just to see which 5 files are modified.
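
    To make that concrete, with Subversion or git it's a single command, and the answer comes back as a flat list of paths - one line per changed file, wherever it sits in the tree:

        # Subversion: list only the locally modified files
        svn status | grep '^M'

        # git equivalent
        git status --short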



  • @boomzilla said:

    It's important to note that usability does not exist in a vacuum. Personally, the biggest example of an application with a more usable CLI than GUI is SCM.

    Bullshit.

    Getting sick of calling bullshit here. Someone else do it for a few hours. Or! Better idea! People could stop posting bullshit.


  • ♿ (Parody)

    @blakeyrat said:

    @boomzilla said:
    It's important to note that usability does not exist in a vacuum. Personally, the biggest example of an application with a more usable CLI than GUI is SCM.

    Bullshit.

    Getting sick of calling bullshit here. Someone else do it for a few hours. Or! Better idea! People could stop posting bullshit.

    So stop with the bullshit bullshit calls. For the record, I agree with most of the bullshits that you called, but now you're just being stubborn or ignoring what I wrote. I'm sure you have a magnificent treatise on how I'm totally confused by quickly seeing a simple list of paths vs hunting for the information I need in a tree of similar icons, but it's a lot easier to say bullshit.

    I dunno, maybe you only work on trivially small projects. Or it's a side effect of the dyslexia. But you're full of shit on this one, and no hand waving about statistics is going to make you sound reasonable here.



  • @blakeyrat said:

    Which of these definitions of "bod" were you going for?

    "body". Person. It's a limey expression.

    @blakeyrat said:

    You're just replacing one stereotype with another.

    Erm.. probably.

    @blakeyrat said:

    "CLI = hard to use" is turning into "easy to use = inefficient." Neither of those stereotypes are true. (Rather, they don't have to be true.)

    No, they're untrue - pure and simple.

    The CLI isn't harder to use (and I hope I didn't imply that in my post); it just has a steeper learning curve. Similarly, I don't believe that something which is easy to use is inefficient - at the very least it's more time-efficient in many cases.

    Again, that wasn't the impression I was trying to convey in my post: my point was largely that you can't supply where there's no demand, and that if a friendly CLI came along, I'm not sure who the intended audience would be. I don't agree that there isn't one, just that I could see it being pretty much a niche market.

    @blakeyrat said:

    That's exactly like saying "we can fix all our usability problems if we add theming to our application."

    Is it? It's not what I wrote. I meant that the usability of many shells is customisable: the defaults are fairly plain, yet many systems administrators make site-specific or organisation-specific changes to improve their usability for a known audience. I'll concede that there are so many things users add on that it seems daft to think of them as turned off by default, but these changes aren't just a theme or a skin.

    (perhaps I'm approaching this from a programmer point of view).

    @blakeyrat said:

    There's no reason your grandma shouldn't be able to use the CLI to do a mail-merge for her knitting club.

    There's no reason she shouldn't, but the main consideration is appropriateness to usage: if there's an easier way, why use the CLI? For some grandmothers, opening a command prompt and typing "mail-merge.sh" may be all that they want to do, rather than remembering a complicated series of checkboxes and mouse-clicks.
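
    (A hypothetical mail-merge.sh for her needn't be anything more than this - a names file, a letter template with a NAME placeholder, and the mail command; the file names are made up:)

        #!/bin/sh
        # hypothetical mail-merge.sh: names.txt holds "address name" pairs, letter.txt is the template
        while read -r address name; do
            sed "s/NAME/$name/g" letter.txt | mail -s "Knitting club newsletter" "$address"
        done < names.txt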

    @blakeyrat said:

    Thing is, I'm probably in the box as well.

    Ditto, thinking about it, which is probably why I can't visualise a Use Case from a non-geek POV.

    @blakeyrat said:

    The cultural problem is that the only group with any interest in the CLI whatsoever is the uber-nerd "high priesthood of technology" idiots, which is exactly why it's completely stagnant and will never improve.

    And yet in my experience (and occupation), I've made accountants, secretaries and HR bods comfortable with the Linux command-line after only two days. I watch them grow and learn, eager to try new things out, and begin discussions not about what it all is, but more about what they can do with it and how it could be productive for them. I guess we just move in different circles.




  • @veggen said:

    I mean Cassandra (the storage engine behind your beloved Facebook)

    AFAIK, Facebook stopped using Cassandra a while ago (albeit to move to other Java backends).

    @veggen said:

    Hadoop (the distributed file-system and mapReduce engine behind Yahoo)

    Java: bringing you the success of Yahoo!

    @veggen said:

    Android

    Oh boy, a third-rate smartphone platform which has quickly become the default for the "free" shitphones Verizon gives out. Oh, and it's become a divergent mess with each OEM slapping their own UI on top of it because the default UI sucks. Oh, and it will get you sued.

    That said, Java isn't the worst language ever, it just sucks. A lot. Some* projects choose it because there aren't many better choices for server apps, unless you want to manage your own memory. Or use C++, which is actually worse than Java. PHP is okay if you're doing request-based web stuff, but its GC can't handle long-running apps that recycle lots of memory.

    C# looks really good. Unfortunately it's useless in the Unix world. It looks like they fixed about 50% of the shit I hate about Java, though.

    * Note that most of these "non-toy" projects are actually distributed, server-side software like databases, MapReduce, etc. Being distributed means you can scale horizontally, ameliorating the Java Bloat by throwing hardware at it. I've used Java, a lot. For some Unix-based apps, it's unfortunately the best choice. That doesn't make it good, it just makes it a PITA we have to tolerate.


  • @dtech said:

    I think the troll (veggen) is hilarious. Actually quoting Facebook for choosing Java while a few simple googles could've told you that Facebook is written in PHP and compiles it to C++.

    Sorry dude, no trolling here. Just you not reading. I never said Facebook is Java, I said Cassandra (the storage engine behind Facebook) was written in Java, which can be confirmed by a "few simple googles". Truth is a bitch.
    Anyway, since everyone chose to ignore the bulletpoints I mentioned (except for Blakey's Silverlight non-argument), I retreat from this dumb "my lang is better because I don't know yours" argument. Just do yourselves a favor and look at that list (6 items, not that long) every time you're starting a non-toy project. Try to give it an objective thought and you'll thank yourself later. Or just go for the technology that is the buzzword of the moment, cause it has to be the best.



  • @blakeyrat said:

    @veggen said:
    Silverlight? Seriously, who even develops that let alone installs that?

    Silverlight is amazing. I wish a lot more sites used it.

    Netflix. I assume they do it because of the security features that Flash doesn't have (FLV downloader, anyone?)

    @blakeyrat said:

    @veggen said:
    Being slow is quite close to being nonsense as well. It's slower than native code, but just as fast as any managed language and much much faster than the hip scripting langs.

    Please. It takes somewhere around 45 seconds just to close NetBeans. You honestly think that's as fast as "any managed language"? As for "hip scripting langs", look into the newer generation of JavaScript interpreters; they basically kick ass and take names.

    I'm arguing with a program written in Java right now which apparently can't get [i]the data it's written to get[/i] in 15 minutes, when I can do so on the command line in less than a minute on the same box. It also throws the most awesome, less-than-useful, several-paragraph Java exceptions which may or may not point to the actual file causing the issue.

    Don't beat me up because I'm a CLI person; I'm a curmudgeonly Unix admin who tells these Linux kids to get off my lawn. PowerShell is almost but not quite useful enough to me as a CLI. I at least try to do things the native way before resorting to hacks like *shudder* Cygwin.



  • @blakeyrat said:

    ... CLI development is dead as a doornail.

    Is it?

    IMHO there is some CLI development going on. It just isn't called CLI development any more. For example, look at speech recognition and voice control.

    At least conceptually, there doesn't seem to be a big difference between typing "mail -s 'Lunch' someone@example.org" in a terminal and saying "dial home" to your phone. Both are commands, both have parameters, both must be known to the user before she/he can invoke them (particularly the correct synonym, or the order will be ignored), and so on.

    Let's see how voice interfaces evolve as they gain popularity. I'm fairly certain that many improvements can be "backported" to command line interfaces.



  • @blakeyrat said:

    @fterfi secure said:
    The reason there is no CLI development is the same reason we don't work on better dip-switches: there are no improvements you can make without moving to a different model altogether.

    But how do you know? Nobody's even trying.

    This is what I'm trying to explain to you. It's not that user interface development is impossible, but that there's nothing you can do to a command line which will be a significant improvement and not make it something other than a command line (unless you wilfully ignore something obvious, like in the example I gave above).

    @blakeyrat said:

    @fterfi secure said:
    So there's no 'maybe' about it. The usability of CLIs cannot be improved.

    Now THAT is bullshit.

    Actually, you're right. My original statement was more nuanced, but that goes a bit far. The usability of CLIs can be slightly improved.

    @blakeyrat said:

    Ease of use and efficiency/power are not mutually-exclusive.
    No, of course not. You either didn't read the thread, or didn't understand what I was saying. It is, in practical terms, impossible to have both with today's technology. By the very nature of being an abstracted control layer, a GUI trades direct control (and thereby maximum efficiency) for ease of use. A CLI does the opposite. You can combine the two to an extent, but neither is a terribly good solution.

    @blakeyrat said:

    If you think they are, you should not be working with UIs of any type.

    Everyone works with UIs. I don't work on UIs in any sense other than occasionally throwing some buttons on a page, though, if that's what you meant.

    @blakeyrat said:

    And that's even assuming you have a reasonable definition for "efficient". I would say the UI 99% of the population can use without help is more efficient than the UI that completes the same task in a tenth the time.
    Nope, that's more 'usable', to me at least. Efficiency (by the definition I was using above) is about productivity with adequate knowledge/training, whereas usability is about productivity without those.

    If we want to play 'define that', come up with a proper definition of 'user interface'. Try and start by defining what it should be, rather than by describing the current pitiful attempts.



  • @veggen said:

    I never said Facebook is Java, I said Cassandra (the storage engine behind Facebook) was written in Java, which can be confirmed by a "few simple googles".

    Once again, Facebook appears to have dropped Cassandra.

    @veggen said:

    Anyway, since everyone chose to ignore the bulletpoints I mentioned (except for Blakey's Silverlight non-argument)..

    You made a flurry of poorly-thought-out posts all at once. That's probably why nobody responded to you.

    @veggen said:

    I retreat from this dumb "my lang is better because I don't know yours" argument.

    Once again, I probably know Java better than you. I just hate it because it sucks.



  • @fatbull said:

    At least conceptually, there doesn't seem to be a big difference between typing "mail -s 'Lunch' someone@example.org" in a terminal and saying "dial home" to your phone. Both are commands, both have parameters, both must be known to the user before she/he can invoke them (particularly the correct synonym, or the order will be ignored), and so on.
    You're right, of course - but neither method is an interface so much as an input method. Computers being able to handle natural language inputs is obviously necessary for a decent UI. Where and how you enter the text is immaterial to that point.

    To be an interface, there has to be two-way communication. A command line interface is not just the line where you type in commands, but also the way the computer lets you know the response to those commands.



  • @fterfi secure said:

    @fatbull said:
    At least conceptually, there doesn't seem to be a big difference between typing "mail -s 'Lunch' someone@example.org" in a terminal and saying "dial home" to your phone. Both are commands, both have parameters, both must be known to the user before she/he can invoke them (particularly the correct synonym, or the order will be ignored), and so on.
    You're right, of course - but neither method is an interface so much as an input method. Computers being able to handle natural language inputs is obviously necessary for a decent UI. Where and how you enter the text is immaterial to that point.

    To be an interface, there has to be two-way communication. A command line interface is not just the line where you type in commands, but also the way the computer lets you know the response to those commands.

    There's a pretty big difference between a CLI and a natural-language interface.



  • @veggen said:

    Bonus points for Java bashers if you can name any of the following:

    1) an ORM that can compare to Hibernate

    2) full text search that can compare to Lucene and Solr

    3) anything that will let you do SmartCard authorization through the browser alone

    4) anything that can compare to Hadoop

    5) anything that can do distributed transactions without breaking sweat

    6) libraries for everything from form validation to molecular biology readily available

    7) all of the above on any platform you want

    Mindless bashing made me the Java defender I am.

    1) NHibernate
    2) Lucene.NET
    3) Adobe AIR (shudder). The fucking browser
    4) Define Hadoop. Apache tells me it is a group of projects, not one.
    5) MS DTC
    6) That's not a product.
    7) That's not a product.

    You know mindless defending is as bad as, if not worse than, mindless bashing, right?



  • @morbiuswilters said:

    @fterfi secure said:
    @fatbull said:
    At least conceptually, there doesn't seem to be a big difference between typing "mail -s 'Lunch' someone@example.org" in a terminal and saying "dial home" to your phone. Both are commands, both have parameters, both must be known to the user before she/he can invoke them (particularly the correct synonym, or the order will be ignored), and so on.
    You're right, of course - but neither method is an interface so much as an input method. Computers being able to handle natural language inputs is obviously necessary for a decent UI. Where and how you enter the text is immaterial to that point.

    To be an interface, there has to be two-way communication. A command line interface is not just the line where you type in commands, but also the way the computer lets you know the response to those commands.

    There's a pretty big difference between a CLI and a natural-language interface.

    Yes, because they're not types of the same thing, although we're using the same word. Evidently tiredness is preventing me from making any sense, since apparently no-one can understand what I'm banging on about. The 'interface' in 'natural language interface' is the protocol used for communication - natural language, obviously - whereas the one in 'command line interface' is about how data is actually exchanged. In the broader sense, an entire UI built predominantly around any particular interface method tends to get called by the name of that part - e.g. CLI - but that's not all there is to it.

    When I talk about a user interface, I include every interaction you have with a computer. The on-off switch is part of the UI, even the power plug.



  • @fterfi secure said:

    @morbiuswilters said:
    @fterfi secure said:
    @fatbull said:
    At least conceptually, there doesn't seem to be a big difference between typing "mail -s 'Lunch' someone@example.org" in a terminal and saying "dial home" to your phone. Both are commands, both have parameters, both must be known to the user before she/he can invoke them (particularly the correct synonym, or the order will be ignored), and so on.
    You're right, of course - but neither method is an interface so much as an input method. Computers being able to handle natural language inputs is obviously necessary for a decent UI. Where and how you enter the text is immaterial to that point.

    To be an interface, there has to be two-way communication. A command line interface is not just the line where you type in commands, but also the way the computer lets you know the response to those commands.

    There's a pretty big difference between a CLI and a natural-language interface.

    Yes, because they're not types of the same thing, although we're using the same word. Evidently tiredness is preventing me from making any sense, since apparently no-one can understand what I'm banging on about. The 'interface' in 'natural language interface' is the protocol used for communication - natural language, obviously - whereas the one in 'command line interface' is about how data is actually exchanged. In the broader sense, an entire UI built predominantly around any particular interface method tends to get called by the name of that part - e.g. CLI - but that's not all there is to it.

    When I talk about a user interface, I include every interaction you have with a computer. The on-off switch is part of the UI, even the power plug.

    I probably should have replied to fatbull rather than you. I get what you mean, I just think "CLI" pretty much means what it means now: "an environment with limited commands and a built-in language for extending it". If I can just tell my computer what I want to do in natural language and have it do it, then I wouldn't classify it as a CLI, even if the commands are typed in.

