Explicating the survival of the Shell as default OSS UI



  • Oh, it should come first, in all but the most trivial cases, yes. However, one of the general rules of the Unix world is to break everything down into trivial cases (or pretend that you can, at any rate) so that the UI simply doesn't matter. This was a great idea in 1968, when most machine interfaces were teletypes - not dumb terminals, but actual ASR-33 teletypewriters with long scrolls of paper as their user output 📜 - but somehow misses the mark today :trollface:

    Seriously, the 'Unix Way' originally was to have all commands do exactly one task, with no arguments, and everything simply joined together with pipes to perform more complex tasks. Who thought this was a good idea? Why, a team of exceptionally brilliant coders who never expected it to be used outside their tiny enclave. That last part is the relevant issue: not arrogance, not a High Priesthood, merely a lack of foresight and the results of the Law of Unintended Consequences. Unfortunately, that attitude - and lack of vision - persists. Most programmers on the shell don't really get that all user programs have a UI, even if it is nothing more than a name to be typed into the shell. They can't understand that a shell is a user interface, and that the program's shell interface needs to be designed just the same as a graphical interface does.

    To be fair, most university courses teach programming in a vacuum, writing a lot of small programs that don't actually interact with the user beyond command invocation, and while they may give lip service to 'UI first', they rarely actually teach how to design a UI, whether CLI or GUI. They are so focused on filling their students' brains with language syntax, algorithms, and data structures that they rarely get around to explaining how to actually use them in a workable program. This explains (in part) the often observed but usually misunderstood phenomenon of self-taught programmers often being better than University graduates (the other part of the equation is that the autodidact is usually doing so out of personal interest in programming, whereas many college students go into CS/SE because they think money is falling out of the sky in that field - which is about as successful a reason for choosing a field as it is in Law and Medicine, which is to say, it results in a lot of self-important assholes trying to take the money and run rather than actually practicing their craft).


  • Discourse touched me in a no-no place

    @ScholRLEA said:

    Seriously, the 'Unix Way' originally was to have all commands do exactly one task, with no arguments, and everything simply joined together with pipes to perform more complex tasks. Who thought this was a good idea?

    But the reverse situation, banging everything together into one big ball of mud, isn't better either. The Unix Way ought to be recognisable to anyone with functional programming language experience: it's just straight function composition. The functions happen to be programs, but the concept is pretty exact.
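    If that seems abstract, here's the same shape written out in Python instead of sh; just a rough sketch, and the little single-purpose "programs" are stand-ins I made up for illustration:

    ```python
    from functools import reduce

    def compose(*stages):
        """Chain single-purpose steps the way a shell chains programs with pipes."""
        return lambda data: reduce(lambda acc, step: step(acc), stages, data)

    # each "program" does exactly one job and hands its output downstream
    to_lines   = lambda text: text.splitlines()
    sort_lines = sorted
    count_uniq = lambda lines: {line: lines.count(line) for line in set(lines)}

    pipeline = compose(to_lines, sort_lines, count_uniq)   # roughly `sort | uniq -c`
    print(pipeline("b\na\nb"))                             # {'a': 1, 'b': 2}
    ```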



  • @blakeyrat said:

    Why would you write a straight-forward GUI in MFC? Even without leaving the world of Microsoft, you have much, much better tools available to you. And, as I said above, if you ever experienced the simplicity of Mac Classic's API, you'd be blown-away by it.

    I wouldn't be surprised if the Classic Mac API was better thought out than W32, considering Apple actually had enough sense to break people who did naughty things like groveling in internal data structures.

    As to why MFC, two reasons:

    1. I can't make a .NET app into a single, self-contained binary that can be xcopy'ed to a box, pointed at a database via an ODBC DSN or two, and run. (At least, as far as I know. If you know of the .NET version of Python's cx_Freeze/..., please let me know.)
    2. I'm the 2nd dev working on this code. The first dev has been doing Windows API development since Windows 1.0. And he's no slouch at it either. We even have it so that the UI is not blocked by processing tasks in a single-threaded program.

    @blakeyrat said:

    Stop using C. WinForms does 99% of this shit for you. And WinForms is one of the worst alternatives.

    So, you're saying that we should assume that .NET is part of Windows now? BZZT. Hint: Windows is not a .NET delivery vehicle any more than it is a MSVCRT.DLL delivery vehicle. Only difference is that I can statically link the CRT into my binary, and just push a new .exe out when needed.

    Also, MS really needs to rethink the MSDN docs for COM. (Which is 90% of why IAccessibility and friends are complicated.)



  • That analogy holds, to a point at least. I never said it was a bad way to organize the shell; in fact it was only later, when complicated switches came to be part of many program UIs, that it got out of hand.

    The problem is that it is a brilliant way to organize a UI if and only if only experienced programmers are going to use it. To a non-programmer, the learning curve is high, and the gulfs of execution and evaluation too broad to be easily crossed.

    If you sit down to an unfamiliar terminal with only a shell interface, what command do you use to display a list of files? Is it 'catalog'? 'ls'? 'dir'? 'sys.os.listdir()'? What is the command to get help? How do you tell if a program has completed correctly or not? To a newcomer, these are a lot more intimidating questions than to an experienced programmer, yet in this situation the experienced coder is just as lost as the novice.

    The point is, there are ways of making things a lot more explicit than your typical shell interface does. True, many GUIs are poorly conceived or poorly organized, maybe even most, but they are still much easier to learn on the whole, even if they are poorly organized.


  • ♿ (Parody)

    @ScholRLEA said:

    If you sit down to an unfamiliar terminal with only a shell interface, what command do you use to display a list of files? Is it 'catalog'? 'ls'? 'dir'? 'sys.os.listdir()'? What is the command to get help? How do you tell if a program has completed correctly or not? To a newcomer, these are a lot more intimidating questions than to an experienced programmer, yet in this situation the experienced coder is just as lost as the novice.

    I remember having a similar experience nearly 20 years ago when I sat down in front of a friend's Mac. Yes, the saintly Mac Classic was incomprehensible and didn't do anything sensible from my perspective.


  • Discourse touched me in a no-no place

    @ScholRLEA said:

    To a non-programmer, the learning curve is high, and the gulfs of execution and evaluation too broad to be easily crossed.

    But if they're not the intended market, is the investment required to bridge those gulfs justifiable?


  • BINNED

    @dkf said:

    The Unix Way ought to be recognisable to anyone with functional programming language experience: it's just straight function composition. The functions happen to be programs, but the concept is pretty exact.

    Which would explain why The Unix Way is universally hated outside of the *NIX world. Most developers don't like functional programming.



  • @ScholRLEA said:

    If you sit down to an unfamiliar terminal with only a shell interface, what command do you use to display a list of files? Is it 'catalog'? 'ls'? 'dir'? 'sys.os.listdir()'? What is the command to get help? How do you tell if a program has completed correctly or not? To a newcomer, these are a lot more intimidating questions than to an experienced programmer, yet in this situation the experienced coder is just as lost as the novice.

    The point is, there are ways of making things a lot more explicit than your typical shell interface does. True, many GUIs are poorly conceived or poorly organized, maybe even most, but they are still much easier to learn on the whole, even if they are poorly organized.

    Yes. Keyboard-driven interfaces (which do not necessarily have to be CLIs in the shell-ish sense, they can be textual menu systems or keyboard-driven GUIs) are going to be faster for the experienced user. A classical GUI is going to be much more discoverable than a shell-ish one, though, and this is what people who are unfamiliar with a system need more than anything else.

    @ScholRLEA said:

    To be fair, most university courses teach programming in a vacuum, writing a lot of small programs that don't actually interact with the user beyond command invocation, and while they may give lip service to 'UI first', they rarely actually teach how to design a UI, whether CLI or GUI. They are so focused on filling their students' brains with language syntax, algorithms, and data structures that they rarely get around to explaining how to actually use them in a workable program. This explains (in part) the often observed but usually misunderstood phenomenon of self-taught programmers often being better than University graduates (the other part of the equation is that the autodidact is usually doing so out of personal interest in programming, whereas many college students go into CS/SE because they think money is falling out of the sky in that field - which is about as successful a reason for choosing a field as it is in Law and Medicine, which is to say, it results in a lot of self-important assholes trying to take the money and run rather than actually practicing their craft).

    We need far better coursework in UI design, yes. I'm extremely fortunate in that I taught myself a fair bit of programming and then went and got my CS degree, so I in some sense got the best of both worlds.

    @blakeyrat said:

    Again, this is a problem that was completely solved in 1994 or so with AppleScript and has since actually been unsolved. Automator in current Macs is not even close to what you could do with Script Recorder and AppleScript in general back in 1994.

    BTW: I've used AppleScript on Classic Mac before. While it was fantastic at driving applications around (better than VBScript/VBA, even), it lacked one thing, and that was the ability to do things outside an application context. You didn't exactly have a good way to do basic text slicing/dicing/blending, never mind getting at things like standard dialog boxes.

    Also, is it just me, or were Mac development tools limited to a priesthood? You didn't even have primitive tinker-toys with the Classic Mac OS, save for AppleScript.


  • Discourse touched me in a no-no place

    @tarunik said:

    Also, is it just me, or were Mac development tools limited to a priesthood?

    The Beneficent Cult of the Blessed Jobs.



  • @ScholRLEA said:

    As I've said before, I've always liked Wirth's solution as used in Oberon: make it possible to take any piece of text, select it, compile it on the fly, and run it as a script. You could make a button just by selecting the code you want it to run and bind it to a point in the display. Simple.

    That sounds like a terrible idea.

    @ScholRLEA said:

    The main thing lacking was an action recorder, which is really what most people need more than scripting, but since Wirth was a language designer and not an interfaces expert it was an understandable oversight.

    Apple started with the Script Recorder, which produced the text script which you could then edit to add looping or variables or whatever. That is the more correct direction here.

    The goal is to give people using computers useful tools to help them do their day-to-day work. Not to give geek-nerds more geeky-nerdy stuff that only geek-nerds could ever want or appreciate.

    @cartman82 said:

    So basically, there was one good OS in the world - Mac Classic. And it was all downhill ever since. I never had that computer, so I can't tell if it's nostalgia or you're on to something.

    I am on to something.

    @cartman82 said:

    The fact remains you need a lot more code to set up GUI than CLI. Not talking about API, talking about the mountain of stuff underneath. On the other hand, I imagine Apple might be able to pull it off, as they control their own hardware (so at least there's one layer of complexity that goes away).

    Apple crammed their entire GUI toolkit in a 64k ROM. The multitasking-capable version in 128k. The world was a different place in the mid-80s.

    @ScholRLEA said:

    A handful of designers have, like Ted Nelson and Alan Kay and Jef Raskin, but most have simply never learned that UIs are something that need to be designed. Schools don't teach it, programmer culture doesn't support it, and few if any bother to figure it out on their own.

    Yes. Then again, my school didn't teach me anything useful for a career in writing software. (Except relational databases.)

    @ScholRLEA said:

    Fun fact: the original Macintosh Toolbox was crammed into a single 64K ROM (I doubt you could compile "Hello World!" into 64K with some of the compilers out there today) and ran in just 128K of system memory on the very first model.

    Damn you people who wake up at 3 AM or whatever just to steal my forumpointzzz.

    @dkf said:

    Git is actively misanthropic. Enough about it already.

    It also exists and is popular, which illustrates everything wrong with IT right now.

    @ScholRLEA said:

    That last part is the relevant issue: not arrogance, not a High Priesthood, merely a lack of foresight and the results of the Law of Unintended Consequences.

    Bullshit. It became High Priesthood as soon as those guys found out they could get paid cash money for their computer expertise.

    @dkf said:

    The Unix Way ought to be recognisable to anyone with functional programming language experience: it's just straight function composition. The functions happen to be programs, but the concept is pretty exact.

    Who cares about the "concept" if the end-result is shit?

    @tarunik said:

    I can't make a .NET app into a single, self-contained binary that can be xcopy'ed to a box, pointed at a database via an ODBC DSN or two, and run. (At least, as far as I know. If you know of the .NET version of Python's cx_Freeze/..., please let me know.)

    ... what's stopping you? I've done that with .net programs a hundred times.

    @tarunik said:

    I'm the 2nd dev working on this code. The first dev has been doing Windows API development since Windows 1.0. And he's no slouch at it either. We even have it so that the UI is not blocked by processing tasks in a single-threaded program.

    Why is that shocking? All us Mac Classic programmers "magically" managed that also. In fact, before Windows 95, I think it's safe to say all multitasking on desktop machines was done without threads.

    Cooperative Multitasking isn't all that great, but it works for multitasking. Your MFC buddy is probably just using the Windows 3.x event calls, which still work because Microsoft is awesome about backwards-compatibility.
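    In case anyone's forgotten how the trick goes, here's the same idea as a rough Python/Tkinter sketch rather than MFC or the Toolbox (chunk sizes and names are just illustrative): do a sliver of work, hand control back to the event loop, repeat.

    ```python
    import tkinter as tk

    root = tk.Tk()
    label = tk.Label(root, text="idle")
    label.pack()

    work = iter(range(1_000_000))   # stand-in for the long-running processing task

    def do_chunk():
        # do a small slice of the work, then yield back to the event loop
        for _ in range(10_000):
            try:
                next(work)
            except StopIteration:
                label.config(text="done")
                return
        label.config(text="working...")
        root.after(1, do_chunk)   # reschedule; the UI stays responsive, no threads

    root.after(1, do_chunk)
    root.mainloop()
    ```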

    @tarunik said:

    So, you're saying that we should assume that .NET is part of Windows now?

    Has been for a long time.

    @ScholRLEA said:

    The problem is that it is a brilliant way to organize a UI if and only if only experienced programmers are going to use it.

    I'm a programmer. I can't use it.

    @ScholRLEA said:

    The point is, there are ways of making things a lot more explicit than your typical shell interface does. True, many GUIs are poorly conceived or poorly organized, maybe even most, but they are still much easier to learn on the whole, even if they are poorly organized.

    Yet another point I brought up on the old forum a dozen times: there's no reason a CLI couldn't be made relatively friendly compared to current ones. But between the type of people who like CLIs just not giving a shit and the legion of applications that use CLI interfaces as APIs (meaning it's virtually impossible to change how a CLI behaves without breaking software), nothing in that world ever progresses. Ever. It's completely stagnant.

    @tarunik said:

    BTW: I've used AppleScript on Classic Mac before. While it was fantastic at driving applications around (better than VBScript/VBA, even), it lacked one thing, and that was the ability to do things outside an application context. You didn't exactly have a good way to do basic text slicing/dicing/blending, never mind getting at things like standard dialog boxes.

    AppleScript had text handling code, although I'm not going to pretend the language was good at that.

    As for "getting at" things like standard dialog boxes, I have no idea what you're referring to there. The scripting interface didn't use dialogs. I think you might have been working with an app that wasn't correctly scripting-aware.

    That said, you could certainly have AppleScript prompt for a path and show a "Open" or "Save" dialog. Not sure what other OS dialogs it had access to, but I used those two from AppleScript many times.

    @tarunik said:

    Also, is it just me, or were Mac development tools limited to a priesthood? You didn't even have primitive tinker-toys with the Classic Mac OS, save for AppleScript.

    Yes, kind of, in that the best ones were always commercial. (MPW, Apple's free IDE, sucked shit. Everybody used THINK C, or THINK PASCAL, or CodeWarrior back when I was writing Mac Classic code. Or even Microsoft's Mac Basic, which was much better than it had any right to be.)

    I don't know what you mean by "primitive tinker toys." Except I'll just point out again that HyperCard shipped with the OS for most of its existence, and AppleScript shipped with it when HyperCard didn't. That's a fuckload more dev tool than Microsoft ever shipped for free, until VS Express came along in 2003ish.



  • @blakeyrat said:

    As for "getting at" things like standard dialog boxes, I have no idea what you're referring to there. The scripting interface didn't use dialogs. I think you might have been working with an app that wasn't correctly scripting-aware.

    What was/is the AppleScript for showing the standard Mac OS Toolbox File Open dialog?

    @blakeyrat said:

    Has been for a long time.

    In that "there are things that ship with Windows that have a .NET dependency?" Yes. In that "you can safely assume that Windows is a .NET delivery mechanism?" No.

    @blakeyrat said:

    what's stopping you? I've done that with .net programs a hundred times.

    Have you considered that you can't xcopy deploy the .NET runtime itself? (Although, in our particular case, Orrible moots it, the general point still holds.)

    @blakeyrat said:

    Why is that shocking? All us Mac Classic programmers "magically" managed that also. In fact, before Windows 95, I think it's safe to say all multitasking on desktop machines was done without threads.

    Cooperative Multitasking isn't all that great, but it works for multitasking. Your MFC buddy is probably just using the Windows 3.x event calls, which still work because Microsoft is awesome about backwards-compatibility.


    It's not that shocking to me, and that's exactly what he's doing. I'm just saying that 99% of programmers seem to have utterly forgotten such things are possible. Which goes to your point that programmers never seem to stand on the shoulders of giants.

    As to why that point is so, and why CS education propagates it: CS education went from a branch of the math dept. (when people actually did stand on the shoulders of giants, at least a little bit) to a commercial-pressure-driven code-monkey factory, because businesses wanted people who can implement whatever inane rules they come up with while kowtowing to whatever inane commercial decisions the clueless bigwigs make about what technologies to buy into.

    For this to work, though, said code monkeys had to be isolated from any and all revolutionary ideas, no matter how old or new. Everything from Lisp and the works of Tufte on visual presentation (read Beautiful Evidence sometime @blakeyrat if you haven't already), to Codd and Date's relational theory and the seminal works of Knuth et al. on formal proof and analysis of algorithms, and even a proper understanding of the very machines that run our code, is being erased from the curriculum, replaced by a steady stream of inanity about the weekly fads of the programming world, cargo-cult 'knowledge' that placates students while denying them the conceptual understanding they need to wrap their heads around what the giants said and continue to say, and useless projects that serve only to keep them from experimenting on their own time.


  • FoxDev

    I was going to say: Cue Blakeyrant, but i was too slow.

    Not that i disagree with him in this case (rare, i know, but it does happen): the CLI sucks.

    however it can't be fixed because of the billions of lines of scripts that rely on the command line acting the way it does (i'm counting all CLIs here, bash, BAT, sh, powershell, etc)

    if we fix the CLI, even putting aside the retraining effort required for humans, we literally could not afford the cost of rewriting all those scripts to use the fixed CLI. So we're stuck with it.

    and when you show me a perfectly automatable UI that just works, even when the UI is redesigned, changed, or customized.. well then i'll show you a Strong AI, because that's what it would take.

    the CLI may be terrible to the point of being broken, but it's there, it is scriptable perfectly, and... well yeah. It sucks but it sucks less than trying to write all those scripts to automate the UI.



  • @accalia said:

    and when you show me a perfectly automatable UI that just works, even when the UI is redesigned, changed, or customized.. well then i'll show you a Strong AI, because that's what it would take.

    You put the automation one layer under the UI. See: AppleScript, VBA.

    (Python actually has some decent wrappers for accessing COM Automation I hear, if you don't want to deal with the steaming pile of bollocks that is VBA.)
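    Something along these lines, as far as I know (a minimal sketch with the pywin32 wrappers; it assumes a Windows box with Excel installed, and "Excel.Application" is just the classic example ProgID):

    ```python
    # drive the application through its automation layer, not its pixels
    import win32com.client

    excel = win32com.client.Dispatch("Excel.Application")
    excel.Visible = True
    wb = excel.Workbooks.Add()
    wb.Worksheets(1).Cells(1, 1).Value = "scripted from underneath the UI"
    wb.SaveAs(r"C:\Temp\demo.xlsx")   # path is illustrative
    excel.Quit()
    ```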



  • @tarunik said:

    What was/is the AppleScript for showing the standard Mac OS Toolbox File Open dialog?

    I don't remember anymore.

    @tarunik said:

    In that "there are things that ship with Windows that have a .NET dependency?" Yes. In that "you can safely assume that Windows is a .NET delivery mechanism?" No.

    Ok but it's still stupid to spend 10 times longer developing the application when you could just spend 5 minutes installing .net. The click-and-run installer will even do it for you/your clients.

    @tarunik said:

    Have you considered that you can't xcopy deploy the .NET runtime itself? (Although, in our particular case, Orrible moots it, the general point still holds.)

    So don't do that then?

    @tarunik said:

    I'm just saying that 99% of programmers seem to have utterly forgotten such things are possible.

    Because they are shitty and don't learn from the past. I think we've gone over this already a few dozen million times.


  • FoxDev

    yes. that's the proper way to do it.

    how many programs implement that layer properly? do any programs implement that layer so that there is literally nothing that you can do in the UI that isn't exposed in the automation layer?

    Can you name one? I've tried and failed to name even one.



  • @accalia said:

    and when you show me a perfectly automatable UI that just works, even when the UI is redesigned, changed, or customized.. well then i'll show you a Strong AI, because that's what it would take.

    The Mac Classic AppleScript interface would work ... well, I was going to say as long as the OS was even vaguely WIMP-like, but frankly it was easy to add new AppleEvents, so I don't see any reason it wouldn't work on Jupiterian OSes from the year 3000.

    I don't know why you think a strong AI is needed.

    @accalia said:

    the CLI may be terrible to the point of being broken, but it's there, it is scriptable perfectly, and... well yeah. It sucks but it sucks less than trying to write all those scripts to automate the UI.

    You've never used a GUI with a great scripting environment.

    @tarunik said:

    You put the automation one layer under the UI. See: AppleScript, VBA.

    Exactly. Which is exactly why AppleScript didn't deal with dialogs-- it passed-in stuff like file paths and printer options directly to the scripting layer. If you needed a script to prompt the user for a file path, the prompting was in the script, not in the application.



  • @accalia said:

    how many programs implement that layer properly? do any programs implement that layer so that there is literally nothing that you can do in the UI that isn't exposed in the automation layer?

    The reason Mac Classic apps were high-quality is because they made it easier to write correct code than it is to write incorrect code. You actually have to go waaay out-of-your-way in Mac Classic to write a, for example, pull-down menu that doesn't work. (You know, how Audacity and Notepad++ do on Windows.)

    The key is to design your API so that the inherent laziness of programmers works for you instead of against you.

    To answer your question literally: virtually all Mac Classic applications released between 1995-ish and 2001-ish.


  • FoxDev

    It's a shame about Macs then.

    great hardware (phenomenal really), pretty solid software, TERRIBLE business practices.

    now no one is perfect i'll admit but paying attention to the news it's quite apparent how little control Apple has over their external contractors (they're STILL finding that contractors in Asia have workers as young as 6 working in extremely hazardous conditions, almost 10 years after first promising to fire any supplier/contractor that was caught with underage or unpaid workers)

    so yeah. sexy hardware, sexy software, but not getting one red penny from me.



  • @blakeyrat said:

    Exactly. Which is exactly why AppleScript didn't deal with dialogs-- it passed-in stuff like file paths and printer options directly to the scripting layer. If you needed a script to prompt the user for a file path, the prompting was in the script, not in the application.

    I'm talking about how the script prompts the user for a file path. The Classic Mac OS path syntax is...arcane, to say the least. Most users rightfully expect a File Open dialog instead of having to type a path in by hand.

    @blakeyrat said:

    The reason Mac Classic apps were high-quality is because they made it easier to write correct code than it is to write incorrect code. You actually have to go waaay out-of-your-way in Mac Classic to write a, for example, pull-down menu that doesn't work. (You know, how Audacity and Notepad++ do on Windows.)

    The key is to design your API so that the inherent laziness of programmers works for you instead of against you.

    THIS! Make it so that the obvious thing works like you'd expect, and that you have to go out of your way to explicitly break stuff. Any other API design is broken.



  • @accalia said:

    great hardware (phenomenal really), pretty solid software, TERRIBLE business practices.

    Back then it was mediocre hardware (the 68k series wasn't exactly turbo-speed, even the 68040, and Apple had the annoying habit of removing the FPU to save a few bucks), great software, and clueless leadership (they kept joining all these "alliances", presumably designed to combat Microsoft, but none of them turned into any actual products-- Taligent, AIM, OpenDoc Consortium. Then when push came to shove, the Taligent project failed miserably and they had no "next gen" OS ready about 5 years after they needed one-- then Windows 2000 came out and smashed them, technically.)



  • @tarunik said:

    I'm talking about how the script prompts the user for a file path.

    It opens a File -> Open dialog. Or the user dragged-and-dropped a file onto the script's icon. AFAIK, those were the only ways for a user to provide a file path in Mac Classic.

    @tarunik said:

    The Classic Mac OS path syntax is...arcane, to say the least. Most users rightfully expect a File Open dialog instead of having to type a path in by hand.

    Well derp. You never, ever, EVER, typed a path by hand in Classic Mac. Having to is a failure of the GUI, and it was a pretty good GUI.

    @tarunik said:

    THIS! Make it so that the obvious thing works like you'd expect, and that you have to go out of your way to explicitly break stuff. Any other API design is broken.

    The real cleverness of AppleEvents/AppleScript is when Apple was pitching this to developers, they were like:

    "Here's the deal: to get Apple Certified, which still matters because it's 1992 and that's how you get into Egghead Software and Computer City, you gotta implement at least the 4 required AppleEvents: Open, Save, Print, Quit. Oh, and, BTW, since you gotta implement the AppleEvent handling code anyway, you'd save time just having your native GUI send AppleEvents to itself."

    Bam.

    In one swoop, Apple software doesn't just become scriptable, but it also becomes recordable. The OS can just look at the events an application is sending to itself, and done! And you can finally compete with Microsoft Office's Macro Recorder, because you have the same thing-- but it's SYSTEM-WIDE.
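    Boiled down to a toy sketch (Python here purely for illustration; this is just the shape of the pattern, not Apple's actual API): the menu handler doesn't do the work itself, it sends a named event through a dispatcher, and a recorder can watch that same stream for free.

    ```python
    class EventBus:
        """Toy 'app sends events to itself' dispatcher with built-in recording."""
        def __init__(self):
            self.handlers = {}
            self.recording = []          # every event is also a replayable script step

        def register(self, name, handler):
            self.handlers[name] = handler

        def send(self, name, **params):
            self.recording.append((name, params))
            return self.handlers[name](**params)

    app = EventBus()
    app.register("open", lambda path: print("opening", path))
    app.register("print", lambda copies: print("printing", copies, "copies"))

    # a GUI click handler would call send() instead of doing the work directly
    app.send("open", path="report.txt")
    app.send("print", copies=2)

    print(app.recording)   # [('open', {'path': ...}), ('print', {'copies': 2})]
    ```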



  • @blakeyrat said:

    The real cleverness of AppleEvents/AppleScript is when Apple was pitching this to developers, they were like:

    "Here's the deal: to get Apple Certified, which still matters because it's 1992 and that's how you get into Egghead Software and Computer City, you gotta implement at least the 4 required AppleEvents: Open, Save, Print, Quit. Oh, and, BTW, since you gotta implement the AppleEvent handling code anyway, you'd save time just having your native GUI send AppleEvents to itself."

    Bam.

    In one swoop, Apple software doesn't just become scriptable, but it also becomes recordable. The OS can just look at the events an application is sending to itself, and done! And you can finally compete with Microsoft Office's Macro Recorder, because you have the same thing-- but it's SYSTEM-WIDE.

    So, basically, it'd be like having something that could sit in the message pump of a Windows app, record streams of Windows messages, and play them back to the app. Now, why doesn't Windows have this?
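    (You can fake the playback half from outside with pywin32 today; a crude sketch, assuming classic Notepad is open, and the window and class names here are purely illustrative. What's missing is the system-wide recording side.)

    ```python
    import win32con
    import win32gui

    notepad = win32gui.FindWindow("Notepad", None)           # top-level Notepad window
    edit = win32gui.FindWindowEx(notepad, 0, "Edit", None)   # its text control

    for ch in "replayed, one message at a time":
        win32gui.PostMessage(edit, win32con.WM_CHAR, ord(ch), 0)   # fake keystrokes
    ```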

    @blakeyrat said:

    Then when push came to shove, the Taligent project failed miserably and they had no "next gen" OS ready about 5 years after they needed one-- then Windows 2000 came out and smashed them, technically.)

    Technically speaking, Classic Mac was a disaster compared to even the early versions of Windows NT. It just took NT a while to come close to doing anything reasonable on other fronts. (W2k was just the NT lineage's coming-out-of-the-enterprise-broom-closet party, followed by XP, which finally drove a stake into the heart of Windows being a 32-bit patch to a...you know how the line goes, right?)



  • @tarunik said:

    So, basically, it'd be like having something that could sit in the message pump of a Windows app, record streams of Windows messages, and play them back to the app. Now, why doesn't Windows have this?

    It's a little more to it than that, but that's the gist. It also had a defined format, so app-defined Events could be understood by the scripting engine.

    (For example, Photoshop probably has/had a "Radial Blur" AppleEvent, with associated required fields, that an application like, say, MS Word would never have or need.)

    As to why that doesn't exist on Windows, I don't know. I get the sense that VBScript/JScript was originally intended to work in such a way. My guess is like everything else in Windows, application developers did everything fucking wrong and awful and basically ruined it for Microsoft.



  • @tarunik said:

    Technically speaking, Classic Mac

    Nobody using Classic Mac gave a shit about "technical details". There's a philosophy at work here. We didn't care that it was cooperative multitasking, we cared that the multitasking worked. Which it did, on-par with Windows OSes (better in many ways-- none of the HD thrashing that Win 95 and 98 had) up until the NT kernel became mainstream.



  • @blakeyrat said:

    We didn't care that it was cooperative multitasking, we cared that the multitasking worked. Which it did, on-par with Windows OSes (better in many ways-- none of the HD thrashing that Win 95 and 98 had) up until the NT kernel became mainstream.

    I agree that Classic Mac did a better job of multitasking than W9x, but that was because W9x was an even bigger pile of technical bollocks!

    The big thing that Classic Mac never was, and never could be, though, was multi-user. And that is what made its technical foundations rot out from under it with the rise of internetworking, and of security as a growing concern.



  • @tarunik said:

    The big thing that Classic Mac never was, and never could be, though, was multi-user. And that is what made its technical foundations rot out from under it with the rise of internetworking, and of security as a growing concern.

    True. It also was a big contributor to what killed-off BeOS. That, and BeOS' pig-headed refusal to sell to Apple because of some fantasy that Sun or IBM would swoop in and buy them... hah!

    BTW the alternative universe where Apple bought BeOS instead of NeXT? It's a utopia.



  • @blakeyrat said:

    @ScholRLEA said:

    As I've said before, I've always liked Wirth's solution as used in Oberon: make it possible to take any piece of text, select it, compile it on the fly, and run it as a script. You could make a button just by selecting the code you want it to run and bind it to a point in the display. Simple.

    That sounds like a terrible idea.

    Could you give more detail on this, please? It is something I intend to incorporate into ThelemaOS, and if it is something I should avoid, I would like to know why.

    Mind you, I do not intend this for day-to-day use, nor would I take as naive an approach as Oberon OS did. It is part of the development system, primarily, and is used to replace the need for separation of applications. One of the principles of the Thelema UI design is that at least 90% of the user work is done in 'layouts', overlays which modify or extend the base document creation/display/editing system, rather than independent applications. The layouts in turn would be constructed from these scripts, buttons and so on. Different types of documents could have one or more layouts the user could choose from to work on and display the document. The goal is that you would have a unitary interface which could be switched around, and tools which could inter-work with each other to build the particular user interfaces needed without having to write a full application. While it would be available to the user, it would be mainly a layout developers' tool.
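    To make the bare mechanism concrete (this is just the select-compile-run idea in a toy Python/Tkinter sketch, emphatically not the Thelema design):

    ```python
    import tkinter as tk

    root = tk.Tk()
    text = tk.Text(root, width=60, height=15)
    text.pack()

    def run_selection():
        try:
            snippet = text.get("sel.first", "sel.last")   # whatever text is selected
        except tk.TclError:
            return                                        # nothing selected
        exec(compile(snippet, "<selection>", "exec"), {})  # compile it on the fly and run it

    tk.Button(root, text="Run selection", command=run_selection).pack()
    root.mainloop()
    ```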


  • BINNED

    I'll just leave this here:


  • 🚽 Regular

    @blakeyrat said:

    CADT

    Center for Alcohol and Drug Treatment? Define your acronyms!

    I'm guessing it's Cascade of Attention-Deficit Teenagers.



  • @Zecc said:

    I'm guessing it's Cascade of Attention-Deficit Teenagers.

    Correct.



  • @ScholRLEA said:

    Could you give more detail on this, please?

    Maybe you should expand on it first. All I saw was, "any piece of code that's in any editable text control can be selected, compiled, and executed in-place". Which sounds stupid as shit.

    Unless you're making an environment for big brained geeknerds to do geeknerdy stuff for other geeknerds, in which case you should ignore all my opinions because to me that's basically every circle of hell combined into one big-ass hellsphere.

    @ScholRLEA said:

    One of the principles of the Thelema UI design is that at least 90% of the user work is done in 'layouts', overlays which modify or extend the base document creation/display/editing system, rather than independent applications.

    Yeah; the OpenDoc model failed. I already named-dropped that shit.

    The Microsoft version, OLE, does work in Microsoft Office, probably because a single company controls all relevant applications. You leave the Office and IE ecosystem, and it's an utter failure.

    @ScholRLEA said:

    The layouts in turn would be constructed from these scripts, buttons and so on. Different types of documents could have one or more layouts the user could choose from to work on and display the document. The goal is that you would have a unitary interface which could be switched around, and tools which could inter-work with each other to build the particular user interfaces needed without having to write a full application. While it would be available to the user, it would be mainly a layout developers' tool.

    Make a YouTube, I can't visualize what you're talking about here. Now it's sounding more like HyperCard.



  • @dkf said:

    A usable interface for a developer is something like a highly logical and well designed API, or a clear and powerful IDE. It can reasonably demand some learning to use well; developers are usually at least power users.

    An interface aimed at end users is very different, especially as it is rare that those end users will consider it important to actually learn in depth what you're doing. This means that the developer of the code concerned has to pay a lot more attention to how to present things to users, how to guide them through the decisions that need to be taken and what decisions ought to be concealed entirely through the use of auto-detection and sensible defaults (unless someone self-identifies as wanting this extra level of detail).

    The trouble with arguing this with @blakeyrat is that, AFAICT, he insists that there should be no difference between developers and end users, because everyone should be able to be a developer. All tools should be so simple and easy to use that your grandmother can debug an ecommerce web application or administer Oracle Server as easily as she can find Grumpy Cat videos on YouTube. A bit of hyperbole, perhaps, but that seems to be the utopia he dreams of.



  • @blakeyrat said:

    Make a YouTube, I can't visualize what you're talking about here.

    That's actually an excellent idea. It may take me a bit of time, but I'll get started on it ASAP.

    @blakeyrat said:

    Yeah; the OpenDoc model failed. I already named-dropped that shit.

    The Microsoft version, OLE, does work in Microsoft Office, probably because a single company controls all relevant applications. You leave the Office and IE ecosystem, and it's an utter failure.


    While I can see the comparison to those on a rough level, it sort of reverses the relationship. Instead of having a tool or protocol for communicating data from one stand-alone application to another, this approach - which comes in part out of Project Xanadu, in part from Emacs, and in part from the early Smalltalk user environment - has a single 'application' and a flurry of tools that can be applied to it. This by itself would be both good and bad - it would be powerful, but just too chaotic and confusing to use the tools loosely without some sort of structure - so the tools can be bound to buttons, menus, and so forth. More importantly, they can be bound into a kind of viewport or layout (called a vision in Thelema terminology) which can structure both the tools that can be used and the data being viewed.

    An important part of the idea is that, as in Xanadu (whose technology I mean to license, if I can, even though I'd be re-writing the code to suit the overall system), data is not stored in files, but as hypertextual chunks (containing metadata about the creator, the chunk size, its original location, what locations it is stored at, and visibility limits, among other things), and can be organized in various ways through parallel linkages. So if you have a set of documents comprising data about, say, sports team averages, you could view it as a written text, as a spreadsheet, as a source for a pie chart's layout, among other things. It isn't data being transferred between applications; it is the same data, unchanged and immutable, just viewed differently. A possible, if somewhat weak, comparison might be to views of a database, where the same data in one or more tables is re-organized for convenient use, but the underlying data remains the same.

    So, a layout is really two things: a view into the data, allowing it to be re-arranged, added to, and edited (though the original, again, isn't changed - quotes of parts are held in links which define how much and where in the original the quote is taken), and a scaffolding for the tools commonly associated with that view.
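    If it helps at all, here is a deliberately naive Python sketch of the chunk/link/view relationship; the names are mine, invented for illustration, and not Thelema's actual terms:

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass(frozen=True)
    class Chunk:                 # immutable once written, like a Xanadu document piece
        author: str
        text: str

    @dataclass
    class Link:                  # a quote: which chunk, and what span of it
        chunk: Chunk
        start: int
        end: int
        def resolve(self) -> str:
            return self.chunk.text[self.start:self.end]

    @dataclass
    class View:                  # a 'layout': an arrangement of links, never copies
        links: List[Link] = field(default_factory=list)
        def render(self) -> str:
            return "".join(link.resolve() for link in self.links)

    stats = Chunk("alice", "Team A: 0.512  Team B: 0.497")
    report = View([Link(stats, 0, 13)])    # the report quotes the data in place
    print(report.render())                 # -> "Team A: 0.512"
    ```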



  • @HardwareGeek said:

    The trouble with arguing this with @blakeyrat is that, AFAICT, he insists that there should be no difference between developers and end users, because everyone should be able to be a developer.

    Yes, but what people here on this forum don't get is that that's the ideal.

    People don't seem to know what the word "ideal" means, which is a communication gap I experience pretty often.

    @ScholRLEA said:

    So, a layout is really two things: a view into the data, allowing it to be re-arranged, added to, and edited (though the original, again, isn't changed - quotes of parts are held in links which define how much and where in the original the quote is taken), and a scaffolding for the tools commonly associated with that view.

    Ok; I'm an office drone writing a report on fleet fuel economy. How does this feature help me?

    You're SOOOO fucking far into this abstraction world of abstract abstractness that I can't relate ANYTHING you're saying to ANY actual real-world tasks people might want to do. You sound like fucking Ray Ozzie trying to explain-- well anything really.



  • @blakeyrat said:

    You're SOOOO fucking far into this abstraction world of abstract abstractness that I can't relate ANYTHING you're saying to ANY actual real-world tasks people might want to do.

    Unfortunately, you're right, and you're not the first person to point it out. I need to work on that, and seriously, as it is a major impediment to - well, almost everything. Not really sure where to begin, though.



  • Start with the problem, not the solution. WHAT IS THE PROBLEM YOU ARE TRYING TO SOLVE!


  • Garbage Person

    For what it's worth, I agree with Blakey. Entirely. System 7.5 was the pinnacle of OS and UI design. It was my first actual OS (everything to that point was self contained) and I was basically a power user instantaneously.

    All developer tools are crap.
    Most developers are crap.
    This shit should not be this hard.

    I'm just less vocal about it because I'm lazy and he rants so nicely.



  • @blakeyrat said:

    Yes, but what people here on this forum don't get is that that's the ideal.

    Perhaps it is, but I'm not at all sure I agree. Even if it were achievable, which I think is very, very unlikely, I suspect it would not be the utopia you envision.

    Some people are simply incapable of breaking a solution into a sequence of logical steps that can be implemented, if they can even define the problem in the first place. These people are not going to develop anything useful, no matter how simple and easy to understand the development tools may be. Even a perfect, completely automated CASE tool, if such a thing existed, would be useless if the user can't define the problem he's trying to solve.


  • ♿ (Parody)

    @blakeyrat said:

    Yes, but what people here on this forum don't get is that that's the ideal.

    @HardwareGeek, it's more like talking to a communist, who can't grasp that his schemes won't work no matter how slick the planner's tools are; you'd have to have people who aren't anything like people really are.


  • ♿ (Parody)

    @Weng said:

    For what it's worth, I agree with Blakey. Entirely. System 7.5 was the pinnacle of OS and UI design.

    It was all downhill after a two-key combination to produce LOAD "*", 8, 1.



  • @boomzilla said:

    you'd have to have people who aren't anything like people really are.

    Yes, see my second paragraph in the post right above yours.


  • ♿ (Parody)

    @HardwareGeek said:

    Yes, see my second paragraph in the post right above yours.

    Yes, I saw it after I posted. Which is the best time, because my post wasn't contaminated, and yet we reached similar conclusions, but mine was more trollish.



  • @HardwareGeek said:

    Some people are simply incapable of breaking a solution into a sequence of logical steps that can be implemented, if they can even define the problem in the first place. These people are not going to develop anything useful, no matter how simple and easy to understand the development tools may be. Even a perfect, completely automated CASE tool, if such a thing existed, would be useless if the user can't define the problem he's trying to solve.

    Ok. So what?

    People don't have to write software if they don't want to. Maybe people just won't be any good at it. Who knows. I'm not proposing we hold a gun against some old lady's head and say, "write a word processor or I pull this trigger!"

    The point I'm getting at is, anybody who wants to be able to should be able to.



  • @blakeyrat said:

    The point I'm getting at is, anybody who wants to be able to should be able to.

    The point I'm getting at is, some people may want to be able to, but actually be able to produce nothing but WTF. There is more than enough WTF software out there without enabling stupid people to write more.

    I don't disagree with you entirely, but I don't think tools that require some effort to learn are unreasonable.

    Enable a domain knowledge expert to write non-WTF software in his field with minimal effort. Great. He'd probably write it anyway, and allowing him to do it with less WTF is a good thing.

    Enable Grandma Ida to write something to search YouTube for Grumpy Cat. Um, maybe. Probably harmless even if it isn't very well thought-out.

    Enable PHB to write a bug tracking system that is code-correct but imposes an insane workflow on his underlings. Not such a great idea. The problem here, of course, is that lowering the bar to Dr. Physicist and Grandma Ida lowers the bar for idiot PHBs, too. I don't see a way around this; sucks to be an underling, I guess.

    @blakeyrat said:

    People don't have to write software if they don't want to. Maybe people just won't be any good at it.

    The problem I see is the people who want to, but just aren't any good at it. The worst will probably be excluded by even the easiest-to-use tools I can imagine, because they are simply not good enough at logical thinking to understand what the tools are trying to help them do. A level above that are code monkeys. They, of course, exist in large numbers even without the kind of tools you envision to hold their hands; I see a lot more of them in your world. You have yet to persuade me this is a good thing.



  • @blakeyrat said:

    The point I'm getting at is, anybody who wants to be able to should be able to.

    I want to be an astronaut but I'm too tall. :'(

    @HardwareGeek said:

    Enable a domain knowledge expert to write non-WTF software in his field with minimal effort. Great. He'd probably write it anyway, and allowing him to do it with less WTF is a good thing.

    This is okay as long as this imaginary magic pixie dust platform has mind-reading abilities so that no inefficient algorithms are selected, the expert has no biases which might lead to a suboptimal solution, and the platform selects an appropriate strategy, sometimes in defiance of what the expert thinks he knows, to accomplish the solution.

    @HardwareGeek said:

    Enable Grandma Ida to write something to search YouTube for Grumpy Cat. Um, maybe. Probably harmless even if it isn't very well thought-out.

    If the magical pixie dust platform produces something that opens a browser window to YouTube with a search, then it has met its requirements.

    @HardwareGeek said:

    Enable PHB to write a bug tracking system that is code-correct but imposes an insane workflow on his underlings. Not such a great idea.

    Hey, if it makes the PHB happy, it is!

    @HardwareGeek said:

    The problem I see is the people who want to, but just aren't any good at it.

    It also takes a non-trivial amount of time to master a trade. And yes, I am referring to software development.



  • @Groaner said:

    Hey, if it makes the PHB happy, it is!

    The underlings might disagree.



  • @HardwareGeek said:

    The underlings might disagree.

    The needs of the many outweigh the needs of the few, etc.? I guess you Kant please everyone.


  • Discourse touched me in a no-no place

    @HardwareGeek said:

    Some people are simply incapable of breaking a solution into a sequence of logical steps that can be implemented, if they can even define the problem in the first place.

    Case in point: one of our software products is a scientific workflow programming environment. We've got what is effectively an IDE for workflows and a server for executing them, and we target supporting a wide range of data-intensive sciences. Our target users are intelligent people with at least a science-based degree of some kind — they are definitely always going to be above the population average for smarts — but a significant fraction of them (not a majority) simply find doing anything other than the most basic of do-this-then-do-that to be too hard.

    God help you if you want to explain real asynchronous processing to them. If someone gets that, they can handle being a software developer and using normal developer tools. They might think the tools suck, and they might be right, but they can still cope. They have the right mind for programming. Most people don't, at least when you're talking about adults in the past 10 years. (I've no idea whether it would be different with people who had learned to do simple programs in early grade school, when the brain is a lot more plastic than in adulthood. I guess that could be considered to be an experiment that is running now…)


  • BINNED

    @HardwareGeek said:

    The problem I see is the people who want to, but just aren't any good at it. The worst will probably be excluded by even the easiest-to-use tools I can imagine, because they are simply not good enough at logical thinking to understand what the tools are trying to help them do. A level above that are code monkeys. They, of course, exist in large numbers even without the kind of tools you envision to hold their hands; I see a lot more of them in your world. You have yet to persuade me this is a good thing.

    A simple "Like" isn't enough for this post so have a 🍺 .



  • @HardwareGeek said:

    Some people are simply incapable of breaking a solution into a sequence of logical steps that can be implemented, if they can even define the problem in the first place. These people are not going to develop anything useful, no matter how simple and easy to understand the development tools may be. Even a perfect, completely automated CASE tool, if such a thing existed, would be useless if the user can't define the problem he's trying to solve.

    Yes. They have neither a developed logical mind capable of doing this chunking, nor the structural intelligence needed to discipline themselves into being able to chunk the problem. (Not saying they can't be smart in other ways, but socioemotional smarts don't help you define a problem in a way that can be solved by a metal box full of silicon chips. I'd say they even hinder it, as they impair one's ability to come up with an accurate mental model of said metal box's operation.)

    Considering that that chunking task (I call it 'factoring' from having some FORTH knowledge, but it's really the same thing no matter what you call it) is the core of all of software design, some people really shouldn't be programmers no matter how hard you try to force them into that hole.

