Microsoft still hasn't figured it out.



  • @Arnavion said:

    For example, a KDE upgrade...
     

    And that's the point where you deviate from all those people. First, a computer with months of uptime usually does not have a GUI installed. GUIs are for desktops, and desktops can be restarted at will (but don't make it mandatory, ok). Second, KDE is not "a regular library". Your DE is second only to your libc in how hard it is to upgrade.

    That said, you don't need to restart your computer after upgrading your DE. You just need to restart the DE. Which is painful enough, but we are talking about restarting after software installation, not after upgrading the most central pieces of the system.



  • @Cassidy said:

    I'm also struggling to understand what a "read lock" is - how does this differ to a "write lock"...?
     

    You acquire a read lock when you want to read data. Several processes can hold read locks at the same time. You acquire a write lock when you want to change data. While a process holds a write lock on some resource, no other process can hold any kind of lock (read or write) on the same resource.

    Replace "process" with "thread", "worker", "agent" or whatever working unit your parallel architecture supports.
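    The shared/exclusive behaviour is easy to see in action. A minimal sketch, assuming Linux's flock(1) from util-linux (any reader-writer lock behaves the same way): two readers hold the shared lock concurrently, while the writer's exclusive lock has to wait for both of them to finish.

```shell
lockfile=$(mktemp)
out=$(
    # Two readers take the shared (read) lock at the same time:
    ( flock -s 9; echo "reader 1 in"; sleep 1; echo "reader 1 out" ) 9>"$lockfile" &
    ( flock -s 9; echo "reader 2 in"; sleep 1; echo "reader 2 out" ) 9>"$lockfile" &
    sleep 0.2
    # The writer's exclusive (write) lock blocks until both readers release:
    ( flock -x 9; echo "writer in" ) 9>"$lockfile" &
    wait
)
printf '%s\n' "$out"
```

    Both "reader … in" lines appear before "writer in", because the exclusive lock cannot be granted while any shared lock is held.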



  • @Mcoder said:

    @Arnavion said:

    For example, a KDE upgrade...
     

    And that's the point where you deviate from all those people. First, a computer with months of uptime usually does not have a GUI installed. GUIs are for desktops, and desktops can be restarted at will (but don't make it mandatory, ok). Second, KDE is not "a regular library". Your DE is second only to your libc in how hard it is to upgrade.

    That said, you don't need to restart your computer after upgrading your DE. You just need to restart the DE. Which is painful enough, but we are talking about restarting after software installation, not after upgrading the most central pieces of the system.

    • This thread has been about desktops all along. Why are you talking about non-desktops?
    • The words "For example" mean that what I gave is an example. In fact, nothing I said applies only to DEs. The reason I used KDE as an example is because KDE applications share a lot of libraries, but this is true of any software suite which shares libraries between applications.
    • I already mentioned that the KDE example is resolved by logging out and back in, and in the generic case (aka non-DE case) it at least requires restarting the process in question to be able to observe the effects of the update.


  • @joe.edwards said:

    Escaping perfectly is damned tricky.
    Only because *nix is retarded beyond belief. I don't care if it's 2013 or 1963, if you are a programmer, and you create an operating system that will let you put control characters in filenames, you should be shot in the face with a hammer.



  • @RangerNS said:

    Oh, except that there is no (easy?) way to know which processes have that write-lock in place, and/or no easy way to get those processes to release the lock. Thus, this problem. Windows has some extra complicated "schedule this file to change later" process, used (especially) for installs/upgrades, with processes actually being scheduled at reboot.

     The Restart Manager API lets you see which processes have a given file open.

     



  •  This is partly because of the .NET 4.5 upgrade. Unlike all previous versions, which installed side by side, this is an in-place update [when it is done, the original 4.0 does not exist any more]. Therefore a reboot was necessary.



  • @Cassidy said:

    I'm also struggling to understand what a "read lock" is - how does this differ to a "write lock"...?

    A file can be locked for reading if it is not locked for writing.

    A file can be locked for writing if it is not locked for reading AND it is not locked for writing.



  • @TGV said:

    @skotl said:
    And, assuming that the installer needs to overwrite files that are in use by another app, imagine how all twisted your underwear would get if it started closing all these apps for you, so it could overwrite the files? You'd be on here getting all mildly-moist and shouting "...and it closed WORD! And CHROME! And Outlook! And ((shudder)) STEAM! BASTARDS!"
    Still, why should an IDE replace widely used DLLs?

    So you don't have to go through the annoying process of installing a Java IDE, which is either:

    1. Install IDE
    2. Run IDE
    3. Find out you're missing Java SDK
    4. Go google Java SDK
    5. Download Java SDK
    6. Manually install Java SDK
    7. Dig into IDE options, preferences, settings, paths, SDK, or wherever the setting is, and manually locate & set Java SDK path

    or

    1. Go google Java SDK
    2. Download Java SDK
    3. Manually install Java SDK
    4. Install IDE
    5. Hope IDE installer is smart enough to locate and set Java SDK path automatically
    6. If not, dig into IDE options, preferences, settings, paths, SDK, or wherever the setting is, and manually locate & set Java SDK path

    I don't know about you, but to me, a process consisting of:

    1. Install IDE
    2. Possibly maybe restart in about 10% of the cases

    seems much more user-friendly.


  • Discourse touched me in a no-no place

    @El_Heffe said:

    But not every file that is being used by one or more processes has an open window (most don't). And that's the problem.

    Raymond Chen

    Did you only read the first part of the article and ignore the rest which explains why the bit you did quote is no longer true?


  • Discourse touched me in a no-no place

    @Arnavion said:

    Too bad the "underlying runtime" dependency means Lunitards would rather bash it than understand its usefulness.
    If I was going to bash that, I'd start with the apparent inconsistency as to whether commas are used to separate arguments or not. Fix one problem, introduce another. “Wonderful.”

    Real programmers try to avoid writing much code actually using the Unix shell; it's possible, but it's hard. Not as bad as using CMD.EXE or the long-obsolete COMMAND.COM — yes, I've done that in the past — but still damn tricky. Yet for most interactive work, the Unix shell is pretty reasonable; that is its primary use-case. The only major programs I've seen using the Bourne shell have been generated using the autoconf suite, and that's only because sometimes you've got to bootstrap from virtually nothing. (Either that or cross-compile, which is even more annoying.)



  • @dkf said:

    If I was going to bash that, I'd start with the apparent inconsistency as to whether commas are used to separate arguments or not.

    There is no inconsistency. Spaces separate multiple parameters - the function I gave has two parameters. The comma is to separate elements of an array - the array itself is a single parameter for the function. That said, the thing I was showing in that snippet was actually just the line with the comment, where I take a string I received from somewhere else (say) and execute it as the name of a program, without needing to worry about whether the string contains spaces or not.

    @dkf said:

    Yet for most interactive work, the Unix shell is pretty reasonable; that is its primary use-case. The only major programs I've seen using the bourne shell have been generated using the autoconf suite, and that's only because sometimes you've got to bootstrap from virtually nothing.

    My first ever contribution to an OSS project was to fix an autoconf include file that did

    if test "x$foo"="x$bar"
    instead of
    if test "x$foo" = "x$bar"
    The former is always true regardless of the values of $foo and $bar. It boggles my mind that shells (*) can be so retarded. I of course use the shell somewhat frequently for simple tasks, for which, as you said, it is pretty reasonable. But there are things that just stink. Take the magic syntax "$@" commonly used in wrapper scripts (take in some parameters and pass them to another script). $@ by itself represents the array of all parameters, but the "" around it are necessary, otherwise the shell will *re-parse* the array and thus split it on spaces *again*. A design that requires magic incantations like this is sheer stupidity, which is why I like how PowerShell has types. With just strings and arrays, you never have to worry about whether you need to quote something or not.

    (*) Well, the test binary and not the shell, but I'm ranting about the whole ecosystem here. Also, I think test is usually a shell-builtin anyway.

    Edit: Also, regarding bootstrapping - Yes of course, a PowerShell analogue is useless as a bare minimum shell. If your system can't even boot to runlevel 3 it's unlikely you'll be able to get a runtime running that can support something like PS. Even on Windows PS can't replace cmd.exe for low-level purposes like the shell on the install CD.
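    For the record, the "$@" behaviour being complained about is easy to demonstrate. A minimal bash sketch (show_args is a throwaway helper, not anything standard):

```shell
# show_args prints each argument it receives on its own line, bracketed
show_args() { printf '<%s>\n' "$@"; }

set -- "one arg with spaces" second
echo 'unquoted $@ (re-split on whitespace):'
show_args $@          # 5 lines: the first argument shatters into 4 words
echo 'quoted "$@" (each argument forwarded intact):'
show_args "$@"        # 2 lines: arguments pass through unchanged
```

    This is exactly the wrapper-script case: only the quoted form forwards the caller's arguments without mangling them.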



  • @Arnavion said:

    The former is always true regardless of the values of $foo and $bar. It marvels my mind that shells (*) can be so retarded.

    That's because they're not implemented the way people expect programming languages to be. Instead, they're based on string rewriting tricks (partly due to the Unix philosophy being basically "everything must always be treated as a raw string"), which inevitably results in problems like these.

    If I made a scripting language, it would probably be a mix of Python and Bash.



  • @Arnavion said:

    The components that have to be written to are core OS components. The resolution is to restart the OS.
    Very well, but that still doesn't answer the question why installing an IDE has to replace core OS components.

    Ok, perhaps debugging services were not in place for some weird reason. Then nobody can be using them, right? And otherwise, tell the person who's installing: you can use VS right now, but until you restart you cannot debug/bla/bla. Much crisper.



  • @TGV said:

    but that still doesn't answer the question why installing an IDE has to replace core OS components.
    It shouldn't have to. An IDE/Compiler/Whatever could be (should be?) an independent stand-alone program. But that is not how Microsoft chose to design it. They seem to want everything to be interwoven into the operating system, similar to how the Windows desktop, Start Menu and Internet Explorer are all part of Windows Explorer.



  • @Salamander said:

    @Ben L. said:
    Since Australium's last reboot, I've uninstalled the kernel, changed the distribution from testing to unstable, installed a new kernel, and updated all my software several times. All over ssh.

    You realise you still need to do a reboot for those kernel changes to take effect, right?

    Depends how it's done. There are ways.


  • Discourse touched me in a no-no place

    @anonymous235 said:

    If I made a scripting language, it would probably be a mix of Python and Bash.
    You mean it would be like a slightly retarded version of Ruby? The sane semantics of Python with the speed of Bash. Bleah.



  • @Arnavion said:

    $@ by itself represents the array of all parameters, but the "" around it are necessary, otherwise the shell will re-parse the array and thus split it on spaces again.

    Not actually what happens. There is no "re-parsing". What happens after the expansion of $@ is the same thing that happens after the expansion of $anything: word splitting followed by filename expansion (globbing) followed by quote removal.

    The bash manual is perfectly clear about the order in which these expansion steps are performed, and if you haven't committed that expansion process to memory you've no business writing shell scripts that other people might have to use.

    Standard POSIX shells support literal character quoting (with a \ prefix) and two kinds of string quoting: single quotes turn off all expansions for material between the quotes, while double quotes turn off only word splitting and filename expansion. Bash also offers the $'quoted' form, which works like single quoting except that C-style character escapes such as \n and \t get converted to the corresponding control characters, and $"quoted" that does something clever with locales that I've never needed to use.
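    A minimal bash illustration of those quoting forms (the variable names here are throwaway):

```shell
var="hello world"
single=$(echo '$var')            # single quotes: no expansion at all
double=$(echo "$var")            # double quotes: expansion, but no word splitting
escaped=$(printf '%s' $'a\tb')   # $'...': the C-style \t becomes a real tab
echo "single: $single"           # prints the literal string $var
echo "double: $double"           # prints hello world, as one word
```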

    @Arnavion said:

    A design that requires magic incantations like this is sheer stupidity

    There is nothing magic about punctuation marks meaning things, and the fact that you don't like the way POSIX shells handle tokenization and punctuation doesn't make them stupid.

    @Arnavion said:

    I like how PowerShell has types. Just using strings and arrays means you'll find you never have to worry about whether you need to quote something or not.

    You don't need to "worry" about it in bash either: just quote anything you don't want globbed and word-split, and you can be sure that bash will hand it off to whatever you're invoking as a single argv[] string.

    Bash has both indexed and associative (hashmap) arrays too, if you need them.

    PowerShell has a comparably arcane set of punctuation rules for variable expansion in string literals; in fact much of it seems to have been lifted straight from Bourne shell, so I'm guessing Jeffrey Snover disagrees with you about its stupidity.

    @Arnavion said:

    My first ever contribution to an OSS project was to fix an autoconf include file that did

    if test "x$foo"="x$bar"
    instead of
    if test "x$foo" = "x$bar"
    The former is always true regardless of the values of $foo and $bar. It boggles my mind that shells (*) can be so retarded.

    The shell's input tokenization rules are a little unusual compared to many other languages, in that only a very restricted set of non-alphabetic characters marks token boundaries, but I wouldn't go so far as to call them "retarded" because they're applied consistently by a competent parser. "Retarded" is a description I personally reserve for the not-really-a-parser at the heart of cmd, and the incredible amount of derp that's been bolted on to work around its deficiencies. I shudder to imagine how ugly PowerShell would have been if cmd, rather than Bourne shell, had been its main syntactical influence.



  • @flabdablet said:

    There is no "re-parsing". What happens after the expansion of $@ is the same thing that happens after the expansion of $anything: word splitting followed by filename expansion (globbing) followed by quote removal.

    AKA re-parsing.

    @flabdablet said:

    The bash manual is perfectly clear about the order in which these expansion steps are performed, and if you haven't committed that expansion process to memory you've no business writing shell scripts that other people might have to use. <Paragraph about how quoting works>

    I don't need the lesson since I've already demonstrated I know how it works. I'm calling it out on being stupid because I know how it works.

    @flabdablet said:

    There is nothing magic about punctuation marks meaning things, and the fact that you don't like the way POSIX shells handle tokenization and punctuation doesn't make them stupid.

    Give me one genuine use-case where you want the behavior of $@ instead of "$@". In case you actually have one (I can't think of any, but what do I know), convince me that that behavior is also so important that it should be the default way of implementing $@, so that the other use-case of "pass all arguments unchanged" has to be associated with the longer, more complicated, remember-you-have-to-quote-everything-otherwise-it-won't-work version that is "$@".

    @flabdablet said:

    You don't need to "worry" about it in bash either: just quote anything you don't want globbed and word-split, and you can be sure that bash will hand it off to whatever you're invoking as a single argv[] string.

    Of course. Because having a default behavior that nobody wants is perfectly sane and not stupid at all.

    @flabdablet said:

    PowerShell has a comparably arcane set of punctuation rules for variable expansion in string literals; in fact much of it seems to have been lifted straight from Bourne shell, so I'm guessing Jeffrey Snover disagrees with you about its stupidity.

    I'm not sure why we're talking about variable expansion now, since I never complained about it in Bash and I do know it works almost the same way as in PS. I also don't find either one's rules "arcane".

    @flabdablet said:

    The shell's input tokenization rules are a little unusual compared to many other languages, in that only a very restricted set of non-alphabetic characters marks token boundaries, but I wouldn't go so far as to call them "retarded" because they're applied consistently by a competent parser.

    Your link has nothing to do with how the test executable (or built-in) should interpret its input.

    @flabdablet said:

    "Retarded" is a description I personally reserve for the not-really-a-parser at the heart of cmd, and the incredible amount of derp that's been bolted on to work around its deficiencies. I shudder to imagine how ugly PowerShell would have been if cmd, rather than Bourne shell, had been its main syntactical influence.

    I agree cmd's parser is retarded also.



  • @TGV said:

    Very well, but that still doesn't answer the question why installing an IDE has to replace core OS components.

    Yeah man, it's not like the thread has already mentioned it multiple times or anything.



  • @El_Heffe said:

    They seem to want everything to be interwoven into the operating system, similar to how the Windows desktop, Start Menu and Internet Explorer are all part of Windows Explorer.

    The Windows desktop and the Start Menu should of course be a part of Windows Explorer, since Explorer is the equivalent of the DE you'd install on *nix. What you probably meant to say is that the Explorer shell is interwoven into the OS, but even that can be replaced, although I've never tried the alternatives or seen them in action.

    IE is not a part of Explorer. A part of it (the rendering engine) is part of the OS for use in other in-built things. IE can be uninstalled, which removes the IE binary but of course leaves the rendering engine DLLs behind, for that reason.



  • @Arnavion said:

    @TGV said:
    Very well, but that still doesn't answer the question why installing an IDE has to replace core OS components.

    Yeah man, it's not like the thread has already mentioned it multiple times or anything.

    Oh yeah, sorry man. Your answer was the wisest. How dare I question its relevance to the topic starter.



  • @Arnavion said:

    @flabdablet said:
    There is no "re-parsing". What happens after the expansion of $@ is the same thing that happens after the expansion of $anything: word splitting followed by filename expansion (globbing) followed by quote removal.
    AKA re-parsing.

    No, it's not "re-parsing". The script parser's rules for breaking script into tokens are not reapplied. A well specified set of expansions is applied after initial tokenization.

    @Arnavion said:

    convince me that that behavior is also so important that it should be the default way of implementing $@ so that the other use-case of "Pass all arguments unchanged" has to be associated with the longer, more complicated, remember-you-have-to-quote-everything-otherwise-it-won't-work version that is "$@"

    Given that "$@" has special semantics (expanding to an unknown number of replacement tokens, rather than only one as for every other parameter expansion), and given that $* already gives you the ability to apply word splitting and globbing to the script or function's collected arguments, I can see why you might prefer having $@ do what "$@" does. Personally I think maintaining the consistency of the rule that says you always use quotes if you want to turn off word splitting and globbing was the right call. I've never been annoyed by needing to type "$@" instead of $@. I also don't see the shorter form as any kind of "default" for the longer one.

    @Arnavion said:

    I also don't find either one's rules "arcane".

    words=(apple banana catalog dormant eagle fruit goose hat icicle)
    for k in "${!words[@]}"
    do
        echo "words[$k]: ${words[k]}"
    done
    looks a little arcane to me, as does the whole Perl-inspired @ vs $ thing that PS has going on.

    @Arnavion said:

    Your link has nothing to do with how the test executable (or built-in) should interpret its input.

    The test command treats each of its arguments as a single token. That means it can handle comparing strings with arbitrary content (unless they start with a minus sign, which can cause them to be misinterpreted as operators; this is easily avoided by prefixing, as in your example). If test were able to re-tokenize its own arguments in order to parse individual arguments as testable expressions, it would also need its own string-literal quoting/escaping rules independent of those implemented in the shell. You'd need to pre-apply those rules to any arguments you passed to it, and this would, I think, cause more errors than simply needing to ensure that expressions you evaluate with test have whitespace around the operators.

    Note that the let builtin does internally tokenize its own arguments. It can do this because those arguments are arithmetic expressions, and are therefore not allowed to contain arbitrary string literals.
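    The contrast is easy to show. A small sketch using bash's let builtin (variable names are arbitrary): let re-tokenizes each of its arguments as an arithmetic expression, so spacing inside a single quoted word doesn't matter, which is exactly what test cannot safely do with arbitrary strings.

```shell
# let applies its own arithmetic tokenizer to each (quoted) argument,
# so internal spacing is free-form as long as it stays one shell word:
let "x = 2 + 3"
let "y = 10 * x"
echo "x=$x y=$y"    # prints x=5 y=50
```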

    @Arnavion said:

    I agree cmd's parser is retarded also.

    I disagree that what cmd has can accurately be described as a parser :-)



  • @RobFreundlich said:

    Getting authorization is exactly what it should do. "The following applications are using files that need to be replaced: <list>. Installation of Visual Studio will not complete until those applications are closed. Would you like the installer to close them for you?"
    The problem is that this is prone to race conditions, and since VS is made up of a bunch of components, checking all those files could take several minutes (one of my own installers, which installs roughly 5,000 files, takes about half a minute to do the check; at a quick glance, VS installs some 40,000 files).
    @ip-guru said:
    Windows has not been written to allow it because it is designed as a single-user system
    Huh? NT has been designed from the start to support multiple users.



  • @TGV said:

    @Arnavion said:

    The components that have to be written to are core OS components. The resolution is to restart the OS.
    Very well, but that still doesn't answer the question why installing an IDE has to replace core OS components.

    Ok, perhaps debugging services were not in place for some weird reason. Then nobody can be using them, right? And otherwise, tell the person who's installing: you can use VS right now, but until you restart you cannot debug/bla/bla. Much crisper.

    Visual Studio installs all the runtimes that programs written in it need to run. This includes .NET (as previously mentioned) *and* C++ components such as msvcrt.dll. Which just happen to be used in a LOT of programs.

     



  • @flabdablet said:

    No, it's not "re-parsing". The script parser's rules for breaking script into tokens are not reapplied. A well specified set of expansions is applied after initial tokenization.

    Oh, fine. Call it what you want. What I meant by calling it "re-parsing" is "does not pass (the arguments array) unchanged".

    @flabdablet said:

    Personally I think maintaining the consistency of the rule that says you always use quotes if you want to turn off word splitting and globbing was the right call.

    I actually find it weird that I have to quote to prevent splitting and globbing on anything, not just $@. I honestly can't think of any reason why someone would want their arguments to be effectively double-evaluated. In every single script where I've wanted to forward a variable to another script / executable, I've always wanted it to be passed unchanged from how I got it in the first place. Can you give a use-case where you'd want the arguments to be evaluated?

    @flabdablet said:

    If test were able to re-tokenize its own arguments in order to parse individual arguments as testable expressions,

    That is what I want, yes.

    @flabdablet said:

    it would also need its own string-literal quoting/escaping rules independent of those implemented in the shell. You'd need to pre-apply those rules to any arguments you passed to it and this would, I think, cause more errors than simply needing to ensure that expressions you evaluate with test have whitespace around the operators.

    Why would test have to do all that? If foo is a and bar is b, then test "x$foo"="x$bar" is evaluated by the shell and the resulting invocation is test 'xa=xb'. test just needs to realize that xa=xb is an expression, not a single token, and treat it in the same manner it would have treated the invocation test 'xa' '=' 'xb'. test doesn't need to know about variable expansion at all.
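    The misbehaviour under discussion is easy to reproduce. A minimal bash sketch (the values a and b are arbitrary):

```shell
foo=a; bar=b
# Without spaces, the shell expands this to the single word "xa=xb";
# test with one non-empty argument simply reports success:
test "x$foo"="x$bar" && echo "always true, even though a != b"
# With spaces, test receives three arguments and really compares:
test "x$foo" = "x$bar" || echo "correctly unequal"
```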


  • ♿ (Parody)

    @powerlord said:

    This includes .NET (as previously mentioned) and C++ components such as mscvrt.dll.  Which just happen to be used in a LOT of programs.

    I'm with you on the .NET stuff, but don't the various C++ runtimes peacefully coexist? If you're just plopping yet another runtime dll down, then nothing currently uses it, so there's no problem. And if it already exists, then you don't have to do anything, so it's no problem.


  • ♿ (Parody)

    @Arnavion said:

    test just needs to realize that xa=xb is an expression, not a single token and treat it in the same manner it would have treated the invocation test 'xa' '=' 'xb'. test doesn't need to know about variable expansion at all.

    Shit...now we have to worry about escaping equals signs?



  • @flabdablet said:

    words=(apple banana catalog dormant eagle fruit goose hat icicle)
    for k in "${!words[@]}"
    do
        echo "words[$k]: ${words[k]}"
    done
    looks a little arcane to me

    In PS:

    
    $words = 'apple', 'banana', 'catalog', 'dormant', 'eagle', 'fruit', 'goose', 'hat', 'icicle'  # Surrounding the RHS with @() is optional
    for ($i = 0; $i -lt $words.Length; $i++) {
        echo "words[$i] = $($words[$i])"
    }
    

    I think the PS way requires less voodoo with respect to the iteration, so not really arcane.

    @flabdablet said:

    as does the whole perl-inspired @ vs $ thing that PS has going on.

    PS doesn't have the concept of scalar or list contexts like Perl does. The @() operator is simply a way to make an array (and @{} makes a hashmap), nothing more.



  • @boomzilla said:

    @Arnavion said:
    test just needs to realize that xa=xb is an expression, not a single token and treat it in the same manner it would have treated the invocation test 'xa' '=' 'xb'. test doesn't need to know about variable expansion at all.

    Shit...now we have to worry about escaping equals signs?

    Huh? If you're talking about how I surrounded all the arguments with single quotes, I was merely being explicit. My point is unchanged if you don't write those single quotes.



  • @TGV said:

    Very well, but that still doesn't answer the question why installing an IDE has to replace core OS components.

    Visual Studio contains a special preflight installer that installs its prerequisites, e.g. updated .NET libraries. Visual Studio itself strongly depends on .NET for its extension manager, visual designer widgets, and even for the MSBuild engine. The .NET 4.5 framework was designed as an in-place upgrade to .NET 4.0, and while .NET 4.0 is not part of the family of core system libraries*, you may have applications or services running on your system that are making use of the .NET 4.0 framework. ATI's Catalyst Control Center has a dependency on it, for instance. Several triple-A games sold on Steam do as well. (Not that you are likely to be running those while you are installing Visual Studio; but this is just to illustrate the point that there are more applications that use .NET 4.0 than you might first realise.)

    It is certainly possible to architect a system where an installer can replace files and have any open references to libraries used by running processes retained. For instance, you could explicitly load DLLs from shadow copies, or you could use a filesystem with an inode/filename distinction. This, however, is not where the problem is. The problem is: what do you do when code has dependencies spread over multiple DLLs that have version inter-dependencies? It's not possible to cover the arbitrary case where one DLL relies on an exactly paired version of another DLL by simply retaining open references, because dependencies might be late bound and not have been opened yet.
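    The inode/filename distinction mentioned above can be sketched on any POSIX filesystem. Here an open file descriptor stands in for a process that has the "library" loaded (libdemo.so is a made-up name):

```shell
tmp=$(mktemp -d)
echo "old contents" > "$tmp/libdemo.so"
exec 3< "$tmp/libdemo.so"                   # stand-in for a process holding the library open
echo "new contents" > "$tmp/libdemo.so.new"
mv "$tmp/libdemo.so.new" "$tmp/libdemo.so"  # atomic rename; the old inode survives
old=$(cat <&3)                              # the open descriptor still sees the old data
new=$(cat "$tmp/libdemo.so")                # a fresh open sees the replacement
echo "held-open: $old / fresh: $new"
exec 3<&-
```

    The running "process" keeps reading the old version while new opens get the new one, which is exactly why single-file replacement is easy here but coordinated multi-DLL version pairing is not.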

    So, next solution; report which processes have files locked that need to be updated and ask the user if those processes can be restarted. There are two problems with that.
    The first problem is that users are stupid. The general consumer has no idea what a 'process' or a 'service' is; they'll just shrug and click 'OK' / 'Yes', which can have disastrous consequences if one of the processes being restarted is a system service (for instance, part of a graphics driver control program, or a software RAID solution housing the drive on which the OS is installed) that does not restart cleanly and takes the system down with a hard crash and potential data loss. Is it possible to protect the user from this kind of thing? Sure: don't allow them to restart system-level services and demand a reboot instead.

    The second problem is that of a race condition: in between asking for confirmation and actually getting it, another process could've taken a lock on a file. Even in between checking for locks, another process could've taken a lock. You'd need a full synchronization across all running processes on the system, blocking on a user-facing UI dialog. (I'm sure you see the problem with that, right?) Could you work around this? Sure: replace the files when the bare minimum of processes to keep a live system is running, when the OS can take full control and make processes run in lock step without parallelism, i.e. at boot time.

    Microsoft has the right idea here; updating files that are in use can only be done reliably at boot time, when the OS doesn't have to make potentially false assumptions about the state of unknown, black box processes. Better to err on the side of caution and be safe rather than sorry.
    It's the rest of the world that hasn't figured it out yet.

    *) Maybe this changed with Windows 8, but in Windows 7 only a limited subset of the .NET 3.5 framework is used as part of the core system, which is why you cannot uninstall it fully.



  • @Arnavion said:

    Why would test have to do all that? If foo is a and bar is b, then test "x$foo"="x$bar" is evaluated by the shell and the resulting invocation is test 'xa=xb'. test just needs to realize that xa=xb is an expression, not a single token and treat it in the same manner it would have treated the invocation test 'xa' '=' 'xb'. test doesn't need to know about variable expansion at all.

    What about when you're looping through an HTML file, or similar file full of "=" symbols that you'd just want treated as plaintext?

    I mean, I understand that the goal should generally be to make the Common Use Case the fastest and easiest, but you shouldn't do that at the expense of making other reasonable use cases impossible or giving them incredibly un-obvious gotchas.


  • ♿ (Parody)

    @Arnavion said:

    @boomzilla said:
    @Arnavion said:
    test just needs to realize that xa=xb is an expression, not a single token and treat it in the same manner it would have treated the invocation test 'xa' '=' 'xb'. test doesn't need to know about variable expansion at all.

    Shit...now we have to worry about escaping equals signs?

    Huh? If you're talking about how I surrounded all the arguments with single quotes, I was merely being explicit. My point is unchanged if you don't write those single quotes.

    So...how do we determine that 'xa=xb' is an expression and not a token?



  • @flabdablet said:

    unless they start with a minus sign, which can cause them to be misinterpreted as operators; this is easily avoided by prefixing as in your example
    AFAIK, the main reason parameters are prefixed by X is because of a bug in some shell (or maybe test) that doesn't like empty parameters.
    @Ragnax said:
    For instance; you could explicitly load DLLs from shadow copies or you could use a filesystem with an inode/filename distinction.
    Even easier: Windows lets you rename the libraries and .exe files of currently running programs - you just can't delete them.
    @Ragnax said:
    The general consumer has no idea what a 'process' or a 'service' is, they'll just shrug and click 'OK' / 'Yes', which can have disastrous consequences if one of the processes being restarted is a system service (for instance; part of a graphics driver control program, or a software raid solution housing the drive on which the OS is installed) that does not restart cleanly and takes the system down with a hard crash and potential data loss.
    Drivers don't run in user-space, and don't rely on those libraries; you also can't stop a driver in the same way you can stop a process (it's possible, but it requires the Service Manager APIs).



  • @Buttembly Coder said:

    What about when you're looping through an HTML file, or similar file full of "=" symbols that you'd just want treated as plaintext?

    @boomzilla said:
    So...how do we determine that 'xa=xb' is an expression and not a token?

    In PS, where test can be considered a built-in and = is -eq:

    
    $foo = 'xa -eq xb' # Suppose this is the unknown input that may contain something that PS understands.
    # Here I put it in single-quotes but in the general case you'd read it from a file. Either way, it will not be evaluated by PS at this point.
    
    $bar = 'xa' # The data you want to compare $foo to
    $baz = 'xb' # The data you want to compare $foo to
    
    if ($foo -eq "$bar -eq $baz") {
        echo '1'   # This will get output. It compares the literal strings. It does not evaluate the RHS to '$true'
    }
    
    if ($true -eq "$bar -eq $bar") {
        echo '2'   # This will get output, but not because the RHS evaluates to '$true'. Only because non-empty strings are equal to $true (The boolean $true, not the string '$true')
    }
    

    Or am I misunderstanding your questions?



  • Perhaps the conditions in that example are a little unclear. Here's a better one:

    
    $foo = 'xa -eq xb' # Suppose this is the unknown input that may contain something that PS understands.
    # Here I put it in single-quotes but in the general case you'd read it from a file. Either way, it will not be evaluated by PS at this point.
    
    $bar = 'xa' # The data you want to compare $foo to
    $baz = 'xb' # The data you want to compare $foo to
    
    if ($foo -eq "$bar -eq $baz") {
        echo '1'   # This will get output. It compares the literal strings. It does not evaluate the RHS to the string '$true'
    }
    
    if ('$false' -eq "$bar -eq $baz") {
        echo '2'   # This will not get output, because the RHS does not get evaluated to the string '$false'.
    }
    
    if ($false -eq "$bar -eq $baz") {
        echo '3'   # This will not get output either, because the RHS does not get evaluated to the boolean $false either
    }
    
    if ($true -eq "$bar -eq $baz") {
        echo '4'   # This does get output, because PS treats non-empty strings as truthy values like other scripting languages.
    # The RHS has not gotten evaluated to anything other than the literal string that is the concatenation of two variables and the string -eq
    }
    
    



  • ............................................________
    ....................................,.-'"...................``~.,
    .............................,.-"..................................."-.,
    .........................,/...............................................":,
    .....................,?......................................................,
    .................../...........................................................,}
    ................./......................................................,:`^`..}
    .............../...................................................,:"........./
    ..............?.....__.........................................:`.........../
    ............./__.(....."~-,_..............................,:`........../
    .........../(_...."~,_........"~,_....................,:`........_/
    ..........{.._$;_......"=,_......."-,_.......,.-~-,},.~";/....}
    ...........((.....*~_......."=-._......";,,./`..../"............../
    ...,,,___.`~,......"~.,....................`.....}............../
    ............(....`=-,,.......`........................(......;_,,-"
    ............/.`~,......`-...................................../
    .............`~.*-,.....................................|,./.....,__
    ,,_..........}.>-._...................................|..............`=~-,
    .....`=~-,__......`,.................................
    ...................`=~-,,.,...............................
    ................................`:,,...........................`..............__
    .....................................`=-,...................,%`>--==``
    ........................................_..........._,-%.......`
    ...................................,

    You're kind of just... I mean...

  • ♿ (Parody)

    @Arnavion said:

    Or am I misunderstanding your questions?

    Someone is misunderstanding something. Your diversion into PS makes no sense to me. That is, I can't figure out the relevance of all that stuff. You said that test should recognize that xa=xb is an expression and not a token. The implication here is that you're going to look for operators and add whitespace to the token and reinterpret it as multiple tokens. But...what if the equals sign is just part of the text and not meant to be an operator.

    How do you let test know that it isn't an operator but just another character? Do you need to pass some sort of optional argument or flag or whatever? Do you escape the equals sign somehow? How is this not worse than what we have now?

    This is not much different than saying that cd/home/arnavion should be the same as cd /home/arnavion. In short...WTF.
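
    For comparison, here's a quick sketch of what POSIX test does today with a fused token — it's just a non-empty-string check, with no re-tokenizing:

```shell
foo=a; bar=b
# One argument: test only checks that the string 'xa=xb' is non-empty.
test "x$foo"="x$bar" && echo same          # prints: same (probably not intended!)
# Three arguments: an actual string comparison.
test "x$foo" = "x$bar" || echo different   # prints: different
```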



  • @boomzilla said:

    @powerlord said:
    This includes .NET (as previously mentioned) and C++ components such as msvcrt.dll.  Which just happen to be used in a LOT of programs.

    I'm with you on the .NET stuff, but don't the various C++ runtimes peacefully coexist? If you're just plopping yet another runtime dll down, then nothing currently uses it, so there's no problem. And if it already exists, then you don't have to do anything, so it's no problem.

    It's [i]supposed[/i] to these days...

    Then again, it was [b]always[/b] supposed to support multiple versions by virtue of naming them based on the VS version.  Then Microsoft decided not to and shipped mfc42.dll for 10 years (1995-2005, or 5 VS versions including across the Win98 to WinXP changeover) until the release of Visual Studio 2005.  Including shipping a completely incompatible version in Visual Studio 6.


  • Considered Harmful

    @boomzilla said:

    @Arnavion said:
    Or am I misunderstanding your questions?

    Someone is misunderstanding something. Your diversion into PS makes no sense to me. That is, I can't figure out the relevance of all that stuff. You said that test should recognize that xa=xb is an expression and not a token. The implication here is that you're going to look for operators and add whitespace to the token and reinterpret it as multiple tokens. But...what if the equals sign is just part of the text and not meant to be an operator.

    How do you let test know that it isn't an operator but just another character? Do you need to pass some sort of optional argument or flag or whatever? Do you escape the equals sign somehow? How is this not worse than what we have now?

    This is not much different than saying that cd/home/arnavion should be the same as cd /home/arnavion. In short...WTF.

    I think he's saying that variables should be passed directly to operators without being evaluated inline by the shell first, so the operator has more information available to it to make the determination. In this case, the PS operator sees "this is a variable, it's an atomic unit that can't contain expressions" and can differentiate it from a string literal which may contain expressions.


  • @joe.edwards said:

    I think he's saying that variables should be passed directly to operators without being evaluated inline by the shell first, so the operator has more information available to it to make the determination.

    Yes, that is what I meant to say. I'm sorry it wasn't clear. I couldn't give an example in shell because test doesn't work that way, and more importantly, shell doesn't provide a way to get the value of a variable from its name like a built-in effectively can. (PowerShell doesn't either, but this is a limitation only when we're not talking about a built-in.)
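
    As an aside, POSIX shell can approximate by-name lookup with eval, though it's clumsy and easy to get wrong — a sketch, not a recommendation:

```shell
foo='hello'
name='foo'
# Indirect lookup: build the assignment text at runtime and evaluate it.
eval "value=\$$name"
printf '%s\n' "$value"   # prints: hello
# bash (not POSIX sh) also offers ${!name} for the same thing.
```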



  • @ender said:

    Drivers don't run in user-space, and don't rely on those libraries; you also can't stop a driver in the same way you can stop a process (it's possible, but it requires the Service Manager APIs).

    Drivers can run in user-space in Windows using the User Mode Driver Framework (UMDF). Drivers for USB storage devices can run in user-space, drivers for virtual devices and ports can run in user-space, etc. As of Vista, audio drivers run (at least partially) in user mode as well. (This change is probably one of the biggest reasons it took Creative forever to cough up a driver for Vista, back when it launched.)

    Real problems can occur when part of a driver runs in user-space and part of it runs in kernel-space and both have to communicate with one another to create a complete device driver. Such a scenario of paired driver components isn't that far-fetched; I'll take graphics drivers as the most obvious example, as they typically interact with a user-space component that handles per-user application profile management (for instance to force certain anti-aliasing settings through the driver for certain programs), allows adjustment of the cards clock speeds with overclocking controls, or reports back core temperature and fan speed. (And yes; Catalyst Control Center is definitely written with dependencies on CLR assemblies. Maybe C# or maybe Managed C++, but it definitely uses the .NET framework.)

    If your software is robust enough, you'll be capable of handling a failure or restart in the user-space component gracefully, but that doesn't always have to be the case. Microsoft has to take into account any sloppily programmed driver, including what comes with the occasional odd device bought off the Korean or Chinese market. (Why do you think they tried to push for driver certification?)


  • ♿ (Parody)

    @Arnavion said:

    @joe.edwards said:
    I think he's saying that variables should be passed directly to operators without being evaluated inline by the shell first, so the operator has more information available to it to make the determination.

    Yes, that is what I meant to say. I'm sorry it wasn't clear. I couldn't give an example in shell because test doesn't work that way, and more importantly, shell doesn't provide a way to get the value of a variable from its name like a built-in effectively can. (PowerShell doesn't either, but this is a limitation only when we're not talking about a built-in.)

    Bah! Have a mug. The topic was shell scripts, but you want shell scripts to avoid the shell. Just use something else instead.

    To repeat: WTF.



  • @Arnavion said:

    I honestly can't think of any reason why someone would want their arguments to be re-parsed, effectively double-evaluated. In every single script where I've wanted to forward a variable to another script / executable, I've always wanted it passed unchanged from how I got it in the first place. Can you give a use-case where you'd want the arguments to be evaluated?

    cp *.html /var/www/newsite
    

    Bourne shell was designed as a concise CLI first and a competent scripting language second, and its syntax reflects that orientation. The quoting and quote removal rules work together well enough that simply quoting stuff you don't want split and globbed is always enough; the lack of such rules is the main thing that makes cmd (and its predecessor command.com) such a pain in the arse by comparison.
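
    The point about quoting can be sketched in two lines — each expansion individually opts in or out of splitting and globbing (the temp directory is only there to make the glob deterministic):

```shell
dir=$(mktemp -d) && cd "$dir"
touch a.html b.html
pattern='*.html'
echo $pattern     # unquoted: split and globbed -> a.html b.html
echo "$pattern"   # quoted: passed literally    -> *.html
```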

    @Arnavion said:

    @flabdablet said:
    If test were able to re-tokenize its own arguments in order to parse individual arguments as testable expressions,

    That is what I want, yes.

    @flabdablet said:

    it would also need its own string-literal quoting/escaping rules independent of those implemented in the shell. You'd need to pre-apply those rules to any arguments you passed to it and this would, I think, cause more errors than simply needing to ensure that expressions you evaluate with test have whitespace around the operators.

    Why would test have to do all that? If foo is a and bar is b, then test "x$foo"="x$bar" is evaluated by the shell and the resulting invocation is test 'xa=xb'. test just needs to realize that xa=xb is an expression, not a single token and treat it in the same manner it would have treated the invocation test 'xa' '=' 'xb'. test doesn't need to know about variable expansion at all.

    If test is going to treat its arguments as candidates for expression evaluation, it has exactly the same problem you've just been complaining about in the shell generally: extra layers of string processing potentially resulting in unexpected content-dependent behavior. Except that it's worse, because test would then have no reasonable way to turn the extra processing off.

    I guess you could redefine its behavior when presented with a single argument, so that test "$something" is no longer synonymous with test -n "$something" but causes the argument to be tokenized instead. I would expect doing so to result in a more annoying set of subtle gotchas than the present requirement to pre-tokenize test's arguments does.
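
    For the record, the current single-argument behavior really is just a non-emptiness check, regardless of what the string contains:

```shell
s=''
test "$s" || echo empty            # prints: empty
s='anything, even = or -a inside'
test "$s" && echo non-empty        # prints: non-empty
```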

    Any language design needs to strike a balance between global rules that work well everywhere and special-case syntax to deal with edge cases. I've written lots of scripts for lots of shells, and I've often been struck by how much less trouble Bourne shell causes me than other shells of comparable age like csh and cmd, both of which demand much more edge-case awareness.



  • @Ragnax said:

    The problem is: what do you do when code has dependencies spread over multiple DLLs that have version inter-dependencies? It's not possible to cover the arbitrary case where one DLL relies on an exactly paired version of another DLL by simply retaining open references, because dependencies might be late-bound and not have been opened yet.

    Anybody who doesn't believe that this is also an issue for Linux hasn't tried updating a running Linux box by rsyncing from another pre-updated box's root filesystem. Abuse any system badly enough and it will break.



  • @El_Heffe said:

    @joe.edwards said:

    Escaping perfectly is damned tricky.
    Only because *nix is retarded beyond belief.  I don't care if it's 2013 or 1963: if you are a programmer, and you create an operating system that will let you put control characters in filenames, you should be shot in the face with a hammer.

    If you're working in a Bourne shell derivative, escaping perfectly is not "tricky" at all: any parameter expansion you wrap in double quotes will just work, regardless of what characters the parameter contains. The only limitation is that because all shell strings are C strings, they can't contain embedded NULs; but since Unix filenames are also passed around internally as C strings, those can't contain embedded NULs either.

    Even the oft-bemoaned embedded newline poses no special escaping difficulty:

    stephen@kitchen:/tmp/foo$ sh
    $ cd /tmp
    $ mkdir foo
    $ cd foo
    $ touch ' '
    $ touch '
    > '
    $ touch '
    >  '
    $ ls -l
    total 0
    -rw-r--r-- 1 stephen stephen 0 Sep 11 14:31  
    -rw-r--r-- 1 stephen stephen 0 Sep 11 14:31 ?
    -rw-r--r-- 1 stephen stephen 0 Sep 11 14:31 ? 
    $ for f in *; do echo "[$f]"; done
    [
    ]
    [
     ]
    [ ]
    $ 
    

    It is inconvenient not to be able to use newline-delimited files to hold lists of raw filenames, and the fact that the standard find utility emits such files by default is annoying. You can make find emit NUL-delimited files instead using -print0, but portable shell scripts don't have convenient facilities for processing records from those. Bash can do it, but it is indeed tricky - lots of defaults need to be turned off:

    $ bash
    stephen@kitchen:/tmp/foo$ find . -print0 | while IFS= read -d '' -r f; do echo "[$f]"; done
    [.]
    [./ ]
    [./
     ]
    [./
    ]
    stephen@kitchen:/tmp/foo$
    

    The -d option to read is a bashism, not portable to all POSIX shells. GNU sed has a -z option to let it work with NUL-delimited streams, also not portable.

    I'm not sure any of this merits explosive face hammering, but in hindsight it would indeed have been better not to have let the possibility of non-printing characters existing in filenames become a Unix cultural norm, and the fact that subsequent systems have worse deficiencies doesn't alter that.
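
    One portable way around the delimiter problem entirely, for what it's worth, is to have find hand the raw names to a child shell as arguments — no text stream, so no delimiter to mis-parse:

```shell
# POSIX-portable: filenames travel as argv entries, never through a
# newline- or NUL-delimited stream, so embedded whitespace is harmless.
find . -exec sh -c 'for f in "$@"; do printf "[%s]\n" "$f"; done' sh {} +
```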



  • @ender said:

    @flabdablet said:
    unless they start with a minus sign, which can cause them to be misinterpreted as operators; this is easily avoided by prefixing as in your example
    AFAIK, the main reason parameters are prefixed by X is because of a bug in some shell (or maybe test) that doesn't like empty parameters.

    Not so: expanding a quoted empty string has always resulted in the passing of an empty argument, and an X prefix is no substitute for proper quoting. Some very old versions of test were susceptible to getting confused by constructs like test "$var" = value if "$var" happened to expand to a string that test defines as an operator, like -a or !. I have yet to find a way to make a post-POSIX shell or /usr/bin/test do the wrong thing, but you still see the X prefix convention crop up in scripts that need to be very portable.
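
    A quick check of the POSIX three-argument rule being referred to — when the second argument is a binary operator, the other two are plain operands, even if they look like operators themselves:

```shell
var='-a'
test "$var" = value || echo 'no match'   # prints: no match (and no error)
var='!'
test "$var" = '!' && echo matched        # prints: matched
```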



  • @Arnavion said:

    In PS where test can be considered as a built-in and = is -eq :

    
    $foo = 'xa -eq xb' # Suppose this is the unknown input that may contain something that PS understands.
    # Here I put it in single-quotes but in the general case you'd read it from a file. Either way, it will not be evaluated by PS at this point.
    
    $bar = 'xa' # The data you want to compare $foo to
    $baz = 'xb' # The data you want to compare $foo to
    
    if ($foo -eq "$bar -eq $baz") {
        echo '1'   # This will get output. It compares the literal strings. It does not evaluate the RHS to '$true'
    }
    

    What does

    if ($foo-eq"$bar -eq $baz") {
        echo '1'
    }

    do?



  • @flabdablet said:

    What does

    if ($foo-eq"$bar -eq $baz") {
    echo '1'
    }

    do?



  • @Salamander said:

    @flabdablet said:

    What does

    if ($foo-eq"$bar -eq $baz") {
    echo '1'
    }

    do?

    So far so good. What happens if you rerun it after setting $baz to 'qux'?



  • Doesn't print anything; it's executing the command like:

    $temp = "$bar -eq $baz"        # plain string interpolation; -eq is not evaluated
    if ($foo -eq $temp) { echo 1 }
    


Log in to reply