Better way to compare booleans



  • @Mason Wheeler said:

    Simply because you apparently don't know how to marshal strings in Delphi doesn't mean it's hard to do.  It just means that you apparently never learned how it's done.

    I know very well how it's done.  My point is that it's hideously bug-prone if you're not relying on one of the Delphi "memory managers."  I don't mean COM interop (well, not necessarily), just talking to non-Delphi DLLs, and it doesn't sound like you've ever tried it.

    Your premise is invalid. A string isn't an atomic value, it's a strange, dual-natured entity that is both a primitive and an array at the same time.  This is part of the reason why working with strings is difficult, but trying to pretend like it isn't both a primitive and an array at the same time leads to ugly abstraction inversions and makes string manipulation much more difficult.
     

    Um, no.  It's only a "strange, dual-natured entity blah blah blah" IN DELPHI.  Once again you're confusing one particular implementation with the actual meaning.

    I asked you before and you still haven't responded, please give us an example of where you would ever need to deal with "ugly abstraction inversions" and "difficult string manipulation" due to strings not being directly mutable.  It takes exactly one line of code to convert a string to or from a character array so I'd love to see this.  Have you ever even written a program in C#, Java, or any other language that doesn't leak the underlying implementation of a string?

    I repeat, an integer is just an array of bits/bytes.  How is this any different from a string being an array of characters?  Why should one be mutable and the other not?  Do you implement mutable structs/records too?



  • @Aaron said:

    @Mason Wheeler said:

    Simply because you apparently don't know how to marshal strings in Delphi doesn't mean it's hard to do.  It just means that you apparently never learned how it's done.

    I know very well how it's done.  My point is that it's hideously bug-prone if you're not relying on one of the Delphi "memory managers."  I don't mean COM interop (well, not necessarily), just talking to non-Delphi DLLs, and it doesn't sound like you've ever tried it.

    Sure I have.  I talk to C DLLs all the time for various things, and I've never in my life seen this vague "hideously bug-prone" behavior you keep insinuating is supposed to happen for some reason.  You take your string and cast it to a PChar.  Done.  No bugs, no hideousness.  You are inventing complexity where none exists, Aaron.
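
    (For anyone following along, the cast being described looks roughly like this - a minimal sketch; the DLL name and the C function below are invented for illustration:)

        // Hypothetical C export: void log_message(const char *msg);
        procedure log_message(Msg: PAnsiChar); cdecl; external 'somelib.dll';

        procedure SendToDll(const S: AnsiString);
        begin
          // Cast the reference-counted Delphi string to a null-terminated pointer.
          // The pointer is only valid while S is alive and unmodified.
          log_message(PAnsiChar(S));
        end;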

    Your premise is invalid. A string isn't an atomic value, it's a strange, dual-natured entity that is both a primitive and an array at the same time.  This is part of the reason why working with strings is difficult, but trying to pretend like it isn't both a primitive and an array at the same time leads to ugly abstraction inversions and makes string manipulation much more difficult.
     

    Um, no.  It's only a "strange, dual-natured entity blah blah blah" IN DELPHI.

    And in C (sorta), and in BASIC, and heck, even in Lisp.  A string is an array of characters that needs to also be able to be treated as a primitive.  That is what a string is.

    Once again you're confusing one particular implementation with the actual meaning.

    Am I?  See above.

    I asked you before and you still haven't responded, please give us an example of where you would ever need to deal with "ugly abstraction inversions" and "difficult string manipulation" due to strings not being directly mutable.

    I gave examples, and so did Dhromed.  Any string processing, from simple concatenation to replacing, requires you to be able to edit an existing string.  If the language tries to pretend you can't do that, then it ends up making all sorts of copies under the hood, which can be a very expensive operation.

    And as long as we're playing the "I asked for examples" game, I asked you what possible advantage is gained from throwing away the ability to manipulate a string.  Nobody has answered that one.

    It takes exactly one line of code to convert a string to or from a character array so I'd love to see this.

    And another one to convert it back when you're done.  That's two superfluous lines (per operation) and one superfluous extra data type that languages that don't try to pretend that a string is something other than what it is simply don't require. Again, what advantage are you gaining from all this extra overhead?

    Have you ever even written a program in C#, Java, or any other language that doesn't leak the underlying implementation of a string?

    Yeah, but I ended up going back to Delphi because too many basic concepts (string handling included) are just plain broken in "managed" languages.

    I repeat, an integer is just an array of bits/bytes.  How is this any different from a string being an array of characters?

    OK, now I know you're joking.  If I have an integer whose value is 0xF328A907, for example, and that number represents a real value (as opposed to being used as a bitmask or something like that,) then the number is truly atomic because the F3, 28, A9 and 07 have no real meaning outside the context of the integer.

    On the other hand, if I have a line of code that says "MyObject.IntegerValue := 97;", that's a string, but it contains six individual tokens, each of which means something independent of the others.  If you truly don't understand this fundamental difference, then please don't take this the wrong way but you simply don't belong in this discussion.

    Do you implement mutable structs/records too?

    ...

    ...

    Umm... yeah.  How else are you supposed to change their values? For example, what am I supposed to do if I have a vector (as in the geometrical concept, not an array container) and I need to scale it?  The only alternative I can think of is to perform the calculation, create a new vector and throw away the old one, and if I wanted to do that, I'd be writing in Haskell.
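
    To make that concrete, a rough sketch of the two styles (records with methods need Delphi 2006 or later; the type and method names are just for illustration):

        type
          TVector3 = record
            X, Y, Z: Double;
            procedure Scale(Factor: Double);            // mutate in place
            function Scaled(Factor: Double): TVector3;  // build and return a new value
          end;

        procedure TVector3.Scale(Factor: Double);
        begin
          X := X * Factor;
          Y := Y * Factor;
          Z := Z * Factor;
        end;

        function TVector3.Scaled(Factor: Double): TVector3;
        begin
          Result.X := X * Factor;
          Result.Y := Y * Factor;
          Result.Z := Z * Factor;
        end;

    The second method is the create-a-new-one-and-throw-away-the-old-one approach being objected to here.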



  • @Mason Wheeler said:

    And as long as we're playing the "I asked for examples" game, I asked you what possible advantage is gained from throwing away the ability to manipulate a string.  Nobody has answered that one.

    Making strings immutable allows them to be implemented as a reference type.  It is much more common to assign a string to a new variable than to change its value.  With immutable strings, a copy of a string only requires a few bytes.  With mutable strings, you'd have to copy the whole string value.  I'd rather create copies on changes than copies on assignment.



  • @Jaime said:

    @Mason Wheeler said:

    And as long as we're playing the "I asked for examples" game, I asked you what possible advantage is gained from throwing away the ability to manipulate a string.  Nobody has answered that one.

    Making strings immutable allows them to be implemented as a reference type.  It is much more common to assign a string to a new variable than to change its value.  With immutable strings, a copy of a string only requires a few bytes.  With mutable strings, you'd have to copy the whole string value.  I'd rather create copies on changes than copies on assignment.

    Or you could do what Delphi does and put a reference count on the string and set up compiler-managed copy-on-write semantics.  Then you get all the benefits you're talking about without losing the ability to manipulate the contents of a string.  If the reference count >1, it makes a unique copy and you edit it.  Otherwise, you just edit the string, no copying required.  With immutable strings, you need to make a copy every time you edit the string.  Thank you for playing, please try again next time.
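
    (In code, the behaviour being described is roughly this - a sketch; the exact reference-count bookkeeping varies by Delphi version:)

        var
          A, B: string;
        begin
          A := 'hello world';
          B := A;          // no character data is copied; A and B share one buffer
          B[1] := 'H';     // the write forces a unique copy of B's buffer first
          // A is still 'hello world'; B is now 'Hello world'
        end;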



  • @Mason Wheeler said:

    @Aaron said:

    Um, no.  It's only a "strange, dual-natured entity blah blah blah" IN DELPHI.

    And in C (sorta), and in BASIC, and heck, even in Lisp.  A string is an array of characters that needs to also be able to be treated as a primitive.  That is what a string is.

    Well, that's blatantly not true for C: a C string is literally just a pointer to an array of characters.  The only BASIC that matters today is VB.NET, which doesn't do that either. And in Lisp... aside from the fact that it depends on which Lisp, come on, is that the best you can do?

    I asked you before and you still haven't responded, please give us an example of where you would ever need to deal with "ugly abstraction inversions" and "difficult string manipulation" due to strings not being directly mutable.

    I gave examples, and so did Dhromed.  Any string processing, from simple concatenation to replacing, requires you to be able to edit an existing string.  If the language tries to pretend you can't do that, then it ends up making all sorts of copies under the hood, which can be a very expensive operation.

    Those aren't practical examples, because they're things you should never be doing in production code. String concatenation should be done in a safe buffer (no overflows / subscript errors / O(N²) algorithms please) and the very idea of inline replacements makes me cringe.  I'm looking for an actual example of why you would ever need to do any of the things that you claim, that would take significantly more time and effort to deal with than the one extra line of code needed to create a buffer.
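
    (For reference, the "safe buffer" pattern in question, sketched here with Delphi's own TStringBuilder since that's the dialect most of this thread is in - the .NET StringBuilder works the same way; TStringBuilder exists from Delphi 2009 on:)

        uses
          SysUtils;  // TStringBuilder lives here in Delphi 2009+

        function BuildReport(const Items: array of string): string;
        var
          SB: TStringBuilder;
          I: Integer;
        begin
          SB := TStringBuilder.Create;
          try
            for I := Low(Items) to High(Items) do
            begin
              SB.Append(Items[I]);   // appends into a growable buffer, amortized cost
              SB.Append(sLineBreak);
            end;
            Result := SB.ToString;   // one conversion back to a plain string at the end
          finally
            SB.Free;
          end;
        end;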

    And as long as we're playing the "I asked for examples" game, I asked you what possible advantage is gained from throwing away the ability to manipulate a string.  Nobody has answered that one.

    That's not how a program designer is supposed to think - it's the same reason you don't go ahead and make all of a class's private fields public, because, hey, what's the advantage to throwing away the ability to manipulate that data?  Oh yeah, let's see - thread-safety, testability, capturing, invariants... just about everything you expect from a half-decent library.  Good program design is about creating good abstractions, not creating lousy abstractions and then leaking them so that other people can work around your garbage.

    It takes exactly one line of code to convert a string to or from a character array so I'd love to see this.

    And another one to convert it back when you're done.  That's two superfluous lines (per operation) and one superfluous extra data type that languages that don't try to pretend that a string is something other than what it is simply don't require. Again, what advantage are you gaining from all this extra overhead?

    It's not two extra lines, it's one extra line: you tack on a "ToString()" to the return statement.  This is what you call "overhead?"  Memory management is extra overhead.  "Begin" and "End" statements are extra overhead.  Declaring all variables in the function header is extra overhead.  Just how often are you mutating strings that the "overhead" of a string builder/string buffer actually affects you?  No, actually, don't answer that question, I'm not sure I want to know.

    Have you ever even written a program in C#, Java, or any other language that doesn't leak the underlying implementation of a string?

    Yeah, but I ended up going back to Delphi because too many basic concepts (string handling included) are just plain broken in "managed" languages.

    Sounds to me like by "broken" you mean "didn't work exactly the way Delphi did and I couldn't be bothered to learn the differences."

    I repeat, an integer is just an array of bits/bytes.  How is this any different from a string being an array of characters?

    OK, now I know you're joking.  If I have an integer whose value is 0xF328A907, for example, and that number represents a real value (as opposed to being used as a bitmask or something like that,) then the number is truly atomic because the F3, 28, A9 and 07 have no real meaning outside the context of the integer.

    On the other hand, if I have a line of code that says "MyObject.IntegerValue := 97;", that's a string, but it contains six individual tokens, each of which means something independent of the others.  If you truly don't understand this fundamental difference, then please don't take this the wrong way but you simply don't belong in this discussion.

    Says who?  If I'm sending this data over a serial port then F3, 28, A9, and 07 most certainly do have meaning outside the context of the integer.  Maybe F3 is the start byte and 07 is the checksum.  Or maybe all of this is part of some much larger stream of bytes coming through as integers (to maximize efficiency) where several of the individual bytes have special meanings.  You don't know.  On the other hand, a Base64-encoded string can hardly be considered a set of discrete tokens.  Nor can a string representing a number that came from a text field - that's every bit as atomic as the number itself.
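
    (Pulling those bytes back out is just a shift and a mask apiece - a quick sketch, with the field meanings invented for the example:)

        var
          Packet: Cardinal;
          StartByte, Checksum: Byte;
        begin
          Packet := $F328A907;
          StartByte := (Packet shr 24) and $FF;  // $F3 - hypothetical start-of-frame byte
          Checksum  := Packet and $FF;           // $07 - hypothetical checksum byte
        end;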

    My analogy here is quite apt, the only difference is that an integer doesn't hold as much data as a string.  That shouldn't be a significant point of contention if you're writing halfway decent code.

    Do you implement mutable structs/records too?

    Umm... yeah.  How else are you supposed to change their values? For example, what am I supposed to do if I have a vector (as in the geometrical concept, not an array container) and I need to scale it?  The only alternative I can think of is to perform the calculation, create a new vector and throw away the old one, and if I wanted to do that, I'd be writing in Haskell.

     

    Christ.  Structs/records are value types.  They already have copy-on-assign semantics; when you mix that with mutation semantics, you end up with completely unintuitive and in some cases unpredictable results.  It's called the "principle of least surprise" and mutable value types smash it to pieces.  You think you're modifying the original but you're actually just modifying a copy.  Or you think you're modifying a copy but you're actually modifying the original, if you happened to pass a reference or pointer to the type.  Yuck.  People who do this need to be horse-whipped.
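
    (The whole trap in a few lines, for anyone who hasn't been bitten by it yet - a minimal sketch:)

        type
          TCoord = record
            X, Y: Integer;
          end;

        var
          A, B: TCoord;
        begin
          A.X := 1;
          A.Y := 1;
          B := A;      // value semantics: B is an independent copy
          B.X := 99;   // only the copy changes
          // A.X is still 1 here - the surprise, if you assumed A and B were the same object
        end;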



  • @Jaime said:

    Making strings immutable allows them to be implemented as a reference type.  It is much more common to assign a string to a new variable than to change its value.  With immutable strings, a copy of a string only requires a few bytes.  With mutable strings, you'd have to copy the whole string value.  I'd rather create copies on changes than copies on assignment.

     

    Unfortunately, this isn't quite true.  Delphi uses a bunch of scary reference-counting semantics to minimize the allocations, as I alluded to earlier.  Of course this means you have absolutely no control over when a copy takes place, which has all sorts of downstream implications for pointer semantics and memory-management issues, but hey, that's a small price to pay for not having to use a StringBuilder, right?



  • @Aaron said:

    Well, that's blatantly not true for C: a C string is literally just a pointer to an array of characters.  The only BASIC that matters today is VB.NET, which doesn't do that either. And in Lisp... aside from the fact that it depends on which Lisp, come on, is that the best you can do?

    No, that's how a string in C is represented.  But the compiler has support for string literals (as in "this is a string literal"), which it will treat as a special type.  Plus, the "just an array of characters" has a special rule associated with it: keep reading until you hit a null.  This makes it more than just any old array; it makes it a separate data type with special rules for how to work with it.

    As for BASIC, there are countless VB6 (and older) projects still running active production code, plus a handful of other dialects.  Simply because you assert that VB.NET is the only one that matters doesn't make it true. And I just mentioned Lisp (specifically thinking of Common Lisp) to demonstrate that this concept of a dual-natured string exists even in languages that are very, very different from the standard procedural/OO-style languages we're discussing.

    Those aren't practical examples, because they're things you should never be doing in production code. String concatenation should be done in a safe buffer (no overflows / subscript errors / O(N²) algorithms please) and the very idea of inline replacements makes me cringe.  I'm looking for an actual example of why you would ever need to do any of the things that you claim, that would take significantly more time and effort to deal with than the one extra line of code needed to create a buffer.

    Gotta love the circular logic here.  Because strings are supposed to be immutable, you should never do things like this in production code.  And because you should never do things like this in production code, any example of doing so isn't a practical example.

    And again, for someone who talks about having so much experience in Delphi, you seem to know nothing whatsoever about how the language actually works!  Overflows and subscript errors while concatenating strings?  Never heard of 'em.  If you say "myString := myString + otherString;", it extends myString to the appropriate length, then copies the contents of otherString into the new space.  It's all built into the compiler, and so things like that simply do not happen.  Nor do you get O(N²) performance when concatenating a long list of strings, because Schlemiel the Painter's Algorithm is only a problem with C strings, not Pascal strings where the length is known up front.

    That's not how a program designer is supposed to think - it's the same reason you don't go ahead and make all of a class's private fields public, because, hey, what's the advantage to throwing away the ability to manipulate that data?  Oh yeah, let's see - thread-safety, testability, capturing, invariants... just about everything you expect from a half-decent library.  Good program design is about creating good abstractions, not creating lousy abstractions and then leaking them so that other people can work around your garbage.

    Any programming problem can be solved by adding another layer of abstraction, except the problem of too many abstractions.  I'm not sure what your definition of a good abstraction is, but for me it's one that makes what you're trying to do easier.  And the fundamental rule is, any abstraction that you can't get beneath when necessary is evil.  Immutable strings break both rules.

    It takes exactly one line of code to convert a string to or from a character array so I'd love to see this.

    And another one to convert it back when you're done.  That's two superfluous lines (per operation) and one superfluous extra data type that languages that don't try to pretend that a string is something other than what it is simply don't require. Again, what advantage are you gaining from all this extra overhead?

    It's not two extra lines, it's one extra line: you tack on a "ToString()" to the return statement.  This is what you call "overhead?"

    Calling extra functions to make unnecessary copies of strings that can be very large is exactly what I call overhead.  I just spent the better part of this entire week trying to figure out why a certain button wouldn't respond for 40 seconds after you pressed it.  What it came down to was that some idiot who wrote the original code didn't understand how to test two GUIDs for equality without converting them to strings first.  Multiply that by a tight loop with a few hundred thousand iterations, and throw in some completely gratuitous copying of large objects because the same idiot didn't understand how to cache and then invalidate mutable values properly, and you end up with enormous levels of overhead that make the program's UI appear unresponsive.

    After a lot of careful analysis, debugging, recompiling, profiling and all that fun stuff, I've got it down to 3 seconds, which the boss says is "good enough".  But I couldn't have done that without knowing how things really work instead of just pretending that abstractions are real.  That's what caused the problems in the first place.
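
    (For what it's worth, the GUID half of that fix is a one-liner - a sketch using the standard SysUtils routines:)

        uses
          SysUtils;

        var
          A, B: TGUID;
        begin
          CreateGUID(A);
          B := A;

          // What the original code did: format both values as text and compare strings.
          if GUIDToString(A) = GUIDToString(B) then
            Writeln('equal, the slow way');

          // Direct comparison - no string allocations at all.
          if IsEqualGUID(A, B) then
            Writeln('equal, the cheap way');
        end;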

    Memory management is extra overhead.

    Yes, but it's overhead that can't be escaped from.  Even in a garbage-collected program, at least in anything large enough for memory usage to become a real issue, you still end up managing the memory manually.  You have to arrange your program structure in a certain way so that the GC can clean it up without causing trouble.  You have to have some understanding of reference graphs to make sure you don't have a reference left somewhere holding onto objects you no longer have any need of.  And to make it work reliably, you need to use a minimum of 4X as much RAM and also double-indirect all your pointers, both of which cause massive performance hits on a large-scale program and lower clients' perception of the quality of your product.  Given the choice between that sort of overhead or the "overhead" of writing "someObject.Free" when you're done with something, I'll take the .Free version every time.
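
    (For anyone who hasn't used Delphi, the "overhead" being weighed here is the standard create/try/finally/Free pattern - a sketch with a TStringList:)

        uses
          Classes;

        var
          List: TStringList;
        begin
          List := TStringList.Create;
          try
            List.Add('one item');
            // ... work with List ...
          finally
            List.Free;  // released deterministically, the moment we're done with it
          end;
        end;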

    "Begin" and "End" statements are extra overhead.  Declaring all variables in the function header is extra overhead.
     

    So now writing code that's actually designed to be read by other human beings is extra overhead?  When 90% of any piece of code's lifecycle is spent in maintenance, not in its original writing?  Have you ever actually worked on a large-scale project that needs to be maintained and updated over a period of several years?  You seem to be coming at this from the perception that what's important is being able to create code quickly.  This is a pretty convenient concept when you're hacking up some trivial tool, but it's the absolute opposite of the truth in real-world programming.

    Have you ever even written a program in C#, Java, or any other language that doesn't leak the underlying implementation of a string?

    Yeah, but I ended up going back to Delphi because too many basic concepts (string handling included) are just plain broken in "managed" languages.

    Sounds to me like by "broken" you mean "didn't work exactly the way Delphi did and I couldn't be bothered to learn the differences."

    Kind of.  Part that, part "I stopped when I realized that the time I might be spending learning the differences could be better spent actually doing something productive in a language that actually works the way you'd expect it to (principle of least surprise) and doesn't burden me with a bunch of nonsense about abstractions that I'm forced to pretend are real even when they get in the way."

    OK, now I know you're joking.  If I have an integer whose value is 0xF328A907, for example, and that number represents a real value (as opposed to being used as a bitmask or something like that,) then the number is truly atomic because the F3, 28, A9 and 07 have no real meaning outside the context of the integer.

    On the other hand, if I have a line of code that says "MyObject.IntegerValue := 97;", that's a string, but it contains six individual tokens, each of which means something independent of the others.  If you truly don't understand this fundamental difference, then please don't take this the wrong way but you simply don't belong in this discussion.

    Says who?  If I'm sending this data over a serial port then F3, 28, A9, and 07 most certainly do have meaning outside the context of the integer.  Maybe F3 is the start byte and 07 is the checksum.  Or maybe all of this is part of some much larger stream of bytes coming through as integers (to maximize efficiency) where several of the individual bytes have special meanings.  You don't know.  On the other hand, a Base64-encoded string can hardly be considered a set of discrete tokens.  Nor can a string representing a number that came from a text field - that's every bit as atomic as the number itself.

    OK, you fail reading comprehension.  I specifically mentioned the number actually representing a real value and not being used to pack other data together.

    Do you implement mutable structs/records too?

    Umm... yeah.  How else are you supposed to change their values? For example, what am I supposed to do if I have a vector (as in the geometrical concept, not an array container) and I need to scale it?  The only alternative I can think of is to perform the calculation, create a new vector and throw away the old one, and if I wanted to do that, I'd be writing in Haskell.

     

    Christ.  Structs/records are value types.  They already have copy-on-assign semantics; when you mix that with mutation semantics, you end up with completely unintuitive and in some cases unpredictable results.  It's called the "principle of least surprise" and mutable value types smash it to pieces.  You think you're modifying the original but you're actually just modifying a copy.  Or you think you're modifying a copy but you're actually modifying the original, if you happened to pass a reference or pointer to the type.  Yuck.  People who do this need to be horse-whipped.

     

    Again, what in the world are you talking about?  If you can't tell the difference between a copied parameter and a by-reference parameter, (hint: the by-reference parameter says "var" and the copied one doesn't; it's that simple), then you don't belong anywhere near a compiler, for any language, in the first place.  And if you can, then this sort of confusion doesn't happen.  Please stop bringing up silly non-issues.  There are plenty of good reasons to criticize Delphi, but its lack of "immutability support" is not one of them.



  • @Aaron said:

    Unfortunately, this isn't quite true.  Delphi uses a bunch of scary reference-counting semantics to minimize the allocations, as I alluded to earlier.

    "A bunch of scary referene-counting semantics?"  First off, what exactly do you mean by "a bunch of"?  There's only one refernece-counting semantic that I'm aware of.  And second, what's scary about reference counting?  It does what garbage collection is supposed to do but doesn't: keeps memory around as long as it's needed, and then frees it right away.

    Of course this means you have absolutely no control over when a copy takes place, which has all sorts of downstream implications for pointer semantics and memory-management issues, but hey, that's a small price to pay for not having to use a StringBuilder, right?
     

    Sure you have control over when a copy takes place, if you want it.  You can take control of *anything* in Delphi if it's necessary.  If you specifically want a copy, you can call Copy, or call UniqueString to ensure that you get a string with a refcount of 1.  And what sort of "downstream implications" does this bring up?  More specific examples, less pointless FUD, please.
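
    (In code, that's roughly the following - Copy and UniqueString are standard routines, nothing extra needed:)

        var
          Original, Snapshot: string;
        begin
          Original := 'some shared value';

          // Force an independent copy right now, regardless of reference counts.
          Snapshot := Copy(Original, 1, Length(Original));

          // Or keep the same variable, but guarantee its buffer has a refcount of 1.
          UniqueString(Original);
        end;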


  • Discourse touched me in a no-no place

    @bstorer said:

    @RogerWilco said:

    You want to write object oriented software while being constrained by one of the above.

    This is only applicable in the very few cases where C is a good idea.  And even then, there are a few libraries out there that add OOP to C.  Or you could use Objective C.  I'll give you half a point, though.

    It's entirely possible to write OO code in C. Look at the Linux Kernel for example. While not as 'pretty' as C++ with its syntactic sugar, it can certainly be done.



  • my... my thread... what have you done to it?
    oh, the blood, the horror.....



  • I missed this kind of argument at TDWTF.

    It reminds me of the times of Masklinn and Asuffield.

    Good times.

     

    *snif*



  • I'm not going to try to respond to everything in this massive monolithic WTF of a post, just the interesting parts:

    @Mason Wheeler said:

    As for BASIC, there are countless VB6 (and older) projects still running active production code, plus a handful of other dialects.

    Right, and there are also countless COBOL and MUMPS projects still running active production code.  Does that mean language designers should be taking their cues from those?

    Gotta love the circular logic here.  Because strings are supposed to be immutable, you should never do things like this in production code.  And because you should never do things like this in production code, any example of doing so isn't a practical example.

    No, you should never do things like that in production code because it's goddamn retarded.  That constraint has nothing to do with immutability.  They made strings immutable to prevent developers from even trying to write garbage like that.

    And again, for someone who talks about having so much experience in Delphi, you seem to know nothing whatsoever about how the language actually works!  Overflows and subscript errors while concatenating strings?  Never heard of 'em.  If you say "myString := myString + otherString;", it extends myString to the appropriate length, then copies the contents of otherString into the new space.  It's all built into the compiler, and so things like that simply do not happen.  Nor do you get O(N²) performance when concatenating a long list of strings, because Schlemiel the Painter's Algorithm is only a problem with C strings, not Pascal strings where the length is known up front.

    Now I'm starting to get it.  You copied your crappy concatenation code into .NET, saw that it ran slow, bitched that you had to use a StringBuilder and said, "oh screw it, I'll just go back to Delphi."  Except that your version still has a lesser version of Schlemiel - it has to keep reallocating larger buffers and copying the string (that is, unless you explicitly specify the destination size, but in that case, why the hell are you concatenating iteratively anyway?)  Hey, tell me something, if the StringBuilder concept is so unnecessary, why did they add one to Delphi 2009?  Hmm, makes you think...

    Even before TStringBuilder existed, most people who knew what they were doing in Delphi used a 3rd-party implementation like HVStringBuilder, to prevent memory thrashing and the constant reallocations.  So basically what you're telling us is, you use Schlemiel the Painter's algorithm all over the place, but because Delphi doesn't behave quite as badly as C/C++/C# for that particular scenario of horribly broken code, you don't consider it a problem.  Well done, bro.
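
    (The "explicitly specify the destination size" approach mentioned above, sketched for completeness - nothing Delphi-specific beyond SetLength and Move:)

        // Pre-size the destination once, then copy each piece into place:
        // no per-iteration reallocation, no Schlemiel.
        function JoinAll(const Parts: array of string): string;
        var
          I, Total, P: Integer;
        begin
          Total := 0;
          for I := Low(Parts) to High(Parts) do
            Inc(Total, Length(Parts[I]));

          SetLength(Result, Total);
          P := 1;
          for I := Low(Parts) to High(Parts) do
            if Parts[I] <> '' then
            begin
              Move(Parts[I][1], Result[P], Length(Parts[I]) * SizeOf(Char));
              Inc(P, Length(Parts[I]));
            end;
        end;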

    And the fundamental rule is, any abstraction that you can't get beneath when necessary is evil.

    Fundamental to whom?  This is one of the most insane things I've ever read.  Again, are all of your classes composed of nothing but public fields?  Abstractions exist to enforce consistency and maintain invariants and yes, also to make things easier, but I'm asking again for what seems like the 15th time now, exactly what specific applications are made easier by leaking the character-array abstraction?

    Calling extra functions to make unnecessary copies of strings that can be very large is exactly what I call overhead.

    That's why you don't make unnecessary copies.  Again, based on your earlier comments, it seems that this is all about you doing broken string concatenation in Delphi and thinking that it's perfectly OK.  The concept of a string builder/string buffer is older than dirt.  If you're not using it, you're doing it wrong, doesn't matter what the underlying string implementation is.  I'm imagining some C# programmer who starts whining that files are immutable when you use File.ReadAllText and File.WriteAllText, and it's really slow to have to keep reading the file and then appending stuff and then writing again.  Well, FFS, it's slow because you're not supposed to deal with big blocks of text, you're supposed to deal with streams, this is a fundamental CS concept here.
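
    (Sketched in Delphi terms, since that's the dialect of this thread - TStreamReader exists from Delphi 2009 on:)

        uses
          Classes;

        procedure CountNonEmptyLines(const FileName: string);
        var
          Reader: TStreamReader;
          Line: string;
          Count: Integer;
        begin
          // Walk the file line by line instead of slurping it all into one huge string.
          Reader := TStreamReader.Create(FileName);
          try
            Count := 0;
            while not Reader.EndOfStream do
            begin
              Line := Reader.ReadLine;
              if Line <> '' then
                Inc(Count);
            end;
            Writeln(Count, ' non-empty lines');
          finally
            Reader.Free;
          end;
        end;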

    I just spent the better part of this entire week trying to figure out blah blah blah some idiot blah blah blah

    Great, you've confirmed that idiots can write bad code in any language.  That proves what about Delphi's string handling, exactly?

    Memory management is extra overhead.

    Yes, but it's overhead that can't be escaped from.  Even in a garbage-collected program, at least in anything large enough for memory usage to become a real issue, you still end up managing the memory manually.  You have to arrange your program structure in a certain way so that the GC can clean it up without causing trouble.  You have to have some understanding of reference graphs to make sure you don't have a reference left somewhere holding onto objects you no longer have any need of.  And to make it work reliably, you need to use a minimum of 4X as much RAM and also double-indirect all your pointers, both of which cause massive performance hits on a large-scale program and lower clients' perception of the quality of your product.  Given the choice between that sort of overhead or the "overhead" of writing "someObject.Free" when you're done with something, I'll take the .Free version every time.

    Is this supposed to be a joke?  Shit, it sounds like you're talking about the mark-and-sweep algorithms from 30 years ago.  Modern GCs are precise, concurrent, and generational. They've been shown in dozens of benchmarks to be several times faster and certainly less error-prone than explicit memory management.  And 4X as much RAM?  Yeah, OK - best estimates today are that generational garbage collectors use up about 50% more memory, really nothing to write home about considering how cheap memory has become.

    And you're switching your definition of "overhead" too.  The first time around you were clearly implying the overhead of redundant or unnecessary code.  Now you're backpedaling and trying to make it sound like you were really referring to some nebulous concept like wasted CPU cycles. Not buying it.  Again, best estimates today are that in non-garbage-collected languages, memory management accounts for up to 30-40% of the complexity of large programs.  That's a lot of frickin' overhead.

    "Begin" and "End" statements are extra overhead.  Declaring all variables in the function header is extra overhead.
     

    So now writing code that's actually designed to be read by other human beings is extra overhead?

    Verbosity does not equal readability, and declaring variables 10 lines away from where they're first assigned is definitely not more readable.  Give me a break.  Proper indentation already makes the code readable; explicit scoping statements are just there to satisfy the compiler.  The most easy-to-read languages are the ones like Python that don't rely on any scoping keywords, just indentation.  Martin Fowler calls this "syntactic noise" - C# and Java aren't great about it, but Delphi is just chock full of it.

    Don't try to put one over on us and tell us that these silly requirements are actually helpful.  You may be used to them, but just about everybody of consequence in the CS field has recognized the common sense that the farther apart a variable's declaration and assignment are, the harder the code is to understand, because the person reading it has to scan more code and keep more pieces of information in memory to determine its final value.

    Have you ever actually worked on a large-scale project that needs to be maintained and updated over a period of several years?  You seem to be coming at this from the perception that what's important is being able to create code quickly.

    Funny, I would have said the same thing about you.  I can tell you ours is a few hundred KLOC spread across 60 or 70 different assemblies, about 15 major conceptual boundaries (web services), and several thousand classes (don't have an exact count of those).   And you know what?  It's not really that hard to manage, because the code in each module is easy to read and not full of noise like begin/end statements and try/finally/free blocks.

    Sounds to me like by "broken" you mean "didn't work exactly the way Delphi did and I couldn't be bothered to learn the differences."

    Kind of.  Part that, part "I stopped when I realized that the time I might be spending learning the differences could be better spent actually doing something productive in a language that actually works the way you'd expect it to (principle of least surprise) and doesn't burden me with a bunch of nonsense about abstractions that I'm forced to pretend are real even when they get in the way."

    In other words, you couldn't figure out how to concatenate strings without making a mess, so you gave up.  OK, we get it.

    OK, you fail reading comprehension.  I specifically mentioned the number actually representing a real value and not being used to pack other data together.

    And I specifically mentioned that you can't assume this to be true in the general case.  Your reasoning essentially went like this: "Type A is atomic in this very specific case, and Type B is really a composite type in this other very specific case, therefore Type A is always atomic and Type B is always composite, and the language should always make those assumptions."  Here's a thought: how about not treating a line of code that says "MyObject.IntegerValue := 97;" as a string, and actually treating it as what it really is - a variable/field reference, followed by a member access expression, followed by an assignment operator, followed by a constant, followed by a terminator?  If you're writing a compiler or interpreter, then you aren't treating lines of code as strings, you're treating them as parse trees. If you actually need to be able to recognize that line as 6 individual and distinct tokens then you have no business storing it as a string.

    Again, what in the world are you talking about?  If you can't tell the difference between a copied parameter and a by-reference parameter, (hint: the by-reference parameter says "var" and the copied one doesn't; it's that simple), then you don't belong anywhere near a compiler, for any language, in the first place.  And if you can, then this sort of confusion doesn't happen.
     

    The confusion isn't between parameters being passed by reference or by value.  That part is obvious.  The confusion stems from the combination of byref/byval semantics, value type/ref type semantics, mutable/immutable semantics, and long chains of methods that don't always preserve or even know about the original byref/byval at the beginning of the chain.  I can say from experience - and not just my experience, but the experience of many people I've known and worked with and hundreds if not thousands of questions on message boards - that when you introduce a mutable type, it is very easy to forget that it's actually a value type, because it looks like a reference type.

    Let's take a simple example - a class that contains a mutable struct/record as a field and exposes it as a property.  What happens when a programmer writes "MyObject.MyRecord.X := SomeValue"?  What should happen?  Is this intuitive behaviour?  There was an obviously unscientific but nevertheless interesting poll that showed that only 20% of Delphi developers get it right. I guess the rest are just stupid, yes?

    Your argument all boils down to saying that "you can look up the type definition and walk through all the method invocations and figure out what's going on", and sure, you can, but people don't do that while they're writing code, they do it while they're debugging code that's broken because somebody didn't realize that they were mutating a copy instead of the original.  It creeps up in subtle ways, and if you've never seen it happen, then either (a) you're lucky, (b) you're lying, or (c) you just haven't seen the bug reports yet.

    And in fact, that's really what all of your arguments have been: "It's not hard, all you have to do is slog through the code and make mental notes of everything that might pertain to the particular thing you're trying to figure out."  The point is to make it obvious no matter where you are in the code.  The point is to make it easy to do the right thing and difficult to do the wrong thing.  All of these obstacles you complain about are only obstacles because you're trying to do things that you shouldn't be doing, because they are likely to lead to incorrect or buggy code.  In almost all of these cases, there is a much easier way of doing the same thing that won't lead to bugs.  If it would have taken you too much time to learn the language properly, fine, I won't fault you for having deadlines to meet, but don't try to tell us that Delphi's way is "better" simply because it's the only way you know.



  • @Aaron said:

    Gotta love the circular logic here.  Because strings are supposed to be immutable, you should never do things like this in production code.  And because you should never do things like this in production code, any example of doing so isn't a practical example.

    No, you should never do things like that in production code because it's goddamn retarded.  That constraint has nothing to do with immutability.  They made strings immutable to prevent developers from even trying to write garbage like that.

    ...and people call Pascal a bondage-and-discipline language?

    Hey, tell me something, if the StringBuilder concept is so unnecessary, why did they add one to Delphi 2009?  Hmm, makes you think...

    For the same reason they added a handful of other silly "me-too features": to make it easier for .NET developers to learn Delphi.  The test case presented here is highly contrived (as even Olaf admits), and in actual practice, TStringBuilder tends to be slower than using string concatenation.

    And the fundamental rule is, any abstraction that you can't get beneath when necessary is evil.

    Fundamental to whom?  This is one of the most insane things I've ever read.

    Fundamental to any experienced developer who's ever needed to spend ten times as long as he should have to fix what should have been some simple problem, except for all the impenetrable layers of abstraction in between the interface and the part where things really happen getting in the way.  No, I'm not bitter at all.  Why do you ask?

    Again, are all of your classes composed of nothing but public fields?  Abstractions exist to enforce consistency and maintain invariants and yes, also to make things easier, but I'm asking again for what seems like the 15th time now, exactly what specific applications are made easier by leaking the character-array abstraction?

    Specifically, concatenation and replacement, as I've mentioned repeatedly.  And deleting something out of the middle of a string, too.  And again, thinking of strings being character arrays as an abstraction that needs to be encapsulated instead of a fundamental matter of definition is an invalid premise that leads to invalid questions that can't really be answered in any correct way.

    I just spent the better part of this entire week trying to figure out blah blah blah some idiot blah blah blah

    Great, you've confirmed that idiots can write bad code in any language.  That proves what about Delphi's string handling, exactly?

    It's not about the string handling, it's about the general principle that if you think abstractions are real, (such as your "string abstraction" you're so fond of talking about,) instead of understanding what's really happening, you end up writing bad code.

    Memory management is extra overhead.

    Yes, but it's overhead that can't be escaped from.  Even in a garbage-collected program, at least in anything large enough for memory usage to become a real issue, you still end up managing the memory manually.  You have to arrange your program structure in a certain way so that the GC can clean it up without causing trouble.  You have to have some understanding of reference graphs to make sure you don't have a reference left somewhere holding onto objects you no longer have any need of.  And to make it work reliably, you need to use a minimum of 4X as much RAM and also double-indirect all your pointers, both of which cause massive performance hits on a large-scale program and lower clients' perception of the quality of your product.  Given the choice between that sort of overhead or the "overhead" of writing "someObject.Free" when you're done with something, I'll take the .Free version every time.

    Is this supposed to be a joke?  Shit, it sounds like you're talking about the mark-and-sweep algorithms from 30 years ago.  Modern GCs are precise, concurrent, and generational. They've been shown in dozens of benchmarks to be several times faster and certainly less error-prone than explicit memory management.  And 4X as much RAM?  Yeah, OK - best estimates today are that generational garbage collectors use up about 50% more memory, really nothing to write home about considering how cheap memory has become.

    Oh, don't even get me started on how "generational garbage collection" turns the entire concept inside out.  Take a look at this excellent article about how it works by a guy who's very fond of it.  Garbage collection was originally invented for Lisp, to collect garbage.  They couldn't come up with any way to dispose of memory when it was no longer needed within the language's paradigms, so they created a routine to do it for them.  The idea was to collect garbage and recycle it and thus prevent memory leaks.

    The generational garbage collector doesn't work that way.  It's not about collecting garbage, it's about collecting live objects.  And in order to get reasonable performance out of it, it has to collect as infrequently as possible by letting the garbage pile up until memory pressure becomes an issue.  In other words, it's been mutated from a system designed to prevent memory leaks to a system that leaks as much memory as it possibly can for as long as it can get away with it.  Anyone who pulls a stunt like that inside a modern, multitasking computer with multiple large programs sharing the same RAM ought to be taken out and shot.

    "Begin" and "End" statements are extra overhead.  Declaring all variables in the function header is extra overhead.
     

    So now writing code that's actually designed to be read by other human beings is extra overhead?

    Verbosity does not equal readability, and declaring variables 10 lines away from where they're first assigned is definitely not more readable.  Give me a break.  Proper indentation already makes the code readable; explicit scoping statements are just there to satisfy the compiler.  The most easy-to-read languages are the ones like Python that don't rely on any scoping keywords, just indentation.  Martin Fowler calls this "syntactic noise" - C# and Java aren't great about it, but Delphi is just chock full of it.

    Yeah, Python's a lot of fun until you try to use a variable that you declared above but in a different indentation level, which will actually run and silently give you a hard-to-find glitch.  Code structuring requirements are there for a reason.

    Don't try to put one over on us and tell us that these silly requirements are actually helpful.  You may be used to them, but just about everybody of consequence in the CS field has recognized the common sense that the farther apart a variable's declaration and assignment are, the harder the code is to understand, because the person reading it has to scan more code and keep more pieces of information in memory to determine its final value.

    Oh, I agree completely.  But you've got the wrong reason for the problem.  If your variable declaration is too far away from the code that uses it, that just means your functions are too big.  There's still a significant readability bonus from having variable declarations in a predictable place instead of having to, as you put it, scan more code to find where you created it.

    Have you ever actually worked on a large-scale project that needs to be maintained and updated over a period of several years?  You seem to be coming at this from the perception that what's important is being able to create code quickly.

    Funny, I would have said the same thing about you.  I can tell you ours is a few hundred KLOC spread across 60 or 70 different assemblies, about 15 major conceptual boundaries (web services), and several thousand classes (don't have an exact count of those).   And you know what?  It's not really that hard to manage, because the code in each module is easy to read and not full of noise like begin/end statements and try/finally/free blocks.

    No, that's not really hard to manage because it's not really hard to manage.  Those numbers make it slightly more complex than the largest of my personal projects.  Try multiplying that by ten.  That's the project I help maintain at work.  That's what I mean by "large-scale projects."

    And I specifically mentioned that you can't assume this to be true in the general case.  Your reasoning essentially went like this: "Type A is atomic in this very specific case, and Type B is really a composite type in this other very specific case, therefore Type A is always atomic and Type B is always composite, and the language should always make those assumptions."

    Yeah, I left out the part where you really ought to be using something other than an integer for storing composite data because then it's not really an integer.

    If you're writing a compiler or interpreter, then you aren't treating lines of code as strings, you're treating them as parse trees. If you actually need to be able to recognize that line as 6 individual and distinct tokens then you have no business storing it as a string.

    A line of code isn't a parse tree; it's a line of code.  A parse tree is what you generate from it, and of course it's not made of strings; it's made of tree nodes.  But saying that a line of code is a parse tree is like saying that a piece of wood is a pad of paper.

    Let's take a simple example - a class that contains a mutable struct/record as a field and exposes it as a property.  What happens when a programmer writes "MyObject.MyRecord.X := SomeValue"?  What should happen?  Is this intuitive behaviour?  There was an obviously unscientific but nevertheless interesting poll that showed that only 20% of Delphi developers get it right. I guess the rest are just stupid, yes?

    Oh yes, that one's tons of fun.  But that's not a mutable/immutable issue, it's a flaw in Delphi's implementation of properties.  I've had some interesting discussions with Allen Bauer about fixing that, but there are other things that take priority.

    And in fact, that's really what all of your arguments have been: "It's not hard, all you have to do is slog through the code and make mental notes of everything that might pertain to the particular thing you're trying to figure out."

    And your point is...?  If you can show me a language with no dark corners whatsoever, I'll gladly switch to it.  But no such thing exists, and I don't think it ever will.  So I'll go with the one that makes me the most productive and makes it easiest for me to read other people's code and for others to read mine, and that's Object Pascal.



  • This thread is getting epic.

     @Mason Wheeler said:

    Fundamental to any experienced developer who's ever needed to spend ten times as long as he should have to fix what should have been some simple problem, except for all the impenetrable layers of abstraction in between the interface and the part where things really happen getting in the way.  No, I'm not bitter at all.  Why do you ask?

    I consider myself an experienced developer and I've never come across this before. I've heard other developers complain about it though...

    @Mason Wheeler said:

    The generational garbage collector doesn't work that way.  It's not about collecting garbage, it's about collecting live objects.  And in order to get reasonable performance out of it, it has to collect as infrequently as possible by letting the garbage pile up until memory pressure becomes an issue.  In other words, it's been mutated from a system designed to prevent memory leaks to a system that leaks as much memory as it possibly can for as long as it can get away with it.  Anyone who pulls a stunt like that inside a modern, multitasking computer with multiple large programs sharing the same RAM ought to be taken out and shot.

    Well, you're passing the memory management from the program at object level, to the OS to do at page level. Since most OSes have virtual memory and are pretty damned good at doing this... what's the big deal?

    Your modern, multitasking computer with multiple large programs sharing the same RAM runs fine *because* your OS is really good at figuring out what actually needs to be in RAM and what can be swapped to disk nearly 100% of the time. It's a solved problem-- so why should I, as the developer of some language runtime, re-solve it?

    Reminds me of the people who gripe that Windows Vista and 7 "use too much RAM". As if they'd prefer the OS keeping the RAM empty and running slower as a result.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    @Mason Wheeler said:

    Fundamental to any experienced developer who's ever needed to spend ten times as long as he should have to fix what should have been some simple problem, except for all the impenetrable layers of abstraction in between the interface and the part where things really happen getting in the way.  No, I'm not bitter at all.  Why do you ask?

    I consider myself an experienced developer and I've never come across this before. I've heard other developers complain about it though...

    Trying to sort out new drivers in the Linux kernel is interesting.cn if you're unaccustomed to how OO is implemented in C. (i.e. me 2 years ago.) Especially if it interacts with the lower levels of the USB protocol.

    Not that I spent too much time learning how the kernel does stuff in the process and learnt a great deal out of it. No, not me.

    Abstraction is not a problem IFF you shouldn't need to know what's behind the scenes. Most of the time I've come across the 'abstraction problem,' you don't actually need to know what the abstraction is hiding; for example, I've no problem using (say) a HWND*, or list_head* and have no (legitimate) need to know what's behind that pointer.


  • @PJH said:

    Trying to sort out new drivers in the Linux kernel is interesting.cn if you're unaccustomed to how OO is implemented in C. (i.e. me 2 years ago.) Especially if it interacts with the lower levels of the USB protocol.

    Not that I spent too much time learning how the kernel does stuff in the process and learnt a great deal out of it. No, not me.

    Abstraction is not a problem IFF you shouldn't need to know what's behind the scenes. Most of the time I've come across the 'abstraction problem,' you don't actually need to know what the abstraction is hiding; for example, I've no problem using (say) a HWND*, or list_head* and have no (legitimate) need to know what's behind that pointer.
     

    Is that response in code? I really have no clue what you're trying to tell me...

    My hunch is that you were "trying to sort out new drivers" (whatever that means) in Linux. interesting.cn is a domain camper and doesn't seem relevant. Then there's the sentence with tons of negatives which either means you did spend too much time learning how the kernel does stuff, or perhaps you didn't.

    Anyway, the last paragraph I agree with. But the big problem is that nobody should be coding in a language with HWND in the first place, isn't that what this thread is about? Let's move on to modern languages, everybody, and leave the cruft behind.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    Is that response in code?

    Clearly not popular memes.

    @blakeyrat said:

    My hunch is that you were "trying to sort out new drivers" (whatever that means) in Linux.

    Existing (old) drivers which didn't, at the time, work on newer kernels.

    @blakeyrat said:

    interesting.cn is a domain camper and doesn't seem relevant.

    It's a meme that's either unknown except to the varied boards I'm on, or you've not heard the phrase "may you live in interesting times," and how it's mutated due to the invention of the internet. I suggest wonkypedia as an initial reference. Or Google if you're too lazy for even that.



  • @blakeyrat said:

    @Mason Wheeler said:
    The generational garbage collector doesn't work that way.  It's not about collecting garbage, it's about collecting live objects.  And in order to get reasonable performance out of it, it has to collect as infrequently as possible by letting the garbage pile up until memory pressure becomes an issue.  In other words, it's been mutated from a system designed to prevent memory leaks to a system that leaks as much memory as it possibly can for as long as it can get away with it.  Anyone who pulls a stunt like that inside a modern, multitasking computer with multiple large programs sharing the same RAM ought to be taken out and shot.

    Well, you're passing the memory management from the program at object level, to the OS to do at page level. Since most OSes have virtual memory and are pretty damned good at doing this... what's the big deal?

    Your modern, multitasking computer with multiple large programs sharing the same RAM runs fine *because* your OS is really good at figuring out what actually needs to be in RAM and what can be swapped to disk nearly 100% of the time. It's a solved problem-- so why should I, as the developer of some language runtime, re-solve it?

    Solved problem?  OS is good at it?  Where exactly are you pulling those assertions out of?  That doesn't fit particularly well with real-world experience, such as a client who recently complained of massive performance problems with our program.  We managed to trace it to their database server.  They had a few dozen GB of physical RAM available, and SQL Server was trying to use about 10 GB (on a 64-bit machine), so what did the oh-so-good-at-figuring-out-what-actually-needs-to-be-in-RAM OS do?  It gave SQL Server 2 GB of real memory to play with and paged the rest.  One-way ticket to Thrash City.
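
    (For anyone who wants to see the distinction at issue here -- how much of a process the OS actually keeps resident versus how much the process has committed -- a minimal Windows-only C sketch follows; it just reads the standard process memory counters, and the gap between the two numbers is what ends up paged out.  Build with psapi linked in.)

        /* Prints this process's working set (what is actually resident in
         * RAM) versus its committed, pagefile-backed usage (what the OS
         * may page out).  Windows-only; link with -lpsapi. */
        #include <windows.h>
        #include <psapi.h>
        #include <stdio.h>

        int main(void)
        {
            PROCESS_MEMORY_COUNTERS pmc;
            if (!GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof pmc)) {
                fprintf(stderr, "GetProcessMemoryInfo failed: %lu\n", GetLastError());
                return 1;
            }
            printf("working set (resident):   %lu KB\n",
                   (unsigned long)(pmc.WorkingSetSize / 1024));
            printf("committed (may be paged): %lu KB\n",
                   (unsigned long)(pmc.PagefileUsage / 1024));
            return 0;
        }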

    Reminds me of the people who gripe that Windows Vista and 7 "use too much RAM". As if they'd prefer the OS keeping the RAM empty and running slower as a result.

    What I'd prefer is for the OS to be written by competent programmers who know how to solve problems without just throwing more hardware at it by default.  What I'd prefer is for my computers to actually get faster, not slower, over the years.

    My first computer was an Apple IIe.  My first console was a classic NES.  Today I have a high-end Alienware laptop and a PS3, both of which have processor speed and RAM and other Very Important Numbers thousands of times higher than their equivalents from my childhood. But that NES, with its <2 MHz clock speed and memory measured in double digits of KB, would boot instantly and put you at the game's main menu within 1 second after I hit the switch.  My PS3 takes almost a minute.  The Apple IIe was a bit slower, because it had to rely on a very slow floppy disk drive, but it still had me up and running within 15 seconds.  No version of Windows (or the Mac OS, for that matter) has ever been able to pull that off AFAIK.

    But because people keep piling abstractions on top of abstractions, and doing stupid things like trying to abdicate their responsibility to free up memory when it's no longer needed, instead of learning to actually write code competently, software has consistently gotten slower faster than hardware has gotten faster.  Andy giveth and Bill taketh away, as they say.



  • @Mason Wheeler said:

    No version of Windows (or the Mac OS, for that matter) has ever been able to pull that off AFAIK.

    @Mason Wheeler said:

    Solved problem?  OS is good at it?  Where exactly are you pulling those assertions out of?  That doesn't fit particularly well with real-world experience, such as a client who recently complained of massive performance problems with our program.  We managed to trace it to their database server.  They had a few dozen GB of physical RAM available, and SQL Server was trying to use about 10 GB, (on a 64-bit machine,) so what did the oh-so-good-at-figuring-out-what-actually-needs-to-be-in-RAM OS do?  It gave SQL Server 2 GB of real memory to play with and paged the rest.  One-way ticket to Thrash City.
     

    Without knowing what version of SQL Server or Windows this is, I can't really comment on this. I would like to know what was filling up the other 8 GB, though?

    I'm not going to call it bullshit, but I've also seen hundreds of MS SQL servers with similar hardware configurations, and I've never seen a problem like this.

    @Mason Wheeler said:

    But that NES, with its <2 MHz clock speed and memory measured in double digits of KB, would boot instantly and put you at the game's main menu within 1 second after I hit the switch.  My PS3 takes almost a minute.

    Well, first of all, Sony sucks ass at writing software, so if you're telling me that Sony's PS3 software sucks ass-- gasp horror! I guess.

    Secondly, how long would your NES have taken to boot if it had a tenth of the features of the PS3?

    Or, alternatively, how happy would you be with your PS3 if the entire OS was in ROM and immutable?

    @Mason Wheeler said:

    No version of Windows (or the Mac OS, for that matter) has ever been able to pull that off AFAIK.

    That's because Windows and Mac OS do useful work.

    By any practical measure, your 2010 PC is far superior to the NES and Apple ][ in every category imaginable except boot time. I'll take that trade-off, personally.

    But if you like, please, by all means, just use your Apple ][ for the rest of your life. You're happy because you can practice your unique Slashdot-esque brand of tech ludditeism, and we'll be happy because you can't post to this forum using an Apple ][.

    @Mason Wheeler said:

    software has consistently gotten slower faster than hardware has gotten faster.

    Bullshit. You do a benchmark, Excel on your 2010 PC against Visicalc on your Apple ][, then come back here and say that software has gotten "consistently slower". The very fact that you're arguing that, though, means you're so delusional, so completely detached from reality, that you probably think I'm a voice in your head transmitted from Venus at this point, so... I guess I shouldn't expect a response.



  • @blakeyrat said:

    I would like to know what was filling up the other 8 GB, though?

    Nothing.  It was a dedicated DB server.  Windows had all that memory available to it, with nothing interested in using any significant amount of it except SQL Server, but it was paging left, right and center.  And that's not the only example by any means, just the most severe.  All modern PCs waste a ridiculous amount of time paging instead of keeping their memory usage within reasonable limits.

    @Mason Wheeler said:
    No version of Windows (or the Mac OS, for that matter) has ever been able to pull that off AFAIK.

    That's because Windows and Mac OS do useful work.

    You've got a surprisingly narrow definition of "useful work" if you think that the Apple IIe couldn't do it.

    By any practical measure, your 2010 PC is far superior to the NES and Apple ][ in every category imaginable except boot time. I'll take that trade-off, personally.

    With the (admittedly very significant) exception of Internet access, I don't know of anything I can do on a modern computer that I couldn't do on an old one.  We do it in different ways these days, often at a larger scale, and it looks a lot better, but the basic concepts of what you do with a computer haven't changed all that much.

    But if you like, please, by all means, just use your Apple ][ for the rest of your life. You're happy because you can practice your unique Slashdot-esque brand of tech ludditeism, and we'll be happy because you can't post to this forum using an Apple ][.

    Umm... how can something be unique if it's like that which is found elsewhere, such as on Slashdot?

    @Mason Wheeler said:
    software has consistently gotten slower faster than hardware has gotten faster.

    Bullshit. You do a benchmark, Excel on your 2010 PC against Visicalc on your Apple ][, then come back here and say that software has gotten "consistently slower".

    Well sure, Excel can crunch numbers faster with a few thousand times more CPU and almost a few hundred thousand times more RAM available, but not by anywhere near as large a ratio as the difference between the amount of hardware each had available.  That's what I mean by software getting slower faster than hardware gets faster.

    The very fact that you're arguing that, though, means you're so delusional, so completely detached from reality, that you probably think I'm a voice in your head transmitted from Venus at this point, so... I guess I shouldn't expect a response.

    It must make you feel really big, cutting down all those tough, scary straw men...



  • @Mason Wheeler said:

    All modern PCs waste a ridiculous amount of time paging instead of keeping their memory usage within reasonable limits.
     

    Mine doesn't. Actually, since I put Windows 7 on it, I don't think I've *ever* seen it paging... even on my 2 GB laptop.

    @Mason Wheeler said:

    With the (admittedly very significant) exception of Internet access, I don't know of anything I can do on a modern computer that I couldn't do on an old one.

    Well, yeah, since they're both Turing machines there's literally nothing you can do on your modern computer that you couldn't do on an Apple ][. You're technically correct-- the best kind of correct!

    That aside, though, I don't have the patience to wait for an Apple ][ to pipe and aggregate the 50 million database rows of data we work with every day. For all practical purposes, a modern computer can do that and an Apple ][ can not.

    Other things I do daily that an Apple ][ can't (practically) do:

    • Play MP4 videos
    • Play MP3 music
    • Adjust the contrast on a 5 megapixel image
    • Decompress a 1 MB jpeg
    • Search all the files on my hardware in real-time as I start typing a file name
    • Play Psychonauts

    So fine, you win: an Apple ][ can do useful work. But it can't do any of the fucking useful work that I need to do every single day, not to mention all the entertainment functions.

    If there's literally nothing you do on your current computer that couldn't be done on an Apple ][ (even ignoring Internet access), that just means you're not very imaginative and you don't use your computer for shit. (Personally, though, I just think you're lying.)

    @Mason Wheeler said:

    Umm... how can something be unique if it's like that which is found elsewhere, such as on Slashdot?

    I was trying to be clever. Slashdot is full of people who are (presumably) excited about technology, and yet also huge luddites. What I meant is that the combination "excited about tech" and "luddite" is pretty unique, not that it's not found elsewhere.

    Anyway, you're in the same bracket. Consumed by nostalgia to delusion. Old computers sucked ass. Hell, computers 10 years ago sucked ass. Go buy one and use it and try to tell me otherwise, you'll soon find that all your beliefs are the result of nostalgia and have no basis in fact.

    @Mason Wheeler said:

    That's what I mean by software getting slower faster than hardware gets faster.

    So when you say "software has consistently gotten slower" what you actually mean is "software has consistently gotten faster." Thanks for clearing that up.

    Snark done, I don't know why you'd expect modern software to run faster (normalizing for CPUs) than older software, since:

    • The modern software does much, much more
    • The basic algorithms to (say) sort a spreadsheet haven't changed in 30 years

    @Mason Wheeler said:

    It must make you feel really big, cutting down all those tough, scary straw men...

    What it comes down to is, obviously you don't really feel that Apple ][s were better/faster than modern computers, and there's nothing you couldn't do on one, or you'd be fucking using one right now. Which makes you either consumed by nostalgia to the point of delusion, or a hypocrite.



  • @blakeyrat said:

    Well, yeah, since they're both Turing machines there's literally nothing you can do on your modern computer that you couldn't do on an Apple ][. You're technically correct-- the best kind of correct!

    That aside, though, I don't have the patience to wait for an Apple ][ to pipe and aggregate the 50 million database rows of data we work with every day. For all practical purposes, a modern computer can do that and an Apple ][ can not.

    Other things I do daily that an Apple ][ can't (practically) do:

    • Play MP4 videos
    • Play MP3 music
    • Adjust the contrast on a 5 megapixel image
    • Decompress a 1 MB jpeg
    • Search all the files on my hardware in real-time as I start typing a file name
    • Play Psychonauts

    So fine, you win: an Apple ][ can do useful work. But it can't do any of the fucking useful work that I need to do every single day, not to mention all the entertainment functions.

    If there's literally nothing you do on your current computer that couldn't be done on an Apple ][ (even ignoring Internet access), that just means you're not very imaginative and you don't use your computer for shit. (Personally, though, I just think you're lying.)

    You're missing the entire point.  Again.  Every single thing in that list, except for the file indexing (which is a fairly new trick, I'll admit, and I wish we'd gotten it sooner), is made possible almost entirely by much heavier hardware.  That's what I meant by "larger scale".  But let's see: Animations?  Check.  Music? Check.  Image processing?  Check.  Compression? Check.  Playing games? That's the main thing I used it for, when I wasn't poking around in Applesoft BASIC.

    Anyway, you're in the same bracket. Consumed by nostalgia to delusion. Old computers sucked ass. Hell, computers 10 years ago sucked ass. Go buy one and use it and try to tell me otherwise, you'll soon find that all your beliefs are the result of nostalgia and have no basis in fact.

    Compared to today's computers, yes.  Compared to the computers of ten years before them, no, they were a huge improvement.  Ten years from today we'll be talking about how much the primitive systems of 2010 sucked.

    @Mason Wheeler said:
    That's what I mean by software getting slower faster than hardware gets faster.

    So when you say "software has consistently gotten slower" what you actually mean is "software has consistently gotten faster." Thanks for clearing that up.

    No, what I mean is... argh!  How can I explain this in such a way that you can't somehow miss the entire point and twist it around in your head into the exact opposite of what I said?

    OK, I think I got it.  If you were to take VisiCalc (or AppleWorks's spreadsheet, or even good old-fashioned Lotus 123) and somehow magically grant it access to the insanely huge amount of hardware modern programs have, and magically allow the program to take advantage of it, without altering the basic code structure in any other way--if you were to control for the 4-6 orders of magnitude difference in system resources, in other words--then those old spreadsheets would beat the pants off of Excel in a number-crunching contest.

    Snark done, I don't know why you'd expect modern software to run faster (normalizing for CPUs) than older software, since:

    • The modern software does much, much more
    • The basic algorithms to (say) sort a spreadsheet haven't changed in 30 years

    Again, you express two contradictory concepts in the same breath.  George Orwell would be proud!  If the basic algorithms to do things haven't changed much, then why is it reasonable to believe that the software has to do much, much more in order to accomplish its work?

    What it comes down to is, obviously you don't really feel that Apple ][s were better/faster than modern computers, and there's nothing you couldn't do on one, or you'd be fucking using one right now. Which makes you either consumed by nostalgia to the point of delusion, or a hypocrite.

    *sigh* Why do I even bother?  Of course modern systems can do a lot more than older ones.  I would never claim otherwise.  But they do so with monumental amounts of wasted resources.  They've got, let's say, 10,000x more hardware available, but they don't do anywhere near 10,000x more real work.  More like 300-400x more.  And it's mostly due to a decrease in the average quality of the programmers who have been writing the software, and too many abstractions, frameworks and "managed" crap making it far too easy for them to get away with writing really wasteful code.



  • @Mason Wheeler said:

    They've got, let's say, 10,000x more hardware available, but they don't do anywhere near 10,000x more real work.  More like 300-400x more.

    I don't see how you can necessarily assume that the amount of hardware needed should be linear with respect to the amount of work done.



  • @Mason Wheeler said:

    If you were to take VisiCalc (or AppleWorks's spreadsheet, or even good old-fashioned Lotus 123) and somehow magically grant it access to the insanely huge amount of hardware modern programs have, [etc etc] then those old spreadsheets would beat the pants off of Excel in a number-crunching contest.

    That is excessively speculative and I don't think it's true. I think they'd perform the same.

    The main reason programs (or rather, tasks) haven't gotten lightning-fast is that we've given them high-fidelity input or demanded hi-fi output, and that fidelity has grown at the same pace as the hardware:

    An Apple ][ may be able to play some MIDI-shit, but these days we decode FLAC.

    An Apple ][ may have been able to load up an image, but these days you dump all twenty of your 3000*3000 RAW photos into Photoshop and edit them smoothly with real-time filters, transforms and adjustments.

    An Apple ][ is incapable of playing any kind of video. These days we play back HD without dropping a frame or batting an eye.

    Framerates of games haven't shot up exponentially either; we're still struggling to keep them in the 40-60 fps zone, because it's now common for a triple-A title to render a splendid pixel-shaded vista filled with high-poly monsters at HD resolutions.

    @Mason Wheeler said:

    Of course modern systems can do a lot more than older ones.  I would never claim otherwise.  But they do so with monumental amounts of wasted resources.  They've got, let's say, 10,000x more hardware available, but they don't do anywhere near 10,000x more real work.  More like 300-400x more.

    You can't illustrate a point about numbers using numbers that you know are utterly off and cannot be determined like that. So I'm going to show some numbers, comparing the first real computer I used to the one I'm using now:

    (I actually used an XT before that, with a MANLY on/off SWITCH, but I can't recall the specs)

    component | then | now | factor
    cpu | 66 MHz | 2*2,600 MHz | x78
    ram | 8 MB | 4,096 MB | x512
    disk | 500 MB | 1,000,000 MB | x2000
    vram | let's say 512 KB | 1,000,000 KB | x1953
    gpu | hell if I know. 1 MHz | 900 MHz | x900

    Those factors are a few zeros short of your fictional 10,000. The CPU is a surprisingly slow kid with a factor of only 78.

    As for determining the "work done", we can't quantify that until we define "work", and that's a hairball I'm not getting into in this post.

    Here's what I'll grant you, and it's a real-world example:

    I tried to play a 1024p movie in VLC and it jerked and tore like morbs' heavily-taxed rectum because the VLC programmers apparently lack some certain aspect of skill. Media Player Classic Home Cinema played it back in full, and even managed to apply useless playful filters like videowall or glass deform shit, while retaining framerate. So there's definitely an aspect of proper programming instead of a "use the library" mentality that produces better performing software.



  •  Cliffnotes for my text wall above:

    But they do so with monumental amounts of wasted resources.

    No, they don't.



  • @Mason Wheeler said:

    OK, I think I got it.  If you were to take VisiCalc (or AppleWorks's spreadsheet, or even good old-fashioned Lotus 123) and somehow magically grant it access to the insanely huge amount of hardware modern programs have, and magically allow the program to take advantage of it, without altering the basic code structure in any other way--if you were to control for the 4-6 orders of magnitude difference in system resources, in other words--then those old spreadsheets would beat the pants off of Excel in a number-crunching contest.
     

    Ok.

    Visicalc: http://www.bricklin.com/history/vcexecutable.htm (no guarantees about 64-bit support)

    If you don't own Excel, OpenOffice: http://download.openoffice.org/index.html (kind of unfair, because OO.org Calc is half the speed of Excel)

    There you go. Now give me some numbers. Both Excel and Visicalc only use one CPU core, so keep that in mind.



  • @blakeyrat said:

    @Mason Wheeler said:

    OK, I think I got it.  If you were to take VisiCalc (or AppleWorks's spreadsheet, or even good old-fashioned Lotus 123) and somehow magically grant it access to the insanely huge amount of hardware modern programs have, and magically allow the program to take advantage of it, without altering the basic code structure in any other way--if you were to control for the 4-6 orders of magnitude difference in system resources, in other words--then those old spreadsheets would beat the pants off of Excel in a number-crunching contest.
     

    Ok.

    Visicalc: http://www.bricklin.com/history/vcexecutable.htm (no guarantees about 64-bit support)

    If you don't own Excel, OpenOffice: http://download.openoffice.org/index.html (kind of unfair, because OO.org Calc is half the speed of Excel)

    There you go. Now give me some numbers. Both Excel and Visicalc only use one CPU core, so keep that in mind.


    Cute.  But that's a .COM program (remember when .COM meant a program, not a website?), which means it only has access to a 64KB address space, and you don't even need to worry about guaranteeing 64-bit support, since it can't even handle 32-bits!



  • @Mason Wheeler said:

    Cute.  But that's a .COM program (remember when .COM meant a program, not a website?), which means it only has access to a 64KB address space, and you don't even need to worry about guaranteeing 64-bit support, since it can't even handle 32-bits!
     

    Why is address space relevant to your test? Just pick a test case that requires lots of CPU and very little memory. It's a spreadsheet, should be pretty easy.

    Or why don't you propose an objective test that you can't weasel out of, then.
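
    Something in this shape would do -- pure CPU, a few bytes of state, and the same arithmetic is trivial to lay out as a column of spreadsheet formulas. (The recurrence and the iteration count here are arbitrary; it's just a sketch of the kind of test I mean.)

        /* A CPU-bound, nearly memory-free workload: iterate a cheap
         * recurrence a large number of times and report the elapsed time.
         * Because the final value is printed, the loop can't be optimized
         * away, and the whole working set is a handful of bytes. */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            const long iterations = 200000000L;   /* arbitrary, just big enough to time */
            double x = 0.5;
            clock_t start = clock();
            for (long i = 0; i < iterations; i++)
                x = 3.9 * x * (1.0 - x);          /* logistic map step: two multiplies and a subtract, like a cell formula */
            double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
            printf("final x = %.17g after %ld iterations in %.2f s\n",
                   x, iterations, seconds);
            return 0;
        }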



  • @dhromed said:

    I tried to play a 1024p movie in VLC and it jerked and tore like morbs' heavily-taxed rectum because the VLC programmers apparently lack some certain aspect of skill. Media Player Classic Home Cinema played it back in full, and even managed to apply useless playful filters like videowall or glass deform shit, while retaining framerate.

    That actually has more to do with the fact that MPC-HC uses DXVA and dumps a lot of the load of decoding video off to the graphics card. VLC only uses the CPU, which tends to be a fair bit slower. I had a similar problem with my computer - video played like complete and utter ass until I got DXVA up and running.



  •  @Fred Foobar said:

    actually

    A hint on the forum said that VLC 1.1 supported DXVA, but I didn't search any further.

