The abhorrent 🔥 rites of C



  • Yeah, it tells me that you don't recognize a play on words when you see one. :P

    So at the risk of repeating myself yet again, what about Rust?


  • Winner of the 2016 Presidential Election

    @Mason_Wheeler said:

    What code is "this code"? The code that requires a lambda in order to execute arbitrary code? Is any compiler somehow magically smart enough to recognize "oh, this lambda is a special arbitrary-code-RAII-lambda; we can de-lambdify it and turn its body into an inline"? Because I highly doubt that.

    Maybe some proof will convince you:

    I copypasta'd the ScopeGuard implementation from the StackOverflow question I linked above, since scope_exit is not in the standard library yet.

    And yes, the example is very simple, but as you can see inlining that lambda is no problem for the compiler. In fact, the whole code is optimized away and the constant 2 is passed to std::basic_ostream::operator<<(int).

    Edit: I just checked: You don't even need -O3 for that, -O1 is sufficient if you want gcc to inline the whole lambda (no matter what else you put in there). A separate lambda function is only generated when you disable optimizations (-O0). Updated version: https://goo.gl/5Ho21L
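
    In case the shortened links rot, here's a minimal sketch of such a scope guard (simplified from the StackOverflow version; the mechanics are the same):

    #include <iostream>
    #include <utility>

    // Runs the stored callable in its destructor, i.e. when it leaves scope.
    template <typename F>
    struct ScopeGuard {
      F f;
      explicit ScopeGuard(F fn) : f(std::move(fn)) {}
      ~ScopeGuard() { f(); }
      ScopeGuard(const ScopeGuard&) = delete;
      ScopeGuard& operator=(const ScopeGuard&) = delete;
    };

    int main() {
      int x = 1;
      ScopeGuard guard([&] { std::cout << x + 1; });  // C++17 CTAD
      // With -O1 or higher, gcc inlines the lambda; the function body
      // reduces to passing the constant 2 to operator<<(int).
    }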


  • Winner of the 2016 Presidential Election

    @Mason_Wheeler said:

    So at the risk of repeating myself yet again, what about Rust?

    Just look at what I replied to. You said that only C++ and D (modeled after C++) call the destructor when an object goes out of scope. Rust behaves the same way, despite not being related to C++:

    https://doc.rust-lang.org/book/lifetimes.html#thinking-in-scopes
    http://rustbyexample.com/trait/drop.html


  • ♿ (Parody)

    @asdf said:

    The fact that I was unable to recognize it as a joke in the context of your post should tell you something…

    You're German? 🍹


  • Winner of the 2016 Presidential Election

    @boomzilla said:

    You're German?

    Yeah, maybe I've spent too much time in Germany.



  • So, @Mason_Wheeler, which compiler have you worked on? I need to know so I can avoid a compiler that can't optimize this:

    void somefunc() {
      struct SomeRAIIThing {
        SomeRAIIThing() { DoSomething(); }
        ~SomeRAIIThing() { UndoSomething(); }
      };

      SomeRAIIThing thing;
    }
    

    into

    void somefunc() {
      DoSomething();
      UndoSomething();
    }
    

    Because that's utterly trivial. I'd be similarly surprised at a compiler that couldn't optimize out a lambda-taking scope_exit thing.

    Are classes in Pascal created at runtime, or is everything in them virtual, or something? My best guess here is that you actually have no idea what C++'s semantics are like and are basing them on what those whacky structured-programming nutjobs in the 80s thought was a good idea. You know templates are compile-time, too, right?

    I don't understand the abstraction-inversion complaint either. Reliable destructors are not a magical high-level feature that's being used to implement the terribly-low-level feature of reliably doing something at scope exit. They're just a tool, and you can use them to implement convenient semantics.

    P.S. you realize like 95% of the time what you're doing RAII for is to release a mutex or some other library-provided Thing, and so you won't need to define a specific class every time, right? I don't even remember the last time I had to explicitly write out an RAII helper. And yes, I have done GUI code in C++.



  • @Mason_Wheeler said:

    Counterexample: The for loop. As everyone knows, a for loop is a reduced-boilerplate special-case of a while loop that is common enough to have its own keyword and its own semantics. In fact, it's probably not at all controversial to say that for loops get used in code even more than while loops do. And that's just fine.

    But imagine if you had no while keyword at all, and the only way to implement it was by twisting a for loop in unnatural, convoluted ways. That would be an abstraction inversion.

    You've just sunk your own argument. I can implement finally in terms of a destructor of a local object holding a lambda, but you can't implement destructors in terms of finally clauses, no matter how much you twist them. Which means the destructor is the generic building block, and the finally is the special case.

    It'd be analogous to a while and a "for each". The "for each" is a special while that has to have an iterable collection, and can't be used to implement a more general while construct. The while can be used to implement a for each, but you have to write some extra code to tie it to an iterable collection. You're essentially claiming the "for each" is the generic thing and using a while to iterate over a collection is an abstraction inversion, because you often iterate over collections and using a while would require special case code.
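
    To spell the analogy out in code, here is roughly what the special case desugars to (a sketch; the real range-for handles a few more details):

    #include <iostream>
    #include <vector>

    int main() {
      std::vector<int> v{1, 2, 3};

      // The special case: "for each" over an iterable collection.
      for (int x : v) std::cout << x << '\n';

      // The generic construct it's built from: a while over iterators.
      auto it = v.begin();
      auto end = v.end();
      while (it != end) {
        int x = *it;
        std::cout << x << '\n';
        ++it;
      }
    }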

    @Mason_Wheeler said:

    I doubt that the same can be said for custom classes that abuse RAII to get finally semantics for non-resource-releasing tasks.

    You'd be wrong. Here: https://goo.gl/h0Ynsv

    Obviously, I can't compare against a language with finally clauses, but you can see that the three different cases (the generic template approach, simply calling the functions manually, and having a specialized function) all generate the exact same assembly. That's literally what "zero-cost abstraction" stands for. Notice that the functions DoSomething and Undo could do anything; it's irrelevant. They're not special-cased in any way. There's literally no cost, and if you make the functions more complex, the correct calls will get placed at the correct exit points, with or without exceptions.
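
    For the record, the three cases were along these lines (a reconstruction, not the literal contents of the link; the extern declarations keep DoSomething and Undo opaque, which is how you inspect the generated assembly):

    void DoSomething();
    void Undo();

    // Case 1: generic template scope guard.
    template <typename F>
    struct Guard {
      F f;
      explicit Guard(F fn) : f(fn) {}
      ~Guard() { f(); }
    };

    void with_guard() {
      DoSomething();
      Guard g([] { Undo(); });  // Undo runs at every exit point
    }

    // Case 2: simply calling the functions manually.
    void manual() {
      DoSomething();
      Undo();
    }

    // Case 3: a specialized, hand-written RAII type.
    struct Specialized {
      Specialized() { DoSomething(); }
      ~Specialized() { Undo(); }
    };

    void specialized() { Specialized s; }

    // All three compile down to the same two calls (plus exception plumbing
    // where a throwing function sits between them).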


  • Discourse touched me in a no-no place

    The advantage of try/finally is that you put the code at the place where it occurs in the execution order. The advantage of RAII is that you don't. ;)



  • @Mason_Wheeler said:

    What is there to explain? Doing more than a single line of code (such as wrapping it in an object) to do the work of a single line of code is inherently worse. How is this not obvious?

    It's writing "more than a single line" once, versus three lines of code at every use site in user code.

    How is this not obvious?©



  • try {
      FrobEmu();
      if (something) {return;}
    
      // SNIP: Ten lines of code
    }
    catch(...) {...}
    finally {
      UnfrobEmu();
    }
    

    If something is true then the execution is a bit out of order relative to the presentation. And I'd prefer to know that the emu will be unfrobbed at the start of the block, rather than assuming it because a try is present, scrolling down to check, or collapsing the block to check. But I assume that's what you were getting at with:

    @dkf said:

    The advantage of RAII is that you don't.

    (:pendant: guard: "I'd prefer to (know that the emu will be unfrobbed) at the start of the block" not "I'd prefer to know that (the emu will be unfrobbed at the start of the block)")



  • @boomzilla said:

    I thought one of the philosophies of C++ was to be able to implement as code stuff that other languages had to write into the language. I'm sure we could all fight over the desirability of that.

    I respectfully suggest that this entire thread consists of such a fight.


  • Discourse touched me in a no-no place

    @jmp said:

    I assume that's what you were getting at

    The point I was making was that both perspectives are valid. We know that writing code where the order of writing and the order of execution are substantially different leads to real confusion among developers; it's why callbacks are really quite difficult to use well, as many people using NodeJS have found. We also know that it is nice to be able to ensure that the housekeeping required to clean up a complex value happens automatically, without having to write lots of code at all. The real problem comes when people decide that they've got a hammer of a particular type, and so the problem they're dealing with must automatically be that type of nail. Even with the very best hammer, it's not going to be the right solution always, and a good workman has many tools.



  • @Mason_Wheeler said:

    Counterexample: The for loop. As everyone knows, a for loop is a reduced-boilerplate special-case of a while loop that is common enough to have its own keyword and its own semantics.

    For that matter, while is a special-case of if and goto.

    Or if you want to go to a lower level, a CMP followed by a JE.
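
    Written out (a sketch):

    #include <iostream>

    int main() {
      int i = 0;
    loop:
      if (i < 3) {         // while (i < 3) { ... }
        std::cout << i << '\n';
        ++i;
        goto loop;         // ...is just if plus goto
      }                    // (at a lower level: CMP and a conditional jump)
    }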


  • area_pol

    @Mason_Wheeler said:

    The problem isn't that I don't understand RAII; it's that I do, and apparently I understand it better than the people here who seem incapable of thinking outside C++ idioms, because I see the flaws in it where they keep running around chasing their tails and talking about resource destruction.

    No, the problem is that you think you understand RAII, but you really don't. Consider the following:

    1. Object A creates and owns object B.
    2. Object B in turn creates and owns object C.

    Now you're trying to tell us that:

    3. Object A destroys object B, which makes object B destroy C.

    is bad, while:

    3. Object A has to remember to explicitly tell object B to destroy C, before B gets destroyed, because B apparently doesn't give two shits about C which he owns, and lets it leak.

    is good design. Flawless logic, flawless. You'd make a good Tizen dev.

    @Mason_Wheeler said:

    Nonzero overhead, no matter how small, is always higher than the zero overhead

    Except that many of those extra classes with destructors you fear so much get inlined to 0 overhead.

    @Mason_Wheeler said:

    Yes, and if "here" was the only place where leaks can possibly get introduced, you might have a valid point. Since it's not, you're peddling more snake oil.

    So, having to manually track your resources is good? And having the compiler manage them for you is bad? No wonder a topic about C attracted you.

    @Mason_Wheeler said:

    In C++, specifically. I know of no other language (with the possible exception of D, which deliberately copied a lot of C++'s semantics and no one uses anyway) in which this is necessarily the case.

    Hence you have hacks like finally to introduce some sort of determinism (as I already said earlier).

    @Mason_Wheeler said:

    A scope is not an object; why should it have a destructor?

    Who said anything about adding destructors to scopes? Read again what he wrote.

    @Mason_Wheeler said:

    You know, I think that's literally the first objection anyone's raised all day that looks like the person writing it actually has any understanding of the subject matter. Congratulations.

    Damn, you're even more ignorant than blakey and that's an achievement.

    @Mason_Wheeler said:

    You can still forget to put something in the destructor, or to put the right thing in...

    So having to write finally every single time is less error-prone than writing a single destructor? Looks like we have a copy-paste warrior here.

    @Mason_Wheeler said:

    The compiler can special-case smart pointers because they're used so commonly that it's worth writing special case code in the compiler to recognize and optimize them down to the same codegen as you'd get without a smart pointer. I doubt that the same can be said for custom classes that abuse RAII to get finally semantics for non-resource-releasing tasks.

    You are, generally speaking, full of shit here and you know nothing about optimizing compilers.



  • @NeighborhoodButcher said:

    Consider the following:

    1. Object A creates and owns object B.
    2. Object B in turn creates and owns object C.

    Now you're trying to tell us that:

    3. Object A destroys object B, which makes object B destroy C.

    is bad, while:

    3. Object A has to remember to explicitly tell object B to destroy C, before B gets destroyed, because B apparently doesn't give two shits about C which he owns, and lets it leak.

    is good design.


    ...what.

    Seriously. What is this I don't even. I am totally at a loss to comprehend the twists and leaps of mental logic that it requires to bring you anywhere close to arriving at that conclusion based on what I wrote. Would you mind explaining your line of reasoning? Because you're missing several crucial steps (i.e. the entire thing) and so you end up pulling an absolutely ridiculous conclusion out of nowhere.

    First off, how do we even get from the subject of "transient object creation and destruction within the scope of a single method" to "long-term ownership of one object by another, that gets destroyed when the owner is destroyed"? That's something that never even came up anywhere in this discussion AFAICT.

    @NeighborhoodButcher said:

    Except that many of those extra classes with destructors you fear so much get inlined to 0 overhead.

    Yes, and all of the examples people have posted that trivially demonstrate that this occurs are trivial examples that don't come anywhere close to demonstrating the sort of scenarios in which a try/finally would actually be used. (Hint: if you're not calling a method within the scope in question, in which an exception may or may not be raised, you're not proving anything useful or interesting.) I don't have time to test such scenarios myself right this moment and see what ASM they generate, but I probably will this evening.

    @NeighborhoodButcher said:

    So, having to manually track you resource is good? And having the compiler to manage it for you is bad? No wonder a topic about C attracted you.

    First off, where did I say that?

    Second, why do you keep talking about it as if it's such a bad or difficult thing? Keeping track of resource ownership is easy. There is nothing difficult about it whatsoever as long as you keep one single principle in mind and internalize it: every resource should have exactly one well-defined owner. That's not something you "have to remember" any more than you have to remember to tie your shoes before you leave the house: it becomes a habit very quickly, and if you somehow forget it, it doesn't feel right and is very noticeable if you pay any attention.
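
    Whichever side of this argument one takes, the principle itself has a direct spelling in modern C++; a minimal sketch using std::unique_ptr, the standard single-owner type:

    #include <memory>
    #include <utility>

    struct Resource { /* ... */ };

    int main() {
      // Exactly one well-defined owner at any time.
      std::unique_ptr<Resource> owner = std::make_unique<Resource>();

      // Ownership can be handed over, but never silently shared.
      std::unique_ptr<Resource> next = std::move(owner);

      // 'owner' is now empty; 'next' frees the Resource when it dies.
    }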

    @NeighborhoodButcher said:

    Hence you have hacks like finally to introduce some sort of determinism (which I already said earlier).

    See, this is why I asked if anyone here has any compiler experience. The finally mechanism isn't "a hack"; it's a fundamental SEH primitive. For each exception handling frame, there are three possible responses to any given exception:

    1. do not catch because this exception doesn't match the handler
    2. this exception matches the handler; catch and handle
    3. catch unconditionally but do not handle

    finally is type 3. There are multiple different actual mechanisms to implement exception handling (Win32 and Win64 do it in dramatically different ways, for example), but without some implementation for each of those 3 cases, you cannot have structured exception handling. finally is a fundamental control-flow primitive. RAII, in all non-trivial cases (where there is actually any chance that an exception can be thrown, unlike the trivial examples posted thus far), must necessarily be implemented in terms of the finally mechanism or it will not work. If you don't understand this simple fact, it's not surprising that you keep posting ignorant gibberish about RAII and try/finally.
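
    In portable C++ terms, the closest spelling of that third response is a catch-all that rethrows; a sketch (DoWork and Cleanup are placeholder names):

    #include <cstdio>
    #include <stdexcept>

    void DoWork() { throw std::runtime_error("boom"); }  // may throw
    void Cleanup() { std::puts("cleanup"); }             // must run either way

    void f() {
      try {
        DoWork();
      } catch (...) {  // type 3: caught unconditionally...
        Cleanup();
        throw;         // ...but not handled; rethrown to the next frame
      }
      Cleanup();       // normal exit path
    }

    int main() {
      try { f(); } catch (const std::exception&) {}  // type 2 lives up here
    }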

    @NeighborhoodButcher said:

    So having to write finally every single time is less error-prone than writing a single destructor? Looks like we have a copy-paste warrior here.

    Again, this criticism would be correct if it were valid, but in actual practice try/finally blocks tend to be unique enough that that's not a concern.

    @NeighborhoodButcher said:

    You are, generally speaking, full of shit here and you know nothing about optimizing compilers.

    What I know is that the mythical "sufficiently smart compiler" that people always talk about being theoretically capable of performing arbitrary optimizations on arbitrary code cannot exist without the existence of strong AI, and we're nowhere near there yet.


  • Discourse touched me in a no-no place

    @Mason_Wheeler said:

    See, this is why I asked if anyone here has any compiler experience.

    👋

    But I'm not arguing the same points as either you or @NeighborhoodButcher, as you're both being less wise than you think you are. 😃


  • Winner of the 2016 Presidential Election

    @Mason_Wheeler said:

    Yes, and all of the examples people have posted that trivially demonstrate that this occurs are trivial examples that don't come anywhere close to demonstrating the sort of scenarios in which a try/finally would actually be used. (Hint: if you're not calling a method within the scope in question, in which an exception may or may not be raised, you're not proving anything useful or interesting.)

    Did you even look at the links I posted? I did exactly that!

    Since you're still deliberately ignoring my proof that you're wrong, I'll post it again:

    The [&] in the lambda definition means that all variables of the surrounding scope are captured by reference, so you can write arbitrary code which uses the variables in that scope in such a lambda. As the assembly clearly shows, the compiler inlines the lambda passed to the scope guard, even at the lowest optimization level (-O1).
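
    A tiny illustration of that capture mode, independent of any scope guard (a sketch):

    #include <iostream>

    int main() {
      int count = 0;
      auto report = [&] { std::cout << count << '\n'; };  // captures by reference
      count = 42;
      report();  // prints 42: the lambda sees the variable itself, not a copy
    }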



  • @Mason_Wheeler said:

    finally is a fundamental control-flow primitive.

    No it's not. There are exactly two fundamental control flow primitives. Conditional Branch, and Software Interrupt. Everything else is built on those two.


  • area_pol

    @Mason_Wheeler said:

    Seriously. What is this I don't even. I am totally at a loss to comprehend the twists and leaps of mental logic that it requires to bring you anywhere close to arriving at that conclusion based on what I wrote.

    Well, that's your problem. Let me put that in code. RAII approach:

    void A::f()
    {
        B b; // B owns some C inside
        // some code
    }
    

    Your approach:

    void A::f()
    {
        B b; // B owns some C inside
        try
        {
            // some code
        }
        finally
        {
            b.explicitlyReleaseC(); // seems like B forgot it owns C, let's remind him EVERYWHERE B is used
        }
    }
    

    Get it now? No? I thought so.

    @Mason_Wheeler said:

    Yes, and all of the examples people have posted that trivially demonstrate that this occurs are trivial examples that don't come anywhere close to demonstrating the sort of scenarios in which a try/finally would actually be used.

    These are the usual cases. When your destructor has 100 non-inlined lines of code, it's still better to write those 100 lines of code ONCE rather than remembering to copy-paste them many times with finally, even when you incur a single function call penalty.

    @Mason_Wheeler said:

    First off, where did I say that?

    It has been explained to you that that's exactly what you do with finally. That's why smarter people than you invented "using" for such cases, where there are no deterministic destructors.

    @Mason_Wheeler said:

    Second, why do you keep talking about it as if it's such a bad or difficult thing? Keeping track of resource ownership is easy.

    Oh yeah, all those memory leaks, double-frees, etc. from languages that track resources manually prove just how easy it is.

    @Mason_Wheeler said:

    That's not something you "have to remember" any more than you have to remember to tie your shoes before you leave the house: it becomes a habit very quickly, and if you somehow forget it, it doesn't feel right and is very noticeable if you pay any attention.

    Seriously, you'd rather copy-paste some cleanup code all around and hope you didn't forget any place, rather than write it once and have the compiler do it for you? And what if you have to change the cleanup logic? Would you search all the places some resource is used and manually repeat your fix?

    @Mason_Wheeler said:

    See, this is why I asked if anyone here has any compiler experience. The finally mechanism isn't "a hack"; it's a fundamental SEH primitive.

    Thank you for your description of something which has no connection to what I wrote. When you have no deterministic lifetime, you need finally to enforce some kind of determinism. If you had a deterministic lifetime, you wouldn't need it. It could be handy, but not needed. That's the point.

    @Mason_Wheeler said:

    Again, this criticism would be correct if it were valid, but in actual practice try/finally blocks tend to be unique enough that that's not a concern.

    In one sentence you say it's not the case, but later that it is the case, but not really a concern. How the hell did you get a job?

    @Mason_Wheeler said:

    What I know is that the mythical "sufficiently smart compiler" that people always talk about being theoretically capable of performing arbitrary optimizations on arbitrary code cannot exist without the existence of strong AI, and we're nowhere near there yet.

    Fortunately, our compilers are smart enough to prove your points invalid on that subject. Which has already been shown to you, by the way.


  • Discourse touched me in a no-no place

    @tufty said:

    There are exactly two fundamental control flow primitives. Conditional Branch, and Software Interrupt. Everything else is built on those two.

    Ordinary function call/return doesn't feel much like either of those (as SWI is usually thought of as being a call to operations out-of-context, such as in the OS) and unconditional branching has a substantively different effect on the call graph to conditional branching. While you might be able to make a contorted claim that your assertion is true nonetheless, if the way you analyse the code based on when they occur is very different, you've really not got the same thing.

    Most control flow code is mainly done with branches, conditional or otherwise. 😄



  • Function call is two operations.

    • Store the return address
    • Branch conditionally, condition being "always"

    Function return, again, is two operations

    • Retrieve the return address
    • Branch conditionally, condition "always"

    Admittedly, certain instruction sets roll these up into single dedicated instructions, but guess what's happening at the silicon level?


  • Discourse touched me in a no-no place

    @tufty said:

    Admittedly, certain instruction sets roll these up into single dedicated instructions, but guess what's happening at the silicon level?

    The silicon level is not the only useful level. It helps to have other levels as well as then you (or the optimising compiler ;)) can reason about the code.



  • It seems like in C++ you need to describe everything in terms of 'resources' that have 'ownership', even if it isn't actually a resource at all.

    Thus, if you would want to do something like this:

    class some_class {
      void button_clicked() {
        try {
          log.write("Processing started at " + now());
          // do something that can go wrong
        }
        finally {
          log.write("Processing finished at " + now());
        }
      }
    };
    

    You'd have to mentally think of the part of the log file between and including the 'started' and 'finished' messages as a 'resource' that needs to be wrapped into a class. Thus, something like so:

    class log_file_started_finished_writer_helper {
    public:
      log_file_started_finished_writer_helper() {
        log.write("Processing started at " + now());
      }
      ~log_file_started_finished_writer_helper() {
        log.write("Processing finished at " + now());
      }
    };

    class other_class {
      void button_clicked() {
        log_file_started_finished_writer_helper helper;
        // do something that can go wrong
      }
    };
    

    (similar to what has been described before, where the disabled state of some UI controls was the 'resource' that was 'owned' by some kind of helper class.)

    Now I'm not a C++ expert at all. But although it somehow makes some kind of sense, it is really a bit strange to be forced to think about things in this way. And I am also surprised that there is no direct access to a finally mechanism, even though C and C++ are relatively low-level languages in other respects.
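
    For what it's worth, the usual C++ answer to that objection is to write the helper once, generically, and pass the cleanup as a lambda, so no bespoke class is needed per use site. A sketch (on_scope_exit is a made-up name, not a standard facility):

    #include <iostream>
    #include <utility>

    // Written once, reused everywhere (hypothetical name, not standard C++).
    template <typename F>
    struct on_scope_exit {
      F f;
      explicit on_scope_exit(F fn) : f(std::move(fn)) {}
      ~on_scope_exit() { f(); }
    };

    void button_clicked() {
      std::cout << "Processing started\n";
      on_scope_exit guard([] { std::cout << "Processing finished\n"; });
      // do something that can go wrong; "finished" is printed either way
    }

    int main() { button_clicked(); }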


  • area_pol

    That's one place where I could see "finally" making sense in C++: to avoid scope_exit. But I fear adding finally would mean idiot programmers, especially those coming from C or Pascal as we see here, would use it instead of doing destruction in actual destructors, and we'd have a ton of leaks, crashes, undefined behaviors and all that good stuff. Because, as someone said earlier, manually handling resources is easy, right?



  • @tufty said:

    Function call is two operations.

    • Store the return address
    • Branch conditionally, condition being "always"

    Function return, again, is two operations

    • Retrieve the return address
    • Branch conditionally, condition "always"

    Admittedly, certain instruction sets roll these up into single dedicated instructions, but guess what's happening at the silicon level?

    In fact, at the opcode level, an ordinary unconditional branch is a conditional branch with "always" as condition on at least one platform: The 68k. The fun part is that on this platform, the opcode for a "branch to subroutine" is the one for an unconditional branch with "never" as condition: As that wouldn't make sense, they reused the opcode for something that does.



  • C++ looks pretty easy. Why do so many people complain about it?



    I agree. I actually come from a Pascal background myself, but manual resource management is tedious and error-prone indeed. For me it was already enough work, using memory leak checkers, to make sure that there were no memory leaks in normal operation. I never even bothered to add try/finally clauses except to fix an actual bug. So I really like how C++ makes resource management much more robust.


  • area_pol

    @Captain said:

    C++ looks pretty easy. Why do so many people complain about it?

    I think we have the usual prime cause: people don't understand it. Of course there's a lot of weird stuff inside and lots of legacy burden, but it all gets ironed out with every iteration. But things like resource management are so damn easy to do right nowadays that you literally have to put in more work to break them. Of course GC languages make that easier: you just don't care there in most cases.



  • @blakeyrat said:

    How do people care so much about C. There's like 120 new posts this morning. Jesus.

    Bodybuilders, man. Did nobody tell you about bodybuilders?
    https://www.youtube.com/watch?v=eECjjLNAOd4

    @NeighborhoodButcher said:

    What ancient language should we invoke next?

    Pointers are for sissies. Real programmers create their double-free vulnerabilities using arrays and COMMON blocks.

    @Mason_Wheeler said:

    Profanity is the hallmark of a tragically limited vocabulary.

    https://www.youtube.com/watch?v=rTIorwtJbhE

    @tufty said:

    There are exactly two fundamental control flow primitives. Conditional Branch, and Software Interrupt. Everything else is built on those two.

    So you're saying that the only way I can implement conditional execution is to wrap what in any sane instruction set would be conditionally executed instructions in a conditional branch block? Fuck that abstraction inversion.



  • @flabdablet said:

    So you're saying that the only way I can implement conditional execution is to wrap what in any sane instruction set would be conditionally executed instructions in a conditional branch block? Fuck that abstraction inversion.

    Not at all. Conditional instructions other than branches don't actually affect control flow; a conditional instruction still passes through the instruction decode logic at least.



  • But on any architecture that would allow me to implement if and while and for constructs using conditional instructions other than conditional branch, conditional branch is not a fundamental control flow primitive.



  • I'd be interested to see how you intend to do potentially infinite while or generalised (i.e. not loop-unrolled) for without branching.

    Conditional non-branch instruction execution is not a fundamental feature; it falls into "platform-specific optimisation". Sure, when you've got it, you want to use it where it's useful.


  • Winner of the 2016 Presidential Election

    @Captain said:

    C++ looks pretty easy. Why do so many people complain about it?

    Mostly because people used it before C++11 or don't understand the "new" C++.

    But also because C++ still has some pitfalls that other languages don't have (e.g. the possibility of object slicing, you have to remember to make base class destructors virtual, etc.).
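
    The virtual destructor pitfall in particular is easy to demonstrate (a sketch):

    #include <memory>

    struct Base {
      ~Base() {}  // oops: not virtual
    };

    struct Derived : Base {
      ~Derived() { /* release something important */ }
    };

    int main() {
      std::unique_ptr<Base> p = std::make_unique<Derived>();
    }  // undefined behavior: Derived is deleted through Base*, so
       // ~Derived() is not guaranteed to run. 'virtual ~Base() {}' fixes it.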

    @NeighborhoodButcher said:

    Of course GC languages make that easier

    I disagree.

    • shared_ptr already makes memory management pretty easy, and the memory is freed immediately when the last reference goes out of scope (see the sketch after this list)
    • It is still possible to leak memory in GC languages, so the abstraction is leaky
    • You will often have to work around or fight the garbage collector for performance reasons, which makes your code more complicated and super-ugly.
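
    A sketch of the first point: release happens deterministically, the moment the last reference disappears (Connection is a placeholder type):

    #include <cstdio>
    #include <memory>

    struct Connection {
      ~Connection() { std::puts("closed"); }
    };

    int main() {
      auto a = std::make_shared<Connection>();
      {
        std::shared_ptr<Connection> b = a;  // refcount 2
      }                                     // refcount back to 1, nothing freed
      a.reset();                            // last reference gone: prints
                                            // "closed" right here, no GC pause
      std::puts("after reset");
    }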


  • @tufty said:

    how you intend to do potentially infinite while or generalised (i.e. not loop-unrolled) for without branching

    Need branching, obviously. Any number of ways to do it without a conditional branch instruction.

    The point is that sure, you can think of conditional branching as a conceptual primitive with which to build any kind of control flow. But you can also think of other low-level constructs as conceptual primitives and build your arbitrary control flows on top of those instead. Depending on the architecture of the underlying machine, calling any of these things "primitives" might well amount to abstraction inversion.

    @Mason_Wheeler thinks of RAII as a specific use for try/finally. Various others here think of try/finally as a special case of RAII. Whether RAII or try/finally is conceptually simpler or perhaps conceptually cleaner appears to be the point at issue, and that depends entirely on the angle you look at stuff from.

    If you're accustomed to OO thinking, it's completely natural to think of anything that has or modifies state as some kind of object, and to model any kind of setup/cleanup pair as object creation and destruction even if the object concerned actually ends up degenerate and consumes no storage. If you're not accustomed to OO, it's natural to think of setup/cleanup as primarily a control flow issue and see any requirement to use object creation and destruction mechanisms to achieve this as useless conceptual baggage.

    It remains a matter of simple fact that if the outcomes you're actually trying to achieve are identical, then it doesn't matter which view you conceive of as primitive, because the compiler is going to spit out pretty much identical object code whether your source code uses try/finally blocks or degenerate RAII objects. There is no inner-platform effect going on here, regardless of which abstraction you build on top of the other.

    My personal opinion is that we should stop worrying about whether RAII or try/finally is the actual conceptual primitive and start arguing about mmap() vs. read() and write() instead.


  • area_pol

    @flabdablet said:

    It remains a matter of simple fact that if the outcomes you're actually trying to achieve are identical, then it doesn't matter which view you conceive of as primitive, because the compiler is going to spit out pretty much identical object code whether your source code uses try/finally blocks or degenerate RAII objects. There is no inner-platform effect going on here, regardless of which abstraction you build on top of the other

    One issue remains tho: with RAII you write once and it works everywhere; with try/finally you have to remember to manually duplicate your cleanup code everywhere. Also, with RAII the owners remain owners: if one creates something, one destroys it. With try/finally, if one creates something, one relies on someone else to explicitly tell him to destroy that something.



  • None of which is relevant to the question of which construct it is appropriate to think of as the primitive on which the other is (or can be) built.


  • area_pol

    That may be.



  • It might well be an argument in support of a view on which of the two constructs should be exposed natively as a language feature, though.


  • ♿ (Parody)

    @flabdablet said:

    None of which is relevant to the question of which construct it is appropriate to think of as the primitive on which the other is (or can be) built.

    I can't see that either one is a primitive. Working up the chain of abstractions, they share a common parent, and that's the concept of exiting a scope.



  • Right, so any given person's view on the relative claims of these two constructs to conceptual simplicity is going to depend on whether they personally conceive of a scope as a control flow region with added data visibility semantics, or a data visibility region with added control flow semantics.

    This thread is full of blind men disputing the true nature of elephants.


  • area_pol

    More like debunking some absurd statements of one being a hack of another.



  • @flabdablet said:

    Any number of ways to do it without a conditional branch instruction.

    (directly modifying the program counter is a branch, btw)





  • Just to add to the GC debate. I have only once interrupted the PHP GC for performance reasons and don't need to do it "frequently" as asserted up thread.

    Also, RAII as a mentality is great for PHP.


  • Winner of the 2016 Presidential Election

    @Arantor said:

    I have only once interrupted the PHP GC for performance reasons and don't need to do it "frequently" as asserted up thread.

    That's because PHP applications, unlike Java or C# applications, usually don't run for long. A request comes in, the interpreter is started, it executes the PHP code and then the thread/process exits.



  • I do a fair bit of Java work and I've never had a performance issue with the GC there, either.
    I'd suspect people who often have GC problems in Java/C# may just be Doing It Wrong.



  • This is what I was thinking...


  • Winner of the 2016 Presidential Election

    @Salamander said:

    I'd suspect people who often have GC problems in Java/C# may just be Doing It Wrong.

    Depends on what kind of applications you write. If you write data-processing applications, you need to do stuff like re-use existing objects instead of creating new ones to avoid garbage collection in performance-critical parts of the code. If you're really unlucky, you might even have to resort to manual memory management using sun.misc.Unsafe.


  • Discourse touched me in a no-no place

    @Salamander said:

    I do a fair bit of Java work and I've never had a performance issue with the GC there, either.

    The performance problems have been largely sorted out for the past 10 years or so, but in the early days Java used a stop-the-world GC algorithm and that really did have performance issues in practice.


  • Notification Spam Recipient

    @asdf said:

    @Salamander said:
    I'd suspect people who often have GC problems in Java/C# may just be Doing It Wrong.

    Depends on what kind of applications you write. If you write data-processing applications, you need to do stuff like re-use existing objects instead of creating new ones to avoid garbage collection in performance-critical parts of the code. If you're really unlucky, you might even have to resort to manual memory management using sun.misc.Unsafe.

    To be honest, if you're having problems with garbage collection, you've probably borked really hard somewhere in your own code, or you're hitting the memory ceiling in 32-bit JVMs/requirements. (I've actually seen thousands of man-hours thrown at that problem, even when the users had 16 GB Core i7 64-bit Windows machines.) My experience has been that it's our own code. GC tuning really is the last possible action to squeeze out performance in recent JVMs, and even that is hit and miss.

    I've never used Unsafe, but I've heard stories. I imagine anything that necessitates those libraries is a great :wtf: in the making. You really should share those stories. We would be delighted to read them.

