The abhorrent πŸ”₯ rites of C


  • area_pol

    I wouldn't be so sure that your macro version would be optimized; feel free to show me proof. On the other hand, I am sure the normal qsort, where the comparator is an explicit parameter, cannot be optimized, as the book says.



  • @Khudzlin said:

    Are nasal demons on strike?

    No, but I wanted to take a specific explicit action rather than hoping that the DeathStation 9000 would do its job and nuke a major city for me.



  • I didn't say it would be optimised; I said it could be optimised. As could the normal qsort with an explicit parameter: all that is required for that is inlining or specialising at every (or even just some) call site.
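
    A rough sketch of what "specialising at the call site" means in practice (illustrative only; the C++ comparison is mine, not from the book):

        #include <algorithm>
        #include <cstdlib>

        // qsort takes the comparator as a runtime function pointer; the
        // compiler generally emits an indirect call per comparison unless
        // it can prove which function the pointer refers to.
        int cmp_int(const void* a, const void* b) {
            int x = *static_cast<const int*>(a);
            int y = *static_cast<const int*>(b);
            return (x > y) - (x < y);
        }

        void sort_c(int* v, std::size_t n) {
            std::qsort(v, n, sizeof(int), cmp_int);
        }

        // std::sort is a template: the comparator's type is part of the
        // instantiation, so each call site gets a specialised copy and the
        // comparison can be inlined.
        void sort_cpp(int* v, std::size_t n) {
            std::sort(v, v + n, [](int a, int b) { return a < b; });
        }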


  • area_pol

    Well, almost everything could be optimized :) The whole point is whether a compiler actually does it. I would seriously like to see a compiler capable of it.



  • A compiler with Link-Time Code Generation enabled might be able to.



  • Any whole-program compiler can do it (relatively) trivially. My own compiler for a Scheme-ish low-level language can do it on library code as well.

    Generally speaking, though, if you can think of an optimisation, STALIN can do it. STALIN is brutal.

    STALIN is also hard to use, slow to compile, and generally blows your compile-time memory usage out of the water, often resulting in hard crashes.


  • area_pol

    Hmm, that could potentially work, but I would still need proof to believe it.



  • @NeighborhoodButcher said:

    No, just something you actually can look up.

    I just Googled "moving ownership". Nothing programming-related whatsoever anywhere on the entire first page. So again, context-less gibberish.

    @NeighborhoodButcher said:

    So you're disputing my claim of it being used for cleanup by saying that it's being used for cleanup?

    So you're actively ignoring the word "arbitrary"? (As in: does not necessarily have anything whatsoever to do with resource management or destructors?)

    @NeighborhoodButcher said:

    My point was that you've made a statement based only on your assumptions. I showed you how absurd it looks by making the inverted one.

    I made a statement based on demonstrable facts. If your brain is too damaged due to C++ exposure (with all due credit to Dijkstra) to tell the difference between "a fact that contradicts the narrative you're trying to push" and "an assumption," that's not my fault.



  • @Mason_Wheeler said:

    So what you're saying is, when all you have is a πŸ”¨, everything starts to look like a class?

    No, I'm saying that types have more uses than establishing inheritance hierarchies. Those are just one very limited kind of relationship. I mean, what is wrong with describing every concept in your program in terms of a class? They have fairly little overhead in terms of how much you have to type, no runtime overhead (compared to doing the same thing without explicit classes), and they ensure that the compiler will check your program for the validity of those relationships.

    Do you dislike functions and enums too?

    On the contrary, it's the using/try/catch/finally mess that is a hack around the fact that you want code to always run under certain conditions, and you have no way to automate it reliably.


  • β™Ώ (Parody)

    @Kian said:

    Do you dislike functions and enums too?

    Only if you don't do them like Pascal. πŸ†



  • @NeighborhoodButcher said:

    Hmm, that could potentially work, but I would still need proof to believe it.

    Optimisers are pretty good these days.


  • area_pol

    @Mason_Wheeler said:

    I just Googled "moving ownership". Nothing programming-related whatsoever anywhere on the entire first page. So again, context-less gibberish.

    Ok, let me help you out then: http://stackoverflow.com/questions/373419/whats-the-difference-between-passing-by-reference-vs-passing-by-value

    Also in C++ you have move semantics, which you don't have in many other languages.
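
    To make "moving ownership" concrete, a minimal sketch (my illustration, C++14; not from the linked page):

        #include <memory>
        #include <utility>

        int main() {
            std::unique_ptr<int> a = std::make_unique<int>(42);
            // std::move transfers ownership of the int to b; a is left
            // null, so there is exactly one owner at any time.
            std::unique_ptr<int> b = std::move(a);
            return b ? *b : 0;
        }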

    @Mason_Wheeler said:

    So you're actively ignoring the word "arbitrary"? (As in: does not necessarily have anything whatsoever to do with resource management or destructors?)

    Of course you can just not put cleanup code in there, to prove your point. You'll get resource leaks. To fix them, guess what you would have to do?

    @Mason_Wheeler said:

    I made a statement based on demonstrable facts. If your brain is too damaged due to C++ exposure (with all due credit to Dijkstra) to tell the difference between "a fact that contradicts the narrative you're trying to push" and "an assumption," that's not my fault.

    Going personal, huh?


  • area_pol

    Oh, nice one. I'll have to do some experiments with that.



  • @Kian said:

    No, I'm saying that types have more uses than establishing inheritance hierarchies. Those are just one very limited kind of relationship.

    I never said that. (And no, that's not what I said about classes either. Go back and re-read what I said if that's what you got from it.)

    @Kian said:

    I mean, what is wrong with describing every concept in your program in terms of a class?

    First, it smacks of Golden-Hammer-ism. Second... well, do I really need anything more than that?

    @Kian said:

    They have fairly little overhead in terms of how much you have to type,

    In a codebase measured in the millions of lines of code, all those "fairly little" overheads add up to something non-trivial.

    @Kian said:

    no runtime overhead (compared to doing the same thing without explicit classes),

    A method call (such as a destructor call) always has overhead unless it is inlined perfectly. (Which is a higher bar than simply inlined. I've seen some compilers make a huge mess of inline codegen in certain circumstances, creating literally 2-5x more code than is generated by manually inlining the function body and substituting arguments as appropriate.)

    @Kian said:

    and they ensure that the compiler will check your program for the validity of those relationships.

    Huh? How is any meaningful type checking done in a class that only exists for the purpose of exploiting RAII?

    @Kian said:

    Do you dislike functions and enums too?

    Of course not; why would I?

    @Kian said:

    On the contrary, it's the using/try/catch/finally mess that is a hack around the fact that you want code to always run under certain conditions

    Explicitly telling your code to always run is a hack around wanting it to always run?

    @Kian said:

    and you have no way to automate it reliably.

    Automate what? The "way to reliably automate" your code is to run it through the compiler! Is NeighborhoodButcher's meaningless gibberish contagious or something? Because it sure seems like you're catching it.

    @NeighborhoodButcher said:

    Ok, let me help you out then:

    I searched that page for "moving", "move", and "ownership". None of those words exists anywhere on the page. So again, context-less gibberish.

    @NeighborhoodButcher said:

    Also in C++ you have move semantics, which you don't have in many other languages.

    A brief Googling suggests that "move semantics" are all bound up in the whole mess of objects-as-value-types. You don't have that in other languages because they didn't make that mistake.

    @NeighborhoodButcher said:

    Of course you can just not put cleanup code in there, to prove your point. You'll get resource leaks. To fix them, guess what you would have to do?

    Reading comprehension fail. Again. The point isn't that you can omit resource cleanup; it's that other types of cleanup that have nothing to do with releasing resources also exist. try/finally is about exceptions, not destructors, not resource management.



  • OK, seriously, what part of "abstraction inversion" are you not understanding here?

    Seriously, for me, it's all three.


  • area_pol

    Ok, there's no point. If you care so much about remaining ignorant and constantly making a bigger and bigger idiot of yourself, that's your right.

    Just note there are people claiming you're wrong, and no one claiming you're right. That might be something of a sign for you.


  • β™Ώ (Parody)

    @NeighborhoodButcher said:

    Just note there are people claiming you're wrong, and no one claiming you're right. That might be something of a sign for you.

    We have a Pascal flamewar every year or so with him.



  • @NeighborhoodButcher said:

    Just note there are people claiming you're wrong, and no one claiming you're right. That might be something of a sign for you.

    A sign that you're employing argumentum ad populum, or a sign that the majority of the people in this sub-thread are C++ developers?


  • area_pol

    So we had a C zealot and a Pascal zealot. What ancient language should we invoke next?

    @Mason_Wheeler said:

    A sign that you're employing argumentum ad populum, or a sign that the majority of the people in this sub-thread are C++ developers?

    Or a sign you're being ignorant.



  • @NeighborhoodButcher said:

    Or a sign you're being ignorant.

    What choice do I have when you say nothing meaningful? What is there for me to learn?


  • β™Ώ (Parody)

    @Mason_Wheeler said:

    What is there for me to learn?

    They say that you can write FORTRAN in any language. You seem kind of sad that the same isn't true of Pascal.



  • @boomzilla said:

    They say that you can write FORTRAN in any language. You seem kind of sad that the same isn't true of Pascal.

    Not at all. (BTW are you aware that that phrase, as originally coined, was not meant to be complimentary towards FORTRAN?) It's simply that Pascal is a trivial counterexample to the incorrect claim that try/finally was invented because of non-deterministic destructors.


  • β™Ώ (Parody)

    @Mason_Wheeler said:

    (BTW are you aware that that phrase, as originally coined, was not meant to be complimentary towards FORTRAN?)

    I'm not aware that it's ever been anything other than a put down of FORTRAN.

    @Mason_Wheeler said:

    It's simply that Pascal is a trivial counterexample to the incorrect claim that try/finally was invented because of non-deterministic destructors.

    Oh, that's all? Seemed like there was a lot more to it.



  • @boomzilla said:

    Oh, that's all? Seemed like there was a lot more to it.

    February 1995: Delphi 1 (Object Pascal) is released. It has try/finally and deterministic destructors.

    January 1996: Java 1 is released. It has try/finally and non-deterministic destructors. The fact that its try/finally uses the same keywords and has virtually identical semantics to the implementation found in Delphi suggests common ancestry, though I'm too lazy to look it up and try to trace that one back further.

    January 2002: Version 1 of C# and the .NET framework are released. C# has non-deterministic destructors and a try/finally construct with the same semantics, which is not surprising as its two most significant influences were Delphi and Java.

    Does that count as "a lot more to it"? :P


  • β™Ώ (Parody)

    @Mason_Wheeler said:

    Does that count as "a lot more to it"? πŸ˜›

    No. There were definitely other things being argued here. And like I said, you seemed allergic to anything different than a Pascal idiom.

    Related: I think you have a crazy take on RAII.



  • @boomzilla said:

    No. There were definitely other things being argued here.

    Yeah, much earlier on in the thread. Nearly everything today has been about RAII.

    @boomzilla said:

    And like I said, you seemed allergic to anything different than a Pascal idiom.

    Related: I think you have a crazy take on RAII.


    I just think it's outright dishonest to go around claiming that high performance is one of your language's biggest benefits, and then implement an important and necessary control-flow feature as an abstraction inversion that has higher overhead than other languages.

    Why is that crazy?


  • β™Ώ (Parody)

    I don't buy the abstraction inversion argument on RAII. You could probably apply your argument to any language feature that reduced boilerplate.

    You really seem incapable of thinking outside of Pascal idioms. Which is OK, I guess, if you're going to code in Pascal. Just don't be surprised by all the funny looks you get when you try to bring your religion to the masses.


  • Winner of the 2016 Presidential Election

    @Mason_Wheeler said:

    A brief Googling suggests that "move semantics" are all bound up in the whole mess of objects-as-value-types. You don't have that in other languages because they didn't make that mistake.

    You're acting like passing objects by reference wasn't possible in C++. :wtf: are you getting at?

    @Mason_Wheeler said:

    I just think it's outright dishonest to go around claiming that high performance is one of your language's biggest benefits, and then implement an important and necessary control-flow feature as an abstraction inversion that has higher overhead than other languages.

    It does not have higher overhead, goddammit!



  • @tufty said:

    Generally speaking, though, if you can think of an optimisation, STALIN can do it. STALIN is brutal.


  • Winner of the 2016 Presidential Election

    @boomzilla said:

    I don't buy the abstraction inversion argument on RAII. You could probably apply your argument to any language feature that reduced boilerplate.

    +1

    @boomzilla said:

    You really seem incapable of thinking outside of Pascal idioms. Which is OK, I guess, if you're going to code in Pascal. Just don't be surprised by all the funny looks you get when you try to bring your religion to the masses.

    +INFINITY



  • @Mason_Wheeler said:

    Explicitly telling your code to always run is a hack around wanting it to always run?

    Yes, because if you ALWAYS want it to run, then the act of typing it is redundant. You shouldn't need to tell the compiler "don't introduce a leak here" every time you create something that might leak. Not introducing leaks should be the default.

    As for using code for things other than cleaning up resources: A destructor always runs on leaving a scope. A finally clause always runs on leaving its own try scope. It seems to me that the destructor is the more general tool, while the finally is the hack around a language's limitation. You're basically adding a destructor to individual scopes, instead of using a destructor for a whole class of things at a time.

    Imagine if instead of wanting to undo one state change, you want to undo two related bits of state. With a specialized class and destructor, I just need to touch one function. You need to touch every scope where you did it. If you have multiple things going on in the finally clause, it's easy to miss something. With the destructor, you can't ever forget anything.
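
    A minimal sketch of that point (hypothetical names, not from the thread):

        // Hypothetical guard that restores two related bits of state no
        // matter how the scope is exited (return, exception, fallthrough).
        // Adding a third bit of state means touching only this class,
        // not every call site.
        struct StateGuard {
            bool& flag;
            int& depth;
            StateGuard(bool& f, int& d) : flag(f), depth(d) { flag = true; ++depth; }
            ~StateGuard() { flag = false; --depth; }
        };

        void do_work(bool& busy, int& nesting) {
            StateGuard guard(busy, nesting);
            // ... work that may throw ...
        }   // both bits of state are restored here, unconditionally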

    @Mason_Wheeler said:

    Automate what?

    Automate telling the compiler not to break your program. You need to tell it in every try block. I need to tell it once in the class definition.

    @Mason_Wheeler said:

    as an abstraction inversion that has higher overhead than other languages.

    [Citation Needed] I posted links to actual code compiled and turned into assembly that shows that unique_ptr compiles to the exact same code as manually calling free. Can you provide an example where any of these RAII classes introduced any runtime overhead? Or an explanation of why you think so?
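
    (The linked code isn't reproduced here, but the comparison is roughly this shape; a sketch of mine, to be compiled at -O2 and the assembly diffed:)

        #include <memory>

        void manual() {
            int* p = new int(1);
            // ... use *p ...
            delete p;
        }

        void raii() {
            std::unique_ptr<int> p(new int(1));
            // ... use *p ...
        }   // mainstream compilers typically emit the same code for both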



  • @Medinoc said:

    But you can use an unique_ptr<BuiltinVoteStyle_Base>.

    Except I can't because as the "Filed under" section noted, it had to compile on Visual Studio 2008.



  • Ah, that is unfortunate. C++11 fixed most of the serious issues C++98 had, and while TR1 was already available, Visual Studio has generally been slow to adopt new features. Hopefully you're not still using a 7-year-old IDE, though?


  • Winner of the 2016 Presidential Election

    @powerlord said:

    it had to compile on Visual Studio 2008.

    Then that's your problem. C++ itself is fine. :)

    I'm glad I'm allowed to use VS2015.



  • Whoops, sorry. Then you have no language-supported move semantics. However, you have:

    1. The Boost library's smart pointers: scoped_ptr for stuff that doesn't move at all, shared_ptr as a poor man's substitute for handling stuff that moves.
    2. The precursor of unique_ptr, the bad old auto_ptr that tries to implement move semantics through hacks (in fact, one could describe unique_ptr as "auto_ptr done right").


  • If you really need to, you should be able to write unique_ptr with manual moving for C++03. It'll be more error-prone than the C++11 version, but might be less confusing than auto_ptr (which uses copy constructors to implement moves). scoped_ptr is still the safest and should be preferred, though.
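
    A rough sketch of what that "manual moving" could look like (illustrative C++03 only, not production code):

        // C++03 sketch: ownership moves only through an explicit
        // release()/reset() pair, avoiding auto_ptr's trick of hijacking
        // the copy constructor.
        template <typename T>
        class move_ptr {
            T* p_;
            move_ptr(const move_ptr&);            // not copyable
            move_ptr& operator=(const move_ptr&); // not assignable
        public:
            explicit move_ptr(T* p = 0) : p_(p) {}
            ~move_ptr() { delete p_; }
            T* release() { T* t = p_; p_ = 0; return t; } // give up ownership
            void reset(T* p = 0) { delete p_; p_ = p; }   // take ownership
            T& operator*() const { return *p_; }
            T* operator->() const { return p_; }
        };

        // The "manual move": dst.reset(src.release());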



  • No, if I'm doing development on C++ and/or C# stuff, I'm using Visual Studio 2013.

    As I recall, it had to be ABI compatible with whatever C++ version 2008 was compiling against.

    If I were to write it now instead of back then, I could target 2013 directly (but not 2015 which isn't ABI compatible).



  • @boomzilla said:

    I don't buy the abstraction inversion argument on RAII. You could probably apply your argument to any language feature that reduced boilerplate.

    Not at all.

    Counterexample: The for loop. As everyone knows, a for loop is a reduced-boilerplate special-case of a while loop that is common enough to have its own keyword and its own semantics. In fact, it's probably not at all controversial to say that for loops get used in code even more than while loops do. And that's just fine.

    But imagine if you had no while keyword at all, and the only way to implement it was by twisting a for loop in unnatural, convoluted ways. That would be an abstraction inversion.
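
    For concreteness, the textbook desugaring (nothing controversial here):

        void demo(int n) {
            // A for loop is sugar for a while loop:
            for (int i = 0; i < n; ++i) { /* body */ }

            // The equivalent while form:
            int i = 0;
            while (i < n) { /* body */ ++i; }

            // The inversion would be a language with only `for`, where a
            // plain condition loop has to be contorted into: for (; i < n;) { }
        }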

    @boomzilla said:

    You really seem incapable of thinking outside of Pascal idioms

    If you say so. The problem isn't that I don't understand RAII; it's that I do, and apparently I understand it better than the people here who seem incapable of thinking outside C++ idioms, because I see the flaws in it where they keep running around chasing their tails and talking about resource destruction.

    @asdf said:

    You're acting like passing objects by reference wasn't possible in C++. :wtf: are you getting at?

    No, I'm acting like passing objects by value is also possible, and moreover is the default behavior.

    @asdf said:

    It does not have higher overhead, goddammit!

    Nonzero overhead, no matter how small, is always higher than the zero overhead of actually writing your finally code inline. If you literally cannot comprehend this blatantly obvious fact, I really don't know what to say.

    @Kian said:

    You shouldn't need to tell the compiler "don't introduce a leak here" every time you create something that might leak. Not introducing leaks should be the default.

    Yes, and if "here" was the only place where leaks can possibly get introduced, you might have a valid point. Since it's not, you're peddling more snake oil.

    @Kian said:

    A destructor always runs on leaving a scope.

    In C++, specifically. I know of no other language (with the possible exception of D, which deliberately copied a lot of C++'s semantics and no one uses anyway) in which this is necessarily the case.

    @Kian said:

    You're basically adding a destructor to individual scopes, instead of using a destructor for a whole class of things at a time.

    A scope is not an object; why should it have a destructor?

    @Kian said:

    Imagine if instead of wanting to undo one state change, you want to undo two related bits of state. With a specialized class and destructor, I just need to touch one function. You need to touch every scope where you did it.

    You know, I think that's literally the first objection anyone's raised all day that looks like the person writing it actually has any understanding of the subject matter. Congratulations.

    The thing is, using finally blocks for non-resource-releasing tasks tends to be a specialized thing: you don't have the same finally block in 100 places throughout your codebase. (Well, you could, but that means you're violating DRY left and right.) In practice, each finally block tends to be more or less unique to the function you're writing it in, so replacing it with something reusable rarely makes sense unless refactoring the entire function into something reusable does.

    @Kian said:

    With the destructor, you can't ever forget anything.

    You can still forget to put something in the destructor, or to put the right thing in...

    @Kian said:

    Automate telling the compiler not to break your program. You need to tell it in every try block. I need to tell it once in the class definition.

    Again, as finally blocks tend to be unique (or at least fairly unique) in practice, the distinction you're making isn't as significant as you seem to think it is.

    @Kian said:

    I posted links to actual code compiled and turned into assembly that shows that unique_ptr compiles to the exact same code as manually calling free. Can you provide an example where any of these RAII classes introduced any runtime overhead? Or an explanation of why you think so?

    Forget smart pointers and calling free. I've maintained all along that this has nothing to do with object destruction or resource releasing. I'm talking about using try/finally for guaranteed exception-safe reversible state changes.

    The compiler can special-case smart pointers because they're used so commonly that it's worth writing special case code in the compiler to recognize and optimize them down to the same codegen as you'd get without a smart pointer. I doubt that the same can be said for custom classes that abuse RAII to get finally semantics for non-resource-releasing tasks.
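
    For reference, the construct being doubted is roughly this (my sketch, C++17):

        #include <utility>

        // Generic scope guard: runs an arbitrary callable on scope exit.
        // A lambda is an ordinary class whose operator() is implicitly
        // inline, so no compiler special-casing is needed for it to be
        // inlined; the normal inlining rules apply.
        template <typename F>
        struct ScopeGuard {
            F f;
            explicit ScopeGuard(F fn) : f(std::move(fn)) {}
            ~ScopeGuard() { f(); }
        };

        void example(int& depth) {
            ++depth;
            ScopeGuard guard([&] { --depth; });  // "finally" semantics
            // ... code that may throw or return early ...
        }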


  • Winner of the 2016 Presidential Election

    @Mason_Wheeler said:

    As everyone knows, a for loop is a reduced-boilerplate special-case of a while loop

    πŸ„πŸ’©

    The amount of boilerplate code is exactly the same, you just document the intent better.

    Foreach, OTOH, removes some boilerplate code.

    @Mason_Wheeler said:

    apparently I understand it better than the people here who seem incapable of thinking outside C++ idioms, because I see the flaws in it

    You have not been able to demonstrate a single flaw so far other than "I don't like that I have to construct an object".

    @Mason_Wheeler said:

    Nonzero overhead, no matter how small

    The overhead is ZERO, goddammit!

    @Mason_Wheeler said:

    is always higher than the zero overhead of actually writing your finally code inline.

    So always write your code inline and never use abstractions?

    TDEMSYR

    @Mason_Wheeler said:

    > A destructor always runs on leaving a scope.

    In C++, specifically. I know of no other language (with the possible exception of D, which deliberately copied a lot of C++'s semantics and no one uses anyway) in which this is necessarily the case.

    What about Rust, for example?

    @Mason_Wheeler said:

    > With the destructor, you can't ever forget anything.

    You can still forget to put something in the destructor, or to put the right thing in...

    The possibility of forgetting it once, vs. the possibility of forgetting it multiple times. Are you really unable to see the difference? Are you that dense?



  • @asdf said:

    The amount of boilerplate code is exactly the same, you just document the intent better.

    In a C-style for loop, sure. In most other languages, this is not the case. (Exercise: compare and contrast FOR as implemented in C, Pascal, Python, BASIC, and Ruby.)

    @asdf said:

    You have not been able to demonstrate a single flaw so far other than "I don't like that I have to construct an object".

    "You have not been able to demonstrate any flaw other than the flaw." Technically true, I suppose, but why do you say that as if I'm doing something wrong?

    @asdf said:

    The overhead is ZERO, goddammit!

    Profanity is the hallmark of a tragically limited vocabulary. Also, the overhead of a method call is always nonzero unless it is perfectly inlined. (Did you miss that every previous time I went over it?)

    @asdf said:

    What about Rust, for example?

    What about it? I've heard the name, but I literally know nothing about it other than it's named after destruction and corrosion, which hardly inspires confidence in its suitability as a structural material for engineering, software- or otherwise.

    @asdf said:

    Possibility to forget it once, vs possibility to forget it multiple times. Are you really unable to see the difference? Are you that dense?

    No, I'm just experienced enough to know that no matter how many times you do something in a codebase, you're likely to forget it exactly once. :P


  • Winner of the 2016 Presidential Election

    @Mason_Wheeler said:

    "You have not been able to demonstrate any flaw other than the flaw."

    Let me re-phrase that for you, since you're deliberately acting like you don't understand the argument:

    You have not been able to demonstrate why constructing an object for this purpose is a bad thing.

    @Mason_Wheeler said:

    Profanity is the hallmark of a tragically limited vocabulary.

    It seems to be the only way to get through to you, since you just ignore everything we tell you, for example:

    @Mason_Wheeler said:

    Also, the overhead of a method call is always nonzero unless it is perfectly inlined. (Did you miss that every previous time I went over it?)

    No, but you missed the part where we told you multiple times that perfect inlining is exactly what happens in practice when this code is compiled.

    @Mason_Wheeler said:

    What about it? I've heard the name, but I literally know nothing about it other than it's named after destruction and corrosion, which hardly inspires confidence in its suitability as a structural material for engineering, software- or otherwise.

    :headdesk:

    I rest my case. This is pointless.



  • OK, show of hands here. How many people in this discussion have actually done real compiler work? By which I mean, specifically:

    • A compiler specifically designed for production work, as opposed to, say, something you wrote for fun or as an assignment in a Compilers class,
    • which was used in at least one piece of software that was released in an official release version,
    • and to which you contributed at least one non-trivial new feature (not a bugfix), which was accepted into the official, main trunk of the codebase.

  • Winner of the 2016 Presidential Election

    :hand:

    What does that have to do with anything?



  • Because most of the people here, yourself included, are acting like they don't know anything about how compilation, code generation and optimization actually work. So I was curious.


  • β™Ώ (Parody)

    @Mason_Wheeler said:

    Counterexample: The for loop. As everyone knows, a for loop is a reduced-boilerplate special-case of a while loop that is common enough to have its own keyword and its own semantics. In fact, it's probably not at all controversial to say that for loops get used in code even more than while loops do. And that's just fine.

    See, but that's how your argument sounds to the rest of us regarding RAII.

    @Mason_Wheeler said:

    If you say so. The problem isn't that I don't understand RAII; it's that I do, and apparently I understand it better than the people here who seem incapable of thinking outside C++ idioms, because I see the flaws in it where they keep running around chasing their tails and talking about resource destruction.

    I do say so, and I find the rest of this quote amusing. It's like reading a blakeyrant on non-native UI.



  • @asdf said:

    You have not been able to demonstrate why constructing an object for this purpose is a bad thing.

    What is there to explain? Writing more than a single line of code (such as wrapping it in an object) to do the work of a single line of code is inherently worse. How is this not obvious?

    @asdf said:

    No, but you missed the part where we told you multiple times that perfect inlining is exactly what happens in practice when this code is compiled.

    What code is "this code"? The code that requires a lambda in order to execute arbitrary code? Is any compiler somehow magically smart enough to recognize "oh, this lambda is a special arbitrary-code RAII lambda; we can de-lambdify it and inline its body"? Because I highly doubt that.

    @asdf said:

    I rest my case. This is pointless.

    How does me not knowing about Rust have anything at all to do with a discussion about C++ in which Rust has never been brought up until now?


  • β™Ώ (Parody)

    @Mason_Wheeler said:

    Profanity is the hallmark of a tragically limited vocabulary.

    But so is the lack of profanity. Weird, huh?


  • Winner of the 2016 Presidential Election

    @Mason_Wheeler said:

    How does me not knowing about Rust have anything at all to do with a discussion about C++ in which Rust has never been brought up until now?

    You immediately tried to dismiss it without knowing anything about it when I brought it up as an example.



  • @asdf said:

    You immediately tried to dismiss it without knowing anything about it when I brought it up as an example

    *gasp* Heaven forbid I should make a joke!


  • Winner of the 2016 Presidential Election

    @Mason_Wheeler said:

    *gasp* Heaven forbid I should make a joke!

    The fact that I was unable to recognize it as a joke in the context of your post should tell you something…

