Let's ensure this global variable gets deleted



  • SomeClass.cpp:

    SomeClass::SomeClass(std::unique_ptr<OtherClass> ptr)
    :m_ptr(std::move(ptr))
    {}
    

DifferentClass.cpp:

    DifferentClass::DifferentClass(/*elided*/, std::unique_ptr<OtherClass> ptr, /*elided*/)
    :SomeBaseClass(/*elided*/, SomeClass(std::move(ptr)))
    {}
    

    AnEntirelyDifferentFile.cpp:

    OtherClass globalVariable;
    
    // ...
    
    void foo() {
    
      // ...
    
      blah = new ThirdClass(bar, baz, std::unique_ptr(&globalVariable), emu);
    }
    

    (Cheatsheet for the C++-deficient: 'unique_ptr' behaves like a reference in Java or C#, and it deletes the pointer it's carrying around when it goes out of scope. Putting a global variable into it is a guaranteed double-delete at best; in most situations globals don't even live on the heap, so the deallocator gets really confused. The std::move() stuff and passing unique_ptrs around like that is also a gigantic WTF, but it'd take a while to explain why, so you'll just have to take it from me.)
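
    (For the morbidly curious, a minimal sketch of the failure mode, reusing the names from the snippets above; the rest is invented scaffolding:)

    [code]
    #include <memory>

    struct OtherClass { int x = 0; };

    OtherClass globalVariable;   // static storage; never came from new

    int main()
    {
        // The WTF in miniature: when p goes out of scope it calls
        // delete on &globalVariable, a pointer that was never new'd.
        // Undefined behaviour; in practice the allocator usually
        // aborts, because the object doesn't live on the heap at all.
        std::unique_ptr<OtherClass> p(&globalVariable);
    }
    [/code]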

    To be as fair as possible to the guy who wrote it, he a) is a recent graduate and b) has much more Java experience than C++. To be unfair, the Java doesn't work either.

    The part I'm puzzled about: I saw this code running on his machine on several occasions prior to him quitting and me picking it up. It didn't run for me until I discovered and fixed the bug. I verified that the code was broken like this from the first time it was in the subversion repository. How did he avoid the nasal demons?



  • Modern compilers are great at summoning Nasal Daemons. Gcc is particularly ingenious in this regard. When gcc sees code that has undefined behaviour, it compiles it the way you thought it would work with optimizations turned off. But once you turn optimizations on, the wicked module kicks in and invents some totally inconceivable behaviour and runs with that. Other compilers tend not to be as wicked, but sometimes they are too.

    Usually the mechanism is that the optimizer uses some assumptions that are violated by the undefined behaviour, gets inconsistent results from the dependency analysis and eliminates bits of the code either because the inconsistent case is not handled, or by explicitly marking such code as unreachable.
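
    A minimal sketch of that mechanism (function invented for illustration): the dereference licenses the assumption, and the assumption kills the check:

    [code]
    int deref_then_check( int *p )
    {
        int v = *p;            /* UB if p is null, so the optimizer may
                                  assume p != nullptr from here on */
        if ( p == nullptr )    /* "impossible" under that assumption... */
            return -1;         /* ...so this branch may be eliminated */
        return v;
    }
    [/code]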

    But if it failed even in a debug build, I'd suspect you never actually saw it run as far as the point where it crashed when he demonstrated it to you.



  • Moving the unique_ptr around is fine, but creating it from a global variable is :doing_it_wrong: as you said.


  • Discourse touched me in a no-no place

    @jmp said:

    'unique_ptr' behaves like a reference in Java or C#, and it deletes the pointer it's carrying around when it goes out of scope.

    So no, it doesn't behave like a reference in Java or C#. Those are more like shared_ptr, except they're explicitly guaranteed to have a full GC about.



  • @LB_ said:

    Moving the unique_ptr around is fine, but creating it from a global variable is :doing_it_wrong: as you said.

    Yeah, the entire post could have been summed up by the one representative expression: std::unique_ptr(&globalVariable)



  • @Bulb said:

    Modern compilers are great at summoning Nasal Daemons. Gcc is particularly ingenious in this regard. When gcc sees code that has undefined behaviour, it compiles it the way you thought it would work with optimizations turned off. But once you turn optimizations on, the wicked module kicks in and invents some totally inconceivable behaviour and runs with that. Other compilers tend not to be as wicked, but sometimes they are too.

    gcc has been like that for a very, very long time. My first encounter with this characteristic was nearly 20 years ago.

    [code]
    void bit_reverse( void *p, size_t n )
    {
        static unsigned char bits[16] = { /* appropriate values */ };
        size_t counter;

        unsigned char *bp = (unsigned char *)p;

        /* Editor's note: So far, so good, nothing remarkable above. */

        for( counter = 0; counter < n; counter++ ) /* nothing to see here */
        {
            *bp++ = (bits[*bp & 0x0F] << 4) | bits[*bp >> 4];
        }
    }
    [/code]

    Leaving aside questions of portability in the face of a jolly machine with a 16-bit char, the UB in this one is the single line inside the loop, which modifies bp and also reads it with no intervening sequence point. It's a classic, but every compiler the company had (except one) compiled it the way the eye sort of wants to have it compile (calculate the RHS, store through the pointer, increment the pointer). Notable among these compilers that did their job was gcc 2.5.8 for i386 on Linux.
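
    (For reference, a well-defined version of just that loop would sequence the read before the write, something like:)

    [code]
    for( counter = 0; counter < n; counter++ )
    {
        unsigned char b = *bp;                         /* read first...      */
        *bp++ = (bits[b & 0x0F] << 4) | bits[b >> 4];  /* ...then write+bump */
    }
    [/code]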

    The one that did something else? That was also gcc 2.5.8, but this time cross-compiling for MIPS R4600, hosted on i386 Linux. It produced something that was, um, not exactly what anyone wanted. For ... reasons(1) ... the code was called twice for the same buffer, with a visual appearance that resembled the results of a different kind of bug that I was expecting to have. The result was that I spent three days putting printfs in various places to isolate what was going wrong, and I eventually found the true wrongness when I finally had a printf in between the two calls.

    The f(riendly) compiler decided to generate code that was the equivalent of:
    [code] unsigned char *temp = bp++;
    *temp = /* same RHS as before */;[/code]

    that is, increment the pointer before evaluating the RHS, but use the before-increment value as the address of the target of the assignment, instead of the version that all the other compilers produced, essentially this:
    [code] *bp = /* same RHS as before */;
    bp++;[/code]

    that is, do all the calculations and the assignment, then increment the pointer.

    (1) The reasons for calling it twice were valid, but only just. A large fraction of the codebase did things for reasons that could be described like that, and I once found a line of code with a comment that said "this doesn't work and I don't know why." (The why was evident by inspection, but apparently nobody thought of doing that, except me.)

    EDIT: WTF WTF WTF :wtf::wtf::wtf::wtf::wtf::wtf::wtf::wtf::wtf::wtf::wtf: Diksucks sucks donkey balls. Missing blank line completely HID a paragraph. This forum software sucks donkey balls!


  • Discourse touched me in a no-no place

    @Steve_The_Cynic said:

    The result was that I spent three days putting printfs in various places to isolate what was going wrong, and I eventually found the true wrongness when I finally had a printf in between the two calls.

    It's a lot simpler in some languages where there's a defined evaluation order in the language semantics. This isn't to say that the values always have to be evaluated in that order, but rather that the order of side effects is defined (which matters a lot, as your example shows). There's a prescribed initial translation into an instruction sequence, and optimisations have defined constraints that they must obey.

    If you use clang now, that's pretty much how it works at the back end. It's only the front end which can get really frisky semantically, but that's C and C++ for you.



  • @Steve_The_Cynic said:

    gcc has been like that for a very, very long time. My first encounter with this characteristic was nearly 20 years ago.

    Yes, mine too (well, a few years less, but not that many).

    @Steve_The_Cynic said:

    machine with 16-bit char

    C and C++ require that sizeof(char) == 1, that is, char must be the smallest addressable unit. I strongly doubt anybody makes devices with an addressing unit other than 8 bits any more; reusability of components is simply too useful.



  • This post is deleted!


  • @Medinoc said:

    @Bulb said:
    I strongly doubt anybody makes devices with an addressing unit other than 8 bits any more; reusability of components is simply too useful.

    which isn't the case on machines where the smallest addressable unit is a 16-bit word.

    which I am claiming don't exist, so there is no point in making the code portable to them.



  • (note: I removed my previous post because I didn't read your last paragraph right and it turns out my previous post is content-free)

    @Bulb said:

    @Medinoc said:
    machines where the smallest addressable unit is a 16-bit word.

    which I am claiming don't exist



  • @Bulb said:

    C and C++ require that sizeof(char) == 1, that is, char must be the smallest addressable unit. I strongly doubt anybody makes devices with an addressing unit other than 8 bits any more; reusability of components is simply too useful.

    I've worked with one in the past (but, equally, the company that made it is long since defunct, so take that observation for what it's worth), and the "smallest addressable unit" on the CDC Cyber machines was 60 bits... (But those machines are also defunct.)

    But I'm not sure which components you are talking about reusing - the physical memory data bus of x86 has been wider than 8 bits for a very long time. (I think the 28x was the last generation with an 8-bit bus version, and the 386SX was the last x86 chip with a 16-bit data bus. The 386SX was seriously passé at the time when I encountered that jolly bit of UB described above.)

    All that said, some 16-bit addressable platforms (i.e. add +1 to an address, advance by 16 bits) had special instructions for accessing string data, where a string was accessed through a base address plus an offset, and the offset was 15 bits of address offset and one bit of left/right selector. It makes char * a bit complicated.

    EDIT: I've always interpreted the mandate that sizeof(char)==1 to mean that char is the unit in which sizeof is measured. It's not clear to me what exactly sizeof should mean if char requires subaddressing.



  • @Bulb said:

    Modern compilers are great at summoning Nasal Daemons. Gcc is particularly ingenious in this regard. When gcc sees code that has undefined behaviour, it compiles it the way you thought it would work with optimizations turned off. But once you turn optimizations on, the wicked module kicks in and invents some totally inconceivable behaviour and runs with that. Other compilers tend not to be as wicked, but sometimes they are too.

    Usually the mechanism is that the optimizer uses some assumptions that are violated by the undefined behaviour, gets inconsistent results from the dependency analysis and eliminates bits of the code either because the inconsistent case is not handled, or by explicitly marking such code as unreachable.

    But if it failed even in a debug build, I'd suspect you never actually saw it run as far as the point where it crashed when he demonstrated it to you.

    Most of the GCC undefined-behaviour 'abuse' I've seen has been fairly sensible; usually optimising out expressions of the form if (n + c < n).

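    (A minimal sketch of that pattern, with invented function names; the second version is the well-defined way to ask the same question:)

    [code]
    #include <climits>

    // Signed overflow is UB, so the compiler may assume n + c never
    // wraps; for positive c that makes (n + c < n) always false, and
    // the optimizer is entitled to fold the whole test away.
    bool overflow_check_broken( int n, int c )
    {
        return n + c < n;
    }

    // Well-defined version: test before doing the addition.
    bool overflow_check( int n, int c )
    {
        return c > 0 && n > INT_MAX - c;
    }
    [/code]
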
    It was failing in debug build. The code is always executed when the program starts up. I'm genuinely at a loss; my guess is that he was running it in release and VS's allocator wasn't falling over too badly when handed a pointer to a global.

    @LB_ said:

    Moving the unique_ptr around is fine, but creating it from a global variable is :doing_it_wrong: as you said.

    I consider passing unique_ptr around like that a bit of a wtf; just use a friggin' shared_ptr, or pass a raw pointer down and have the bottom layer take ownership.

    @dkf said:

    So no, it doesn't behave like a reference in Java or C#. Those are more like shared_ptr, except they're explicitly guaranteed to have a full GC about.

    Close enough for Java/C# purposes.



  • @jmp said:

    I consider passing unique_ptr around like that a bit of a wtf; just use a friggin' shared_ptr, or pass a raw pointer down and have the bottom layer take ownership.

    What exactly are the semantics of passing a unique_ptr by value?



  • @jmp said:

    I consider passing unique_ptr around like that a bit of a wtf; just use a friggin' shared_ptr, or pass a raw pointer down and have the bottom layer take ownership.

    To each their own, I guess.

    @Steve_The_Cynic said:

    What exactly are the semantics of passing a unique_ptr by value?

    You can't. It's a move-only type.



  • It's being passed an rvalue each time, so it's getting move-constructed, which takes ownership.

    Unless that was intended to be Socratic, in which case yeah, I know.



  • @LB_ said:

    @Steve_The_Cynic said:
    What exactly are the semantics of passing a unique_ptr by value?

    You can't. It's a move-only type.


    That's actually a (marginal) non-sequitur(1). It could easily be defined to move the ownership to the callee's copy of the object, but that would be a serious WTF in its own right, as in "WTF? I called that function and it was OK, but then I called the other one and my pointer got NULLed. WTF?"

    (1) It's marginal because nobody sane would actually do it the way I suggest, but it doesn't automatically, mandatorily follow just from being a "move-only" type that you can't pass by value.

    For maximum amusement when talking about pass-by-value, consider the following class definition, and try to imagine the messages generated by calling the function declared below it:
    [code]class NotCopyable
    {
    public:
        NotCopyable();
        NotCopyable( NotCopyable other );

        // Other members
    };

    void notCallableFunction( NotCopyable thing );[/code]

    The compiler I tried this on (i.e. actually calling notCallableFunction) generated some ... strange ... messages at the point-of-call.

    Bonus points for someone who explains coherently why you can't call the function. (I know why - I want to see what people say about it.)



  • @jmp said:

    Most of the GCC undefined-behaviour 'abuse' I've seen has been fairly sensible; usually optimising out expressions of the form if (n + c < n).

    That's not actually undefined behaviour in the DS9K sense. If c is signed, then optimising it out is (obviously) just plain wrong, but the behaviour of the expression is well-defined. (In strict form, it isn't even correct to optimise it out if c is unsigned, since integer overflow may make the result of the addition less than n, and that isn't UB either.)



  • Whatever, as a person who did a lot of coding on Macs which had a 16-bit int, I have sympathy.

    The lack of DEFINING THE SIZE OF ITS MOST IMPORTANT VARIABLES is why everybody who says C is a "portable assembler" is wrong and stupid. Portable my ass.



  • Signed integer overflow is undefined behaviour in C++.



  • @jmp said:

    I consider passing unique_ptr around like that a bit of a wtf; just use a friggin' shared_ptr, or pass a raw pointer down and have the bottom layer take ownership.

    Taking a unique_ptr by value is the correct way of signaling your function will take ownership. The point of unique_ptr is to not pass owning raw pointers around, after all. And shared_ptr has the problem that with them you can't know if it's a shared_ptr because it needs to be a shared pointer, or because you inherited an old codebase where the alternative was auto_ptr.

    @LB_ said:

    You can't. It's a move-only type.
    Yes you can. It's the correct use, in fact. But you have to either move from your existing unique_ptr or make a new copy of the thing being pointed to (see the sketch at the end of this post).

    @Steve_The_Cynic said:

    Bonus points for someone who explains coherently why you can't call the function. (I know why - I want to see what people say about it.)
    That's a weird way of disabling copy construction. You'd normally just declare the copy-constructor private, so that when you try to copy the compiler tells you "you tried to call the copy constructor, but it's private".

    I haven't tried to compile that, so I haven't seen the error yet, but here's my guess as to what's happening there: you declare a constructor that takes no arguments, and a constructor that takes another object of the same class by value as argument. Implicitly, you still have the copy constructor being provided by the compiler. When you try to call the function, you get an ambiguous situation because you could create a copy from either the copy constructor (that takes a const& argument) or the "self constructor" that takes another thing by value.

    Either that, or you get an unsolvable recursion where in order to create the parameter, you have to call the constructor that takes the parameter by value. Which requires calling the constructor itself. Meaning the constructor has to call itself before it can complete, and you make the compiler cry.
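
    A minimal sketch of both options (Widget and consume are made-up names):

    [code]
    #include <memory>
    #include <utility>

    struct Widget { int value = 0; };

    // Sink function: taking unique_ptr by value says "I take ownership".
    void consume( std::unique_ptr<Widget> w )
    {
        // ...use *w; the Widget is destroyed when w goes out of scope
    }

    int main()
    {
        auto a = std::make_unique<Widget>();

        consume( std::make_unique<Widget>( *a ) ); // copy the pointee; a keeps its Widget
        consume( std::move( a ) );                 // explicit handover; a is now null

        // consume( a );  // won't compile: unique_ptr has no copy constructor
    }
    [/code]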



  • You know why it's like that, right? So people who aren't compiling C on a PDP11 or whatever don't need an abstraction for ints and friends and can just use the native mechanism? Kinda important for what it was at the time. The version that specifically defined how large integers are would have been less portable, because an abstraction layer on every integer operation would've been too slow.

    (Just to be clear, in modern C++ - since Visual Studio 2010 at least in Microsoft-land, probably earlier in GCC, and probably even earlier if you used Boost - there's a header, <cstdint>, that provides explicit 8/16/32/64-bit signed/unsigned integers if they're available on your machine, so this isn't a concern nowadays.)



  • @Kian said:

    Taking a unique_ptr by value is the correct way of signaling your function will take ownership. The point of unique_ptr is to not pass owning raw pointers around, after all. And shared_ptr has the problem that with them you can't know if it's a shared_ptr because it needs to be a shared pointer, or because you inherited an old codebase where the alternative was auto_ptr.

    Maybe I'm doing it wrong then, fair enough.

    Fortunately, most of our codebase (not this bit) predates auto_ptr, so we don't have that problem. Just huge structs filled with fixed-size arrays (size specified with a macro, of course) that get initialised with memset.


  • ♿ (Parody)

    @jmp said:

    You know why it's like that, right?

    He doesn't care. He just likes to be a pendantic dickweed and do stuff like pretending that when someone said "portable" they meant that it would run on anything.



  • I guess in any other context he'd be all "Why do you care how big an integer is? That's the job of the people writing the language spec, not yours. You don't need to understand anything about an integer, or even its performance, just the interface."


  • Discourse touched me in a no-no place

    @boomzilla said:

    He just likes to be a pendantic dickweed and do stuff like pretending that when someone said "portable" they meant that it would run on anything.

    There are times when I think he ought to change his avatar to Humpty Dumpty:

    @Lewis Carroll said:

    "When I use a word," Humpty Dumpty said in rather a scornful tone. "It means just what I choose it to mean - neither more or less."
    "The question is," said Alice, "whether you can make words mean so many different things."
    "The question is," said Humpty Dumpty, "which is to be master - that's all."



  • @Steve_The_Cynic said:

    It could easily be defined to move the ownership to the callee's copy of the object, but that would be a serious WTF in its own right

    Well yeah, that's why auto_ptr is now deprecated :)

    @Steve_The_Cynic said:

    The compiler I tried this on [...] generated some ... strange ... messages

    I would expect so, as you have provided an invalid copy constructor.

    @Kian said:

    Yes you can. It's the correct use, in fact.

    Sorry, to me "pass by value" means "take a copy" and isn't the same as "value semantics". I might be the only one who thinks like that though.



  • @Kian said:

    Taking a unique_ptr by value is the correct way of signaling your function will take ownership.

    Honest question: I haven't touched C++ in years.

    Do IDEs track "ownership?" And display it on the screen somehow? Or does the developer still have to keep all this stuff in their head?



  • I've never seen an IDE track ownership. It's usually pretty obvious from context, my opinions about passing unique_ptr around aside.



  • @jmp said:

    You know why it's like that, right?

    Yeah; the designers of C were morons who fucked-up their designs.

    @jmp said:

    So people who aren't compiling C on a PDP11 or whatever don't need an abstraction for ints and friends and can just use the native mechanism?

    Being able to use the native mechanism is not mutually exclusive with defining the length of the data types!

    @jmp said:

    The version that specifically defined how large integers are would have been less portable, because an abstraction layer on every integer operation would've been too slow.

    That is such gibberish I don't even know how to respond to it.

    Look, all I know is I did my C programming on a 16-bit Mac with a 24-bit memory bus, back in the "good ol' days". Every C source I came across, if typed in verbatim, would not run correctly. EVERY C SOURCE FILE I CAME ACROSS. That is not "portable" by any measure. That is "shittyville". Population: me.

    Especially when you're first learning the language, and your programs don't work, and you don't know why they don't work, and you feel like a little piece of dumb shit because everybody else used that textbook and their programs worked. Then only later you figure out it was never your fault, but guess what? Deep inside: you feel like a little piece of dumb shit for your entire life.



  • @jmp said:

    It's usually pretty obvious from context,

    And that doesn't strike you as "oh my God, this is a horrible problem we need fixed right now?"

    Huh.

    If there's one thing you can be sure of in this world, it's that when a programmer describes something as "obvious", it's about as far away from obvious as it's possible to be.


  • Discourse touched me in a no-no place

    @blakeyrat said:

    Yeah; the designers of C were morons who fucked-up their designs.

    I take it you've never met Dennis?



  • Nope. But thank you for that great contribution to the thread.



  • @jmp said:

    Signed integer overflow is undefined behaviour in C++.

    Ah, yes, forgot that.



  • @blakeyrat said:

    Being able to use the native mechanism is not mutually exclusive with defining the length of the data types!

    Say they specified 8-bit char, 16-bit int, 32-bit long. (And, iunno, a 12-bit short for kicks.)

    And then you try to compile some C on a Honeywell with a 9-bit char. You are not going to have native-performance integers.

    Or maybe you're trying to compile for a machine with 16-bit words that can't easily address 8-bit chunks. Anything with chars is going to be something like three times slower, which mattered back when C was designed.

    tl;dr you don't understand the design space.

    (also keep in mind this was a period of time where the norm was that programs didn't run on different machines. Did C improve on that? Fuck yes.)

    But thank you for making it clear where the deep-seated trauma lies. It's okay, blakey, I don't think you're an idiot because you couldn't get your C programs to work. I think you're an idiot for entirely different reasons.

    @blakeyrat said:

    And that doesn't strike you as "oh my God, this is a horrible problem we need fixed right now?"

    Huh.

    If there's one thing you can be sure of in this world, it's that when a programmer describes something as "obvious", it's about as far away from obvious as it's possible to be.

    I don't regularly run into problems where somebody has screwed up ownership of memory. Usually when I do, it's something like the post I started this thread with - somebody doing something /very extremely obviously wrong/, which doesn't take much thought to recognise when the debugger blows up around it, and which no amount of IDE support can fix. Keeping track of it doesn't take any brainpower as far as I can tell. I haven't noticed myself being more productive the handful of times I've written C#, although that might be because I was learning C# and WPF.

    Also the same problem occurs in every other language, just with things that aren't memory. Which object in the design is responsible for frobbing the emu?



  • @jmp said:

    Say they specified 8-bit char, 16-bit int, 32-bit long. (And, iunno, a 12-bit short for kicks.)

    And then you try to compile some C on a Honeywell with a 9-bit char. You are not going to have native-performance integers.

    Ok.

    But that doesn't prevent C from having a "Honeywell 9-bit char type".

    Anyway, if you're promoting PORTABILITY, you can't just instantly go back on your promise the millisecond a performance issue comes up. Either it's portable or it's not. And C is not.

    @jmp said:

    tl;dr you don't understand the design space.

    I don't care about "the design space". What I care about is that it was (and is!) advertised to be portable AND IT IS CLEARLY NOT. Call me crazy, but I don't like people who lie to me.

    @jmp said:

    But thank you for making it clear where the deep-seated trauma lies. It's okay, blakey, I don't think you're an idiot because you couldn't get your C programs to work. I think you're an idiot for entirely different reasons.

    Yeah yeah, whatever. I don't care what you think of me.

    It's just a good example of why THIS SHIT MATTERS. GETTING THIS SHIT RIGHT IS IMPORTANT, damnit. If you're a language designer, you have a LOT of power to fuck up people's lives, to make them feel stupid and useless, and you have to be VERY VERY EXTREMELY CAREFUL.

    @jmp said:

    obviously

    There's that word "obviously" again.

    Look, your interactions with other people will improve by orders of magnitude once you learn the secret: there's no such thing as "obviously".



  • @Kian said:


    @Steve_The_Cynic said:

    Bonus points for someone who explains coherently why you can't call the function. (I know why - I want to see what people say about it.)
    That's a weird way of disabling copy construction. You'd normally just declare the copy-constructor private, so that when you try to copy the compiler tells you "you tried to call the copy constructor, but it's private".

    I haven't tried to compile that, so I haven't seen the error yet, but here's my guess as to what's happening there: you declare a constructor that takes no arguments, and a constructor that takes another object of the same class by value as argument. Implicitly, you still have the copy constructor being provided by the compiler. When you try to call the function, you get an ambiguous situation because you could create a copy from either the copy constructor (that takes a const& argument) or the "self constructor" that takes another thing by value.

    Either that, or you get an unsolvable recursion where in order to create the parameter, you have to call the constructor that takes the parameter by value. Which requires calling the constructor itself. Meaning the constructor has to call itself before it can complete, and you make the compiler cry.


    In essence, it's an unsolvable recursion, because (on the compiler I tried it on, many years ago) it preferred calling an existing function to calling an implicit function.(1) And as you say the compiler breaks down in tears, but the messages were strange(2).

    (1) So if I declare a copy-like constructor that takes a reference to non-const, but not a copy constructor that takes a reference to const, the compiler will be happy to copy non-const objects, and I think it ends up in tears if I try to copy a const object.

    (2) But not as strange as a message I got once from Visual C++ 6 (which compiled a sort of bastard C++97.5), telling me that a concrete function in one base class overrode a virtual function in another base class "by domination". I had immediate visions of functions dressed in black PVC carrying whips... (The situation was more complex than that makes it sound. Base declared a pure virtual, let's say virtual void f() = 0. Derived1 derived from Base and had its own void f() calmly overriding void Base::f(). Derived2 also derived from Base and didn't have a concrete version of void f(). Derived3 then derived from both Derived1 and Derived2, and the compiler warned me that void Derived1::f() overrode void Base::f() in Derived2 by domination.)
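
    (For the curious, a minimal reconstruction of that hierarchy - I'm assuming virtual inheritance, which is what makes domination apply; the post above doesn't say, so treat that as my guess:)

    [code]
    struct Base     { virtual void f() = 0; };
    struct Derived1 : virtual Base { void f() {} };  // concrete override
    struct Derived2 : virtual Base {};               // leaves f() pure

    // One shared Base subobject, so Derived1::f() overrides Base::f()
    // for the whole object, including along the Derived2 path - which
    // is what VC6 called overriding "by domination".
    struct Derived3 : Derived1, Derived2 {};
    [/code]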


  • FoxDev

    @Steve_The_Cynic said:

    I had immediate visions of functions dressed in black PVC carrying whips...

    :giggity:

    :-D



  • ...Are you seriously suggesting that the language spec should have had multiple semi-compatible numerics to begin with? Like, char, char_9bit, char_16bit, wchar, int_2scomplement, int_signbit, etc?

    Or that the language should have said "char can be 9 bits, 8 bits, 16 bits, signed or unsigned, but the signed has to be 2s complement or sign-magnitude, no other chars allowed"?

    Neither of those make any goddamn sense, so I'm just a bit confused here.

    @blakeyrat said:

    I don't care about "the design space". What I care about is that it was (and is!) advertised to be portable AND IT IS CLEARLY NOT. Call me crazy, but I don't like people who lie to me.

    It was more portable than any of the alternatives available at the time it was designed!

    @Steve_The_Cynic said:

    (2) But not as strange as a message I got once from Visual C++ 6 (which compiled a sort of bastard C++97.5), telling me that a concrete function in one base class overrode a virtual function in another base class "by domination".

    That message still exists.



  • @Kian said:

    shared_ptr [...] because you inherited an old codebase where the alternative was auto_ptr

    The replacement for auto_ptr is unique_ptr. If anybody replaces auto_ptr with shared_ptr just because they're upgrading to C++11, they are idiots and should be immediately fired.

    @blakeyrat said:

    The lack of DEFINING THE SIZE OF ITS MOST IMPORTANT VARIABLES is why everybody who says C is a "portable assembler" is wrong and stupid. Portable my ass.

    Most variables don't actually want or need to be 8 bits or 32 bits, but rather the character/elementary unit (char), the native integer size (int), or enough to hold a pointer (size_t/ptrdiff_t; long used to serve too, but Microsoft broke that with LLP64). And for some things, you create typedefs that you set on each platform accordingly. Since platforms with a different size of byte were still common when C was created, you'd have had to do that for all types if the sizes were fixed, so the imprecise definitions were more portable than fixed ones back then. Now that the byte has settled on 8 bits, C99 added stdint.h so you can use fixed sizes when you need them. But most code still does not, because most code does not want that; it usually still wants the "platform default integer".
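
    (A quick sketch of what stdint.h buys you when you do want fixed sizes; variable names invented:)

    [code]
    #include <stdint.h>   /* <cstdint> in C++11 */

    int32_t       exactly32 = 0;  /* exactly 32 bits, or the typedef doesn't exist */
    int_least16_t atLeast16 = 0;  /* smallest type with at least 16 bits */
    int_fast16_t  fast16    = 0;  /* "fastest" type with at least 16 bits */
    intptr_t      ptrSized  = 0;  /* wide enough to round-trip a pointer */
    [/code]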

    @blakeyrat said:

    Do IDEs track "ownership?" And display it on the screen somehow? Or does the developer still have to keep all this stuff in their head?

    IDEs don't. If you have a unique_ptr, you own it. If you have a reference, you don't own it and you have to make sure it lives long enough; most of the time, you simply have a reference to something from an outer scope, and that lives long enough by definition. And if you have a dumb pointer, you are dumb (usually).



  • @Bulb said:

    Now that the byte has settled on 8 bits, C99 added stdint.h so you can use fixed sizes when you need them. But most code still does not, because most code does not want that; it usually still wants the "platform default integer".

    Great, and if they have an overflow because they just assumed an int was 16-bit and on that platform it's 8-bit, it's their own goddamn fault.



  • @blakeyrat said:

    Do IDEs track "ownership?" And display it on the screen somehow? Or does the developer still have to keep all this stuff in their head?

    Neither. The current situation is less than ideal, but showing improvement. Unfortunately, a lot of the improvement is fairly recent (the 2011 standard marks the break between old C++ and what's called modern C++); compilers take a while to implement the final draft of the standard, many companies don't update compilers as soon as new versions become available, and programmers themselves may take some time to adopt the improvements. And even if you keep up to date with the latest developments, your compiler is up to date, and your company has already made the switch, your ten-year-old codebase is still written in the less safe, old style (although some improvements are automatically available without touching the code).

    So there are problems that are "fixed", but people coding in corporate environments may not have access to the fixes yet.

    Let's consider the ideal case, however: a new codebase, an up-to-date compiler, and programmers using best practices. IDEs don't track ownership per se, but in a decent one you will get a squiggly line under a call if you pass something that has ownership to something that requires ownership without explicitly saying that you are giving up ownership (since you can't have two owners of one thing unless you specifically use shared ownership). And the compiler will give an error and won't compile. So you don't need to keep track of the current state of things while writing. You just have to decide, when you run into the squiggly line or compiler error, whether you want to stop owning the thing or create a new thing and pass it along.

    You do need to establish what things you need to own when you write a function or a class, though. The IDE won't tell you, any more than it can tell you when to write an if or what to call your methods. But if you start by establishing what things you need to perform a task, the rules of the language itself will prevent you from getting it wrong by marking up errors when you break the rules.

    The problem is that you also have tools to break the rules and sidestep all the safety measures the language implements. So if you understand how the language models things and play within the rules, you won't ever have a leak, or a double free, or a buffer overflow. It all just works, at no performance penalty. If you try to be clever, or work around the model because it takes a bit more effort to do things right, you can cast all the safety away.

    Some of the rules can be a bit obscure, however, like the fact that signed overflow is undefined, but generally that punishes people who try to be clever and code as if they were writing assembly, checking for errors after doing the thing that caused the error instead of preventing the error from happening in the first place.

    @blakeyrat said:

    And that doesn't strike you as "oh my God, this is a horrible problem we need fixed right now?"
    That's what modern C++ tries to address. Unfortunately, it also tries to stay compatible with the past. So you have both a safe language and an unsafe mess in one. Like a pretty garden in the middle of a minefield. The garden is perfectly safe and pretty, but stray too far and you blow up.

    @Bulb said:

    The replacement for auto_ptr is unique_ptr. If anybody replaces auto_ptr with shared_ptr just because they're upgrading to C++11, they are idiots and should be immediately fired.
    Shared_ptr was available before unique_ptr (via Boost and then TR1), so there was a period when shared_ptr and auto_ptr coexisted. But auto_ptr was so broken that the superior alternative was to use shared_ptr instead. All those shared_ptrs don't go away just because you now have unique_ptr as the correct thing to use.


  • Winner of the 2016 Presidential Election

    @blakeyrat said:

    Do IDEs track "ownership?" And display it on the screen somehow?

    In case of Rust, yes. The compiler tracks it, so you'll get red squiggly lines and your code won't compile if you do something wrong.

    C++11 tries to do something similar with unique_ptr et al., see @Kian's post above mine.



  • Nowadays, if you're writing code for anything where int isn't 32 or 64 bits, you're writing it for a microcontroller or DSP, and in that situation, if you haven't a) checked what your compiler does and b) thought about overflow, you're doing it wrong.

    If you overflow a 32-bit int and you haven't considered the possibility of overflow you're also doing it wrong.

    @asdf said:

    In case of Rust, yes. The compiler tracks it, so you'll get red squiggly lines and your code won't compile if you do something wrong.

    Am I the only person who hates squiggly red lines in their IDE? It's not just that 'intellisense errors' are consistently wrong and/or out of date by five minutes, it's just aesthetically unpleasing.



  • @powerlord said:

    @Bulb said:
    Now that the byte has settled on 8 bits, C99 added stdint.h so you can use fixed sizes when you need them. But most code still does not, because most code does not want that; it usually still wants the "platform default integer".

    Great, and if they have an overflow because they just assumed an int was 16-bit and on that platform it's 8-bit, it's their own goddamn fault.

    Except the standard demands that ints be able to contain at least all the values from -32767 to +32767, which implies they must always have at least 16 bits.
    (the standard does not demand -32768 because it does not mandate 2's complement)

    However, assuming an int can contain 1,000,000 just because one used to code on a 32-bit platform can bite, and has bitten, people in the rear.
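
    (In code: something a conforming compiler can never fire on; the static_assert spelling is the C++11 one:)

    [code]
    #include <climits>

    // The standard guarantees INT_MAX >= 32767 (and INT_MIN <= -32767),
    // so int is at least 16 bits on every conforming implementation.
    static_assert( INT_MAX >= 32767, "not a conforming compiler" );
    [/code]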


  • Winner of the 2016 Presidential Election

    @jmp said:

    It's not just that 'intellisense errors' are consistently wrong and/or out of date by five minutes

    You should get a better computer then. I rarely have that problem.

    @jmp said:

    it's just aesthetically unpleasing

    What kind of visual indicator would you prefer for errors?



  • @jmp said:

    Am I the only person who hates squiggly red lines in their IDE?

    I don't like them when the IDE uses something other than the actual compiler to track them, so that you have the IDE saying one thing and the compiler saying another. Other than that, they're handy to fix mistakes quickly before trying to build.



  • @jmp said:

    @Steve_The_Cynic said:
    (2) But not as strange as a message I got once from Visual C++ 6 (which compiled a sort of bastard C++97.5), telling me that a concrete function in one base class overrode a virtual function in another base class "by domination".

    That message still exists.

    Doesn't surprise me. The situation is still legal within the language definition, I guess, and if it happens, the situation is still just as WTF.


  • Winner of the 2016 Presidential Election

    @Kian said:

    I don't like them when the IDE uses something other than the actual compiler to track them

    PyCharm is a good example. As soon as you modify __all__ programmatically, it will start complaining about undefined symbols everywhere.



  • @Kian said:

    Let's consider the ideal case, however: a new codebase, an up-to-date compiler, and programmers using best practices. IDEs don't track ownership per se, but in a decent one you will get a squiggly line under a call if you pass something that has ownership to something that requires ownership without explicitly saying that you are giving up ownership (since you can't have two owners of one thing unless you specifically use shared ownership). And the compiler will give an error and won't compile. So you don't need to keep track of the current state of things while writing. You just have to decide, when you run into the squiggly line or compiler error, whether you want to stop owning the thing or create a new thing and pass it along.

    Oh, right, I forgot that. To explain to a confused Blakeyrat who's already writing a post about how Kian and I are contradicting each other and therefore LIARS: if you're passing a pointer to someone in modern C++, you do it in one of a few ways:

    1. Give them a reference. This directly implies no change in ownership. delete someRef is a compile-time error. delete &someRef is stylistically bizarre and probably a compile-time warning.
    2. Give them a std::shared_ptr<>. This implies shared ownership, and is magical. You can store a shared_ptr, pass it around, whatever, and when the last shared_ptr<> pointing to something is destructed, the thing is destructed. Internally they're reference-counted. delete someSharedPtr is a compile-time error.
    3. Give them a std::unique_ptr<>. This implies that I've given ownership to you. You can't copy a std::unique_ptr<> without doing something weird and unnatural that static analysis or code review will likely pick up (calling a particular function on unique_ptr<>), it'll automatically delete when it's destroyed, and if you want to give ownership to someone else you have to write std::move(someUniquePtr). delete someUniquePtr is a compile-time error.

    So in all three cases the recipient doesn't really have to do anything special to get deletion right for something they've been handed. Attempting to delete is a compile-time error or requires weird contortions that code review will pick up. The only complicated thing is that if you want to pass a unique_ptr<> to someone, you have to tell the compiler you're trying to move it, and that's enforced because there's no valid copy constructor, so you can't accidentally forget to move; it's a compile-time error. (Sketch below.)
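
    A sketch of all three, with invented names (the emu is in honour of upthread):

    [code]
    #include <memory>

    struct Emu {};

    void borrow( Emu& )                 {}  // 1: no ownership change
    void share ( std::shared_ptr<Emu> ) {}  // 2: joins the owners for a while
    void take  ( std::unique_ptr<Emu> ) {}  // 3: sole ownership handed in

    int main()
    {
        auto owned  = std::make_unique<Emu>();
        auto shared = std::make_shared<Emu>();

        borrow( *owned );            // caller still owns it afterwards
        share ( shared );            // refcount bumped for the duration
        take  ( std::move(owned) );  // explicit handover; owned is now null
    }
    [/code]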

