Visual Studio WTF


  • Java Dev

    @dkf I would rather ask the question why you're dealing with two textually different versions of the same declaration in the first place. Because the only reasons I can think of are undesirable code duplication (if it's all your own code) or unreliable library authors (if they're changing bits of the header for no clear reason).


  • Banned

    @dkf said in Visual Studio WTF:

    @Gąska said in Visual Studio WTF:

    And you haven't yet established any reason why a compiler, when encountering two SEMANTICALLY IDENTICAL IN LITERALLY EVERY WAY, AND I MEAN EVERY WAY classes, would produce code that behaves differently from when there's one class.

    Because @Steve_The_Cynic is using a C++ compiler that's written to be as fuckwaddedly asinine as possible in its interpretation of the standard. If it can find a way to interpret things differently in some circumstances to others, it does exactly that. In particular, it likes to change the order of things (if it is at all legal to do so) based on a hash of the userid of the account running the compiler. Because there's nothing in the standard to say it can't.

    But that's the thing. Even if he does use such an asinine compiler, there's still nothing in this particular case that the compiler can do to make the program do the wrong thing, unless it outright breaks the spec in other places (e.g. due to a bug).


  • BINNED

    @PleegWat said in Visual Studio WTF:

    @dkf I would rather ask the question why you're dealing with two textually different versions of the same declaration in the first place. Because the only reasons I can think of are undesirable code duplication (if it's all your own code) or unreliable library authors (if they're changing bits of the header for no clear reason).

    My first thought is that this could only happen if it’s defined in two different places, which would be pretty bad even if the definition is literally identical and not breaking ODR. But it could also happen if the same header is compiled with different defines for different TUs. Which at first glance is also a big “this breaks ODR” red flag but could happen with things you’d really assume do not matter. Like something tagged [[nodiscard]] conditionally. As far as I can tell, that purely affects warnings without changing any semantics, but it’s textually different. Is that really UB? I’d wager lots of library headers do that, including standard lib implementations.



  • @PleegWat said in Visual Studio WTF:

    @dkf I would rather ask the question why you're dealing with two textually different versions of the same declaration in the first place. Because the only reasons I can think of are undesirable code duplication (if it's all your own code) or unreliable library authors (if they're changing bits of the header for no clear reason).

    There's nothing requiring those two versions of the header to co-exist at the same time - I could see this happening when you have an old binary version of something whose sources have been lost. You piece together a header with the necessary declarations. Old binary version = translation unit(s) with the old declaration, new code = translation unit(s) that see the new declaration. Technically you're now in UB territory.

    I agree with @Gąska that there is no reason for a compiler to intentionally mess this up. Still, it's good to be aware that you might be in technically-UB territory and to have the appropriate testing in place. Same goes for other things that are technically UB but happen to work right now. Sometimes those are necessary (or at least not reasonably workaroundable). Testing can mitigate the problem, and may keep you from getting bitten across compilers/new versions.



  • @cvi said in Visual Studio WTF:

    Testing

    What is this "testing" of which you speak?

    'Tis ancient and deep magic — a mystery beyond the ken of the code monkeys of this age.


  • Considered Harmful

    @HardwareGeek said in Visual Studio WTF:

    @cvi said in Visual Studio WTF:

    Testing

    What is this "testing" of which you speak?

    'Tis ancient and deep magic — a mystery beyond the ken of the code monkeys of this age.

    Nice try, but, more likely, you work in a technological backwater.


  • Discourse touched me in a no-no place

    @HardwareGeek said in Visual Studio WTF:

    @cvi said in Visual Studio WTF:

    Testing

    What is this "testing" of which you speak?

    The thing users do once code is released to Production.



  • @Steve_The_Cynic said in Visual Studio WTF:

    *pointer++ = arithmetic on *pointer

    I've never understood why people wrote those kinds of expressions in the first place. Or rather, I too loved writing them around my second year of university, then grew out of it after finding I couldn't read my own code later. I think the Single Responsibility Principle should apply in some way to lines of code too, and "do something to the pointee, then increment the pointer" violates it. And yet the literature is chock full of these lines.


  • Java Dev

    @marczellm I do use set-and-increment or read-and-increment, but I try not to have set-and-increment references and set-only references (or read-and-increment plus normal read) to the same variable in the same bit of code.


  • BINNED

    @marczellm seems like people took “look, I can write strcpy in a single line with an empty loop body” and decided that’s a great way to write all your code.
    Probably a Zeitgeist thing; after all, Intel basically dedicated specific opcodes to implementing these functions in a single instruction.



    @marczellm It's a balancing act between conciseness and simplicity. Too much or too little of either will make code more difficult to read. IMO *pointer++ = stuff is a common enough idiom that it shouldn't impede readability of the code (rather the opposite if used appropriately).


  • Discourse touched me in a no-no place

    @loopback0 said in Visual Studio WTF:

    @HardwareGeek said in Visual Studio WTF:

    @cvi said in Visual Studio WTF:

    Testing

    What is this "testing" of which you speak?

    The thing users do once code is released to Production.

    Only if you're lucky. If not, the code sits there unused for months, all unnoticed…


  • Discourse touched me in a no-no place

    @marczellm said in Visual Studio WTF:

    the Single Responsibility Principle should apply in some way to lines of code too

    But that depends on what you think the responsibility is. It doesn't have to be at the lowest conceptual level.

    (Also, only C and C++ really get in a mess over this sort of thing. Java and C# don't, once you allow for not having address arithmetic as such in the first place; it just converts into operations on indices of arrays.)


  • Banned

    @cvi said in Visual Studio WTF:

    @marczellm It's a balancing act between conciseness and simplicity. Too much or too little of either will make code more difficult to read. IMO *pointer++ = stuff is a common enough idiom that it shouldn't impede readability of the code (rather the opposite if used appropriately).

    Whereas I'm of the opinion that even ++number is dumb and should always be written as number += 1. (Except in the 3rd clause of for loop. The for loop is an idiom of its own.)



  • I found the actual 2nd year of uni homework where I was going crazy with assigning to expressions that result from assignments etc:

    template<typename T>
    class List
    {    
        // snip
        void insertLast(T elem)
        {
            Node* p= new Node(elem);
            if (tail)
            {
                (tail->next=p)->prev=tail;
                tail=act=p;
            }
            else
                act=head=tail=p;
        }
        void insertBeforeAct(T elem)
        {
            if (act)
            {
                if (isActFirst())
                    insertFirst(elem);
                else
                {
                    Node* p= new Node(elem);
                    act=act->prev=(p->prev=(p->next=act)->prev)->next=p;
                }
            }
            else throw ListException(false);
        }
        void insertAfterAct(T elem)
        {
            if (act)
            {
                if (isActLast())
                    insertLast(elem);
                else
                {
                    Node* p=new Node(elem);
                    act=(p->prev=act)->next=(p->next=act->next)->prev=p; // so much fun!
                }
            }
            else throw ListException(false);
        }
        // snip
    };

  • Considered Harmful

    @Gąska said in Visual Studio WTF:

    @cvi said in Visual Studio WTF:

    @marczellm It's a balancing act between conciseness and simplicity. Too much or too little of either will make code more difficult to read. IMO *pointer++ = stuff is a common enough idiom that it shouldn't impede readability of the code (rather the opposite if used appropriately).

    Whereas I'm of the opinion that even ++number is dumb and should always be written as number += 1. (Except in the 3rd clause of for loop. The for loop is an idiom of its own.)

    You're doing it wrong, per everyone. ++x in a for loop is going to cause bugs down the line. Nobody expects it.



  • @Gribnit said in Visual Studio WTF:

    ++x in a for loop is going to cause bugs down the line. Nobody expects it.

    What?


  • Considered Harmful

    @marczellm said in Visual Studio WTF:

    @Gribnit said in Visual Studio WTF:

    ++x in a for loop is going to cause bugs down the line. Nobody expects it.

    What?

    You lazy sack of shit.



  • @Gąska said in Visual Studio WTF:

    Whereas I'm of the opinion that even ++number is dumb and should always be written as number += 1. (Except in the 3rd clause of for loop. The for loop is an idiom of its own.)

    We'll probably have to agree to disagree on that front.

    To me, seeing "x += 1" instead of "++x" is a bit of a warning flag to be extra suspicious of the code. While not always true, there's a good chance that it wasn't written by somebody very familiar with C/C++.


  • BINNED

    @Gąska said in Visual Studio WTF:

    @cvi said in Visual Studio WTF:

    @marczellm It's a balancing act between conciseness and simplicity. Too much or too little of either will make code more difficult to read. IMO *pointer++ = stuff is a common enough idiom that it shouldn't impede readability of the code (rather the opposite if used appropriately).

    Whereas I'm of the opinion that even ++number is dumb and should always be written as number += 1. (Except in the 3rd clause of for loop. The for loop is an idiom of its own.)

    Why not argue the same for += then?


  • Banned

    @topspin because += saves you having to type the entire lvalue, while ++ only saves you three characters: two spaces and one 1. Also, += is more universal, and for non-primitive types, has different semantics from = followed by + that lead to more optimized code.


  • BINNED

    @Gąska said in Visual Studio WTF:

    Also, += is more universal, and for non-primitive types, has different semantics from = followed by + that lead to more optimized code.

    So does ++ on iterators.


  • Banned

    @topspin well, technically speaking, it's only true if an iterator has no binary + at all, so naturally the semantics are different. But if you have a random access iterator with proper operator+=(int), and += 1 isn't exactly the same as prefix ++, then something is very wrong.

    But that's nitpicking. Yes, for iterators ++ is not just acceptable but preferable. But integers aren't iterators. Usually.


  • Java Dev

    @Gąska said in Visual Studio WTF:

    Whereas I'm of the opinion that even ++number is dumb and should always be written as number += 1.

    You're gonna love Swift, then. It got rid of the ++ and -- operators, thus requiring the number += 1 style by language design.


  • ♿ (Parody)

    @Gąska said in Visual Studio WTF:

    C++03 or below? The solution is upgrading to newer C++.

    :sadface:


  • ♿ (Parody)

    @dkf said in Visual Studio WTF:

    @loopback0 said in Visual Studio WTF:

    @HardwareGeek said in Visual Studio WTF:

    @cvi said in Visual Studio WTF:

    Testing

    What is this "testing" of which you speak?

    The thing users do once code is released to Production.

    Only if you're lucky. If not, the code sits there unused for months, all unnoticed…

    Even better, if they wait long enough that you're no longer responsible for fixing it.


  • Discourse touched me in a no-no place

    @Atazhaia said in Visual Studio WTF:

    @Gąska said in Visual Studio WTF:

    Whereas I'm of the opinion that even ++number is dumb and should always be written as number += 1.

    You're gonna love Swift, then. It got rid of the ++ and -- operators, thus requiring the number += 1 style by language design.

    Copying from Python…


  • Banned

    @Atazhaia said in Visual Studio WTF:

    @Gąska said in Visual Studio WTF:

    Whereas I'm of the opinion that even ++number is dumb and should always be written as number += 1.

    You're gonna love Swift, then.

    It has a few good ideas, yes, and this is one of them. But it's absolutely braindead in other areas, so no thanks.

    We must remember that the origin of ++ was that back in the 60s, compilers were too dumb to realize that adding a constant 1 to a variable can be compiled down to an inc.


  • BINNED

    @Gąska said in Visual Studio WTF:

    We must remember that the origin of ++ was that back in the 60s, compilers were too dumb to realize that adding a constant 1 to a variable can be compiled down to an inc.

    Which goes for the existence of the left-shift / right-shift operators too.
    I first hated how the C++ stream library has overloaded << / >> for something else because they obviously mean left-/right-shift. But unless you're explicitly bit-twiddling, there's really no use for these operators and you might as well get rid of them, or in this case re-purpose them. Compilers certainly don't compile your constant multiplication into what you literally wrote, anyway, you don't need to tell them "use shl".
    So now I just hate the stream library because of all the other reasons.


  • Discourse touched me in a no-no place

    @topspin said in Visual Studio WTF:

    So now I just hate the stream library because of all the other reasons.

    You still have true fprintf() available. Yes, it gives the C++ wonks the collywobbles over type safety, but it doesn't require a half-hour rebuild just to change the exact number format…


  • Banned

    @topspin said in Visual Studio WTF:

    @Gąska said in Visual Studio WTF:

    We must remember that the origin of ++ was that back in the 60s, compilers were too dumb to realize that adding a constant 1 to a variable can be compiled down to an inc.

    Which goes for the existence of the left-shift / right-shift operators too.

    Bit shifts aren't used only for multiplications. Sometimes literally saying "move these bits to the left and pad with zeroes" conveys the intention better than multiplication. But yes, "multiply by a power of two using bit shift" in modern codebase should get you hanged.

    So now I just hate the stream library because of all the other reasons.

    Same. It's actually kind of amazing. Everything that could possibly be done wrong, C++ streams have done wrong.


  • BINNED

    @dkf said in Visual Studio WTF:

    @topspin said in Visual Studio WTF:

    So now I just hate the stream library because of all the other reasons.

    You still have true fprintf() available. Yes, it gives the C++ wonks the collywobbles over type safety,

    And about a million CVE numbers...

    but it doesn't require a half-hour rebuild just to change the exact number format…

    I've heard good things about fmt-lib.


  • Java Dev

    @Gąska said in Visual Studio WTF:

    Bit shifts aren't used only for multiplications. Sometimes literally saying "move these bits to the left and pad with zeroes" conveys the intention better than multiplication. But yes, "multiply by a power of two using bit shift" in modern codebase should get you hanged.

    Bit twiddling instructions are useful when bit twiddling. Some of my codebase defines bit flags using (1 << 6) rather than 0x40, though the latter is more common. Otherwise, there's a decent number of shifts where the right-hand argument is a variable.

    The only dubious one I've got offhand is a multiplication by 31 in a hashing function. No doubt, :trwtf: there is use of non-cryptographic hash functions in hash maps.



  • @PleegWat said in Visual Studio WTF:

    Bit twiddling instructions are useful when bit twiddling. Some of my codebase defines bit flags using (1 << 6) rather than 0x40, though the latter is more common. Otherwise, there's a decent number of shifts where the right-hand argument is a variable.

    If I want to set bits 25, 20, and 13 of some hardware control register, writing (1 << 25) | (1 << 20) | (1 << 13) conveys the intent much more clearly than 0x02102000, especially if each (1 << n) term is a named constant. This kind of bit twiddling is very, very common in the hardware world; the codebase where I'm currently working defines unions of structs of bitfields for this twiddling, but in my 30+ years in the industry, it's almost the only one I've ever seen. The vast majority use either (1 << n) or raw hex numbers hidden behind #defines.



  • Really, what C/C++ is missing is operators for rol and ror.

    But what others have said. printf() et al., or std::format et al.. Pass on the streams.



  • @topspin said in Visual Studio WTF:

    I've heard good things about fmt-lib.

    I like it. I've switched all my personal code to that.


  • Discourse touched me in a no-no place

    @topspin said in Visual Studio WTF:

    And about a million CVE numbers...

    Only if you're using the scanf() family (which have all sorts of weird caveats for safety) or are letting the user specify the format. The latter is deeply :wtf: but there are plenty of safe ways to do the former. As long as a few critical conversions are avoided (notably, scanning and %s are a bad mix). Printing when you control the arguments… that's pretty safe.


  • Java Dev

    @dkf We've got a custom set of string-parsing functions based on whitespace-separated tokens. Convenient to use and way safer than scanf(), but I'd love to somehow have compile-time argument type validation.

    I've considered moving the format argument to a suffix of the function name and autogenerating the required functions, but that'll be fiddly to get working right and may still be inconvenient to use if the specific function you need hasn't been generated yet.


  • Java Dev

    @PleegWat Hm, as I'm thinking about it I could probably also change it so I just need to do a separate function call to consume each argument. If I can justify the time needed for the refactor of course.



  • @Gąska said in Visual Studio WTF:

    @Steve_The_Cynic said in Visual Studio WTF:

    Have you seen LITERALLY THIS ONE SPECIFIC CASE that I'm talking about? Because if you haven't seen LITERALLY THIS ONE SPECIFIC CASE, then no, you haven't seen anything that disproves anything I say. Because I'm talking about LITERALLY THIS ONE SPECIFIC CASE.

    OK, so let's go back to the specific case. The two classes have a semantically-equivalent but textually-different definition, but their member functions are defined in some .cpp files rather than in the headers that I, as a user of those classes, will #include. When I call a member function on the object, which version of the member function will the linker give me? The two .cpp files both define void Fred::some_function(some_parameters); and I cannot predict which version of that function will be called. That's why it matters.

    The classes are semantically identical, so it doesn't matter which of the functions gets picked every time - the result will always be the same. Now, if the two methods had different code (i.e. the classes DID differ semantically), it would be different.

    Fussy: the definitions of the classes (i.e. their external interfaces) are semantically equivalent. That doesn't mean that the two classes are the same. Presumably the fact that they are defined like that makes them different classes, and therefore we have to assume (because the standard says it's UB) that the code is different as well, and that weird shit will happen.

    Once again, in case you didn't get it the other 57 times I've said it: once you have UB in your program, you can rely on exactly nothing working correctly.

    UB is UB, and there's no "only a little bit UB", nor "unimportant UB".

    But there is "the reason this particular UB results in this particular behavior". And you haven't yet established any reason why a compiler, when encountering two SEMANTICALLY IDENTICAL IN LITERALLY EVERY WAY, AND I MEAN EVERY WAY classes, would produce code that behaves differently from when there's one class.

    (Also I thought we're talking about a case where just the class definition is duplicated, not a case where we have two .cpp files duplicating the method implementations too. But even then, it will be just fine unless you have static member variables defined outside class body.)

    The class definition is not duplicated. One has that pesky private somewhere that it isn't needed, and the other doesn't, so it is a case of two different classes with the same name.

    Once a piece of code is UB, you cannot predict its behaviour.

    Sure you can. See Raymond Chen's blog post I linked above. It's just that in large codebases, it becomes infeasible to keep track of it all, and as you pointed out, changing the compiler or even just compilation options - or even adding new code - can drastically change everything.

    You can't predict it just by looking at the code. You can do experiments that show that this compiler / linker does this, and therefore you can predict that the same compiler / linker will do something similar in that case, but that requires you to know the behaviour of the specifics. Once you have to support other environments (even the same code compiled by the same compiler with the same options for the same OS but with a different CPU architecture, say x86-32 vs x86-64) you have to go back to those experiments.

    Or you say that it's UB, so of course you fix it so it isn't.

    Out of the millions of possible UB cases, you just happened to pick the one and only* UB case that doesn't result in catastrophic failure. If you picked any other UB, everything you said would be completely true. But you picked the only one out of millions and millions of possible UBs where it's not.

    You have to assume that all UB cases will produce some sort of failure, because the standard explicitly refused to define what happens, and therefore anything could happen. That case I suffered through where someone had written *p++ = expression with *p, didn't fail catastrophically (nothing crashed, burned, etc.), but it did fail.

    @dkf said in Visual Studio WTF:

    Because @Steve_The_Cynic is using a C++ compiler that's written to be as fuckwaddedly asinine as possible in its interpretation of the standard. If it can find a way to interpret things differently in some circumstances to others, it does exactly that.

    I'm not using a compiler like that, but the nature of UB means that I have to assume that I am.


  • Banned

    @Steve_The_Cynic said in Visual Studio WTF:

    @Gąska said in Visual Studio WTF:

    @Steve_The_Cynic said in Visual Studio WTF:

    Have you seen LITERALLY THIS ONE SPECIFIC CASE that I'm talking about? Because if you haven't seen LITERALLY THIS ONE SPECIFIC CASE, then no, you haven't seen anything that disproves anything I say. Because I'm talking about LITERALLY THIS ONE SPECIFIC CASE.

    OK, so let's go back to the specific case. The two classes have a semantically-equivalent but textually-different definition, but their member functions are defined in some .cpp files rather than in the headers that I, as a user of those classes, will #include. When I call a member function on the object, which version of the member function will the linker give me? The two .cpp files both define void Fred::some_function(some_parameters); and I cannot predict which version of that function will be called. That's why it matters.

    The classes are semantically identical, so it doesn't matter which of the functions gets picked every time - the result will always be the same. Now, if the two methods had different code (i.e. the classes DID differ semantically), it would be different.

    Fussy: the definitions of the classes (i.e. their external interfaces) are semantically equivalent. That doesn't mean that the two classes are the same. Presumably the fact that they are defined like that makes them different classes, and therefore we have to assume (because the standard says it's UB) that the code is different as well, and that weird shit will happen.

    Once again, in case you didn't get it the other 57 times I've said it: once you have UB in your program, you can rely on exactly nothing working correctly.

    Once again, in case you didn't get it the other 57 times I've said it: you still haven't shown how THIS PARTICULAR UB can lead a NON-MALICIOUS compiler to generate wrong code. Not even a hypothetical. You just repeat "UB is UB is UB" ad nauseam as if I didn't know the subject of this discussion. But you haven't even attempted to show that an actual problem can result from this code.

    UB is UB, and there's no "only a little bit UB", nor "unimportant UB".

    But there is "the reason this particular UB results in this particular behavior". And you haven't yet established any reason why a compiler, when encountering two SEMANTICALLY IDENTICAL IN LITERALLY EVERY WAY, AND I MEAN EVERY WAY classes, would produce code that behaves differently from when there's one class.

    (Also I thought we're talking about a case where just the class definition is duplicated, not a case where we have two .cpp files duplicating the method implementations too. But even then, it will be just fine unless you have static member variables defined outside class body.)

    The class definition is not duplicated. One has that pesky private somewhere that it isn't needed, and the other doesn't, so it is a case of two different classes with the same name.

    Whatever, my point still stands. There is no scenario under which this would result in an actual problem in an actual compiled program.

    Once a piece of code is UB, you cannot predict its behaviour.

    Sure you can. See Raymond Chen's blog post I linked above. It's just that in large codebases, it becomes infeasible to keep track of it all, and as you pointed out, changing the compiler or even just compilation options - or even adding new code - can drastically change everything.

    You can't predict it just by looking at the code.

    Sure you can. Raymond Chen has an entire series about what the compiler will do with a particular UB. Really, it's not that hard. UB is something that should never happen. An optimizing compiler assumes the developer would never write something that has UB, determines that some code branches are unreachable unless there's UB, and removes them, since UB can't happen. It's that simple.

    As a best practice, yes, avoiding every instance of UB like fire absolutely makes sense. But your particular example of UB, that should still be avoided like fire as a best practice, doesn't actually result in anything bad happening. It's like privilege escalation without ability to execute any commands. Sure, you have admin rights, but what can you do really?

    Out of the millions of possible UB cases, you just happened to pick the one and only* UB case that doesn't result in catastrophic failure. If you picked any other UB, everything you said would be completely true. But you picked the only one out of millions and millions of possible UBs where it's not.

    You have to assume that all UB cases will produce some sort of failure, because the standard explicitly refused to define what happens, and therefore anything could happen.

    Yes. Just like you have to assume that all drivers are absolute fucking morons who don't know the first thing about driving a car (it's almost literally what the Polish law says, although they use some nicer words). Doesn't change the fact that there are some drivers who are actually good drivers and you don't have to worry about them in particular doing anything stupid. But as a best practice, you should still treat them like morons and expect the unexpected.

    That case I suffered through where someone had written *p++ = expression with *p, didn't fail catastrophically (nothing crashed, burned, etc.), but it did fail.

    I know the difference is quite subtle, but *p++ = expression with *p is different from "same class defined twice and one of them has extra private:". There's a reason I put emphasis on LITERALLY JUST THIS ONE EXAMPLE AND NOTHING ELSE, I MEAN NOTHING ELSE, LITERALLY JUST THIS ONE EXAMPLE.


  • Java Dev

    @Steve_The_Cynic said in Visual Studio WTF:

    UB

    Encountered basically this function today

    static inline int pointer_in_buffer( char * buf, size_t bufsz, char * p )
    {
        return (p > buf) && (p < (buf + bufsz));
    }
    

    And no, this was not an overly-complicated check for p == buf + bufsz.


  • Banned

    @PleegWat is this about overflow? Because I don't see anything else wrong here.

    Edit: well, the first comparison should be >=, but I don't think that's what you meant either.


  • Java Dev

    @Gąska Might be over-anonymised. The implication (and fact in code) is that buf and buf + bufsz are the extreme ends of that memory allocation.


  • Banned

    @PleegWat I still don't quite get it. Is it about the pointer to one-past-end-of-allocation? Because that's perfectly legal in C++.


  • Java Dev

    @Gąska One past, yes. Entirely different allocation whatsoever, no.

    Of course the code would work as intended on x86 and this isn't likely to be ported elsewhere. But that's beside the point.



  • @PleegWat I'm rather irked by it returning an int instead of a bool...


  • Java Dev

    @dcon *points at signature* C.

    Granted, I could typedef int bool but nobody before me did and I'm not gonna refactor.


  • Discourse touched me in a no-no place

    @Gąska said in Visual Studio WTF:

    You just repeat "UB is UB is UB" ad nauseam as if I didn't know the subject of this discussion.

    The other thing that is tricky is that what is UB in general might not be for a particular platform ABI. An example is bitfields in C, where in general doing any kind of manipulation of the underlying memory other than via the structure type is UB, but if you're targeting an ARM chip, it is exactly defined (and very useful for some types of memory-mapped hardware). The code in question can't be ported to another platform... but you weren't going to do that with a device driver anyway; with an ARM SoC the hardware may well be on the same piece of silicon as the CPU...

    The point is, if it is defined then it is defined, even if the defining authority is not the main standards committee.



  • @Gąska said in Visual Studio WTF:

    @PleegWat I still don't quite get it. Is it about the pointer to one-past-end-of-allocation? Because that's perfectly legal in C++.

    The problematic part is the use of the less-than / greater-than or greater-than-equal operators. They are only required to work for pointers that belong to the same object / allocation / array. Consequently, the case where the function is supposed to return false isn't required to actually work.

    C++ has std::less<> which does define a total order for pointers. operator< et al. don't.

