WTF Bites


  • Banned

    @cvi said in WTF Bites:

    @Gąska Unless the code does something silly in-between (i.e., you're showing a slightly sanitized version here), compilers should optimize that to a single memory read. (Modulo volatile of course, but hopefully it isn't.)

    It's C++. Non-local reads never get optimized (and inputs is a non-static class member).



  • @Gąska Wut? Of course they get optimized. (Also, the std:: makes it obvious that it's C++.)


  • Java Dev

    @cvi said in WTF Bites:

    @Gąska Wut? Of course they get optimized. (Also, the std:: makes it obvious that it's C++.)

    Not sure about C++, but in C, with usual optimisation levels, things like that don't get optimised if the value may change from a different thread or signal handler in between both reads.



  • @PleegWat No, in that case you would need to make it volatile. That is pretty much the sole purpose of volatile, after all.

    What might prevent the compiler from optimizing is that it has to assume aliasing if a different pointer of the same type is used between the two reads (and it can't prove that they point to different things). That was why I mentioned the thing about there being some code in-between that isn't shown.



  • @Gąska Here's the obligatory Compiler Explorer link. To make it easier to see the differences, I added a second instance that declares inputs volatile, in which case there indeed is a second read (compare the .L7 branches). In the non-volatile one, it just uses the earlier value from xmm0.


  • Banned

    @cvi I guess you are right. Apparently the compiler isn't required to consider other threads that might run concurrently. I think I must've confused it with some other conditions that prevent read omissions.



  • @Gąska said in WTF Bites:

    Apparently the compiler isn't required to consider other threads that might run concurrently.

    It isn't. Besides preventing a lot of optimizations, it would be nigh impossible to actually make any sensible guarantees on platforms with less restrictive memory models. (x86/x86_64 has incredibly strong guarantees in this regard.)

    If you move information between threads, you need to do something manually (mutex, atomics, other synchronization points, in particular thread::join()). This gives a fair overview from the view of atomics.

    I think I must've confused it with some other conditions that prevent read omissions.

    From my experience, it's the pointer-aliasing assumption. C++ does mainly type-based aliasing analysis (which is why type punning is such a problem), meaning that pointers to different types are assumed to not alias (with a few exceptions, e.g. char), but pointers to the same type are.

    There's probably a more elegant example somewhere, but this demonstrates the problem, I think:

    void f( float* a, float* b, float const* c ) {
      *a = *c+1.f;
      *b = *c+1.f;
    }
    

    Normally, this requires c to be read twice, because it's legal to call the function as follows:

    float x;
    f( &x, &x, &x );
    

    (i.e., the value pointed to by c changes at the first write).

    This is why __restrict__ is such a useful (almost necessary!) tool when optimizing certain code, despite it not being part of the C++ standard (C has it as restrict, though; and the big compilers all support it as an extension).
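
    For comparison, a minimal sketch of a __restrict__-qualified variant (the name f_restrict is just illustrative). With the qualifier the compiler may assume the three pointers don't alias, so a single read of *c suffices:

    // With __restrict__ the compiler is allowed to assume a, b and c don't
    // alias, so it only needs to read *c once and can keep it in a register.
    void f_restrict( float* __restrict__ a, float* __restrict__ b,
                     float const* __restrict__ c ) {
      *a = *c+1.f;
      *b = *c+1.f;
    }

    Of course, calling it as f_restrict( &x, &x, &x ) is then undefined behaviour; that's the trade-off you sign up for.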

    Inlining can have a similar side-effect. Because the compiler can see all the code, it can occasionally prove that certain pointers do not alias, allowing it to generate better code (at least where the inlining takes place).


  • Banned

    @cvi said in WTF Bites:

    I think I must've confused it with some other conditions that prevent read omissions.

    From my experience, it's the pointer-aliasing assumption. C++ does mainly type-based aliasing analysis (which is why type punning is such a problem), meaning that pointers to different types are assumed to not alias (with a few exceptions, e.g. char), but pointers to the same type are.

    I believe there's also some rule like "if you call a non-inlined function, all pointers and references are invalidated and must be read again", which prevents many optimizations unless you're using link-time optimization. I mean, there must be a rule like that, otherwise much of the code wouldn't work as expected.
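
    Something like this sketch, maybe (names made up): a call to an out-of-line function the compiler can't see into forces it to assume the object may have been modified, so the member has to be read again.

    void opaque();   // defined in another translation unit

    struct S {
      int member;
      int twice() {
        int a = member; // first read
        opaque();       // could reach *this through a global and modify it
        int b = member; // so this must be an actual second read (no LTO, no annotations)
        return a + b;
      }
    };

    (If the compiler does get to see the body of opaque(), through inlining or LTO, and can prove it doesn't touch the object, it's free to merge the reads again, which matches the point about inlining above.)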


  • BINNED

    @cvi said in WTF Bites:

    @Gąska said in WTF Bites:

    Apparently the compiler isn't required to consider other threads that might run concurrently.

    It isn't. Besides preventing a lot of optimizations, it would be nigh impossible to actually make any sensible guarantees on platforms with less restrictive memory models. (x86/x86_64 has incredibly strong guarantees in this regard.)

    Essentially the threading model says: any data races are UB. If you need concurrent access it must be synchronized.


  • Banned

    @topspin does that mean every synchronization point invalidates all reads, whether I like it or not?


  • BINNED

    @Gąska said in WTF Bites:

    @topspin does that mean every synchronization point invalidates all reads, whether I like it or not?

    Not sure, I’d guess no. Of course “essentially” meant “it’s complicated”, but it’s a good enough rule. For example you can use atomics with memory_order_relaxed, which implies neither synchronization nor ordering constraints, only atomic access, but doesn’t violate the “no races” rule.

    The point was, though, that in the absence of any synchronization/atomics/fences/... the compiler doesn’t reason about concurrent access from other threads.
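
    To make the relaxed-atomics point concrete, a minimal sketch (illustrative only): incrementing the counter from several threads is race-free, but relaxed ordering promises nothing about how those increments are ordered relative to any other memory accesses.

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    std::atomic<int> counter{0};

    int main() {
      std::vector<std::thread> workers;
      for (int t = 0; t < 4; ++t)
        workers.emplace_back([] {
          for (int i = 0; i < 100000; ++i)
            // Atomic, so no data race and no lost updates, but no
            // synchronization or ordering of surrounding accesses.
            counter.fetch_add(1, std::memory_order_relaxed);
        });
      for (auto& w : workers) w.join(); // join() is a synchronization point
      std::printf("%d\n", counter.load(std::memory_order_relaxed)); // 400000
    }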



  • @Gąska Yeah, optimization across (opaque) functions is also messy.

    Compilers can work around that to some degree if they "know of" the function (e.g., std::isnan() probably ends up calling a well-known intrinsic function under the hood, even if it isn't one itself). I would guess that [[gnu::pure]]/[[gnu::const]] could allow for similar optimizations. (I thought VisualStudio's compiler had an almost equivalent attribute as well, but I can't find it right now.)

    Edit: It doesn't, in a very quick and simple test. In the test, it would have to store/restore the contents of an xmm register before/after the function call, which is probably as costly as just reading the value from the original location again.
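
    For illustration, a hypothetical sketch (lookup() is made up) of what such an annotation buys: "pure" promises no side effects and a result that depends only on the arguments and readable memory, so the compiler may fold repeated calls into one.

    [[gnu::pure]] float lookup( float const* table, int i );

    float twice( float const* table, int i ) {
      // With the annotation the compiler may emit a single call and reuse the
      // result; without it, it must assume the second call could return
      // something different.
      return lookup( table, i ) + lookup( table, i );
    }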


  • Considered Harmful

    That's an interesting marketing tack you have there, chocolatey:

    > choco install retroarch -y
    Error: The remote file either doesn't exist, is unauthorized, or is forbidden for url http://...
    This package is likely not broken for licensed users - see https://chocolatey.org/features-private-cdn.
    


  • @cvi said in WTF Bites:

    That is pretty much the sole purpose of volatile, after all.

    Or hardware. In my world, we use it a lot for things like hardware status or data FIFOs.

    #define FROB_BUSY          0x00000001
    #define FROB_START         0x00000002
    #define FROB_DATA_FIFO_CNT 0x0000ff00
    #define FROBNICATOR_CTRL   (*(volatile uint32_t *) (FROBNICATOR_BASE+0x1230))
    #define FROBNICATOR_STATUS (*(volatile uint32_t *) (FROBNICATOR_BASE+0x1234))
    // Write to this address stuffs data into frobnicator data FIFO. Read from this address gets result data.
    #define FROBNICATOR_DATA   (*(volatile uint32_t *) (FROBNICATOR_BASE+0x1238))
    
    while ((FROBNICATOR_STATUS & FROB_DATA_FIFO_CNT) < 0x0000ff00)
    {
        FROBNICATOR_DATA = parp[i++]; // Write side-effect: push data onto tail of queue
    }
    FROBNICATOR_CTRL = FROBNICATOR_CTRL | FROB_START;
    // Wait for frobnicator to finish
    while (FROBNICATOR_STATUS & FROB_BUSY)
    {
        // Waste time
    }
    // Get the results
    while ((FROBNICATOR_STATUS & FROB_DATA_FIFO_CNT) != 0)
    {
        blarb[j++] = FROBNICATOR_DATA; // Read has side effect of popping data off the head of the queue
    }
    

    And yes, the same hardware register address can do very different (though usually related) things on read and write



  • @HardwareGeek said in WTF Bites:

    Or hardware. In my world, we use it a lot for things like hardware status or data FIFOs.

    It's sort of the same thing, i.e., where a value may not be cached because it can change at "any" time, specifically between two consecutive reads. Whether that's a hardware register, a different thread, asynchronous signal handler, or cosmic radiation flipping your bits.

    You're right, though: dealing with memory mapped hardware registers is probably the most reasonable use of volatile.


  • Notification Spam Recipient

    @dkf said in WTF Bites:

    are a special kind of exception that can only be caught by exiting the function.

    Yeah, I can't tell you how extremely often I wish there was an easy way to get the data being returned from a function. It has always befuddled me why I have to go up to the caller and mangle it there...



  • @cvi said in WTF Bites:

    @HardwareGeek said in WTF Bites:

    Or hardware. In my world, we use it a lot for things like hardware status or data FIFOs.

    It's sort of the same thing, i.e., where a value may not be cached because it can change at "any" time, specifically between two consecutive reads. Whether that's a hardware register, a different thread, asynchronous signal handler, or cosmic radiation flipping your bits.

    You're right, though: dealing with memory mapped hardware registers is probably the most reasonable use of volatile.

    Actually, a different thread is explicitly excluded from that. volatile only affects the compiler level, while thread synchronization might also require something at the hardware level, which volatile does not provide; only atomics (and synchronization functions) do.

    @Gąska said in WTF Bites:

    @topspin does that mean every synchronization point invalidates all reads, whether I like it or not?

    Atomics have the memory order argument, which can specify whether you want to

    • not affect anything except that atomic variable (relaxed)
    • prevent moving reads before it (acquire)
    • prevent delaying writes past it (release)
    • both (acq_rel)

    For the synchronization functions the compiler usually doesn't know which ordering they imply, so it is forced to assume both. The same goes for any library function it does not have access to and that doesn't carry pure or restrict annotations.
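
    A minimal sketch of the acquire/release pairing (names made up): the release store on the flag guarantees that the plain write to data is visible to the thread that observes the flag with an acquire load.

    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;
    std::atomic<bool> ready{false};

    void producer() {
      data = 42;                                    // plain write
      ready.store(true, std::memory_order_release); // writes above can't sink below
    }

    void consumer() {
      while (!ready.load(std::memory_order_acquire)) // reads below can't rise above
        ;
      assert(data == 42); // guaranteed by the release/acquire pair
    }

    int main() {
      std::thread a(producer), b(consumer);
      a.join();
      b.join();
    }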


  • Banned

    @Bulb I mean specifically like, if I read a non-atomic class member once (e.g. in if condition), then lock a completely unrelated mutex, then read that same member a second time (e.g. in if body), will the compiler-generated code always read the value twice unconditionally even on highest optimization settings? And if I comment out the lock, will it remove the second read and reuse the first value?

    Intuitively, that's exactly how it should be, otherwise double-checked locking wouldn't work. But maybe there's some caveat. C++ spec isn't 1800 pages long for nothing.



  • @cvi said in WTF Bites:

    Although ... why are you copying values only if they are nan? Nan-tagging?

    If the else is doing some lengthy maths computation that will end up with NaN (because almost every operation with NaN results in NaN), it makes sense to skip all that.


  • Considered Harmful

    @Gąska said in WTF Bites:

    C++ spec isn't 1800 pages long for nothing.

    As an aside, would you recommend C++ spec for passwords instead of certain literary works? In addition to length it also contains lots of special characters and numbers :trollface:



  • @Gąska said in WTF Bites:

    @Bulb I mean specifically like, if I read a non-atomic class member once (e.g. in if condition), then lock a completely unrelated mutex, then read that same member a second time (e.g. in if body), will the compiler-generated code always read the value twice unconditionally even on highest optimization settings? And if I comment out the lock, will it remove the second read and reuse the first value?

    Generally yes.

    The mutex::lock is assumed to do acquire+release, so the second read cannot be reordered before it, but must be done again.

    If you remove the mutex::lock, and there are no calls in between that the compiler can't prove are free of both locking and modification of the member, the second read can be reordered and combined with the first one.

    Theoretically a mutex::lock could have only acquire semantics, and while that alone still prevents reordering the second read before it, it also implies that if the member were actually written by another thread, the first read would be a data race, which is undefined behaviour, so the compiler could assume that doesn't happen and combine the reads anyway. But the compiler doesn't have information at that level of detail, and I don't think the definition of a data race in the C++ specification even allows giving the lock such fine-grained semantics.

    The only exception is if the compiler can determine that the instance is on the stack and no references to it were handed out. Then the variable obviously can't be modified from another thread and the compiler can merge the reads under the as-if rule.
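
    A sketch of the pattern from the question (the names are hypothetical): with the lock in place, the compiler has to perform the read inside the branch again; comment the lock out and, absent anything else it can't see through, it may reuse the first value.

    #include <mutex>

    struct Widget {
      int member = 0;        // plain, non-atomic
      std::mutex unrelated;

      int get() {
        if (member != 0) {   // first read
          std::lock_guard<std::mutex> lock(unrelated); // treated as acquire+release
          return member;     // second read, not combined with the first
        }
        return -1;
      }
    };

    (As noted above, if another thread actually writes member concurrently, the first unsynchronized read is already a data race; the sketch only shows what the compiler does with the two reads.)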



  • tl;dr: Algorithm only looks for keywords in the answers. Students were able to 100% a question by simply copying the question into the answer field.



  • ca3ca288-2374-459b-885c-436ab1909225-image.png

    Bravo Android:

    • System sets errno to specific failure reason, but it's nowhere to be found in the exception object.
    • Cause pointing to the same exception instead of being null seems like it's trying to lure me into an endless loop.
    • What read were you doing anyway? The call that threw that exception was android.bluetooth.BluetoothSocket#connect.


  • @Bulb said in WTF Bites:

    @bobjanova said in WTF Bites:

    it's still a WTF. No normal coder is going to look at that and expect the 'return 1' not to end the method.

    What did you expect? It's Java after all. There are plenty of things that a normal coder is not going to expect, but Java does them nevertheless.

    If that is the cut-off, then pretty much any language feature more advanced than arithmetic is right out the window.


  • Discourse touched me in a no-no place

    @cvi said in WTF Bites:

    That is pretty much the sole purpose of volatile, after all.

    The purpose of volatile is to say “there may be stuff going on that you don't see, so do exactly what you're told with this variable”. It's not suitable for inter-thread synchronization on normal modern processors because of the complexities of memory barriers, but it is quite relevant in other scenarios (such as memory-mapped hardware devices). If you're dealing with volatile, you probably should be in something low level like a device driver; it's really not valuable in most high level code.

    The semantics in languages other than C and C++ may be entirely different.


  • Discourse touched me in a no-no place

    @cvi said in WTF Bites:

    std::isnan() probably ends up calling a well-known intrinsic function under the hood

    No. It just checks if a value is not equal to itself. In most compilers this is fine in itself, but MSVC needs the extra hint of the standard function because it doesn't understand NaN properly and assumes that values always equal themselves. (IEEE floating point is weird.)



  • @Bulb said in WTF Bites:

    ca3ca288-2374-459b-885c-436ab1909225-image.png

    Bravo Android:

    • System sets errno to specific failure reason, but it's nowhere to be found in the exception object.
    • Cause pointing to the same exception instead of being null seems like it's trying to lure me into an endless loop.
    • What read were you doing anyway? The call that threw that exception was android.bluetooth.BluetoothSocket#connect.

    Java seems to like having exceptions with the cause set to the same exception. I've noticed that with some other exceptions (maybe NPEs?). So I don't think that's Android specific - still odd though.

    It's reasonable for a 'connect' method to also do a handshake, as long as it's expected to be quick. For example if you 'connect' an SSL socket there's a lot of read and write happening as well.


  • BINNED

    @Applied-Mediocrity said in WTF Bites:

    @Gąska said in WTF Bites:

    C++ spec isn't 1800 pages long for nothing.

    As an aside, would you recommend C++ spec for passwords instead of certain literary works? In addition to length it also contains lots of special characters and numbers :trollface:

    Only in times of peace, because "War and C++" would be extremely cruel punishment.



  • @Rhywden said in WTF Bites:

    tl;dr: Algorithm only looks for keywords in the answers. Students were able to 100% a question by simply copying the question into the answer field.

    I didn't read the article, but isn't it a bit of a stretch to call that "AI"?


  • Discourse touched me in a no-no place

    I'm so glad I didn't have anything to do with this bug:

    https://www.vasp.at/post/bugfix-in-testsuite-vasp6/

    “Oopsie! Deleted your home directory. Sowweee!”


  • Considered Harmful

    @hungrier said in WTF Bites:

    I didn't read the article, but isn't it a bit of a stretch to call that "AI"?

    There's a lot of things wrong with Al, but the name isn't one of them 🐠


  • Discourse touched me in a no-no place

    @Atazhaia said in WTF Bites:

    ...
    NB: Because fuck you that's why!

    Reminiscent of Text from Xcode...


  • Considered Harmful

    @hungrier said in WTF Bites:

    I didn't read the article, but isn't it a bit of a stretch to call that "AI"?

    AI is anything I don't understand.



  • @dkf said in WTF Bites:

    No. It just checks if a value is not equal to itself. In most compilers this is fine in itself, but MSVC needs the extra hint of the standard function because it doesn't understand NaN properly and assumes that values always equal themselves. (IEEE floating point is weird.)

    It doesn't. Here's the definition from the standard library that ships with GCC 9.3:

      constexpr bool
      isnan(float __x)
      { return __builtin_isnan(__x); }
    

    Yes, the compiler will eventually reduce that to a comparison with itself (a ucomiss in this case), but the definition of std::isnan() uses a compiler intrinsic. This is a good thing, because it makes isnan() work even with e.g. -ffast-math, which allows the compiler to disregard IEEE floating point rules in order to optimize more aggressively. (This is not an endorsement of -ffast-math.)
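
    A quick illustration of that last point (a hypothetical test; the exact behaviour depends on compiler and version): a hand-rolled self-comparison is exactly the kind of check that -ffast-math (via -ffinite-math-only) is liable to fold to false, whereas std::isnan() going through the builtin has a much better chance of surviving.

    #include <cmath>
    #include <cstdio>
    #include <limits>

    // With -ffinite-math-only the compiler may assume x is never NaN and fold
    // this whole function to `return false;`.
    bool naive_isnan(float x) { return x != x; }

    int main() {
      float nan = std::numeric_limits<float>::quiet_NaN();
      std::printf("naive: %d  std::isnan: %d\n",
                  (int)naive_isnan(nan), (int)std::isnan(nan));
    }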


  • Considered Harmful

    @cvi said in WTF Bites:

    e.g. -ffast-math, which allows the compiler to disregard IEEE floating point rules in order to optimize more aggressively.

    This explains why the SNES retroarch core has a "fast math" setting that's disabled by default. I was thinking "why wouldn't I want the math to be fast?"



  • @error Yeah, it's a fun one.

    -ffast-math has a tendency to completely screw over any carefully designed numerical methods. But for code that's written very sloppily from the get-go? Chances are it'll be fine.
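
    The classic example of the "carefully designed" kind (a sketch, not from the post): Kahan summation. The compensation term is algebraically zero, so once the compiler is allowed to reassociate floating-point math it may optimize the correction away entirely and hand you back a plain, less accurate sum.

    #include <vector>

    // Kahan (compensated) summation: c accumulates the rounding error of each
    // addition. Under -ffast-math the compiler may treat ((sum + y) - sum) - y
    // as exactly zero and delete the compensation.
    double kahan_sum(const std::vector<double>& xs) {
      double sum = 0.0, c = 0.0;
      for (double x : xs) {
        double y = x - c;
        double t = sum + y;
        c = (t - sum) - y;
        sum = t;
      }
      return sum;
    }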



  • I just typed the sentence "... I was very confused; it took me most of the day ..." into slack. Slack turned "confused; it" into a link http://confused.it. :wat:



  • @HardwareGeek said in WTF Bites:

    http://confused.it

    Have you clicked on it?

    Make confused.it work hard for you

    Yes, confused IT works hard. But not smart.


  • Notification Spam Recipient

    @Zerosquare said in WTF Bites:

    @HardwareGeek said in WTF Bites:

    http://confused.it

    Have you clicked on it?

    It's a parked domain. They probably want a few hundred 💰 for it.



  • @bobjanova said in WTF Bites:

    @Bulb said in WTF Bites:

    ca3ca288-2374-459b-885c-436ab1909225-image.png

    Bravo Android:

    • System sets errno to specific failure reason, but it's nowhere to be found in the exception object.
    • Cause pointing to the same exception instead of being null seems like it's trying to lure me into an endless loop.
    • What read were you doing anyway? The call that threw that exception was android.bluetooth.BluetoothSocket#connect.

    Java seems to like having exceptions with the cause set to the same exception. I've noticed that with some other exceptions (maybe NPEs?). So I don't think that's Android specific - still odd though.

    Yes, it's kind of obvious this would be a usual Java thing. It's kinda weird nevertheless. Update: @dkf claims below that it's not, so I went to check the documentation and it indeed says that getCause() “Returns the cause of this throwable or null if the cause is nonexistent or unknown.”

    It's reasonable for a 'connect' method to also do a handshake, as long as it's expected to be quick. For example if you 'connect' an SSL socket there's a lot of read and write happening as well.

    Yes, it is reasonable, but then any error produced really needs to distinguish which part of the no-longer-simple process failed. It does matter whether the SSL connection couldn't be established because the host was not found, the host was unreachable, the port was unreachable, no response was received, the certificate is not valid at the moment, the certificate is not valid for the host, or the certificate is not trusted.

    In my case it is a Bluetooth socket, so I'd expect it to at least distinguish between device not responding (out of range or not on), not authorized (needs pairing) or service not found (probably not the right device).


  • Discourse touched me in a no-no place

    @bobjanova said in WTF Bites:

    Java seems to like having exceptions with the cause set to the same exception. I've noticed that with some other exceptions (maybe NPEs?). So I don't think that's Android specific - still odd though.

    I don't recall ever seeing that. It'd mean that something is doing exn.initCause(exn) and that's just weird; if you just build an exception and throw it immediately (the overwhelmingly normal case) then the cause is either null (default) or the causing exception that you provided to the constructor. It sounds like someone's violating causality and should see a practicing philosopher ASAP.



  • @dkf … or they might be overriding getCause() to return this instead of null … though, it was just a standardish IOException, so. Nothing would surprise me from AOSP though.


  • BINNED

    htop.png

    why are you running.jpg

    Why does it take over 10% CPU time (continuously) to display a folder with 6 files, in a minimized window? Just stop doing anything!


  • Discourse touched me in a no-no place

    @Bulb said in WTF Bites:

    @dkf … or they might be overriding getCause() to return this instead of null … though, it was just a standardish IOException, so. Nothing would surprise me from AOSP though.

    :wtf_owl: That'd be even stranger and far more likely to cause irrelevant exceptions. I'm much more inclined to think that somewhere between creation and the point where it was caught by your code (note that it could potentially have been caught and released several times on the way out) someone did initCause, probably under the misguided belief that null is bad and must be eliminated from everywhere at all costs (or other such idiocy).



  • @dkf said in WTF Bites:

    It'd mean that something is doing exn.initCause(exn) and that's just weird

    I agree, but I've definitely seen it in debug traces. "That's weird" was exactly my reaction when I worked out what was going on.



  • @topspin said in WTF Bites:

    Why does it take over 10% CPU time (continuously)

    Well, emulating a Nintendo Wii does require a bit of CPU usage...

    @topspin said in WTF Bites:

    to display a folder with 6 files, in a minimized window?

    ...oh. That Dolphin.



  • @dkf said in WTF Bites:

    @bobjanova said in WTF Bites:

    Java seems to like having exceptions with the cause set to the same exception. I've noticed that with some other exceptions (maybe NPEs?). So I don't think that's Android specific - still odd though.

    I don't recall ever seeing that. It'd mean that something is doing exn.initCause(exn) and that's just weird; if you just build an exception and throw it immediately (the overwhelmingly normal case) then the cause is either null (default) or the causing exception that you provided to the constructor.

    I've seen it too, but it's quite rare. Fortunately so, because it usually causes stupid issues in any code that tries to report those exceptions.

    Actually, this is not the worst case - at least the loop is immediately visible. I have seen a JDBC driver that created indirect loops, so e != e.getCause(), but e == e.getCause().getCause() (or maybe it was e == e.getNextException().getCause()... can't remember, but :wtf: in any case).

    It sounds like someone's violating causality and should see a practicing philosopher ASAP.

    Bah. Once upon a time, I spent some time debugging a strange bug where the application received a reply before the request was sent (but only on HP-UX). That was a real head-scratching moment for a young developer, but I learned a good lesson about "API behavior assumptions" being derived from "ass".


  • BINNED

    @Zerosquare said in WTF Bites:

    @HardwareGeek said in WTF Bites:

    http://confused.it

    Have you clicked on it?

    Make confused.it work hard for you

    :giggity:


  • Discourse touched me in a no-no place

    @Kamil-Podlesak said in WTF Bites:

    :wtf: in any case

    Amen.



  • @HardwareGeek said in WTF Bites:

    Slack turned "confused; it" into a link http://confused.it.

    Ah, good to know: confusion is 🇮🇹 .

