WTF Bites


  • BINNED

    @Bulb said in WTF Bites:

    @topspin said in WTF Bites:

    Anyways, it doesn’t matter. Checking for negative size is trivial.

    Checking one side (unsigned) is less work than checking both sides (signed) and since range checking is a very common operation, it does actually make a difference.

    1. malloc is slow already, one extra comparison does not make a difference.
    2. You could move the casts you're imposing on everyone everywhere inside it and replace if (size >= 0 && size < whatever) with what you had before: if (size_t(size) < whatever). In fact, you don't even need to; the compiler will do it for you anyway.
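
    For illustration, a minimal C++ sketch of that cast trick; in_range is a made-up name, not code from the thread:

    #include <cstdio>

    // Casting a signed value to unsigned maps negatives to huge values,
    // so one unsigned comparison covers both "size >= 0" and "size < limit".
    // Compilers typically emit identical code for the two-sided signed form.
    static bool in_range(long long size, unsigned long long limit) {
        return static_cast<unsigned long long>(size) < limit;
    }

    int main() {
        std::printf("%d\n", in_range(42, 100));  // 1: in range
        std::printf("%d\n", in_range(-78, 100)); // 0: wraps to a huge value
    }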

    Bildschirmfoto 2021-07-28 um 14.05.07.png


  • Considered Harmful

    @topspin said in WTF Bites:

    @LaoC said in WTF Bites:

    @topspin said in WTF Bites:

    @dkf said in WTF Bites:

    @topspin If you're allowing sizes to be negative, then tell me how to go about allocating an array that can contain -78 elements. And what it means too.

    What does it mean to allocate size_t(-78) elements?
    The former would just cause an error, because you can trivially check for that; the latter would OOM crash, and yet you don't encode that constraint into the type system.

    It only crashes if you ignore the error that would result from the former, too, and that can happen for any size allocation.

    But you don't actually get an error. You only get an error if Linux decides to be nice to you, otherwise it tells you "sure, have all the memory you want. I'll kill you if you touch it though."

    Which of the two behaviors you want is your choice. Depends on your workload which one makes more sense.


  • Discourse touched me in a no-no place

    @Bulb said in WTF Bites:

    :wtf:1: Notice the severity. The error message (unable to sign jar: java.io…) is INFO, followed by something on ERROR level that is actually just a note, maybe a warning, because the options clearly don't break anything.

    That's definitely the worst problem there, unless there's some roll-up logging message later which aggregates the “I tried this and that and none of it worked” into a single message. I'd be surprised if such a message existed — it would demonstrate more forethought than is usual for a software developer! — but it's possible I guess.

    The bitching about JAVA_TOOL_OPTIONS is just at entirely the wrong logging level. That ought to be DEBUG, not ERROR…



  • @LaoC Well, the thing is that, especially given how everything creates thousands of dreads these days, you'll waste a lot of memory without overcommit. Because programs totally do allocate memory they won't end up using. The biggest offender is stacks. Since the stack has to be contiguous memory, when starting a thread the process reserves a block of memory that should be large enough for everybody. Default is 8 MiB. But most service threads don't use more than a few kilobytes. So unless all programs on the system carefully set their stack sizes to what they actually need, you'll waste a lot of memory. On the dread-heavy system I develop on I've seen things like the memory being overcommitted 10 times.
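
    As an aside, a minimal sketch of what setting a thread's stack size explicitly looks like with POSIX threads; the 64 KiB figure is a made-up guess for a small worker, not a recommendation:

    #include <pthread.h>
    #include <cstdio>

    static void* worker(void*) {
        std::puts("tiny worker running");
        return nullptr;
    }

    int main() {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        // 64 KiB instead of the 8 MiB default; must stay >= PTHREAD_STACK_MIN
        pthread_attr_setstacksize(&attr, 64 * 1024);

        pthread_t t;
        pthread_create(&t, &attr, worker, nullptr);
        pthread_join(t, nullptr);
        pthread_attr_destroy(&attr);
    }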


  • Discourse touched me in a no-no place

    @Bulb said in WTF Bites:

    Default is 8 MiB.

    That's chunky. And it's all because some people decided that the heap was “too slow” and stopped putting large objects in the space designated for them…


  • BINNED

    @LaoC said in WTF Bites:

    @topspin said in WTF Bites:

    @LaoC said in WTF Bites:

    @topspin said in WTF Bites:

    @dkf said in WTF Bites:

    @topspin If you're allowing sizes to be negative, then tell me how to go about allocating an array that can contain -78 elements. And what it means too.

    What does it mean to allocate size_t(-78) elements?
    The former would just cause an error, because you can trivially check for that; the latter would OOM crash, and yet you don't encode that constraint into the type system.

    It only crashes if you ignore the error that would result from the former, too, and that can happen for any size allocation.

    But you don't actually get an error. You only get an error if Linux decides to be nice to you, otherwise it tells you "sure, have all the memory you want. I'll kill you if you touch it though."

    Which of the two behaviors you want is your choice. Depends on your workload which one makes more sense.

    It's the choice of the system administrator, not of the program. So yes, it is entirely possible that the program does check for the error case but still gets killed.
    Anyway, that's completely beside my point that checking for < 0 is trivial to do and not a good argument for causing a billion casts and / or bugs everywhere else.

    Also, allocating a size of 0 is nonsensical (though technically allowed) and you wouldn't encode that into the type system either.


  • Discourse touched me in a no-no place

    @topspin said in WTF Bites:

    you wouldn't encode that into the type system either

    :sideways_owl:



  • @dkf said in WTF Bites:

    @Bulb said in WTF Bites:

    :wtf:1: Notice the severity. The error message (unable to sign jar: java.io…) is INFO, followed by something on ERROR level that is actually just a note, maybe a warning, because the options clearly don't break anything.

    That's definitely the worst problem there, unless there's some roll-up logging message later which aggregates the “I tried this and that and none of it worked” into a single message. I'd be surprised if such a message existed — it would demonstrate more forethought than is usual for a software developer! — but it's possible I guess.

    The bitching about JAVA_TOOL_OPTIONS is just at entirely the wrong logging level. That ought to be DEBUG, not ERROR…

    I've moved a bit further with the build and it seems to be standard behaviour of Maven. The plugin/tool that fails prints the actual problem it had on standard output, often at the wrong level or not through the logging mechanism at all; then Maven prints the summary, and then the error from the exception that the plugin/tool threw to stop the build.

    So you do see an error at the end, but it just tells you something generic like “Installation of product com.company.project.bflmpsvz.customer.special.whatever failed”, and then you have to scroll back to before the summary, where it tells you, without any log level at all, that it requires ‘com.company.project.bflmpsvz [someversion]’ but it could not be found. Which is a bit better, but …

    … well, the package does exist in ~/.m2/repository, but instead of x.y.z.timestamp there is only x.y.z-SNAPSHOT … and I don't understand this tool well enough to know which side is wrong or why.


  • BINNED

    @dkf said in WTF Bites:

    @topspin said in WTF Bites:

    you wouldn't encode that into the type system either

    :sideways_owl:

    In Ada (I think) you could specify that the range of sizes is strictly positive, excluding zero. If you think you should use unsigned types because "allocating -78 elements doesn't make sense", then you should also exclude zero. Haven't seen anyone attempt to do that in C or C++. They only fuss around with signed / unsigned because "it saves one bit", then create a ton of casts / bugs because -1 > 0u.
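
    A rough C++ sketch of what actually encoding that constraint could look like; PositiveSize is a hypothetical name, not a standard or proposed type:

    #include <cstddef>
    #include <stdexcept>

    // Rejects zero and negative values once, at the boundary, so everything
    // downstream can accept a PositiveSize instead of re-checking.
    class PositiveSize {
    public:
        explicit PositiveSize(long long n) {
            if (n <= 0)
                throw std::invalid_argument("size must be strictly positive");
            value_ = static_cast<std::size_t>(n);
        }
        std::size_t get() const { return value_; }
    private:
        std::size_t value_ = 0;
    };

    // PositiveSize(-78) and PositiveSize(0) both throw at the call site,
    // before any allocator ever sees a wrapped-around huge value.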


  • Discourse touched me in a no-no place

    @Bulb said in WTF Bites:

    … well, the package does exist in ~/.m2/repository, but instead of x.y.z.timestamp there is only x.y.z-SNAPSHOT … and I don't understand this tool well enough to know which side is wrong or why.

    Snapshot publishing? That's weird IME, which is why the only snapshot versions I'll ever rely on in my builds are ones that are directly part of the current build (in a multi-module project, natch).



  • @dkf

    1. I don't know what I am doing. I am waving a dead chicken over it and seeing whether it's going to work.
    2. There are two directories: core and customer. I can run Maven in either and it gives me the corresponding OSGi bundle. Since all the customer-specific parts include the core, I thought that the pom.xml in customer references the one in core, but it actually does not. And now I need to make a change in core, but test whether it works in the customer version. So I need to use the locally-built artifact.
    3. Previously I copied the option -Dtycho.localArtifacts=ignore from the build server, but the result was that it ignored some build failures and pulled the versions from the update site instead. But I need the opposite: to only use the local artifacts, through ~/.m2/repository, because the projects don't reference each other directly.

  • Discourse touched me in a no-no place

    @Bulb Hmm, it might work if you do mvn install in core (which installs the build product into the local repo — the one in ~/.m2/repository) before building the customer. It sounds a lot like they were trying to do a multi-module project without actually doing a multi-module project (:headdesk:).



  • @dkf said in WTF Bites:

    @Bulb Hmm, it might work if you do mvn install in core (which installs the build product into the local repo — the one in ~/.m2/repository) before building the customer.

    That's what I am doing. Running mvn install in core, then running mvn install in customer. But it's not finding the main module from core :mlp_shrug:.

    It sounds a lot like they were trying to do a multi-module project without actually doing a multi-module project (:headdesk:).

    core is a multi-module project, and customer is also a multi-module project. But they are not connected.

    It also might be a new thing. There was a big refactoring a couple of months ago that split the modules into the core and then a section for each customer.

    Update: Hm, thinking about it, a -SNAPSHOT sorts before the version it references, so maybe I just need to bump the version (correctly).


  • Considered Harmful

    @Bulb said in WTF Bites:

    @topspin said in WTF Bites:

    Anyways, it doesn’t matter. Checking for negative size is trivial.

    Checking one side (unsigned) is less work than checking both sides (signed) and since range checking is a very common operation, it does actually make a difference.

    You can check sign and magnitude with one AND.
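
    Presumably something along these lines, which works when the upper bound is a power of two (a sketch, not anyone's actual code from the thread):

    #include <cstdint>

    // With a power-of-two limit, masking off the low bits tests
    // "0 <= x < limit" in a single AND: a negative x has its high bits set
    // (two's complement), and so does any x >= limit.
    static bool in_range_pow2(std::int64_t x, std::int64_t limit) {
        return (x & ~(limit - 1)) == 0; // limit must be a power of two
    }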


  • Notification Spam Recipient

    @Bulb said in WTF Bites:

    Default is 8 MiB. But most service threads don't use more than a few kilobytes.

    Criminey, and I was worrying about the stack overflowing at 4 KiB not too long ago...



  • @Bulb said in WTF Bites:

    The biggest offender is stacks. Since the stack has to be contiguous memory, when starting a thread the process reserves a block of memory that should be large enough for everybody. Default is 8 MiB. But most service threads don't use more than a few kilobytes. So unless all programs on the system carefully set their stack sizes to what they actually need, you'll waste a lot of memory. On the dread-heavy system I develop on I've seen things like the memory being overcommitted 10 times.

    If you think stack overcommitting is bad, don't ever look at Linux.

    Since it's Unix-like, fork() is used frequently. Without overcommitting, it would have to allocate as much memory to the new process as the original process had access to, just in case the new process decided to overwrite everything. So unless you have buckets of memory, disabling overcommit is a non-viable option (I did try it once. The system crashed hard just a few seconds later.)

    But if you don't disable overcommit, Linux will let malloc() succeed even if you ask for far more memory than is even in the machine. Except your program will crash when it tries to actually, you know, access that memory. So there is no guarantee that even a properly-written program doesn't crash with an OoM exception. (:wtf: #1)

    Oh, and when memory is running out, the Linux kernel solves the problem by killing a process. How does it choose which process to kill? Heuristics. Does it get it wrong sometimes? You bet. So a memory leak in another process can make your program crash. (:wtf: #2)

    As an embedded systems designer, the fact that this is considered normal makes me go :doing_it_wrong:
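
    A hedged demonstration of :wtf: #1, assuming Linux's default overcommit heuristic (with overcommit disabled, the mallocs fail up front instead). Don't run this anywhere you care about:

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <vector>

    int main() {
        const std::size_t chunk = 1ull << 30; // 1 GiB per request
        std::vector<char*> blocks;
        for (int i = 0; i < 64; ++i) { // ask for 64 GiB total
            char* p = static_cast<char*>(std::malloc(chunk));
            if (p == nullptr) {
                std::printf("malloc failed at chunk %d\n", i);
                return 1;
            }
            blocks.push_back(p); // "succeeds" far past physical RAM
        }
        std::puts("64 GiB 'allocated' just fine; now touching it...");
        for (char* p : blocks)
            std::memset(p, 1, chunk); // here is where the OOM killer strikes
    }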


  • Considered Harmful

    @Zerosquare said in WTF Bites:

    @Bulb said in WTF Bites:

    The biggest offender is stacks. Since the stack has to be contiguous memory, when starting a thread the process reserves a block of memory that should be large enough for everybody. Default is 8 MiB. But most service threads don't use more than a few kilobytes. So unless all programs on the system carefully set their stack sizes to what they actually need, you'll waste a lot of memory. On the dread-heavy system I develop on I've seen things like the memory being overcommitted 10 times.

    If you think stack overcommitting is bad, don't ever look at Linux.

    Since it's Unix-like, fork() is used frequently. Without overcommitting, it would have to allocate as much memory to the new process as the original process had access to, just in case the new process decided to overwrite everything. So unless you have buckets of memory, disabling overcommit is a non-viable option (I did try it once. The system crashed hard just a few seconds later.)

    But if you don't disable overcommit, Linux will let malloc() succeed even if you ask for far more memory than is even in the machine. Except your program will crash when it tries to actually, you know, access that memory. So there is no guarantee that even a properly-written program doesn't crash with an OoM exception. (:wtf: #1)

    Oh, and when memory is running out, the Linux kernel solves the problem by killing a process. How does it choose which process to kill? Heuristics. Does it get it wrong sometimes? You bet. So a memory leak in another process can make your program crash. (:wtf: #2)

    As an embedded systems designer, the fact that this is considered normal makes me go :doing_it_wrong:

    The Linux kernel is not optimized for, or actually well-suited to, embedded. That's the point where Tanenbaum's endless complaints start applying.

    So, obviously, switch to HURD...


  • Notification Spam Recipient

    Status: Google Drive is invalidating all shared-with-a-link items created before some date. It has a page you can use to review the affected items at:

    https://drive.google.com/drive/update-files

    This is not the WTF.

    In reviewing some schoolwork I had captured some time ago, I found a small gem:

    0f54c198-f233-46a5-974d-9a62ac98c815-image.png

    Ah yes, I have used my knowledge of Java's Swing tools a lot since then...



  • @topspin said in WTF Bites:

    If you think you should use unsigned types because "allocating -78 elements doesn't make sense", then you should also exclude zero.

    Nope. Allowing the allocation of zero bytes is useful. Otherwise you can't write something like array = realloc(array, nb_elems * sizeof(*array)); for dynamically-resizing arrays, if the array can be empty.
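
    For instance, a sketch of that resize idiom (written as C++, but the same shape as the C in the post); note that what realloc does with a size of 0 is only loosely specified, so it may return NULL or a freeable non-null pointer:

    #include <cstdlib>

    // Resize a dynamic int array to nb_elems entries. Assigning realloc's
    // result to a temporary first matters: on failure it returns NULL and
    // leaves the old block valid, so array = realloc(array, ...) would
    // leak the old block.
    static int* resize_or_keep(int* array, std::size_t nb_elems) {
        void* p = std::realloc(array, nb_elems * sizeof(int));
        if (p == nullptr && nb_elems > 0)
            return array; // allocation failed; old contents intact
        return static_cast<int*>(p); // may be NULL when nb_elems == 0
    }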


  • Considered Harmful

    @Tsaukpaetra said in WTF Bites:

    Status: Google Drive is invalidating all shared-with-a-link items created before some date. It has a page you can use to review the affected items at:

    https://drive.google.com/drive/update-files

    This is not the WTF.

    In reviewing some schoolwork I had captured some time ago, I found a small gem:

    0f54c198-f233-46a5-974d-9a62ac98c815-image.png

    Ah yes, I have used my knowledge of Java's Swing tools a lot since then...

    I used GBL on purpose at my own option one time. Last use of Swing was as an example of where Java breaks its good rules, specifically exceptions not being used for flow control, like Swing does.


  • Considered Harmful

    @Zerosquare said in WTF Bites:

    @topspin said in WTF Bites:

    If you think you should use unsigned types because "allocating -78 elements doesn't make sense", then you should also exclude zero.

    Nope. Allowing the allocation of zero bytes is useful. Otherwise you can't write something like array = realloc(array, nb_elems * sizeof(*array)); for dynamically-resizing arrays, if the array can be empty.

    Well, in my world there's still bytes for the array reference itself, and its meta-typy-stuff. But the above holds for suicidally insane languages.


  • Considered Harmful

    @Bulb said in WTF Bites:

    Yesterday the build server finally got fixed, so I scheduled the two builds I needed. The first passed. The second failed with

    [ERROR] Failed to execute goal com.company.maven:robust-jarsigner:1.4.0-SNAPSHOT:sign (PROJECT Security certificate) on project com.company.project.bflmpsvz.whatever.xmlbeans: Failed executing 'cmd.exe /X /C "D:\Apps\jdk1.8.0_172\jre\..\bin\jarsigner.exe -storetype Windows-ROOT -sigalg SHA1withRSA -digestalg SHA1 -strict -J-Dhttp.proxyHost=the.build.server.company.com -J-Dhttp.proxyPort=3128 -sigfile PROJECT -tsa http://tsa.starfieldtech.com D:\Jenkins\workspace\nch_feature_whatever\customer\Component\Bflmpsvz-XMLBeans-Whatever\target\com.company.project.bflmpsvz.whatever.xmlbeans-2.0.0-SNAPSHOT.jar "PROJECT Core Code Signing""' - exitcode 1 -> [Help 1]
    

    OK, but why? Well, let's go a bit higher in the log (this is after the final summary). That message is not repeated there, but we have:

    [INFO] --- robust-jarsigner:1.4.0-SNAPSHOT:sign (PROJECT Security certificate) @ com.company.project.bflmpsvz.whatever.xmlbeans ---
    [INFO] Using timestamping server http://sha256timestamp.ws.symantec.com/sha256/timestamp
    [INFO] jarsigner: unable to sign jar: java.io.IOException: Server returned HTTP response code: 503 for URL: http://sha256timestamp.ws.symantec.com/sha256/timestamp
    [ERROR] Picked up JAVA_TOOL_OPTIONS: -Dmaven.ext.class.path="D:\Jenkins\workspace\nch_feature_whatever@tmp\withMaven8f93d76e\pipeline-maven-spy.jar" -Dorg.jenkinsci.plugins.pipeline.maven.reportsFolder="D:\Jenkins\workspace\nch_feature_whatever@tmp\withMaven8f93d76e" 
    [INFO] Using timestamping server http://tsa.starfieldtech.com
    [INFO] jarsigner: unable to sign jar: java.io.FileNotFoundException: http://tsa.starfieldtech.com
    [ERROR] Picked up JAVA_TOOL_OPTIONS: -Dmaven.ext.class.path="D:\Jenkins\workspace\nch_feature_whatever@tmp\withMaven8f93d76e\pipeline-maven-spy.jar" -Dorg.jenkinsci.plugins.pipeline.maven.reportsFolder="D:\Jenkins\workspace\nch_feature_whatever@tmp\withMaven8f93d76e" 
    

    :wtf:1: Notice the severity. The error message (unable to sign jar: java.io…) is INFO, followed by something on ERROR level that is actually just a note, maybe a warning, because the options clearly don't break anything.
    :wtf:2: The error at the end is not repeated from the build log, it is a different one, which makes it harder to go back and look up the context.
    :wtf:3: It wasn't the first jar to be signed. Two or three succeeded before this.

    If it's not a red herring entirely, the issue seems to be that the signatory's website doesn't exist. Does it?


  • ♿ (Parody)

    @dkf said in WTF Bites:

    @topspin If you're allowing sizes to be negative, then tell me how to go about allocating an array that can contain -78 elements. And what it means too.

    You've entered..... The Gribnit zone.


  • Considered Harmful

    @boomzilla said in WTF Bites:

    @dkf said in WTF Bites:

    @topspin If you're allowing sizes to be negative, then tell me how to go about allocating an array that can contain -78 elements. And what it means too.

    You've entered..... The Gribnit zone.

    The first 78 reads or writes cause a fatal error, then it's a no-op.


  • Considered Harmful

    @Bulb said in WTF Bites:

    @LaoC Well, the thing is that, especially given how everything creates thousands of dreads

    zG

    these days, you'll waste a lot of memory without overcommit. Because programs totally do allocate memory they won't end up using.

    Good point, but it still doesn't apply to size_t(-78) allocations unless you have 18 EB of swap configured¹. Linux's default is to "deny blatantly invalid requests" even while allowing overcommit. In fact that would reject every sensible allocation amount that had accidentally become negative and been cast to unsigned.
    If you change the default to allow allocations like that, you get what you asked for.

    ¹ I'm quite sure some other memory limit won't let you do that anyway, even if the page tables would fit your RAM.



  • @Gribnit said in WTF Bites:

    If it's not a red herring entirely, the issue seems to be that the signatory's website doesn't exist. Does it?

    The first signatory's website does exist, but encountered a random hiccup. The alternate signatory's website does indeed not exist at all.



  • @Bulb said in WTF Bites:

    @dkf said in WTF Bites:

    @Bulb Hmm, it might work if you do mvn install in core (which installs the build product into the local repo — the one in ~/.m2/repository) before building the customer.

    That's what I am doing. Running mvn install in core, then running mvn install in customer. But it's not finding the main module from core :mlp_shrug:.

    Ugh, that might look like a good idea, but it's... fragile. And I don't mean "house of glass" fragile, more like "house of thin ice" fragile. Be glad that it failed at this stage and not a few months down the road.

    It sounds a lot like they were trying to do a multi-module project without actually doing a multi-module project (:headdesk:).

    This.

    core is a multi-module project, and customer is also a multi-module project. But they are not connected.

    It also might be a new thing. There was a big refactoring couple of months ago that split the modules into the core and then section for each customer.

    There are two correct solutions here:

    1. Create a parent pom.xml to make everything one multi-module project.
    2. Separate the builds and make sure that the customer project always depends on a proper version of core from the repository.

    Everything else is a recipe for pain. BTDT.

    Update: Hm, thinking about it, a -SNAPSHOT is before the version it references, so maybe I just need to bump the version (correctly).

    That's pretty much the only way to do it, but it's trickier than it sounds and still fragile.

    But if you really want to go down the thorny road, one important tip: make sure that the local Maven repository is part of the workspace (the standard Maven plugin has a checkbox for that). Because what will happen if you use ~/.m2/repository and two (three, four, ...) builds run at the same time? Fun! Lots and lots of fun!


  • Discourse touched me in a no-no place

    @boomzilla said in WTF Bites:

    @dkf said in WTF Bites:

    @topspin If you're allowing sizes to be negative, then tell me how to go about allocating an array that can contain -78 elements. And what it means too.

    You've entered..... The Gribnit zone.

    https://youtu.be/iXBwt-Z6Jn4?t=3


  • BINNED

    Trying to upload a bunch of files to a project partner's SharePoint (sigh). I get these lovely error messages:

    Sorry, for some reason this document couldn't upload. Try again later or contact your administrator.
    Sorry, for some reason this document couldn't upload. Try again later or contact your administrator.
    Sorry, for some reason this document couldn't upload. Try again later or contact your administrator.
    Sorry, for some reason this document couldn't upload. Try again later or contact your administrator.
    

    Some reason. How very informative.
    I do try again and now for "some reason" it works. Network hiccup? Who knows.

    An hour later I again want to upload a different bunch of files and it fails again. Try again, fails again. Since it's proven to be nondeterministic I try again 5 times, only for it to fail every time. Well, maybe this time there is some other reason it isn't telling me about.

    I try to upload the files individually and all work but one, which repeatedly fails with:

    Bildschirmfoto 2021-07-28 um 15.20.12.png

    Very technical details, much wow.
    Of course the troubleshoot link does nothing.

    So since the one file that keeps failing is the only one that is just barely larger than 50MB, I figure that could be the issue. I zip it and it works.
    Is there a file size limit? Well, who knows, but if there is it surely isn't telling you about it.

    Why does nothing tell you what went wrong anymore so you can fix it? When did we decide "Oops :(" is an acceptable error message? At least tell me whether the error may be intermittent or whether retrying will continue to fail.


  • Discourse touched me in a no-no place

    @topspin said in WTF Bites:

    Why does nothing tell you what went wrong anymore so you can fix it? When did we decide "Oops :(" is an acceptable error message?

    An administrator has been notified. Please remain where you are for our reeducation team.



  • @topspin said in WTF Bites:

    When did we decide "Oops :(" is an acceptable error message?

    About the time all the computery stuff got so complicated that most users don't have a clue what they are doing any more, and most developers are not significantly better off. Because, you see, technical details won't be helpful for people who don't have a clue what they are doing even if everything works :half-trolleybus-br:.


  • BINNED

    @Bulb said in WTF Bites:

    @topspin said in WTF Bites:

    When did we decide "Oops :(" is an acceptable error message?

    About the time all the computery stuff got so complicated that most users don't have a clue what they are doing any more, and most developers are not significantly better off. Because, you see, technical details won't be helpful for people who don't have a clue what they are doing even if everything works :half-trolleybus-br:.

    That’s why it’s called technical details. If you don’t care for the details, don’t click on them.

    But don’t make an error message that says

    Oops :(

    Technical details Oopsie Doopsie.


  • @topspin said in WTF Bites:

    When did we decide "Oops :(" is an acceptable error message?

    "We cannot tell you more in case you're an evil hacker who intends to use this as a side channel."

    At least that's my reasoning in one case where I'm guilty of it myself. I'm returning a vague "There's been an error on the server" instead of the full output of the back-end when the latter crashes despite all the input parameter validation, because I'm afraid that a smarter person than me could see the backtrace and use a pointer leak to achieve code execution by submitting a tricky combination of input values.

    In my defence, there is also a suggestion to change the input values to something less crash-y so that the user wouldn't retry the request that would just crash the back-end again.


  • Discourse touched me in a no-no place

    @aitap Sometimes you see “Ooops” messages when the fault is actually that the back-end expects one capitalization+pluralization of parameter names but the front end decides to produce another one. Because fuck you, that's why.


  • Discourse touched me in a no-no place

    @aitap said in WTF Bites:

    "We cannot tell you more in case you're an evil hacker who intends to use this as a side channel."

    Right. Error messages producing detailed technical errors such as stack traces can potentially give away information useful to hackers.
    OWASP etc. recommend against it, and it'll get picked up as an issue by security testers and/or automated security scanners.
    It's also completely useless to users who generally can't do anything with that anyway. It's just scary nonsense to non-technical people and even if the user is technical they aren't going to be able to fix anything on someone else's server.

    Obviously, for expected errors such as "this file is larger than the configured limit", display the message; but otherwise there isn't a useful message to show other than "something unexpected happened".


  • Discourse touched me in a no-no place

    @loopback0 said in WTF Bites:

    Obviously, for expected errors such as "this file is larger than the configured limit", display the message; but otherwise there isn't a useful message to show other than "something unexpected happened".

    If there's something that the client should do differently, the client should be told what it is. Validation failures really should be reported to clients! OTOH, Security is Different™. There the failure cause should not be reported, and the norm seems to be to also not log it at all. Because why would you want anyone at all to ever figure out WTF is going wrong?! 🍊


  • Considered Harmful

    @topspin said in WTF Bites:

    @Bulb said in WTF Bites:

    @topspin said in WTF Bites:

    When did we decide "Oops :(" is an acceptable error message?

    About the time all the computery stuff got so complicated that most users don't have a clue what they are doing any more, and most developers are not significantly better off. Because, you see, technical details won't be helpful for people who don't have a clue what they are doing even if everything works :half-trolleybus-br:.

    That’s why it’s called technical details. If you don’t care for the details, don’t click on them.

    But don’t make an error message that says

    Oops :(

    Technical details Oopsie Doopsie.

    Yes, boss, we added error handling. All possible errors are now being handled

    #include <cstdio>
    void real_main(); // the rest of the program, elsewhere

    int main() {
        try {
            real_main();
        } catch (...) {
            std::puts("oopsie!");
        }
    }
    

  • Discourse touched me in a no-no place

    @dkf said in WTF Bites:

    … the norm seems to be to also not log it at all. Because why would you want anyone at all to ever figure out WTF is going wrong?!

    Not logging such things on the server where it's useful to the people who can do something about it is a different problem though.


  • Discourse touched me in a no-no place

    @loopback0 said in WTF Bites:

    Not logging such things on the server where it's useful to the people who can do something about it is a different problem though.

    Typically, you don't log that because it produces a lot of logging output otherwise (like, seriously large amounts of logging). Which is fair enough. But discovering how to switch on logging of the details when you're trying to work out why the client you're writing isn't working… that's what's actually hard (especially if you want to not get so much information that you can't find the useful bits in the shit torrent).


  • Discourse touched me in a no-no place

    @dkf I guess I misunderstood the part of your post that I replied to because I wasn't suggesting logging anything that results in problematic quantities of logs being written.



  • @loopback0 said in WTF Bites:

    Right. Error messages producing detailed technical errors such as stack traces can potentially give away information useful to hackers.
    OWASP etc recommend against it, and it'll get picked up as an issue by security testers and/or automated security scanners.

    You should not send a backtrace to the user. It isn't useful to a legitimate user and might be useful to an attacker. But that's not what was being discussed.

    @loopback0 said in WTF Bites:

    Obviously, for expected errors such as "this file is larger than the configured limit", display the message; but otherwise there isn't a useful message to show other than "something unexpected happened".

    Which is the problem here. It does not show even expected errors like that. Because … because excavator, that's why.

    … almost nobody bothers to do proper requirement analysis these days, so they don't understand which errors make sense to the user and which don't. Nor proper security analysis to tell which information would actually be useful to an attacker and should really be withheld from error messages, and which would not.


  • Discourse touched me in a no-no place

    @Bulb said in WTF Bites:

    Which is the problem here. It does not show even expected errors like that. Because … because excavator, that's why.

    What doesn't? The error that kicked off this discussion was an unexpected error.



  • @loopback0 said in WTF Bites:

    @Bulb said in WTF Bites:

    Which is the problem here. It does not show even expected errors like that. Because … because excavator, that's why.

    What doesn't? The error that kicked off this discussion was an unexpected error.

    As far as I can tell it was a perfectly expected validation error, exceeding the configured maximum upload size:

    @topspin said in WTF Bites:

    So since the one file that keeps failing is the only one that is just barely larger than 50MB, I figure that could be the issue. I zip it and it works.
    Is there a file size limit? Well, who knows, but if there is it surely isn't telling you about it.


  • Discourse touched me in a no-no place

    @Bulb Our intrepid reporter guessed at the cause. He also changed the file type, and time elapsed for an issue already identified as intermittent.


  • BINNED

    @loopback0 said in WTF Bites:

    @Bulb Our intrepid reporter guessed at the cause. He also changed the file type, and time elapsed for an issue already identified as intermittent.

    The file type of the files that succeeded and the one that didn't is the same.

    The "I don't know if the issue is actually intermittent or not" part is what I'm complaining about, because it seems dubious. There are a lot of ways to report a useful error that would let me determine the actual cause, short of dumping a stack trace.


  • Considered Harmful

    Nag popup, asking for feedback.

    Options are:

    • Yes, I like it, leave a positive review.
    • No, I don't like it, tell us why (privately so as not to affect our review scores)
    • Ask me later

    There's no fuck off forever and leave me alone button. :fu:


  • Banned

    @topspin said in WTF Bites:

    But don’t make an error message that says

    Oops :(

    Technical details Oopsie Doopsie.

    In the old times, the error message would say "An error occurred. Details: Unknown error."

    If you ask me, I consider "oops :(" an improvement. It's just as useless but it makes it easier to explain my frustration with other developers to my family.


  • Considered Harmful

    For web, we generally don't (or shouldn't) show any technical information to the user, as any user is potentially malicious. At best, they get a reference number that can be used by internal support to look up the relevant logs, or - in the very rare case their error is part of the expected workflow - a specially composed "friendly" message.
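
    A rough, framework-free sketch of that reference-number pattern; every name here is made up for illustration:

    #include <cstdio>
    #include <random>
    #include <string>

    static std::string new_reference_id() {
        static std::mt19937_64 rng{std::random_device{}()};
        char buf[17];
        std::snprintf(buf, sizeof buf, "%016llx",
                      static_cast<unsigned long long>(rng()));
        return buf;
    }

    static std::string handle_failure(const std::string& internal_detail) {
        const std::string id = new_reference_id();
        // Full detail goes to the server log, findable by id.
        std::fprintf(stderr, "[%s] %s\n", id.c_str(), internal_detail.c_str());
        // The user gets no stack trace, just something to quote at support.
        return "Something went wrong. Reference: " + id;
    }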

    Of course, I see it all the time: the friendly message is shown for unknown errors and in no way reflects what actually happened, just because the developer couldn't conceive that anything else might go wrong.


  • BINNED

    @error said in WTF Bites:

    For web, we generally don't (or shouldn't) show any technical information to the user, as any user is potentially malicious. At best, they get a reference number that can be used by internal support to look up the relevant logs, or - in the very rare case their error is part of the expected workflow - a specially composed "friendly" message.

    Of course, I see it all the time: the friendly message is shown for unknown errors and in no way reflects what actually happened, just because the developer couldn't conceive that anything else might go wrong.

    Yes, I see that bullshit attitude all the time.
    You try to sign up somewhere and you get a "sorry, didn't work" message. Some trial and error leads you to figure out that it didn't accept your password, username, email provider, who knows. Fuck this.
    I don't want a stack trace, I want to know why it didn't work.


  • Considered Harmful

    @topspin I think it's important to make the distinction between form validation and server errors. Or perhaps client-side and server-side errors. If the user has done something wrong, developers should provide immediate and detailed feedback, preferably before the form is submitted. If the server has done something wrong, then an internal resource needs to do the needful and the client just gets the "whoops, we fucked up" message.

