Exceptionally designed Java JSON library



  • The relatively widely known and used json-simple library has this code in it: link

    public Object parse(String s, ContainerFactory containerFactory) throws ParseException {
                StringReader in=new StringReader(s);
                try{
                        return parse(in, containerFactory);
                }
                catch(IOException ie){
                        /*
                         * Actually it will never happen.
                         */
                        throw new ParseException(-1, ParseException.ERROR_UNEXPECTED_EXCEPTION, ie);
                }
        }
    
     public Object parse(Reader in, ContainerFactory containerFactory) throws IOException, ParseException{
                reset(in);
                LinkedList statusStack = new LinkedList();
                LinkedList valueStack = new LinkedList();
               
                try{
                // Snip: parsing code
                }
                catch(IOException ie){
                        throw ie;
                }
               
                throw new ParseException(getPosition(), ParseException.ERROR_UNEXPECTED_TOKEN, token);
        }


  • My guess is that the reason for this is Java's idiotic rule that you have to declare every checked exception your code could ever throw. This creates a strong incentive to reduce the number of declared exceptions, so your library is marginally less horrible to use. Thus, shit like that.



  • That does explain most of it except for the last catch block in the snippet (TRWTF).



  • Yeah, looks like dead code.

    I don't remember: does Java mangle the stack trace in exceptions when you re-throw them explicitly (the way C#'s `throw ex;` does)? Maybe they wanted to get rid of the detailed stack trace?



  • Java captures the stack trace when you construct the exception, not when you throw it.
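
    A quick sketch of what that looks like (class name made up): the trace recorded in the exception points at where the object was constructed, not at the throw.

    import java.io.IOException;

    public class TraceDemo {
        // The stack trace is filled in here, in the static initialiser,
        // because that's where the exception object is constructed...
        static final IOException PREMADE = new IOException("built early");

        public static void main(String[] args) throws IOException {
            // ...so the trace printed for this uncaught throw shows
            // TraceDemo.<clinit>, not TraceDemo.main.
            throw PREMADE;
        }
    }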



  • Right. It's WTF.



  • @marczellm said:

    That does explain most of it except for the last catch block in the snippet (TRWTF).

    Hm... Wouldn't it change the exception type to IOException from whatever subclass it actually was? Not sure it makes any difference; I haven't done Java in years.


  • Discourse touched me in a no-no place

    The first one is OK; it's catching a checked exception that can't actually occur (unless the parser is buggy as fuck) and doing something that should never actually happen.

    The second one has a dumb catch-and-release in it. There's absolutely no reason for the code to be there; it does nothing observable other than slowing things down (possibly, if the JIT doesn't spot that it's pointless and optimise it away). There are cases where catch-and-release is necessary (e.g. to prevent a superclass exception from masking the failure; see the sketch at the end of this post), but this isn't one of them.

    But you missed the real WTF, close to the bottom of the file.

    catch(Error e){
            status = S_IN_ERROR;
            throw e;
    }
    There are _very_ few reasons to ever catch an `Error`, and a parser shouldn't ever do it (it's the sort of thing done by an application container). Recovering from an `Error` is usually a matter of praying to God, as they typically represent catastrophic trouble like running out of memory — the most common case — or a corrupt class file — rare, but I've seen it. (OK, it was because I was using a buggy runtime assembler… :smiling_imp:)
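
    Back to the catch-and-release point above, here's a minimal sketch of the legitimate case (one reading of the masking problem, with made-up method names): the specific exception is re-thrown so the broad recover-from-anything handler below can't swallow it.

    import java.io.FileNotFoundException;

    public class CatchAndRelease {
        static void loadConfig() throws FileNotFoundException {
            try {
                readConfigFile();
            } catch (FileNotFoundException e) {
                throw e;                  // must escape: the caller needs to know the file is missing
            } catch (Exception e) {
                useBuiltInDefaults();     // anything else is recoverable right here
            }
        }

        static void readConfigFile() throws FileNotFoundException {
            throw new FileNotFoundException("config.json");   // made-up failure
        }

        static void useBuiltInDefaults() {
            // fall back to compiled-in defaults
        }

        public static void main(String[] args) {
            try {
                loadConfig();
            } catch (FileNotFoundException e) {
                System.out.println("config file missing: " + e.getMessage());
            }
        }
    }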

  • Discourse touched me in a no-no place

    @Maciejasjmj said:

    Wouldn't it change the exception type to IOException from whatever subclass it actually was?

    No. Exceptions are objects, and objects know their own type. Exception trapping is based on the actual type of the exception, not the declared type in the throws clause or the compile-time type of the argument to throw.
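
    A small illustration (names made up): the object keeps its real class through the broader catch, so a caller can still trap the specific type.

    import java.io.FileNotFoundException;
    import java.io.IOException;

    public class RethrowDemo {
        static void parseFile() throws IOException {
            try {
                throw new FileNotFoundException("missing.json");
            } catch (IOException ie) {      // statically typed as IOException here...
                throw ie;                   // ...but the object is still a FileNotFoundException
            }
        }

        public static void main(String[] args) {
            try {
                parseFile();
            } catch (FileNotFoundException e) {
                System.out.println("caught the specific type: " + e.getMessage());
            } catch (IOException e) {
                System.out.println("never reached for this example");
            }
        }
    }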

    @cartman82 said:

    Right. It's WTF.

    Attitudes to exceptions are one of the big differences between Java and C# communities. Each seems to think the other is wrong… so obviously you're totally wrong. 😛



  • @dkf said:

    Attitudes to exceptions are one of the big differences between Java and C# communities. Each seems to think the other is wrong… so obviously you're totally wrong.

    The useless try/catch in the second part is WTF. Wasn't commenting on the design of Java the language. I'm not knowledgeable enough to have strong opinions on that sort of stuff.



  • @dkf said:

    No. Exceptions are objects, and objects know their own type. Exception trapping is based on the actual type of the exception, not the declared type in the throws clause or the compile-time type of the argument to throw.

    Hm... must have been thinking of something else, but I think there were some compile-time gotchas here. Nevermind.

    @dkf said:

    Recovering from an Error is usually a matter of praying to God, as they typically represent catastrophic trouble like running out of memory

    It's just a bit stupidly-done "catch-all, log and rethrow" pattern. Whatever the parser throws at you, you can be sure it's in an error state afterwards. Of course if you hit an OOM or stack overflow, it's rearranging chairs on the Titanic, but I kinda get the vibe.

    It also seems that the whole thing is flex-based, so maybe someone had more experience with C status codes than Java exceptions.


  • Discourse touched me in a no-no place

    @Maciejasjmj said:

    It's just a bit stupidly-done "catch-all, log and rethrow" pattern.

    Except there's no logging. And log-rethrow is a bad idea unless you really like having the same exception logged multiple times as it bubbles out. Logging at the point when the exception is actually handled, when it isn't going further, is enough.
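
    A sketch of the duplication (layers made up, java.util.logging standing in for whatever logger is in use): one failure, two log entries.

    import java.io.IOException;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class LogRethrowDemo {
        private static final Logger LOG = Logger.getLogger("demo");

        static void lowLevelRead() throws IOException {
            throw new IOException("disk on fire");
        }

        static void middleLayer() throws IOException {
            try {
                lowLevelRead();
            } catch (IOException e) {
                LOG.log(Level.WARNING, "middleLayer saw it", e);   // logged once here...
                throw e;
            }
        }

        public static void main(String[] args) {
            try {
                middleLayer();
            } catch (IOException e) {
                // ...and again where it's actually handled, so the same stack trace
                // shows up twice in the log.
                LOG.log(Level.SEVERE, "handled at the top", e);
            }
        }
    }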


  • Discourse touched me in a no-no place

    @Maciejasjmj said:

    Hm... must have been thinking of something else, but I think there were some compile-time gotchas here. Nevermind.

    Java's refreshingly dynamic there. (About the only place, alas.) Maybe you were thinking of C++, where the programming style has usually indicated that code authors are afraid of using exceptions at all…?



  • // in case a neutrino hits the server



  • @dkf said:

    Except there's no logging.

    Well, it's not logging, but it's setting the error flag on the parser - so if you try to resume parsing, it'll puke (throwing, oddly enough, an unexpected token exception, judging by the S_IN_ERROR case). But still, it prevents reusing the parser without resetting the stream in case of an error.

    @dkf said:

    Maybe you were thinking of C++, where the programming style has usually indicated that code authors are afraid of using exceptions at all…?

    Well, C++'s exceptions are not much more than gotos (and those are eeeevil). You don't get much useful information out of them anyway, you don't have a finally clause, and they do upset the procedural control flow, which is how many C++ programs are written.

    Actually, I think I was thinking about not being able to do something like this prior to Java 7:

    public void f() throws SpecificException
    {
      try
      {
        throw new SpecificException();
      }
      catch (Exception e)
      {
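        // Java 7's "precise rethrow" analysis works out that e can only be a
        // SpecificException here; under Java 6 this line doesn't compile,
        // because e is statically just an Exception and f() doesn't declare it.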
        throw e;
      }
    }
    

    As I said, nevermind.


    Filed under: the throws clause should die in a fire anyway, those things belong to javadocs



  • @Maciejasjmj said:

    Well, C++'s exceptions are not much more than gotos (and those are eeeevil).

    What? C++ exceptions aren't any more (or less) goto-like than those of almost any other language. The three exceptions (hah) I can think of are:

    • SEH, which is resumable
    • Some Lisp condition thing, which is resumable
    • You could argue checked exceptions


  • Unlike most other languages, for which forgetting to include a finally block to close files is about the worst incorrect thing you can do, there are quite frequently used patterns in C++ that result in exception unsafe code that cannot be made safe in a systematic way. Programmers are entirely correct to treat C++ exceptions with a degree of skepticism given the code they're usually dealing with, and not always knowing how their code will be used.

    The good news is that "Constructor does real work" is being increasingly recognized as an antipattern, which is really the biggest of only 2 ways in which exceptions in C++ gain unsafety.


  • Discourse touched me in a no-no place

    @prozacchiwawa said:

    The good news is that "Constructor does real work" is being increasingly recognized as an antipattern, which is really the biggest of only 2 ways in which exceptions in C++ gain unsafety.

    Except that “constructor does work” is common in all sorts of bits of C++, like the whole smart pointer system. I think I've also seen it in many settings that use RAII (in the worst case, having objects that you mention at the start of a block and then never talk about again since all that stuff is magically happening for you). I suppose that might also be what you're talking about with “antipattern”, but I think the C++ community is going to be stuck with that stuff for a long time to come; all that existing code and in-use libraries don't magically un-happen just because it is realised that some of it was a really bad idea in terms of comprehensibility and maintainability.

    I really don't like C++.



  • @prozacchiwawa said:

    ... there are quite frequently used patterns in C++ that result in exception unsafe code that cannot be made safe in a systematic way. Programmers are entirely correct to treat C++ exceptions with a degree of skepticism given the code they're usually dealing with, and not always knowing how their code will be used.

    Well, given the steaming heaps of pointer-laden WTF that old(er) C++ tends to be, I'd say you're right in being skeptical of exceptions in that sort of codebase. If you write C++ like your compiler actually gives a crap about conformance though, being exception-safe isn't nearly as difficult because the hard bits are mostly encapsulated.

    @prozacchiwawa said:

    Unlike most other languages, for which forgetting to include a finally block to close files is about the worst incorrect thing you can do,

    What is with the finally-block brainworms? It's the exception-handling equivalent of having to do it all in C, complete with calling close() yourself on everything. Any language worth its salt should provide some form of scoped resource management... (Python has with, C# has using, and Lisp API designers usually provide macros that act the same way.)

    @prozacchiwawa said:

    The good news is that "Constructor does real work" is being increasingly recognized as an antipattern, which is really the biggest of only 2 ways in which exceptions in C++ gain unsafety.

    So you'd rather have zombie objects roaming around that need an extra init() call to have life breathed into them?

    @dkf said:

    Except that “constructor does work” is common in all sorts of bits of C++, like the whole smart pointer system. I think I've also seen it in many settings that use RAII (in the worst case, having objects that you mention at the start of a block and then never talk about again since all that stuff is magically happening for you). I suppose that might also be what you're talking about with “antipattern”, but I think the C++ community is going to be stuck with that stuff for a long time to come; all that existing code and in-use libraries don't magically un-happen just because it is realised that some of it was a really bad idea in terms of comprehensibility and maintainability.

    RAII is far better than trying to manage resources manually, even in the face of exceptions, as long as you understand what's actually going on. Here's the thing: most folks don't bother with grasping the concepts behind it. Most of the maintainability/comprehensibility problems with RAII come from people who are conceptually muddled about the lifetimes of values, as I see it: they never learned how locals and temporaries should behave.

    @dkf said:

    I really don't like C++.

    Here's the thing: Java folks hate to admit this, but try...finally was a step backwards from RAII.


  • Discourse touched me in a no-no place

    I see that the C++IDF is out today! :trollface:

    @tarunik said:

    Here's the thing: Java folks hate to admit this, but try...finally was a step backwards from RAII.

    When you've got GC, you don't need nearly so much effort on resource management because most of the resources you really deal with on a frequent basis are just memory, and try-with-resources can handle most of the other cases. (This is what you were alluding to earlier when you mentioned Python, C# and Lisp.) The only awkward cases are where you've got something that has a lifespan not linked to a scope and yet which must be deleted correctly, and that's actually pretty rare (and is a case which RAII isn't too good for either).
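
    For the record, the try-with-resources form looks roughly like this (the file name is just an example):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class TwrDemo {
        public static void main(String[] args) throws IOException {
            // The reader is closed automatically when the block exits,
            // whether that's normally or via an exception.
            try (BufferedReader r = Files.newBufferedReader(Paths.get("data.json"))) {
                System.out.println(r.readLine());
            }
        }
    }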

    C++ is mainly notable for being the language which makes exiting the program take a long time and liable to crash. ;)



  • @dkf said:

    and try-with-resources can handle most of the other cases. (This is what you were alluding to earlier when you mentioned Python, C# and Lisp.)

    It can handle it, but I'm with tarunik: RAII handles it better. It handles it without introducing a new scope, it handles it without a bunch of boilerplate syntax, and most importantly it makes doing the wrong thing harder. For example, in Python you can still say f = open(...) and then fail to call f.close() and wind up with a file that is only closed when your GC gets to it. (CPython probably will close it immediately for now.) Same in C# or Java. In C++, you can't make an fstream that is open past the lifetime of the last reference to it unless you go out of your way and allocate it with new or something.

    In C++, the correct thing is easier and nicer than the incorrect thing. In most other languages the reverse is true, even if the difference is relatively minor.
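
    The Java version of the same mistake, for comparison (file name made up):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class LeakDemo {
        public static void main(String[] args) throws IOException {
            InputStream in = new FileInputStream("data.json");
            System.out.println(in.read());
            // Nothing closes the stream; in a long-running program the file
            // descriptor stays open until the GC happens to clean up the
            // stream object, if it ever does.
        }
    }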



  • @dkf said:

    When you've got GC, you don't need nearly so much effort on resource management because most of the resources you really deal with on a frequent basis are just memory,

    True.

    @dkf said:

    and try-with-resources can handle most of the other cases. (This is what you were alluding to earlier when you mentioned Python, C# and Lisp.)

    The problem is not the scoped semantics, but the syntax: hijacking EH syntax for this job was an exceptionally poor language design choice. (Which is why I mentioned with blocks already.)

    @dkf said:

    The only awkward cases are where you've got something that has a lifespan not linked to a scope and yet which must be deleted correctly, and that's actually pretty rare (and is a case which RAII isn't too good for either).

    Agreed that this is the interesting-but-hard case of object lifetime management.

    @EvanED said:

    It can handle it, but I'm with tarunik: RAII handles it better. It handles it without introducing a new scope, it handles it without a bunch of boilerplate syntax, and most importantly it makes doing the wrong thing harder. For example, in Python you can still say f = open(...) and then fail to call f.close() and wind up with a file that is only closed when your GC gets to it. (CPython probably will close it immediately for now.) Same in C# or Java. In C++, you can't make an fstream that is open past the lifetime of the last reference to it unless you go out of your way and allocate it with new or something.

    In C++, the correct thing is easier and nicer than the incorrect thing. In most other languages the reverse is true, even if the difference is relatively minor.


    QFT. RAII, applied consistently and thoughtfully, means that while memory might be harder to handle than in a GCed environment, other resources don't become harder to handle than memory is. Contrast this with, say, Java, where resource leakage (DB connections, files, etc) is far too common a bug.

    The with-block syntax is sort of a 'poor man's RAII' for those who can't stand to have properly block-scoped value-semantics types in their language (Java, I'm looking hard at you for this one; everything may be an object in the strict OO sense, but that does not mean that everything should have reference semantics!).



  • @tarunik said:

    The problem is not the scoped semantics, but the syntax: hijacking EH syntax for this job was an exceptionally poor language design choice. (Which is why I mentioned with blocks already.)

    It's also not just that you have to write a couple extra things: the new scope can actually be pretty disruptive. For example, if you want to read in a couple things from a file and initialize a variable with them, then close the file, then do something with what you read in, you can't actually do that because the scoping is wrong. You either have to split the "read from file" step into a separate function (which may or may not solve the problem) or declare the variable that will carry the information before the with block, meaning you can't actually initialize it with the relevant value.
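
    The Java try-with-resources version of the same annoyance looks something like this (file name made up):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;

    public class ScopeDemo {
        public static void main(String[] args) throws IOException {
            List<String> lines;   // has to be declared out here, so it can't be initialised where it's declared
            try (BufferedReader r = Files.newBufferedReader(Paths.get("data.json"))) {
                lines = r.lines().collect(Collectors.toList());
            }   // the file is closed here
            // ...and the actual work on the data happens outside the resource's scope.
            System.out.println(lines.size());
        }
    }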


  • Discourse touched me in a no-no place

    @tarunik said:

    Contrast this with, say, Java, where resource leakage (DB connections, files, etc) is far too common a bug.

    That's a lot less common than it used to be, as the generational GC is really quite good at cleaning up unreferenced objects proactively (I've had that bite my caches a few times). You have to work quite hard to leak resources now, such as by screwing around with classloaders or thread-locals.

    Value semantics gets difficult elsewhere: all that complexity with copy constructors and move semantics and subtleties with references and so on. The general attitude of Java programmers is that if the JIT detects that it can optimise a particular case to use value semantics, it is free to do so. I know for sure that that's a very different approach from the one C++ programmers take.


  • BINNED

    @tarunik said:

    RAII is far better than trying to manage resources manually, even in the face of exceptions, as long as you understand what's actually going on. Here's the thing: most folks don't bother with grasping the concepts behind it.

    You shouldn't have to grasp concepts to write programs. Didn't you get the memo? Everyone who wants to program should be able to!

    Filed under: trolling @blakeyrat



  • Even in Haskell?


  • Discourse touched me in a no-no place

    Especially in Haskell. Those guys deserve some WTFs.

