Waaah, I don't get paid to use new shiny languages


  • BINNED

    @dkf said in Waaah, I don't get paid to use new shiny languages:

    @boomzilla said in Waaah, I don't get paid to use new shiny languages:

    @NeighborhoodButcher said in Waaah, I don't get paid to use new shiny languages:

    @sloosecannon said in Waaah, I don't get paid to use new shiny languages:

    I just see no need for enforced no-nullness across a language.

    Ok, but you also need to realize that's a subjective feeling - if you don't see a need for something, it doesn't mean there is no need.

    I'm looking forward to when they reinvent checked exceptions. Their structured error types are almost there, except for the need to explicitly handle things everywhere…

    I always was of the opinion that checked exceptions were the correct idea. One of the worst things about C++ exceptions is that the function signature doesn't tell you about them (you have noexcept now because the previous throws specifier was an abomination).
    Except for OOM exceptions, which can theoretically appear almost anywhere and in practice are nonexistent because Linux shoots you in the head instead of allowing you to have a correct program, I'd like to know what a function can throw.

    But the people who actually have experience with Java all said it's horrible.
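    For reference, this is everything a C++ signature can say about exceptions these days (a minimal sketch; the function names are made up, and the commented-out dynamic specification was deprecated in C++11 and removed in C++17):

    #include <stdexcept>

    // The old way: a dynamic exception specification. It was checked at
    // runtime rather than compile time, and is gone as of C++17.
    // void parse(const char* input) throw(std::runtime_error);

    // What survives is all-or-nothing. Either "throws nothing at all"...
    void cleanup() noexcept;       // std::terminate() if it throws anyway

    // ...or "may throw absolutely anything", with no way to say what.
    int parse(const char* input);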


  • BINNED

    @error said in Waaah, I don't get paid to use new shiny languages:

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    Maybe db is a bit overdramatic. But function arguments being null by accident happens all the time. Having a formal guarantee that an argument will never be null is an ENORMOUS improvement. Even just a dynamic check is great, because then you can catch it early before it messes up other stuff, and the stack trace makes sense. But a static check is even better. The mere fact that you have a compiled program you can deploy is by itself indisputable proof that this situation will never happen. Not just never happen unless something so inconceivable happens that it doesn't even make sense to recover anymore. It will never happen, period.

    I've seen more than one third-party program crash with the error "pure virtual function call." As I understand it (and I don't use C++ any more), that means that somewhere, a method that was never defined got called on an abstract class. As far as I know, the language semantics are never supposed to allow this to happen; and yet, it happens. There are no hard guarantees.

    IIRC, it happens when you call a pure virtual function in the base class's constructor (the VMT doesn't yet correspond to the derived class that does implement the function) or destructor (the reverse). Don't do that.
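    A minimal sketch of how that plays out (hypothetical names; the indirect call through a helper is what lets it slip past the compiler's diagnostics):

    #include <iostream>

    struct Base {
        Base() { helper(); }       // indirect call: no compiler warning here
        void helper() { frob(); }  // virtual dispatch uses Base's VMT...
        virtual void frob() = 0;   // ...and Base's slot is the pure one
        virtual ~Base() = default;
    };

    struct Derived : Base {
        void frob() override { std::cout << "never runs during construction\n"; }
    };

    int main() {
        Derived d;  // typically aborts with "pure virtual function call"
    }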


  • ♿ (Parody)

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    But the people who actually have experience with Java all said it's horrible.

    IME that's mainly :kneeling_warthog: talking, and I'm sympathetic to that, but it's really no different from using the pattern matching thing with those optional variables, except it covers a wider range of exceptional circumstances besides nulls.



  • @boomzilla IME it's mostly dealing with things like IO functions, where you always have to try/catch every operation (or give your function its own throws that you then have to propagate up the chain) because some file or network connection may disappear at any moment.


  • Banned

    @error said in Waaah, I don't get paid to use new shiny languages:

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    Maybe db is a bit overdramatic. But function arguments being null by accident happens all the time. Having a formal guarantee that an argument will never be null is an ENORMOUS improvement. Even just a dynamic check is great, because then you can catch it early before it messes up other stuff, and the stack trace makes sense. But a static check is even better. The mere fact that you have a compiled program you can deploy is by itself indisputable proof that this situation will never happen. Not just never happen unless something so inconceivable happens that it doesn't even make sense to recover anymore. It will never happen, period.

    I've seen more than one third-party program crash with the error "pure virtual function call." As I understand it (and I don't use C++ any more), that means that somewhere, a method that was never defined got called on an abstract class. As far as I know, the language semantics are never supposed to allow this to happen; and yet, it happens. There are no hard guarantees.

    You encountered undefined behavior. One of the main problems of C++ (and C) is that it's incredibly simple to trigger UB by accident, writing very simple code. And once that happens even once, anywhere in the program, all guarantees are retracted and the program can do whatever. The code with UB doesn't even have to be called, it just has to exist. In your specific case, there's likely been a bad cast somewhere. The simplest way to cast types in C++ is also the most retarded, the least safe, and the most likely to result in UB.
    Actually, @topspin is probably right. Virtual call in constructor.

    In sane languages (such as Java, C# or Rust), this doesn't happen. As in, you have to try really hard to arrive at a scenario where UB is a concern. When you're not doing any nasty low-level stuff that you shouldn't do anyway, you'll always be fine.
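    To make the "bad cast" scenario concrete, here's about the smallest way to shoot yourself with a C-style cast (hypothetical types; this typically compiles without so much as a warning):

    #include <iostream>

    struct Duck  { virtual void quack() { std::cout << "quack\n"; } virtual ~Duck() = default; };
    struct Brick { int weight = 5; };

    int main() {
        Brick b;
        Duck* d = (Duck*)&b;  // C-style cast: the compiler happily accepts it
        d->quack();           // UB: reads a "vtable pointer" out of Brick's bytes
    }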


  • Considered Harmful

    @Gąska OK, my takeaway is that C++ is even worse than I thought. I'm still mistrustful of anything that claims to eliminate errors completely (or anything fantastical like that).


  • Banned

    @error said in Waaah, I don't get paid to use new shiny languages:

    @Gąska OK, my takeaway is that C++ is even worse than I thought.

    There should be a variant of Hofstadter's Law about C++: it's always worse than you think, even when you take into account this law.

    I'm still mistrustful of anything that claims to eliminate errors completely (or anything fantastical like that).

    And that's exactly why I'm always ranting about people not being able to read. Non-null types don't eliminate all errors. Nobody ever said anything even remotely close to that - definitely not in this thread. Non-null types only eliminate one specific kind of error. They eliminate every single one of those errors completely, but only this one specific kind (unexpected null references). Big difference. But still, removing just this one specific kind of error is a huge productivity boost.


  • ♿ (Parody)

    @hungrier said in Waaah, I don't get paid to use new shiny languages:

    @boomzilla IME it's mostly dealing with things like IO functions, where you always have to try/catch every operation (or give your function its own throws that you then have to propagate up the chain) because some file or network connection may disappear at any moment.

    Yes, exactly.


  • Banned

    To the "empty optional is no different from null" crowd, if you're still unconvinced that there's a fundamental difference, I also have an argument prepared that's deeply rooted in mathematical type theory, if you want to hear it out.


  • ♿ (Parody)

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    To the "empty optional is no different from null" crowd, if you're still unconvinced that there's a fundamental difference, I also have an argument prepared that's deeply rooted in mathematical type theory, if you want to hear it out.

    I'm torn here. I agree with the larger point about the beneficial effect of optionals vs. surprise nulls, but I'm not sure I agree with the statement from a bigger perspective, so I'd like to hear your mathematical argument.


  • BINNED

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    @error said in Waaah, I don't get paid to use new shiny languages:

    @Gąska OK, my takeaway is that C++ is even worse than I thought.

    There should be a variant of Hofstadter's Law about C++: it's always worse than you think, even when you take into account this law.

    That's true, but you'd have to add something like "even then, it's not worse because of the reason Mason mentioned."

    I'm still mistrustful of anything that claims to eliminate errors completely (or anything fantastical like that).

    And that's exactly why I'm always ranting about people not being able to read. Non-null types don't eliminate all errors. Nobody ever said anything even remotely close to that - definitely not in this thread. Non-null types only eliminate one specific kind of error. They eliminate every single one of those errors completely, but only this one specific kind (unexpected null references). Big difference. But still, removing just this one specific kind of error is a huge productivity boost.

    And Rust is removing sources of error about which people have claimed either "I never make that mistake" (resulting in one exploit after another) or, in the opposite camp, "you need a GC to prevent that" (at least for one class of these errors).


  • BINNED

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    To the "empty optional is no different from null" crowd, if you're still unconvinced that there's a fundamental difference, I also have an argument prepared that's deeply rooted in mathematical type theory, if you want to hear it out.

    I'm interested to hear it (mostly because I thought it was type-theoretically obvious, but maybe there's more to it), but I'm not in the crowd you want to convince and I doubt it'll be convincing.



  • @error said in Waaah, I don't get paid to use new shiny languages:

    OK, my takeaway is that C++ is even worse than I thought

    I think this is similar to the speed of light, where regardless of your reference frame, it's always going to be the same.



  • @NeighborhoodButcher said in Waaah, I don't get paid to use new shiny languages:

    @sloosecannon said in Waaah, I don't get paid to use new shiny languages:

    but nothing you've said has succeeded in pivoting that, in my mind, to strongly typed languages that allow for null

    Instead of me copy-pasting, you can google it for yourself, starting with that quote, which comes from the person who invented null in the first place.

    I've read the quote. He's giving himself too much credit. The most salient line is where he says he did it anyway because it was just so easy. The null reference is a completely logical and natural thing to do to represent a common use case in a very efficient way. If he hadn't done it, someone else assuredly would have almost immediately.

    The mistake was in not setting up type systems to distinguish between nullable and guaranteed-not-null references.


  • Discourse touched me in a no-no place

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    But the people who actually have experience with Java all said it's horrible.

    About the only thing wrong with checked exceptions is that they're a bureaucratic mechanism (they were worse before the general cause-tracking mechanism was added). Also, mandatory stack traces are one of those things that are… expensive and annoying and… ever so useful; you actually leave them turned on in prod because otherwise you'll never figure out WTF went wrong.

    The real problem is that some operations really ought to be throwing checked exceptions except they don't. Integer division is a classic such case, as is array indexing.
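    For comparison, C++ makes throwing behavior opt-in for the array case: std::vector::at is bounds-checked while operator[] is not (a minimal sketch):

    #include <iostream>
    #include <stdexcept>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        try {
            std::cout << v.at(10) << "\n";  // bounds-checked: throws
        } catch (const std::out_of_range& e) {
            std::cout << "caught: " << e.what() << "\n";
        }
        // v[10] is the unchecked spelling: no exception, just UB.
    }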



  • @sloosecannon said in Waaah, I don't get paid to use new shiny languages:

    @NeighborhoodButcher said in Waaah, I don't get paid to use new shiny languages:

    @sloosecannon said in Waaah, I don't get paid to use new shiny languages:

    Right, so instead of null checks populating your code, you have Optional checks populating your code?

    Yes, and that means you cannot pass an empty value where it's not explicitly expected, therefore avoiding accidentally passing one. You also cannot get the actual value back without checking for its existence, therefore avoiding accidentally accessing empty values. You also get a means to expressively show, at the language level, whether something is required or not, therefore avoiding unnecessary checks. If you make a mistake, the code won't build, rather than exploding at runtime.

    Maybe I'm crazy, but I really don't see any reason for this to exist. It's just as complex to write (maybe even a little more?) and provides very little benefit, based on everything I've done. In terms of issues I've actually encountered, null problems are like... 0.001% of them.

    What have you been working with? Most developers will tell you that null errors are one of the most common bugs that exist!

    This actually can reduce complexity quite a bit, because the null checks only have to be done once. When you've checked that something is not null, you can strip off the nullability and have an object of a guaranteed-not-null type, which (if this is encoded in the type system) can then be passed around to parameters that expect a guaranteed-not-null type, so there's no need for checks anywhere downstream. Right now, if you don't have the concept of that guarantee in your type system, defensive coding requires you to throw null checks around everywhere just to be sure.
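    In C++ terms this is roughly the "pointer at the boundary, reference everywhere else" idiom (a sketch with hypothetical names):

    #include <iostream>
    #include <string>

    // A reference parameter encodes "guaranteed not null" in the type,
    // so nothing past this point needs a defensive check.
    void render(const std::string& name) {
        std::cout << "hello, " << name << "\n";
    }

    // A pointer parameter is the "nullable" spelling; check exactly once.
    void handleRequest(const std::string* name) {
        if (name == nullptr) {
            std::cout << "bad request\n";
            return;
        }
        render(*name);  // nullability stripped off; render never rechecks
    }

    int main() {
        std::string who = "world";
        handleRequest(&who);
        handleRequest(nullptr);
    }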



  • @dkf You would have a checked exception for every integer division or array access that uses something other than a constant as the second operand? Do you realize how much boilerplate that would add? And it would leave you no safer than before, because no one would take those exceptions seriously.


  • Discourse touched me in a no-no place

    @hungrier said in Waaah, I don't get paid to use new shiny languages:

    IME it's mostly dealing with things like IO functions, where you always have to try/catch every operation (or give your function its own throws that you then have to propagate up the chain) because some file or network connection may disappear at any moment.

    IO code really does have a lot of failure modes. So does anything else that touches the OS.


  • Discourse touched me in a no-no place

    @TwelveBaud said in Waaah, I don't get paid to use new shiny languages:

    Do you realize how much boilerplate that would add

    Yes, but if you don't use divide then you don't need to handle divide-by-zero problems.

    :thinking-ahead:



  • @topspin said in Waaah, I don't get paid to use new shiny languages:

    @dkf said in Waaah, I don't get paid to use new shiny languages:

    @boomzilla said in Waaah, I don't get paid to use new shiny languages:

    @NeighborhoodButcher said in Waaah, I don't get paid to use new shiny languages:

    @sloosecannon said in Waaah, I don't get paid to use new shiny languages:

    I just see no need for enforced no-nullness across a language.

    Ok, but you also need to realize that's a subjective feeling - if you don't see a need for something, it doesn't mean there is no need.

    I'm looking forward to when they reinvent checked exceptions. Their structured error types are almost there, except for the need to explicitly handle things everywhere…

    I always was of the opinion that checked exceptions were the correct idea. One of the worst things about C++ exceptions is that the function signature doesn't tell you about them (you have noexcept now because the previous throws specifier was an abomination).
    Except for OOM exceptions, which can theoretically appear almost anywhere and in practice are nonexistent because Linux shoots you in the head instead of allowing you to have a correct program, I'd like to know what a function can throw.

    But the people who actually have experience with Java all said it's horrible.

    Checked exceptions are a pretty good idea. Java's implementation of them is horrible, and because that's the only place most people know checked exceptions from, they end up thinking "checked exceptions are horrible."


  • BINNED

    @TwelveBaud said in Waaah, I don't get paid to use new shiny languages:

    @dkf You would have a checked exception for every integer division or array access that uses something other than a constant as the second operand? Do you realize how much boilerplate that would add? And it would leave you no safer than before, because no one would take those exceptions seriously.

    Those really are bugs / logic errors, not "exceptions" in the sense of "this shouldn't normally happen but can happen", like network problems.
    The safety of the language requires that these conditions are checked (instead of being unsafe languages' UB), but in this case it's kind of reasonable that they are not "checked exceptions".



  • @dkf Since you're likely not in a position to bubble the exception up into your call signature, it means you have to deal with it right then and there, at a time and place where you probably don't have the context to do so, and you can't let it follow a general "the state is not what I'd like it to be" escalation policy. Sure, you would much prefer that it be dealt with prior to that point, but that's a bug in your caller, and you can't fix your caller.



  • @topspin said in Waaah, I don't get paid to use new shiny languages:

    @error said in Waaah, I don't get paid to use new shiny languages:

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    Maybe db is a bit overdramatic. But function arguments being null by accident happens all the time. Having a formal guarantee that an argument will never be null is an ENORMOUS improvement. Even just a dynamic check is great, because then you can catch it early before it messes up other stuff, and the stack trace makes sense. But a static check is even better. The mere fact that you have a compiled program you can deploy is by itself indisputable proof that this situation will never happen. Not just never happen unless something so inconceivable happens that it doesn't even make sense to recover anymore. It will never happen, period.

    I've seen more than one third-party program crash with the error "pure virtual function call." As I understand it (and I don't use C++ any more), that means that somewhere, a method that was never defined got called on an abstract class. As far as I know, the language semantics are never supposed to allow this to happen; and yet, it happens. There are no hard guarantees.

    IIRC, it happens when you call a pure virtual function in the base class's constructor (the VMT doesn't yet correspond to the derived class that does implement the function) or destructor (the reverse). Don't do that.

    Once again, C++'s object model is completely broken. "The VMT isn't of the object's actual type yet" is pure gibberish. In any sane OO language, the VMT is immutable, fixed as a reference to the correct type from the very beginning.



  • @topspin said in Waaah, I don't get paid to use new shiny languages:

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    @error said in Waaah, I don't get paid to use new shiny languages:

    @Gąska OK, my takeaway is that C++ is even worse than I thought.

    There should be a variant of Hofstadter's Law about C++: it's always worse than you think, even when you take into account this law.

    That's true, but you'd have to add something like "even then, it's not worse because of the reason Mason mentioned."

    Hey, don't go using me to try and claim C++ is less bad than anyone expected! :eek:


  • area_pol

    @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    The null reference is a completely logical and natural thing to do to represent a common use case in a very efficient way.

    Just like the absolutely safe and error-proof nullable pointer in C...



  • @dkf said in Waaah, I don't get paid to use new shiny languages:

    The real problem is that some operations really ought to be throwing checked exceptions except they don't. Integer division is a classic such case, as is array indexing.

    What do you mean by that? Those are examples of programming errors; they shouldn't be throwing exceptions at all, because there's no meaningful way to catch them and continue operation with confidence that the state of the program remains consistent.


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    @error said in Waaah, I don't get paid to use new shiny languages:

    @Gąska OK, my takeaway is that C++ is even worse than I thought.

    There should be a variant of Hofstadter's Law about C++: it's always worse than you think, even when you take into account this law.

    That's true, but you'd have to add something like "even then, it's not worse because of the reason Mason mentioned."

    Hey, don't go using me to try and claim C++ is less bad than anyone expected! :eek:

    My “favourite” kind of C++ fun is this:

    #include <iostream>
    #include <string>
    
    // Two functions (or methods) with these signatures
    void foo(bool value) {
        std::cout << value;
    }
    void foo(std::string value) {
        std::cout << value;
    }
    
    // later
    int main() {
        foo("bar");
        // what is printed?
    }
    


  • @error said in Waaah, I don't get paid to use new shiny languages:

    @error said in Waaah, I don't get paid to use new shiny languages:

    @dkf said in Waaah, I don't get paid to use new shiny languages:

    originates from code that provably never produces an invalid value.

    In a world where everything else honors its contracts 100% and never changes them, this is true. I don't live in that world.

    Invalid data can come from anywhere - user input, database values, web services, modules from npm written by 13 year olds... Your chain of trust in data integrity is as strong as the weakest link in that chain.

    I think as a web developer I'm used to receiving "dirty" values because I'm always taking data from somewhere else and passing it on. It's helped me to develop a healthy paranoia about verifying data at every step...
    With that said, it's exactly the reason I prefer dynamically typed languages. The data I work with can and does take all shapes and forms, and actually defining every possible variation would take me more code and more time than writing the domain logic itself.

    As a backend grunt, I check the data in raw format against a schema and reject the data with an error if it's invalid. Trying to make invalid data fit into the system is a pretty severe sin. The data I shovel in most gigs has to be correct, and if it's not, fondling it to make it fit destroys the data integrity and may even be criminal.


  • BINNED

    @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    @error said in Waaah, I don't get paid to use new shiny languages:

    @Gąska OK, my takeaway is that C++ is even worse than I thought.

    There should be a variant of Hofstadter's Law about C++: it's always worse than you think, even when you take into account this law.

    That's true, but you'd have to add something like "even then, it's not worse because of the reason Mason mentioned."

    Hey, don't go using me to try and claim C++ is less bad than anyone expected! :eek:

    That's not what I said, although I admit the phrasing could be parsed ambiguously.
    I didn't want to say: "because of the reason he mentioned, it's better".
    I did want to say: "it is worse, but not because of the invalid reason he mentioned."



  • @dkf It depends.
    bar under MSVC, 1 under GCC, Clang, and ICC. I think it has to do with the precise definition of bool as an intrinsic type.


  • ♿ (Parody)

    @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    The mistake was in not setting up type systems to distinguish between nullable and guaranteed-not-null references.

    I...thought that's what everyone here has been saying.



  • @Mason_Wheeler Regardless of implementation details, it doesn’t seem unreasonable that you run into trouble when you call a method of a derived class when the derived constructor hasn’t been run yet - and equally for the destructor. With the RAII paradigm of C++ it almost seems like a cardinal sin to do that.

    A bit silly though that it isn’t caught at compile time.



  • @Grunnen It is caught, and you're hit with a diagnostic, if you make the call directly. But if you call another function - which can legally be called outside of constructors as well, and doesn't necessarily have to be in the same compilation unit, so you can't reason about it and it can't reason about you - and that function calls a virtual function that's pure during construction...


  • Banned

    @boomzilla @topspin okay, here it goes.

    In type theory, types are essentially sets of all possible values of that type. A subtype is a (not necessarily proper) subset of some type's values, and conversely, a supertype is a (not necessarily proper) superset of some type's values. One of the consequences is that every type is its own subtype as well as its own supertype. Not important here, but nice to know.

    The fun begins when you understand the difference between types in the abstract sense and types that can be defined in a given language. Imagine a type "integers from 1 to 10". This type absolutely exists in Java. All these values exist. You can imagine a set of all of them. That's your type. What you can't do is express this type in Java's type system. (There are languages where you can express that - Ada, for example.)

    Now for the fun part. What's the smallest set you can imagine? An empty set, of course. An empty set of values is a type too. It just has no values. Meaning you can't have an actual value that has this type, but you can still use it in other ways. As we all know, an empty set is a subset of every set. In consequence, an empty type is a subtype of every other type. In type theory it's called the bottom type, but in programming languages it usually has a more useful name. E.g. Scala has Nothing, TypeScript has never, and Rust has ! (don't ask). Conversely, there's a top type, usually called something like Any.

    Now, let's take a look at Java's type system. Ignore primitives for a moment and focus on just classes. Java's class hierarchy is rooted, meaning every object has type Object, which means Object is a supertype of all classes. Nothing wrong with that. Now, let's think of what a subtype of all classes is. Obviously no such thing can be expressed in Java's type system, but it's a thing nonetheless. And there are two of them. One is obviously the empty type. The other is, well, null. Since you can use null in place of any object, it's included in the set of possible values of every class type - so a type consisting of just the null value is a subtype of every class type.

    The existence of this subtype isn't itself a problem. The problem is that null violates the API contract of every class. It has none of the fields, none of the methods, and none of the interfaces that are mandated by its supertypes. And it's a "valid" value everywhere. That's why it's so bad.

    On the other hand, you have the Option type. Preferably in a language that doesn't have all-encompassing null (but I find it very useful even in languages that do have null). Technically speaking, Option isn't a type - it's a type constructor, or colloquially speaking, a generic type. Essentially a functor (a function that operates on something other than values): it takes a type as argument and returns another type.

    For any specific Option<T>, there are two subtypes: Some and None. Some is basically a function that takes T and returns an instance of Option<T>. None is a constant. I could go further about how it all works exactly, but there are just two important points to remember:

    • Values of type T are NOT included in type Option<T>. You need the Some function to do the conversion. There are languages that do it automatically, but type coercions are a whole other can of worms that I don't want to open right now - the bottom line is, something has to do the conversion somehow before a value can be used as an optional.
    • The value None is NOT included in type T. It can only appear as Option<T>, never as T itself.

    As a result, there is no subtype relationship between T and Option<T>, in either direction. Option<T> doesn't form any type hierarchy with T, except maybe under the root Object type if one exists. So None not having any of T's functionality isn't a problem, because there isn't any contract that says it should (unlike with null).
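    C++'s std::optional makes both points concrete (a minimal sketch; note that the Some conversion here happens through an implicit constructor, which is exactly the automatic-coercion case mentioned above):

    #include <optional>

    int main() {
        std::optional<int> a = 42;            // the Some conversion (implicit here)
        std::optional<int> b = std::nullopt;  // the None constant

        // int x = a;             // won't compile: Option<T> is not a subtype of T
        // int y = std::nullopt;  // won't compile: None is not a value of T

        int x = a.value_or(0);    // going back to T is explicit, too
        (void)b; (void)x;
    }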


    TL;DR: the difference is that {null} is a subtype of all class types (violating their contracts), while {None} is only a subtype of the Option types (just the Option, not the underlying type itself).


  • Banned

    @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    @dkf said in Waaah, I don't get paid to use new shiny languages:

    @boomzilla said in Waaah, I don't get paid to use new shiny languages:

    @NeighborhoodButcher said in Waaah, I don't get paid to use new shiny languages:

    @sloosecannon said in Waaah, I don't get paid to use new shiny languages:

    I just see no need for enforced no-nullness across a language.

    Ok, but you also need to realize that's a subjective feeling - if you don't see a need for something, it doesn't mean there is no need.

    I'm looking forward to when they reinvent checked exceptions. Their structured error types are almost there, except for the need to explicitly handle things everywhere…

    I always was of the opinion that checked exceptions were the correct idea. One of the worst things about C++ exceptions is that the function signature doesn't tell you about them (you have noexcept now because the previous throws specifier was an abomination).
    Except for OOM exceptions, which can theoretically appear almost anywhere and in practice are nonexistent because Linux shoots you in the head instead of allowing you to have a correct program, I'd like to know what a function can throw.

    But the people who actually have experience with Java all said it's horrible.

    Checked exceptions are a pretty good idea. Java's implementation of them is horrible, and because that's the only place most people know checked exceptions from, they end up thinking "checked exceptions are horrible."

    How would you change checked exceptions from Java's implementation of them, without compromising their main objective?


  • ♿ (Parody)

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    TL;DR: the difference is that {null} is a subtype of all class types (violating their contracts), while {None} is only a subtype of the Option types (just the Option, not the underlying type itself).

    Yeah, OK, fair enough. My thinking was not quite so formal, and went along the lines of...

    An otherwise null result becomes an empty optional.

    Obviously it's not null literally, which is trivially true and about as useful as your whole derivation, I'd say from a practical standpoint. But thanks for taking the time.



  • @Gąska said in Waaah, I don't get paid to use new shiny languages:

    @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    @dkf said in Waaah, I don't get paid to use new shiny languages:

    @boomzilla said in Waaah, I don't get paid to use new shiny languages:

    @NeighborhoodButcher said in Waaah, I don't get paid to use new shiny languages:

    @sloosecannon said in Waaah, I don't get paid to use new shiny languages:

    I just see no need for enforced no-nullness across a language.

    Ok, but you also need to realize that's a subjective feeling - if you don't see a need for something, it doesn't mean there is no need.

    I'm looking forward to when they reinvent checked exceptions. Their structured error types are almost there, except for the need to explicitly handle things everywhere…

    I always was of the opinion that checked exceptions were the correct idea. One of the worst things about C++ exceptions is that the function signature doesn't tell you about them (you have noexcept now because the previous throws specifier was an abomination).
    Except for OOM exceptions, which can theoretically appear almost anywhere and in practice are nonexistent because Linux shoots you in the head instead of allowing you to have a correct program, I'd like to know what a function can throw.

    But the people who actually have experience with Java all said it's horrible.

    Checked exceptions are a pretty good idea. Java's implementation of them is horrible, and because that's the only place most people know checked exceptions from, they end up thinking "checked exceptions are horrible."

    How would you change checked exceptions from Java's implementation of them, without compromising their main objective?

    Have a look at this article on the Midori project's error model. The author goes into detail about the advantages and disadvantages of error codes, Java-style checked exceptions, and unchecked exceptions, and how to build a checked exception model that gives you the best of all worlds.

    I don't agree 100% with everything he says here -- particularly with regard to stack traces -- but generally speaking this is more or less how I would design the error model if I were to build a new language from scratch.
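    If it helps calibrate: the general direction the article argues for - errors declared in the signature but handled as ordinary values - looks roughly like C++23's std::expected (a toy sketch, not the article's actual design):

    #include <expected>  // C++23
    #include <iostream>
    #include <string>

    // The error type is part of the signature, like a checked exception,
    // but the result is a plain value you have to inspect before use.
    std::expected<int, std::string> parsePort(const std::string& s) {
        if (s.empty()) return std::unexpected("empty input");
        return 8080;  // toy value standing in for real parsing
    }

    int main() {
        auto r = parsePort("");
        if (r) std::cout << "port: "  << *r << "\n";
        else   std::cout << "error: " << r.error() << "\n";
    }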



  • @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    What do you mean by that? Those are examples of programming errors; they shouldn't be throwing exceptions at all, because there's no meaningful way to catch them and continue operation with confidence that the state of the program remains consistent.

    The typical way people would handle those conditions is with a prior check somewhere (unless they can prove that the situation never arises). That prior check presumably has an else-branch.

    Focusing first on the integer division (array out of bounds is a bit messier). The "problem" with the prior-check approach is that you're duplicating the checks (at least on common architectures). The CPU will "check" when executing the instruction anyway and raise the appropriate hardware exception (e.g., the divide error on x86). You can perfectly well catch that exception and handle it (doing ~whatever the else-branch of the version with the prior check would do). The difference is that the latter is less costly in the common case, i.e., when there are no errors.

    Probably doesn't matter in 99% of cases, but this and similar cases are something useful to keep in the back of your mind.

    The reason the array-indexing case is messy on real hardware is that architectures (typically) don't have the information regarding array lengths, and since stuff is frequently laid out compactly in memory, an out-of-bounds access would succeed there, returning garbage (and thus become a problem).
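    A sketch of the prior-check shape in C++ (noting that portable C++ can't take the trap-catching route at the language level - plain division by zero is UB - so the check is the safe spelling; names are made up):

    #include <cstdio>
    #include <optional>

    // Prior-check version: the else-branch becomes the caller's policy.
    std::optional<int> checkedDiv(int num, int den) {
        if (den == 0) return std::nullopt;  // the software check...
        return num / den;                   // ...and the CPU "checks" again here
    }

    int main() {
        if (auto q = checkedDiv(10, 0)) std::printf("%d\n", *q);
        else std::printf("division by zero\n");
        // Catching the hardware trap instead (SIGFPE and friends) is
        // platform-specific, outside standard C++.
    }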


  • Banned

    @Mason_Wheeler holy fucking shit. Usually I say TLDR as a troll or out of laziness, but this... This is WAAAY too long. An online word count tool says 19785. That's like 79 pages. No fucking way I'm gonna read all that. Especially since I don't even know what Midori is, let alone plan to ever use it in the future. Nope, that's too much. If you're not going to make a summary that's shorter than 79 pages, forget I ever asked anything.


  • Considered Harmful

    @boomzilla said in Waaah, I don't get paid to use new shiny languages:

    @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    The mistake was in not setting up type systems to distinguish between nullable and guaranteed-not-null references.

    I...thought that's what everyone here has been saying.

    Counterpoint: what if the mistake was that they didn't set up type systems to tell apart references that are not-null and those that are null?


  • BINNED

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    The value None is NOT included in type T. It can only appear as Option<T>, never as T itself.

    Tangent question: what happens when T itself is {None}? I guess nothing at all and you either have Some None or None.


  • Considered Harmful

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    a functor is a function that operates on something other than values

    Finally someone explained it in English.


  • BINNED

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    @Mason_Wheeler holy fucking shit. Usually I say TLDR as a troll or out of laziness, but this... This is WAAAY too long. An online word count tool says 19785. That's like 79 pages. No fucking way I'm gonna read all that. Especially since I don't even know what Midori is, let alone plan to ever use it in the future. Nope, that's too much. If you're not going to make a summary that's shorter than 79 pages, forget I ever asked anything.

    I did read the first parts of it when the link was posted here before. Yes, it’s way too fucking long and I was too :kneeling_warthog: to read it, but just FYI it is interesting.


  • BINNED

    @error said in Waaah, I don't get paid to use new shiny languages:

    @boomzilla said in Waaah, I don't get paid to use new shiny languages:

    @Mason_Wheeler said in Waaah, I don't get paid to use new shiny languages:

    The mistake was in not setting up type systems to distinguish between nullable and guaranteed-not-null references.

    I...thought that's what everyone here has been saying.

    Counterpoint: what if the mistake was that they didn't set up type systems to tell apart references that are not-null and those that are null?

    (image attachment)


  • Considered Harmful

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    Especially since I don't even know what Midori is, let alone plan to ever use it in the future.

    I plan to use it in the near future. 🍸


  • Banned

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    @Gąska said in Waaah, I don't get paid to use new shiny languages:

    The value None is NOT included in type T. It can only appear as Option<T>, never as T itself.

    Tangent question: what happens when T itself is {None}? I guess nothing at all and you either have Some None or None.

    Exactly. It's so useless that in most languages, None doesn't have a separate type at all - it's directly part of the Option definition. Scala is the only exception I know about, and it only further proves how dumb the idea is.


  • Discourse touched me in a no-no place

    @cvi said in Waaah, I don't get paid to use new shiny languages:

    The typical way people would handle those conditions is with a prior check somewhere (unless they can prove that the situation never arises). That prior check presumably has an else-branch.

    More to the point, those prior checks are often combinable. That frequently allows the checks to be hoisted out of loops, for example, and extremely fast machine code to be used without sacrificing correctness. That sort of thing (along with directed loop unrolling) is pretty magic…


  • Considered Harmful

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    I guess nothing at all and you either have Some None or None.

    Again, JS has null and undefined, which I think of as known unknown and unknown unknown (null means you've explicitly stated something is unknown and undefined usually means something's not been initialized, though passing undefined as a parameter explicitly is technically different than passing no parameters).


  • BINNED

    @error said in Waaah, I don't get paid to use new shiny languages:

    though passing undefined as a parameter explicitly is technically different than passing no parameters.

    So really JS has null, undefined, and ~~empty~~ FILE_NOT_FOUND. 🚎


  • Considered Harmful

    @topspin said in Waaah, I don't get paid to use new shiny languages:

    So really JS has null, undefined, and ~~empty~~ FILE_NOT_FOUND.

    Close enough. You can interrogate the arguments object to figure out the exact arguments that were passed, including how many there were, so you can distinguish the two cases.

    I've seen this used for overloading; since JS doesn't actually have function overloads as a first-class feature, you have to do it yourself, and sometimes you see a pattern where foo() is a get, while foo( bar ) sets foo to bar. In this specific case, foo( undefined ) sets foo to undefined, whereas foo() returns the value of foo.

