Earnestly thinking NULL is a mistake is a symptom



  • @LB_ said:

    Passing an object which may not exist to a function, and expecting the function to take over ownership. Same with returning.

    Pass a smart pointer, which builds on the nullable semantics of the raw pointer and adds ownership semantics.

    @Gaska said:

    No you shouldn't. You should make two separate functions - one taking N parameters and the other taking N-1 parameters. You might want to make one call the other to reduce the code duplication - but this depends on exact code.
    That's the sane thing to do, but we started out from the premise that we want optional things. So I wanted to cover every case.

    @Gaska said:

    ...Let me get this straight. You are worried about a dozen or two bytes of bonus padding within object, but not additional dereference?
    I've explained this several times now. Most of the time, it doesn't matter. When it does matter, there are better solutions than Optional. Optional's niche is "I kinda care, but not really enough to think about it."

    @Gaska said:

    Not so ugly in Rust
    Great if you're using Rust, not so great if you're using any other language's implementation of optional types (including C++'s). I don't dislike Rust (although I was disappointed by some of its choices), I'm arguing against Optional types in general, not against Rust's specific implementation.

    Sure, I'm using C++'s language to do it, but I imagine the same concepts can be translated to other languages. If Rust can apply the same ideas with a better syntax, all the better. Makes my case even stronger :)

    @Gaska said:

    This is why constructors are bad - because of some technical reasons, you really need them even if they don't make sense, and they end up not really being constructors, so existence of default constructor doesn't make it clear whether default-constructing the object is a valid usage.
    Well, I disagree. The constructor creates an object, and establishes invariants. It's up to the user to know if they need it, or give it a sensible value. In my case, default initialization just makes the implementation easier. I could have avoided that case and gone for placement new directly, but why do that if the default behavior is good enough?

    @Gaska said:

    A generic data structure that holds data in place (ie. not in remote location accessible by pointer), gives you all the value semantics you want and allows you to directly obtain reference to the contained object.
    I'm confused. How do you directly get a reference to something that may not exist safely? I mean, sure, I could have a Value& GetValue(Key key) method that requires that the object exist and throws (or skips the runtime check and has undefined behavior instead) otherwise. But it doesn't really add anything.

    Holding the data in place just means instead of single variables you'd have an array (or a bigger buffer), and would have to redefine the options_ flag to work with positions instead of names. Possibly have a class built around a char array that is one eighth the size of the values array (one bit per value). It's not much of a challenge, although the solution would be a bit verbose.

    You can make it generic by templating the code on "MyType", which was kind of the point. The solution is already generic. Unless I misunderstand what you mean.

    In other words, that's all already covered, the solution is not meaningfully different.

    @cvi said:

    If you want to return the object by value, and there's no cheap default-constructible state that you can use to flag the object's invalidity, you're pretty much stuck with something like an optional<>.
    Hmm. Ok, this is one situation where yeah, I can see the usefulness of an optional type and I can't see an easy workaround.

    Although I'd be wary of a function that claims to maybe create objects.

    @cvi said:

    Use std::aligned_storage (TR1/C++11).
    Oh, I'll have to remember that! :)


  • Java Dev

    @Kian said:

    Great if you're using Rust, not so great if you're using any other language's implementation of optional types

    If T (or worse, Optional<T>) is implicitly nullable, then the optional doesn't make sense. So IMO it only makes sense to talk about optional as a concept in a language where nullable isn't the default.


  • Banned

    @Kian said:

    That's the sane thing to do, but we started out from the premise that we want optional things.

    As return values and member fields, not function arguments.

    @Kian said:

    I've explained this several times now. Most of the time, it doesn't matter. When it does matter, there are better solutions than Optional.

    And when it doesn't, is there any better way to express that the value might or might not be there to both the programmer and the compiler?

    @Kian said:

    Optional's niche is "I kinda care, but not really enough to think about it."

    s/Optional/Shared pointer/

    @Kian said:

    Great if you're using Rust, not so great if you're using any other language's implementation of optional types (including C++'s).

    You can say this about any feature. Generics suck hard in Java. Interfaces suck hard in Go. Class inheritance sucks hard in C++.

    @Kian said:

    I don't dislike Rust (although I was disappointed by some of its choices), I'm arguing against Optional types in general, not against Rust's specific implementation.

    Your idea of general is very specific - so specific it somehow excludes Rust.

    @Kian said:

    Sure, I'm using C++'s language to do it, but I imagine the same concepts can be translated to other languages. If Rust can apply the same ideas with a better syntax, all the better. Makes my case even stronger

    The case that you should make extreme microoptimizations when you're absolutely forced to and there is no way to back out? Yeah, I agree with that. But only if you're absolutely forced to and there is no way to back out.

    @Kian said:

    Well, I disagree. The constructor creates an object, and establishes invariants. It's up to the user to know if they need it, or give it a sensible value. In my case, default initialization just makes the implementation easier.

    You can't really call it initialization if it yields uninitialized object.

    @Kian said:

    I'm confused. How do you directly get a reference to something that may not exist safely?

    ...allows you to directly obtain reference to the contained object if it exists. Better?

    @Kian said:

    Holding the data in place just means instead of single variables you'd have an array (or a bigger buffer), and would have to redefine the options_ flag to work with positions instead of names.

    Now it's my turn to be confused and having no idea of what you're talking about. By in-place, I mean that any optional<T> is contained in whole at single memory location in one consecutive block of memory, and you can create this block of memory that optional<T> will occupy everywhere that you can allocate primitive types. In particular, the stack, the heap, or as part of another object - whatever works at the given moment.

    @Kian said:

    Although I'd be wary of a function that claims to maybe create objects.

    Fun fact: new might return null. Be wary when using it.


  • Discourse touched me in a no-no place

    @Gaska said:

    Fun fact: new might return null.

    Under what circumstances? (Excluding placement-new targeting a zero pointer when running highly-privileged code. That'd be mean.)


  • Banned

    @dkf said:

    Under what circumstances?

    Failure to allocate memory, of course! Which can theoretically happen at any moment.

    Edit: oh and don't forget to catch the std::bad_alloc exception!


  • Discourse touched me in a no-no place

    @Gaska said:

    Failure to allocate memory, of course!

    That's what an exception is for.

    No, really. Stop laughing at the back!


  • Banned

    @dkf said:

    That's what an exception is for.

    Unless you use nothrow version. Or your compiler is non-standard-compliant, like MSVC used to be.


  • Discourse touched me in a no-no place

    @Gaska said:

    Unless you use nothrow version.

    Ah, then I'd expect it to abort() the process on memory allocation failure. 😃



  • @Gaska said:

    As return values and member fields, not function arguments.

    Fine, so less room for optional then. Which is my point in the first place: optional is rarely necessary. Thanks for recognizing how right I am.

    @Gaska said:

    And when it doesn't, is there any better way to express that the value might or might not be there to both the programmer and the compiler?
    Yes. Pointers. Or nullable references in lamer languages. You can continue praising Rust for its compile-time checks that ensure people remember to check that the things they use exist, but that misses the point. The conversation isn't about Rust, it's about Optional types. Rust optionals are nice, we get it. They don't justify optional types outside of Rust, though.

    @Gaska said:

    s/Optional/Shared pointer/
    Shared pointers have a very specialized niche where they are the best solution. That is: sharing resources between threads when at least one thread is detached. Most people shouldn't use them. Optionals aren't even the best solution in their niche.

    @Gaska said:

    You can say this about any feature. Generics suck hard in Java. Interfaces suck hard in Go. Class inheritance sucks hard in C++.
    Except you're doing the opposite. Generics are a good idea in general, but the implementation in Java is terrible.

    Optionals are a bad idea in general, but Rust has a great implementation that removes most of the suck.

    So I would tell you to stay away from generics in Java, but tell you to stay away from optionals in general.

    Also, C++'s inheritance rules are perfect. Everyone else's are a pale reflection of its grandeur.

    @Gaska said:

    Your idea of general is very specific - so specific it somehow excludes Rust.
    My idea of "general" is correct. It excludes Rust, and includes everyone else. How is that not "general"?

    @Gaska said:

    You can't really call it initialization if it yields uninitialized object.
    Ehhh, I've been glossing over a slight distinction that C++ lets you make because it wasn't relevant to the implementation. Value initialization vs default init, I think are the correct terms. Essentially, some types have no invariants. For example, a raw int. Whatever bit pattern is in there is a "valid" int on a binary level.

    The problem is, if you default init an int, you get noise. Whatever was at that memory location before you constructed the int. So even though it's a valid int, its value is unspecified. So if you read it, you're reading noise, which is defined by the language as the wrong thing to do. However, you can do other operations, such as assigning to it.

    Allowing this small bit of unspecified behavior (reading an uninitialized int) allows for more optimized code, since you can avoid zeroing the memory if you don't care to (which was valuable when you initialized values by passing them as output parameters to a function, as is the C way, and you didn't have optimizing compilers). So default init just reserves the memory but leaves it as it was, value init actually sets it to a default value that is safe to read. In either case, you have an object you can assign to, which is all I need initialized. Saying this isn't initialized is like saying empty lists aren't initialized because you can't read the value they contain. Of course you can't, you haven't set it to anything. But it's there to receive whatever value you want to assign to it.

    Yes, it's nice that Rust has better syntax. It's also irrelevant, because I'm not arguing about syntax, I'm using the tools the language already has to explain why adding this other tool (optional values) is redundant. If I could add syntax to C++, I'd absolutely take some of the things Rust did.

    @Gaska said:

    Fun fact: new might return null. Be wary when using it.
    That's only the default if your implementation disabled exceptions. Which is wrong to do, but C++ allows you to do wrong bad things if you abuse it enough. You can specifically ask for that behavior, in which case, yeah. Not exactly a shock, and I'd be wary when doing it. In particular, I'd ask why we have to do it that way and if there isn't some better design that would allow me to use the regular new.



  • @Kian said:

    So I would tell you to stay away from generics in Java, but tell you to stay away from optionals in general.

    Since we're talking about Java, I believe I already pointed out the (only) use case for optionals in Java further up the thread. It's not a coincidence that they didn't appear in Java before this... they had no use case until now.

    https://what.thedailywtf.com/t/earnestly-thinking-null-is-a-mistake-is-a-symptom/51020/130?u=powerlord


  • Banned

    @Kian said:

    Yes. Pointers. Or nullable references in lamer languages.

    They don't communicate the intent very well. In particular, it's hard to tell whether a given value is actually optional or not (both pointers and references are commonly used for both optional and required values).

    @Kian said:

    You can continue praising Rust for its compile-time checks that ensure people remember to check that the things they use exist, but that misses the point. The conversation isn't about Rust, it's about Optional types. Rust optionals are nice, we get it. They don't justify optional types outside of Rust, though.

    My point is, optional type is the best way to do this thing, except in most languages it's impossible to implement them perfectly right. Java optionals will never be in-place, and C++ optionals will never have compile-time check before access. But it doesn't change the fact that optional type is still the best way to do optional values - if only because they have "optional" in name.

    @Kian said:

    Except you're doing the opposite. Generics are a good idea in general, but the implementation in Java is terrible.

    Optionals are a bad idea in general, but Rust has a great implementation that removes most of the suck.


    I've yet to hear why optionals are worse than pointers. So far, you proved that you can do the same with pointers, and that optionals might incur some memory overhead that in very rare cases is significant (which is also true for both pointers and default-constructor-that-constructs-de-facto-invalid-object).

    @Kian said:

    Also, C++'s inheritance rules are perfect.

    Try to inherit from std::string then. Remember that std::string has non-virtual destructor.

    @Kian said:

    My idea of "general" is correct. It excludes Rust, and includes everyone else. How is that not "general"?

    .......... :facepalm:

    @Kian said:

    So default init just reserves the memory but leaves it as it was, value init actually sets it to a default value that is safe to read. In either case, you have an object you can assign to, which is all I need initialized. Saying this isn't initialized is like saying empty lists aren't initialized because you can't read the value they contain.

    Actually, I can. I can iterate over all values, get element count, convert to another container, sort, reduce, and do anything else I would ever want to do with a list even if the list is empty. Default-constructed list is fully initialized and fully usable object. This differs drastically from a default constructor that constructs an invalid object that I can't use right away.

    @Kian said:

    Yes, it's nice that Rust has better syntax. It's also irrelevant, because I'm not arguing about syntax, I'm using the tools the language already has to explain why adding this other tool (optional values) is redundant.

    This is your problem - you keep the discussion about completely abstract programming concepts like optional values in the context of C++. You might want to say that your objections apply to more languages than just C++, but it doesn't change much - you still bound the discussion to these languages. Even if it's thousand different languages, you're still speaking specifically about those thousand different languages - and I'm talking generally, about pure concepts, unbounded by their implementations in any particular languages. When I refer to Rust, it's only because it's a handy example of optional values done right (also because I can't Haskell).



  • Because in general, Optional types feel like over-complicating something just so you can kick doing a simple check as far back as possible.

    Haskell has "general" functors. For example, if you have a list of ints, you can do:

    fmap show list
    

    to get a list of strings out of it. In general, fmap works on any "container" type (and more besides), and respects the container's semantics. For example, suppose you have a value obj which is type Maybe Int. Then you can do

    fmap show obj
    

    This will have the type Maybe String. And will be either Nothing or Just "1" (etc) depending on what obj was. In other words, your query code doesn't have to do checks at all.

    In other words, you can write functions that just do not care about null values (like show, which has the type Show a => a -> String), and apply them to values that might be null, and have it all work out automatically.

    Your driver code may or may not have to, depending on whether a nice combinator already exists to do what you want.



  • @Kian said:

    Pass a smart pointer, which builds on the nullable semantics of the raw pointer and adds ownership semantics.

    That leads to more confusing code, the point is to make the code less confusing. I've already talked about this once and you clearly either didn't understand or didn't agree. I just hope I never have to work in a codebase where you have been.

    @Gaska said:

    Fun fact: new might return null. Be wary when using it.

    new will never return nullptr unless you explicitly use the non-throwing version. Please don't spread false information.

    @Kian said:

    if you default init an int, you get noise

    No, actually. That happens if you don't construct it. Primitive types have constructors and even default constructors, but the default constructor must be explicitly called if you want to default init a primitive type. If you are not explicit, no constructor is ever called and then you get garbage.



  • @Gaska said:

    They don't communicate the intent very well.

    In languages that don't have what C++ calls references (constant non-null pointers that transparently act like the pointed value), all you're doing is forcing users to make two checks: check that the optional isn't null, and then check if the optional has something. In C++, references already tell the users "this parameter is required", and pointers "this parameter may be null".

    It only doesn't communicate intent to people that haven't yet learned the very basics of the language. And in both cases, they add cost by adding one more thing they have to learn. So your solution to the problem of people not having learned the very basics is adding more stuff for them to learn. Had they used that mental power to learn the basics of passing parameters around, the problem would have been fixed already.

    @Gaska said:

    My point is, optional type is the best way to do this thing, except in most languages it's impossible to implement them perfectly right.
    And my point is, the thing optionals try to do is a dumb thing in the first place. They are a means of preserving ambiguity and thus increase the complexity of the code. The fact that implementations are imperfect and thus add extra cost on top of them being the wrong thing to do just makes them even more wrong.

    I mean, they could be excused if they were a dumb thing to do but harmless. But if they're bad and expensive, why the hell use them?

    There are some spots where the ambiguity exists. In those spots, different languages already have conventions to handle the ambiguity. Adding an additional concept just overcomplicates something where the only real problem is forgetfulness. Naming the parameter "optional_bar" documents the optionality just as well as naming the type "optional", and is free.

    @Gaska said:

    I've yet to hear why optionals are worse than pointers.
    Because I already have pointers, while optionals are new and don't do anything that justifies their presence. Good design is about simplifying things, not complicating them. If I can do the same with one concept that I can do with two, the additional complexity of the second concept is bad, and thus should not be added.

    @Gaska said:

    Try to inherit from std::string then. Remember that std::string has non-virtual destructor.
    Done. I can pass this type to things that expect a string, I just can't store it and later delete it as a string. This allows me to not need a vtable. So it's a trade off that the original designer of the class makes.

    This is not a problem with the C++ inheritance model. If anything, it's something you might bring up with the writers of the STL. In general, inheriting from concrete classes is wrong. You should have abstract interfaces you implement instead.

    @Gaska said:

    This differs drastically from a default constructor that constructs an invalid object that I can't use right away.
    You can use it right away. To set a value to it.

    @Gaska said:

    This is your problem - you keep the discussion about completely abstract programming concepts like optional values in the context of C++.
    I see programming languages as a means of telling concrete machines that exist what I want them to do. Not as exercises in pure logic abstracted from the murkiness of reality.

    I use C++ as the grounds to explore how the abstract concepts are implemented because it provides a way to express how these abstract concepts might make the transition to reality when we finally get around to using them. Creating types with interesting properties at the language level is worthless if their implementation adds hidden costs that the language glosses over.

    Garbage collection is lovely in theory. It means you don't need to think about the structure of your program. But it's also super expensive in practice, and you can eliminate the need for it by thinking about who needs what and for how long.

    @Captain said:

    Your driver code may or may not have to, depending on whether a nice combinator already exists to do what you want.
    What does fmap do in your example? I don't Haskell.

    @LB_ said:

    That leads to more confusing code
    Well, you asked the code to do something more complex. That leads to more complex code. From least complex to more complex:

    // Pass a value the function doesn't grab for itself:
    void Foo(const Value& val);
    
    // Pass an optional value the function doesn't grab for itself:
    void Foo(const Value* val);
    
    // Pass a value the function grabs for itself:
    void Foo(Value val);
    
    // Pass an optional value the function grabs for itself:
    void Foo(std::unique_ptr<Value> val);
    

    The last two are more complex because they play around with ownership. You can't ask for more complex behavior that doesn't increase code complexity. Read the "No Silver Bullet" paper for an explanation of why.

    @LB_ said:

    No, actually. That happens if you don't construct it. Primitive types have constructors and even default constructors, but the default constructor must be explicitly called if you want to default init a primitive type. If you are not explicit, no constructor is ever called and then you get garbage.
    Right, as I said, I can never recall the right nomenclature for the different behaviors of:

    foo.cpp
    
    static int global;  // I think this gets initialized to 0
    
    int Foo() {
      int garbage;
      int zero ();
      return zero;
    }

  • Discourse touched me in a no-no place

    @Kian said:

    you can eliminate the need for [garbage collection] by thinking about who needs what and for how long

    Yes, but most programmers are famously bad at that sort of thing. (You know, thinking…)



  • @Kian said:

    Good design is about simplifying things, not complicating them. If I can do the same with one concept that I can do with two, the additional complexity of the second concept is bad, and thus should not be added.

    You contradict yourself - simpler code does not use the same thing in two different ways. Or are you saying assembly is simpler?

    @Kian said:

    From least complex to more complex:

    I put that list in a different order. The unique_ptr is least complex because its use and semantics are known to be exactly one way, and it is clear that ownership is being transferred to the function. The Foo(Value val) is slightly more complex because it might be passed by copy or by move. The reference is more complex because it is unknown whether actions in the function might unintentionally change the value even though the value cannot be directly changed. The pointer is most complex because you have no idea if it is allowed to be null, if it is expected that ownership is being transferred, etc. - too many different things have to be researched to understand how to use the function.

    Clearly you and I have very different ideas of what makes code simple or complex. I consider simple code to be code that you instantly know how to deal with when you see it, and complex code to be code that requires a lot of reading of comments/documentation/sourcecode in order to understand the proper use. unique_ptr is simple because you instantly know the semantics and expected behavior whenever you see it used in code. Raw pointers are complex because you have to read documentation, comments, and sometimes even the source code to understand the semantics and behavior that is expected in each and every situation. optional is simple because you know right away what it means.

    @Kian said:

    int zero ();

    MVP (the most vexing parse) means that this is a function declaration. Use curly braces instead. (I'm not going to defend C++'s confusing syntax)



  • @Kian said:

    // Pass an optional value the function doesn't grab for itself:
    void Foo(const Value* val);

    Actually, I'm curious about a thing here.

    Let's say I have a different implementation of optional<T>, let's call it OptionalEx<T> for the lack of a better name. My implementation of OptionalEx<T> checks whether or not the type T has a known sentinel value that indicates invalidity, and if so it only stores a copy of T and the no-value state simply has T's value equal to the sentinel (e.g., NULL for pointers). Types without sentinel values are implemented like the standard optional<T>, i.e., possibly by inheriting (privately) from a std::pair<T,bool> so that you can use empty base class optimization. Also, everything is inlineable.

    Now, void foo( OptionalEx<Value const*> ) is no more expensive at run-time than void foo( Value const* ), but makes it very obvious that you are allowed to pass NULL to foo().

    Incidentally, you could also make it possible to have a void foo( OptionalEx<Value const&> ), which is no more expensive than the normal const-ref version. Similar for most standard types, so, the sentinel for a shared_ptr<T> would be the NULL-shared_ptr; for std::string it could be an empty string, ...

    FWIW, not doing that right now, but it's crossed my mind a few times.


  • Banned

    @Kian said:

    In languages that don't have what C++ calls references (constant non-null pointers that transparently act like the pointed value)

    Actually, the only part that is important here is "non-null pointers" - everything else you wrote in parentheses is pretty irrelevant to the problem with optionals.

    @Kian said:

    all you're doing is forcing users to make two checks: check that the optional isn't null, and then check if the optional has something.

    As I already said over and over again, yes, you can't have sensible optional types with nullable references. Can you stop repeating this part of conversation, please?

    @Kian said:

    And my point is, the thing optionals try to do is a dumb thing in the first place. They are a means of preserving ambiguity and thus increase the complexity of the code.

    Ambiguity - how so? Complexity - they don't add any more complexity than you would have anyway, because they just force you to do the very thing you should already have done.

    @Kian said:

    The fact that implementations are imperfect and thus add extra cost on top of them being the wrong thing to do just makes them even more wrong.

    You can't say the concept is wrong because the implementations are bad - it's like saying that Asian food is inedible to white people because you suffered food poisoning the other day at the Chinese restaurant across the street. You might say it's impractical in a particular language, and you would be right.

    @Kian said:

    I mean, they could be excused if they were a dumb thing to do but harmless. But if they're bad and expensive, why the hell use them?

    They're no more expensive than shared_ptr with default allocator.

    @Kian said:

    There are some spots where the ambiguity exists. In those spots, different languages already have conventions to handle the ambiguity. Adding an additional concept just overcomplicates something where the only real problem is forgetfulness.

    What about replacing it?

    @Kian said:

    Naming the parameter "optional_bar" documents the optionality just as well as naming the type "optional", and is free.

    But it's a bad variable name. Hungarian notation is so 90s. Also, compilers don't understand variable names or documentation.

    @Kian said:

    Because I already have pointers, while optionals are new and don't do anything that justifies their presence.

    Imagine you're creating a brand new language with literally zero existing code written in it. Would you include optionals in it?

    @Kian said:

    Done. I can pass this type to things that expect a string, I just can't store it and later delete it as a string.

    Deleting it as a string isn't very useful if you have new non-trivial member fields.

    @Kian said:

    So it's a trade off that the original designer of the class makes.

    This is exactly the reason why inheritance in C++ sucks - you can only use it if the original developers have predicted that someday, someone might want to extend their class. That, or you have to remember the kajillion things you are not allowed to do with your new class, such as putting it in a smart pointer of base type if you need a non-trivial destructor.

    @Kian said:

    This is not a problem with the C++ inheritance model. If anything, it's something you might bring up with the writers of the STL. In general, inheriting from concrete classes is wrong. You should have abstract interfaces you implement instead.

    C++ is the only language I'm aware of that has this restriction. And this severely cripples you as a library user. I know it's a design tradeoff made for very good reasons, but it doesn't change the fact that it makes inheritance less useful and pleasant than in other languages. Not to mention the multiple inheritance problem.

    @Kian said:

    You can use it right away. To set a value to it.

    Damn, sometimes you're just as obtuse as @blakeyrat. No, setting it to some value doesn't count as usage, because conceptually it's destroying the old value and replacing it with a new one (if it's not, you have much bigger problems than a couple bytes of padding). Nor does initializing the object after construction count as using it, because conceptually, that's still constructing the object.

    @Kian said:

    I see programming languages as a means of telling concrete machines that exist what I want them to do. Not as exercises in pure logic abstracted from the murkiness of reality.

    The main problem is that you see programming languages and not the abstract ideas that they implement.

    @Kian said:

    I use C++ as the grounds to explore how the abstract concepts are implemented because it provides a way to express how these abstract concepts might make the transition to reality when we finally get around to using them.

    C++ is a very bad choice for it - it's old, has tons of legacy burden (both from C and from C++ itself), and its design makes many, many new concepts that have emerged over the years impossible to implement, so if you enclose yourself in the C++ shell, you cannot even perceive them, much less understand them. The last part is actually true for any programming language, but especially for C++ (and similarly old ones).

    @Kian said:

    Creating types with interesting properties at the language level is worthless if their implementation adds hidden costs that the language glosses over.

    Only if you can't afford this cost. But it's hardly relevant to the optional types discussion because they incur nearly no overhead (and intrusive optional types are literally free).

    @Kian said:

    Garbage collection is lovely in theory.

    Actually, no. It solves some problems, but creates others - even at the theoretical level.

    @Kian said:

    It means you don't need to think about the structure of your program.

    Bullshit.

    @Kian said:

    But it's also super expensive in practice

    Not that expensive nowadays.

    @Kian said:

    and you can eliminate the need for it by thinking about who needs what and for how long.

    That's not always possible. And is far more expensive in development time terms.

    @Kian said:

    Right, as I said, I can never recall the right nomenclature for the different behaviors

    Learning programming languages theory helps in this regard.

    @cvi said:

    Let's say I have a different implementation of optional<T>, let's call it OptionalEx<T> for the lack of a better name. My implementation of OptionalEx<T> checks whether or not the type T has a known sentinel-value that indicates invalidity, and if so it only stores a copy of T and the no-value state simply has T's value equal to the sentinel (e.g., NULL for pointers).

    That's exactly what Rust does. And it works very well even in practice.
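    For the curious, the effect is easy to observe with std::mem::size_of. A minimal sketch (the size relationships shown hold on typical targets; the Option<NonZeroU32> and Option<&T> guarantees are documented, the rest is compiler layout in practice):

    ```rust
    use std::mem::size_of;
    use std::num::NonZeroU32;

    fn main() {
        // A plain u32 has no spare bit patterns, so Option<u32> needs an extra flag.
        assert!(size_of::<Option<u32>>() > size_of::<u32>());

        // NonZeroU32 declares 0 invalid, so Option<NonZeroU32> can use 0 as its
        // "no value" sentinel -- no extra storage, exactly the OptionalEx idea.
        assert_eq!(size_of::<Option<NonZeroU32>>(), size_of::<u32>());

        // References can never be null, so Option<&T> reuses the null pointer
        // as the sentinel and stays pointer-sized.
        assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());

        println!("all size checks passed");
    }
    ```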



  • @Gaska said:

    C++ is the only language I'm aware of that has this restriction.

    I'd like to see you derive from java.lang.String, please.

    @Gaska said:

    C++ is a very bad choice for it

    Actually, C++ is one of the only languages I would feel comfortable using optional wrappers in. Sure, I'd use optional wrappers in other languages if they were deemed good practice, but it doesn't feel right in any language other than C++ and Rust. Basically I am biased toward value semantics. Languages where you can only refer to class types via nullable references are not fun for me.


  • Banned

    @LB_ said:

    Actually, C++ is one of the only languages I would feel comfortable using optional wrappers in.

    Yeah, but you can't get the full potential of optional types. Namely, compile-time checks.

    @LB_ said:

    Sure, I'd use optional wrappers in other languages if they were deemed good practice, but it doesn't feel right in any language other than C++ and Rust.

    And any other language with value semantics for variables. Nullable structs in C# are nearly equivalent to boost::optional, and they work just as great (and have the same limits, though static analysis of unchecked usage might be easier in C# than C++, for reasons way beyond the scope of this topic).


  • Banned

    inb4 CoyneTheDup joins in...



  • @Gaska said:

    Fail, as in fail to perform the task as specified.

    Oh. I misunderstood. Yes, that would be correct.

    @Gaska said:

    I'd say it's macroaggression

    👍

    @LB_ said:

    new will never return nullptr unless you explicitly use the ~~non-throwing~~ shooting-yourself-in-the-foot version. Please don't spread false information.

    FTFY

    @Kian said:

    Because I already have pointers, while optionals are new and don't do anything that justifies their presence. Good design is about simplifying things, not complicating them. If I can do the same with one concept that I can do with two, the additional complexity of the second concept is bad, and thus should not be added.

    I can't agree with either statement. Yes, I started out not understanding Optional<T> but now I do, and it does do something useful: It provides a means to clearly communicate from code to programmer that an item is optional. Without it, gee, we hope the programmer reads the doc, but my experience is that they don't; and gee, we hope people won't test for null in items that are mandatory, which is a waste.

    This clarifies that with a simple regimen, consistently used: If it is Optional<T> then you must handle the case of the empty optional; otherwise you can depend on the value being non-null. Mixing of disciplines should not be permitted; that is, the code should ultimately not be designed to mix null tests and Optional tests.
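    In a language with first-class optionals (Rust here, but the idea is language-agnostic) that regimen is visible in the signature itself. A toy sketch with made-up names:

    ```rust
    // The signature is the documentation: `nickname` may be absent and the
    // compiler makes you handle that case; `name` is mandatory and needs no
    // null test. (Function and parameter names are hypothetical.)
    fn greeting(name: &str, nickname: Option<&str>) -> String {
        match nickname {
            Some(nick) => format!("Hi, {} ({})!", nick, name),
            None => format!("Hi, {}!", name),
        }
    }

    fn main() {
        assert_eq!(greeting("Robert", Some("Bob")), "Hi, Bob (Robert)!");
        assert_eq!(greeting("Robert", None), "Hi, Robert!");
    }
    ```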

    @Kian said:

    I see programming languages as a means of telling concrete machines that exist what I want them to do. Not as exercises in pure logic abstracted from the murkiness of reality.

    That's great if you're the only programmer. I work in a team of dozens, and anything that makes it less likely that the worst team members will be sloppy helps much more than you'd think.



  • @Gaska said:

    Namely, compile-time checks.

    I think I am a bit confused about this - could you explain this in more detail? I keep seeing this being mentioned in other posts and I'm not sure what all it entails.



  • @LB_ said:

    Or are you saying assembly is simpler?

    I think we're mixing different meanings of simple. Assembly is simple. Doing things with it is hard because it's simple, but it also means implementing a compiler for it is easy.

    @LB_ said:

    I put that list in a different order.

    Ok, I get what you mean. I was sticking to the caveat that I said before about never using pointers to handle ownership. Yes, this means that the code is simpler if everyone sticks to the same guidelines. If I could actually enforce them with checks in the compiler, linter or something like that, I'd be much happier. Until then, conventions and code reviews are all we can rely on.

    @LB_ said:

    Raw pointers are complex because you have to read documentation, comments, and sometimes even the source code to understand the semantics and behavior that is expected in each and every situation. optional is simple because you know right away what it means.

    Until someone misuses it, and then you have to read documentation, comments, and sometimes even the source itself. Because some genius used optional and then failed if the optional was empty.

    @cvi said:

    Now, void foo( OptionalEx<Value const*> ) is no more expensive at run-time than void foo( Value const* ), but makes it very obvious that you are allowed to pass NULL to foo().

    Well, to me, and I don't know how long it's supposed to take others to learn it, the pointer ALREADY makes it obvious that the value can be null. There's even a keyword for a null pointer: nullptr. I'm not convinced that people who fail to grasp this basic fact about one of the fundamental types of the language would gain much from it. If they don't check whether the pointer is null, why would you expect them to test if the optional has something in it?

    @Gaska said:

    Ambiguity - how so? Complexity - they don't add any more complexity than you would have anyway, because they just force you to do the very thing you should already have done.

    Ambiguity because a type whose explicit purpose is to say "I may hold a value, look inside to find out!" is by definition ambiguous. Complexity because having an explicit type to do something will feel to some people like an encouragement to use it. So some genius will make all the fields in his classes optional, because "what if I don't have it?!" and then I'll eventually have to deal with code where you have to make constant runtime checks to see if a bunch of fields exist or not at any given moment.

    Or they'll use it to say "this field is optional" when it turns out that no, the application fails if you don't have it, it just doesn't do something helpful like fail to compile or crash because they tried to do some retarded error handling instead.

    Also, it only forces you to do things right in Rust, possibly Haskell and other such fairly obscure languages. Now, I love that new languages do that, but that doesn't justify retrofitting old languages with things they can't properly support. And if we could retrofit the language, it would fix the problem of dereferencing null pointers in the first place and the issue would be fixed.

    @Gaska said:

    You can't say the concept is wrong because the implementations are bad

    I'm not saying it's wrong because the implementation is bad. I'm saying it's wrong because it can't possibly be done well, even theoretically. There are basically two kinds of optionals, those that can be implemented for free and those that need an additional bit to signal the optionality. Calling these two things the same is wrong, because they are different things with different behaviors. But to a user that just types Optional<T>, these different behaviors will be surprising because they wrote one thing. Why is sizeof(Optional<T1>) == sizeof(T1), but sizeof(Optional<T2>) == sizeof(T2)+alignof(T2)? They look the same, shouldn't they behave the same?

    @Gaska said:

    Imagine you're creating a brand new language with literally zero existing code written in it. Would you include optionals in it?

    I would include something like what Rust does, yes. Forcing to check at compile time that you're doing the right thing. I've said as much before. I don't know if I'd call them optionals, and I would not have the second kind of optional, the one that requires additional memory to store the optional bit.

    @Gaska said:

    This is exactly the reason why inheritance in C++ sucks - you can only use it if the original developers have predicted that someday, someone might want to extend their class.

    So the problem with C++'s inheritance model is that it allows developers to make valid choices? Your position is that everyone should pay a price today, just to make it easier for someone who decides to do something somewhere in the future?

    Most classes don't need to be polymorphic. Forcing everything to be virtual is dumb. And complaining that you can't fundamentally alter the design of a program (moving from concrete implementations to polymorphic behavior) with a couple keystrokes isn't much of an argument. The tools to make the program use interfaces were there. The original designers chose against using them. If you have access to the code, retrofitting interfaces isn't much of a hassle.

    @Gaska said:

    it doesn't change the fact that it makes inheritance less useful and pleasant than in other languages.

    It's less useful because it can do more things? It doesn't even make it particularly difficult to copy the behavior of other languages. It's just not the default.

    @Gaska said:

    C++ is a very bad choice for it - it's old, has tons of legacy burden (both from C and from C++ itself), and its design makes many, many new concepts that emerged over years impossible to implement, so if you enclose yourself in the C++ shell, you cannot even perceive them, much less understand.

    Really? That's funny, because I thought compilers for Haskell, Rust, and such were first written in C or C++ generally, or some other such systems language. The same languages you claim are incapable of exploring, or even implementing, the very concepts these languages introduce.

    I suppose the Rust compiler sprung fully formed out of the minds of its designers.

    @CoyneTheDup said:

    Without it, gee, we hope the programmer reads the doc, but my experience is that they don't; and gee, we hope people won't test for null in items that are mandatory, which is a waste.

    And with it, you still have to hope for those things. Bad programmers will pervert every feature unless the compiler prevents them. For many, "it compiles" means their job is done. Just look at the snippets posted on this site. There are people that are confused by conditional statements, and those basically read like sentences already.

    @LB_ said:

    I think I am a bit confused about this - could you explain this in more detail? I keep seeing this being mentioned in other posts and I'm not sure what all it entails.

    The way I understand it, as far as the compiler is concerned, with Rust's enums, you can't access the value outside of a block that checks for it. In C++, you could have an optional with isValid and get methods, and you can call the get method without checking that isValid is true. In Rust, if you try, the compiler errors out. If a function returned an optional, getting at the value won't compile unless you check that it's valid.

    It basically forces you to do it right, so you can't ever forget.
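    A small sketch of that in Rust (first_word is a made-up helper): the payload is only in scope inside the branch that proved it exists.

    ```rust
    // Hypothetical helper: the first whitespace-separated word, if any.
    fn first_word(s: &str) -> Option<&str> {
        s.split_whitespace().next()
    }

    fn main() {
        // The value can only be touched in the arm where the check succeeded;
        // there is simply no syntax for grabbing it without the check.
        match first_word("  hello world") {
            Some(w) => assert_eq!(w, "hello"),
            None => panic!("expected a word"),
        }

        // `if let` is the same check, condensed.
        if let Some(w) = first_word("rust") {
            assert_eq!(w, "rust");
        }
    }
    ```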



  • @Kian said:

    Most classes don't need to be polymorphic. Forcing everything to be virtual is dumb.

    I agree.

    @Kian said:

    you can call the get method without checking that isValid is true

    Ah, I see, thanks. For some reason I was thinking of something else.

    @Kian said:

    Until someone misuses it

    The point is that that's less common than otherwise, and anything that makes it less likely to be surprised is better IMO. I can't force people to be better programmers, but I can write code that makes them more likely to be better programmers.



  • @Kian said:

    Until someone misuses it, and then you have to read documentation, comments, and sometimes even the source itself. Because some genius used optional and then failed if the optional was empty.

    Every feature of every language can be misused. It's only a problem if it's easier to misuse it than it is to use it correctly.

    @Kian said:

    Well, to me, and I don't know how long it's supposed to take others to learn it, the pointer ALREADY makes it obvious that the value can be null

    Until, of course, you find a method which takes a non-optional pointer.

    @Kian said:

    Ambiguity because a type whose explicit purpose is to say "I may hold a value, look inside to find out!" is by definition ambiguous.

    A list may or may not hold many values. Therefore lists are ambiguous too?

    @Kian said:

    So some genius will make all the fields in his classes optional, because "what if I don't have it?!" and then I'll eventually have to deal with code where you have to make constant runtime checks to see if a bunch of fields exist or not at any given moment.

    Some genius has probably already made all the fields in their class pointers, too. And since pointers can be null, you should be checking those as well.
    What exactly is the difference here?

    @Kian said:

    I'm not saying it's wrong because the implementation is bad. I'm saying it's wrong because it can't possibly be done well, even theoretically.

    @Kian said:
    I would include something like what Rust does, yes.

    So it can't be done well even theoretically, but if you were designing your own language you'd include Rust's Option type instead of nulls?

    @Kian said:

    Really? That's funny, because I thought compilers for Haskell, Rust, and such were first written with C or C++ generally, or some other such systems language. The same languages you claim are incapable of exploring, or even implementing, the very concepts these languages introduce.

    I suppose the Rust compiler sprung fully formed out of the minds of its designers.


    Just because you can implement a feature in some other language doesn't mean it's simple to do so.
    In many cases, it requires mountains of code to set up the initial bootstrap compiler.
    In any case, it may just be better if you do some research before making statements like this; Rust's first compiler was written in OCaml. One of the first Haskell compilers was Yale Haskell, written in Lisp.

    @Kian said:

    The way I understand it, as far as the compiler is concerned, with Rust's enums, you can't access the value outside of a block that checks for it.

    You can with unwrap. Even if the method didn't exist, it would be trivial to write one. The compiler can only prevent some stupidity; it can't defend against laser-guided retardation.
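    To illustrate (a minimal sketch): unwrap is precisely that escape hatch, panicking on None, while the unwrap_or family at least makes the fallback explicit.

    ```rust
    fn main() {
        let present: Option<i32> = Some(42);
        let missing: Option<i32> = None;

        // `unwrap` skips the check entirely and panics on None -- the
        // laser-guided escape hatch in question.
        assert_eq!(present.unwrap(), 42);

        // The disciplined alternatives force you to name a fallback.
        assert_eq!(missing.unwrap_or(0), 0);
        assert_eq!(missing.unwrap_or_else(|| -1), -1);
    }
    ```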



  • @Salamander said:

    What exactly is the difference here?

    That's my point, there isn't one! So why use an optional type that's going to be misused in the same way pointers already are? And then what, we add Optional_ForRealzYouGuys<T>?

    @Salamander said:

    So it can't be done well even theoretically, but if you were designing your own language you'd include Rust's Option type instead of nulls?

    Nice quote cherry-picking. You missed the part, in the same four-sentence paragraph, where I exclude half of their Option implementation. The part that needs extra memory to work. Having a "pointer type" that the compiler forces you to check before using it would make me happy.

    @Salamander said:

    One of the first Haskell compilers was Yale Haskell, written in Lisp.

    I didn't spend too long looking, so I didn't find the whole history of the language. This site https://downloads.haskell.org/~ghc/6.4/docs/html/building/sec-porting-ghc.html suggested as the bootstrap strategy having the compiler spit out C code for itself, then building that with gcc on the new arch, then using that to build itself.

    The point I was getting at was, saying a less advanced language can't possibly be used to think about how a feature in a more advanced language would be implemented is ridiculous. That's literally how new language features are implemented.

    @Salamander said:

    it can't defend against laser-guided retardation.

    Well, I won't hold that against it.



  • But, checked exceptions FORCE me to deal with potential errors I may forget/ignore at compile time! I'm totally a better programmer because of it. ~1995



  • @Kian said:

    You missed the part, in the same four sentence paragraph, where I exclude half of their Option implementation. The part that needs extra memory to work. Having a "pointer type" that the compiler forces you to check before using it would make me happy.

    So let me get this straight: you say that the entire concept of Optional cannot be done well, and your reasoning for this is that there are some implementations that take more space than a pointer?
    That sounds to me like arguing Lists suck because LinkedLists take up more space than arrays.



  • @Kian said:

    So why use an optional type that's going to be misused in the same way pointers already are?

    I repeat: it is less likely that optional will be used in a confusing manner, and anything that decreases the likelihood of surprises is a win in my book.


  • Winner of the 2016 Presidential Election

    @Kian said:

    Optionals are a bad idea in general, but Rust has a great implementation that removes most of the suck.

    Kotlin and Ceylon (actually, probably a lot more new languages) as well. No, they're not a bad idea, they're the right thing to do to help the programmer figure out where to do null checks and where not to. It's just hard to add them to existing languages in a backwards-compatible way that doesn't suck.

    (I like Ceylon's implementation of optional types most, because union types are useful in a lot of other situations as well.)


  • Winner of the 2016 Presidential Election

    @Gaska said:

    Remember that std::string has non-virtual destructor.

    Not a problem if you use smart pointers.


  • Winner of the 2016 Presidential Election

    @Kian said:

    Garbage collection is lovely in theory. It means you don't need to think about the structure of your program.

    No, it doesn't. You still have to make sure you don't keep around references to objects you don't need anymore, so the garbage collector can do its job. In the end, every abstraction is leaky and you cannot write a large application without thinking about object ownership at all.


  • Discourse touched me in a no-no place

    @CoyneTheDup said:

    I work in a team of dozens, and anything that makes it less likely that the worst team members will be sloppy helps much more than you'd think.

    QFT

    The reason why GC is a good thing is not because it helps good programmers. It's because it makes bad programmers' badness less catastrophic. (It also simplifies exception handling implementations hugely, which is why Java and C# have had good exception systems for a long time.)


  • Discourse touched me in a no-no place

    @asdf said:

    every abstraction is leaky

    Some are leakier than others. A slow drip is one thing, a gushing sluice another.


  • Banned

    @LB_ said:

    I think I am a bit confused about this - could you explain this in more detail? I keep seeing this being mentioned in other posts and I'm not sure what all it entails.

    When a language is constructed a certain way, it can make it impossible to access the inner value without performing a null check. Like, there's no syntax for it at all.

    @Kian said:

    Well, to me, and I don't know how long it's supposed to take others to learn it, the pointer ALREADY makes it obvious that the value can be null.

    It's only obvious after you learn it in the first place.

    @Kian said:

    There's even a keyword for a null pointer: nullptr.

    And there are thousands of functions that will segfault when they receive nullptr, even in the standard library. Mostly it's C functions, but still.

    @Kian said:

    If they don't check whether the pointer is null, why would you expect them to test if the optional has something in it?

    Compile-time checkssssss

    @Kian said:

    Ambiguity because a type whose explicit purpose is to say "I may hold a value, look inside to find out!" is by definition ambiguous.

    You know, by C++ definition, pointers can be null too. That's as much ambiguity as you describe here, except many people decide to not do null checks by design.

    @Kian said:

    Complexity because having an explicit type to do something will feel to some people as an encouragement to use it. So some genius will make all the fields in his classes optional, because "what if I don't have it?!" and then I'll eventually have to deal with code where you have to make constant runtime checks to see if a bunch of fields exist or not at any given moment.

    Bad coders gonna code badly. If there's no optional, they're gonna find another way to shoot themselves in the foot.

    @Kian said:

    Also, it only forces you to do things right in Rust, possibly Haskell and other such fairly obscure languages.

    Rust is obscure for reasons not related to the language itself. And Haskell is special.

    @Kian said:

    Now, I love that new languages do that, but that doesn't justify retrofitting old languages with things they can't properly support.

    C++ can (kinda, sorta) support it - and has been doing it for a decade now (Boost is pretty much a de-facto standard library now, and every codebase that uses Boost also uses optionals). Java can't, so yes, it's stupid, as I've said like twenty times already.

    @Kian said:

    There are basically two kinds of optionals, those that can be implemented for free and those that need an additional bit to signal the optionality.

    Note that if you can't theoretically implement an optional without adding an additional flag to the object (because all values of the object are valid), then if your code needs a value that can or cannot be there, you have to have the additional flag anyway - whether it's a simple bool next to it, bitwise magic, or a default constructor that yields an invalid value.
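    A quick way to see this (a sketch; the exact numbers depend on the compiler's layout choices, though in practice they come out equal): Option<u32> costs no more than the bool-next-to-it struct you would write by hand.

    ```rust
    use std::mem::size_of;

    // The hand-rolled flag you would write anyway if every u32 bit pattern
    // were a valid value. (Hypothetical type for illustration.)
    #[allow(dead_code)]
    struct MaybeU32 {
        valid: bool,
        value: u32,
    }

    fn main() {
        // Option<u32> carries the same bool-sized flag (plus alignment
        // padding), so the cost is identical to the manual version.
        assert_eq!(size_of::<Option<u32>>(), size_of::<MaybeU32>());
    }
    ```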

    @Kian said:

    Calling these two things the same is wrong, because they are different things with different behaviors.

    Depending on what you mean by behavior. If you mean how you use them in code and what happens to the values if you copy the optional, move the optional, move the value into an optional or take it out of optional, then no, they're exactly the same.

    @Kian said:

    Why is sizeof(Optional<T1>) == sizeof(T1), but sizeof(Optional<T2>) == sizeof(T2)+alignof(T2)? They look the same, shouldn't they behave the same?

    First - the size of Optional<T> is an implementation detail; under normal circumstances you shouldn't care about it. Second - the size of a type is hardly its behavior. Third - Optional<T> is a different type than T, so, DUH! It's obvious it can have a different size! And you know what? It doesn't matter, because you can't treat Optional<T> like T anyway, so the different size doesn't break anything!

    @Kian said:

    So the problem with C++'s inheritance model is that it allows developers to make valid choices?

    The problem with C++'s inheritance model is that there is a choice to make at all.

    @Kian said:

    Your position is that everyone should pay a price today, just to make it easier for someone who decides to do something somewhere in the future?

    No - my argument is that inheritance in C++ sucks. Just like car doors that open towards the front of the car suck compared to doors that open towards the rear - and the increased safety doesn't make it any more comfortable to get in and out of the car.

    @Kian said:

    Most classes don't need to be polymorphic.

    Not now, at least. And who knows what the future will bring? C++ forces you to make the choice whether to go polymorphic or not up front, and the consequences can drag on for decades.

    @Kian said:

    Forcing everything to be virtual is dumb.

    Well, it works for Java programmers... and C# programmers... and Python programmers... and Ruby programmers... and so on and so on.

    @Kian said:

    fundamentally alter the design of a program (moving from concrete implementations to polymorphic behavior)

    It's only a fundamental change in the sense that it breaks ABI. It wouldn't be so fundamental if C++ were designed differently. Mind you, I'm not saying C++ would be better if it went the all-virtual way (probably it would be worse) - all I'm saying is that inheritance in C++ sucks.

    @Kian said:

    If you have access to the code, retrofitting interfaces isn't much of a hassle.

    Further proving the point above.

    @Kian said:

    It's less useful because it can do more things?

    No, it's less useful because it can do fewer things.

    @Kian said:

    Really? That's funny, because I thought compilers for Haskell, Rust, and such were first written with C or C++ generally, or some other such systems language.

    Rust compiler is written in Rust. Haskell compiler is written in Haskell. Historically, they were made in OCaml and LML, respectively.

    @Kian said:

    That's my point, there isn't one! So why use an optional type that's going to be misused in the same way pointers already are?

    In-place storage. It's very important sometimes, other times it's more efficient, and there are cases it's the only sane way to manage ownership of the object.
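    A small sketch of what in-place storage buys you in Rust: the payload lives inside the Option itself, and Option::take moves ownership out without any heap traffic.

    ```rust
    fn main() {
        // In-place: the array lives inside the Option itself -- no
        // indirection, no heap allocation.
        let mut slot: Option<[u8; 16]> = Some([7; 16]);

        // Ownership of the value can be moved out, leaving None behind --
        // handy for "this resource may already have been handed off"
        // bookkeeping.
        let taken = slot.take();
        assert!(taken.is_some());
        assert!(slot.is_none());
    }
    ```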

    @asdf said:

    Not a problem if you use smart pointers.

    Yeah, lack of virtual destructor isn't as much a problem as the fact the client code will call the wrong size().



  • @Gaska said:

    Well, it works for Java programmers... and C# programmers... and Python programmers... and Ruby programmers... and so on and so on.

    Nitpick: C# doesn't make everything virtual. That's one of the non-API differences between it and Java.


  • Discourse touched me in a no-no place

    @powerlord said:

    C# doesn't make everything virtual.

    TIL. What parts are non-virtual?



  • @dkf said:

    @powerlord said:
    C# doesn't make everything virtual.

    TIL. What parts are non-virtual?

    C# methods, properties, indexers, and events are non-virtual by default. That's why C# has the virtual keyword.


  • Banned

    It's a different kind of virtual than in C++.


  • :belt_onion:

    Checked exceptions were a great idea that was very poorly executed:

    1. Having unchecked exceptions breaks everything hard (it will throw one of these three exceptions ... or anything it feels like throwing)
    2. Lack of type aliasing or type inference means that compositional methods either become tiny method names wrapped in acres of exceptions ... or get marked throws Exception (which then nukes the usefulness from orbit)
    3. Not providing an easy method to wrap and re-throw errors (e.g. an AutoPromote marker trait) so you can avoid the boilerplate try { /* my method here */ } catch (Type1 | Type2 | Type3 e) { throw new MyMethodFailedException(e); } and instead just declare myMethod() throws MyMethodFailedException and have any error thrown in it auto-promoted to a MyMethodFailedException.

  • Discourse touched me in a no-no place

    @svieira said:

    Not providing an easy method to wrap and re-throw errors

    Some of us have such things in their arsenal. The support code to make this sort of thing work tends to be hairy, but it's a cost that you bear once.


  • Banned

    @svieira said:

    Having unchecked exceptions breaks everything hard (it will throw one of these three exceptions ... or anything it feels like throwing)

    Unchecked exceptions were meant to be never handled. They were supposed to be the managed way to do what POSIX signals do.



  • @Salamander said:

    So let me get this straight: you say that the entire concept of Optional cannot be done well, and your reasoning for this is that there are some implementations that take more space than a pointer?

    I say optional is bad in general. For example, in a map, the correct thing to do would be to check if a key exists and then get the value. But that's expensive, as it requires two searches or some complicated caching behavior. So as an optimization, you can return an optional type of some kind (for example, a pointer), which will either have the value you searched for, or be empty.
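    The map example looks like this in Rust, for instance (a sketch; HashMap::get is the standard optional-returning lookup):

    ```rust
    use std::collections::HashMap;

    fn main() {
        let mut ages: HashMap<&str, u32> = HashMap::new();
        ages.insert("alice", 33);

        // Naive version: two lookups -- one for the check, one for the value.
        if ages.contains_key("alice") {
            assert_eq!(ages["alice"], 33);
        }

        // Optional-returning version: one lookup; the "maybe absent" state
        // is carried in the return type instead of the programmer's head.
        match ages.get("alice") {
            Some(age) => assert_eq!(*age, 33),
            None => panic!("missing key"),
        }
        assert_eq!(ages.get("bob"), None);
    }
    ```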

    That happens to make writing code easier, but it makes reasoning about the code harder, which is one of the worst possible trade-offs to make. Sure, it makes you more productive because you can get things done faster, but maintenance becomes more expensive because you have to mentally juggle two possible states as you read the code.

    Some newer languages try to help by knowing about this ambiguity at the syntax level, and making sure you remember to check before using it. That helps. But then they try to extend this to types with optional members, or functions with optional parameters, which is when you need to add a bit to say whether the value actually exists or not. So you have a bit of memory set aside for the object, but you don't actually know if the object exists.

    Which is a fundamentally different concept from the pointer kind of optional. But despite these two concepts being fundamentally different (one is something that may exist, the other is something that may point to something that exists), because they seem to be used the same way, they are treated the same.

    It's this fundamental difference that makes it impossible for there to be a perfect implementation. You are hiding two different abstractions under the same name.

    @LB_ said:

    I repeat: it is less likely that optional will be used in a confusing manner, and anything that decreases the likelihood of surprises is a win in my book.

    Well, you have more trust in your fellow coders than I do :) You still hold on to the hope that the problem is that the language is obtuse, not that they just don't care. I think the problem is they don't care.

    Which doesn't excuse terrible syntax (C++ is guilty of this), but also means that giving them more tools will just help them to find more ways to misuse them (just wait until you see pointers to optionals).

    @Gaska said:

    It's only obvious after you learn it in the first place.
    Yes, the first time you're introduced to the concept. "This is a pointer. It may point to a valid object, or be null". You can then go into more esoteric stuff, like that it's ok for a pointer to point to one past the end of an array, but it's wrong to read it in that case. I wouldn't expect that to be obvious. But that nullptr exists? It's one of the keywords of the language! People love to use pointers in conditional statements, so the fact that it's something you can branch on should also make you think "hmm, so pointers can have two states."

    @Gaska said:

    That's as much ambiguity as you describe here, except many people decide to not do null checks by design.
    Yes, I don't like using pointers in general either, especially in function parameters. References should be used whenever required, making it obvious that any remaining pointers may be null. The fact that people misuse features just proves that the problem is people, not the tools they have access to.
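    The convention reads something like this (a sketch; the function names are invented):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <string>

    // Required argument: take a reference - it cannot be null, so no check.
    std::size_t name_length(const std::string& name) {
        return name.size();
    }

    // Genuinely optional argument: take a pointer - nullptr means "absent",
    // and the signature itself signals that a check is needed.
    std::size_t name_length_or_zero(const std::string* name) {
        return name ? name->size() : 0;
    }

    int main() {
        std::string n = "Kian";
        assert(name_length(n) == 4);
        assert(name_length_or_zero(&n) == 4);
        assert(name_length_or_zero(nullptr) == 0);
    }
    ```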

    @Gaska said:

    Rust is obscure for reasons not related to the language itself. And Haskell is special.
    I keep meaning to try Haskell out, but then I see the syntax and put it off 'till later. I had high hopes for Rust, but I was disappointed that it doesn't use exceptions :(

    @Gaska said:

    Note that if you can't theoretically implement an optional without adding additional flag to the object (because all values of an object are valid), then if your code needs the value that can or cannot be there, you have to have additional flag anyway - whether it's simple bool next to it, bitwise magic or default constructor that yields invalid value.
    True, but then it also means that I have to think about why I want it to be optional in the first place. It was probably a dumb idea, and the fact that I have to add the flag makes me think about it. And hopefully decide against it. Or implement it better than one bool next to the value for each optional value I have.

    Also, now I'm curious: how do optional types handle inheritance? If B inherits from A, I can pass a pointer to B to a function that takes a pointer to A. Can I pass an optional<B*> to a function asking for an optional<A*>?

    @Gaska said:

    Well, it works for Java programmers... and C# programmers... and Python programmers... and Ruby programmers... and so on and so on.
    And that's why, despite hardware becoming faster all the time, applications stay just as slow. But hey, except for large server farms, virtual servers and "cloud" applications, and mobile computing, performance is not something anyone needs to care about, right? Who cares if companies waste millions, or if your cellphone battery lasts 30 minutes. Some programmer somewhere may have an easier time changing an implementation, maybe.

    @Gaska said:

    No, it's less useful because it can do less things.
    A team could choose to make their library use interfaces instead of concrete classes, and then you can implement your own interfaces without recompiling. The original writers could have reasons why they don't provide you with the means to extend its behavior. Maybe they had performance or security concerns.

    The question is, can you lock down the code your library calls in languages such as Java and C#? Isn't the final keyword intended to do just that? So if someone wrote a library and they used classes marked final, you still couldn't extend them, even if their methods are virtual.

    So your problem is with other programmers making choices, not with the language providing a choice that is also available to every other language.

    @svieira said:

    Checked exceptions were a great idea that was very poorly executed
    I think checked exceptions betray a misguided belief about errors: that they have to be handled as soon as they happen. What checked exceptions actually cause is what you describe as "a good thing", having exceptions change name without actually handling them meaningfully.

    So my user presses something. Seven functions deep, an allocation fails. That bad_alloc then goes through seven type changes as each function receives it and converts it, until I get it back and tell the user "Sorry, can't do that. Something broke." Only I don't actually know what broke, or how to fix it. Most of the time, there isn't even anything I can do to fix it. At least with a bad_alloc, I could tell the user "You're out of memory, maybe do something about that."

    The best that might happen is if exceptions get wrapped in one another, and then I have to unwrap seven layers of exceptions until I find the cause of the issue, instead of who reported it.
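    That unwrapping dance can be sketched with std::nested_exception (the layer names are made up; the "bad alloc" is simulated with a runtime_error so the sketch is self-contained):

    ```cpp
    #include <cassert>
    #include <exception>
    #include <stdexcept>
    #include <string>

    // The bottom layer fails.
    void allocate() { throw std::runtime_error("bad_alloc (simulated)"); }

    // Each intermediate layer wraps the lower exception instead of handling it.
    void layer() {
        try {
            allocate();
        } catch (...) {
            std::throw_with_nested(std::runtime_error("layer failed"));
        }
    }

    // The top level has to unwrap every layer to find the root cause.
    std::string root_cause(const std::exception& e) {
        try {
            std::rethrow_if_nested(e);   // throws the wrapped exception, if any
        } catch (const std::exception& inner) {
            return root_cause(inner);    // keep unwrapping
        }
        return e.what();                 // innermost exception: the real error
    }

    int main() {
        try {
            layer();
        } catch (const std::exception& e) {
            assert(std::string(e.what()) == "layer failed");        // what I caught
            assert(root_cause(e) == "bad_alloc (simulated)");       // what broke
        }
    }
    ```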


  • Java Dev

    @Kian said:

    I don't actually know what broke

    That's where 1k+-line stacktraces come in.



  • @Kian said:

    I think checked exceptions betray a misguided belief about errors: that they have to be handled as soon as they happen. What checked exceptions actually cause is what you describe as "a good thing", having exceptions change name without actually handling them meaningfully.

    So my user presses something. Seven functions deep, an allocation fails. That bad_alloc then goes through seven type changes as each function receives it and converts it, until I get it back and tell the user "Sorry, can't do that. Something broke." Only I don't actually know what broke, or how to fix it. Most of the time, there isn't even anything I can do to fix it. At least with a bad_alloc, I could tell the user "You're out of memory, maybe do something about that."

    The best that might happen is if exceptions get wrapped in one another, and then I have to unwrap seven layers of exceptions until I find the cause of the issue, instead of who reported it.

    Fun fact: Errors are not Exceptions (checked or unchecked) in Java. i.e. OutOfMemoryError won't be caught if you're just catching Exception.

    For that matter, if you're trying to catch random Exceptions instead of a specific type of exception you know how to handle, you'd better be logging it somewhere.


  • :belt_onion:

    @Kian said:

    I think checked exceptions betray a misguided belief about errors: that they have to be handled as soon as they happen.

    The throws keyword is a thing. It seems you may not understand how they work...

    If you're at the wrong level to handle an exception, throw it up to the next level. Checked exceptions have to be handled somewhere. They don't have to be handled at the lowest level.

    I have a project that does just that - it validates syntax on some files. If at any point it encounters unexpected input, I throw an exception. I keep tossing that exception up the stack till I get to the code iterating over files, where I can handle it successfully. It's a checked exception, but each of my submethods throw it, meaning I have no code in any of them to deal with it. It just bubbles back up the call stack.

    That means this:
    @Kian said:

    each function receives it and converts it,

    is incorrect. Because they receive it but don't attempt to do anything with it. It just goes up to the caller.

    Note: Java-land. Other languages may behave differently.


  • Banned

    @Kian said:

    That happens to make writing code easier, but it makes reasoning about the code harder

    I don't see anything hard to reason about. I have to check if value exists anyway - the only difference is whether I check before I get the element or after I get the element.

    @Kian said:

    Sure, it makes you more productive because you can get things done faster, but maintenance becomes more expensive because you have to mentally juggle two possible states as you read the code.

    There are two situations - you care what the state is or you don't. If you do, you make the check up front and get two execution branches, and inside each of them you have only one possible state. And if you don't care, then you also have just one state - the "I don't care" state.

    @Kian said:

    But then they try to extend this to types with optional members, or functions with optional variables, which is when you need to add a bit to say whether the value actually exists or not.

    *le sigh*

    I like you, so I'll repeat this once again: you need this bit anyway, whether you use optional type or something else.

    @Kian said:

    So you have a bit of memory set aside for the object, but you don't actually know if the object exists. Which is a fundamentally different concept from the pointer kind of optional.

    Agreed.

    @Kian said:

    But despite these two concepts being fundamentally different (one is something that may exist, the other is something that may point to something that exists), because they seem to be used the same way, they are treated the same.

    Pointers are treated in so many different ways that it's impossible to say whether you're correct unless you specify which particular usage I have in mind. Which is one of the problems with pointers - they have at least four different meanings, and you must consult the documentation or source code of a specific function to know which it is (if the documentation or source code exists, which is not always the case).

    @Kian said:

    Yes, the first time you're introduced to the concept. "This is a pointer. It may point to a valid object, or be null".

    "This is an optional. It may contain a valid object, or be empty". Same deal. The only difference is that (assuming you're between 20 and 30 years old) you learned about pointers five to ten years earlier than about optionals.

    @Kian said:

    You can then go into more esoteric stuff, like that it's ok for a pointer to point to one past the end of an array

    This particular thing is so esoteric that if I were to create C++ curriculum, I would put it after integer overflow handling.

    @Kian said:

    I keep meaning to try Haskell out, but then I see the syntax and put it off 'till later.

    Me too.

    @Kian said:

    I had high hopes for Rust, but I was disappointed that it doesn't use exceptions

    We had this discussion the other month.

    @Kian said:

    True, but then it also means that I have to think about why I want it to be optional in the first place.

    Probably because of crazy move semantics of C++11. Rule of Five and all that.

    @Kian said:

    Also, now I'm curious: how do optional types handle inheritance?

    The same way the regular variables handle inheritance: they don't.

    @Kian said:

    Can I pass an optional<B*> to a function asking for an optional<A*>?

    In Rust, yes. In C++, I don't know. Gut feeling says it shouldn't work, but Boost is so magical it probably would.
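    For what it's worth, C++17's std::optional does accept this: it has a converting constructor from optional<U> whenever U converts to T, so optional<B*> converts to optional<A*> just as B* converts to A* (a sketch, with made-up types A and B):

    ```cpp
    #include <cassert>
    #include <optional>

    struct A { virtual ~A() = default; };
    struct B : A {};

    // Accepts optional<A*>; an optional<B*> argument converts implicitly
    // via std::optional's converting constructor.
    bool has_object(std::optional<A*> maybe) {
        // Note the nested optionality: the optional can be engaged
        // but still hold a null pointer, so both checks are needed.
        return maybe.has_value() && *maybe != nullptr;
    }

    int main() {
        B b;
        std::optional<B*> derived = &b;
        assert(has_object(derived));               // implicit conversion
        assert(!has_object(std::optional<B*>{}));  // empty stays empty
    }
    ```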

    @Kian said:

    And that's why, despite hardware becoming faster all the time, applications stay just as slow.

    No, no it isn't. JITs are very, very good at optimizing out vtable lookups and inlining functions nowadays. BTW, a codebase full of shared_ptrs has about the same performance as if it was idiomatic Java 8.

    @Kian said:

    A team could choose to make their library use interfaces instead of concrete classes, and then you can implement your own interfaces without recompiling. The original writers could have reasons why they don't provide you with the means to extend its behavior.

    Most often forgetfulness or short-sightedness. But whatever the reason, the very fact I cannot freely derive from anything I want and have it Just Work™ is crippling me.

    @Kian said:

    The question is, can you lock down the code your library calls in languages such as Java and C#? Isn't the final keyword intended to do just that?

    Yes, but instead of defaulting to fewer possibilities, they default to more possibilities. It's still bad, but not as bad as C++.

    @Kian said:

    So your problem is with other programmers making choices

    Why, yes. The library writer should have as few options as possible to forbid me from writing whatever the fuck I want.



  • fmap has the type

    fmap :: (Functor fun) => (a -> b) -> (fun a -> fun b)
    

    So, consider that

    data Maybe a = Just a | Nothing
    

    and

    instance Functor Maybe where
      fmap f (Just a) = Just (f a)
      fmap _ Nothing  = Nothing
    

    In this instance definition, Maybe takes the place of the fun type variable above.

    fmap applies the function f that has type a -> b to the value "inside" a Maybe a, to produce a Maybe b. fmap is more general than this, and works for lists and other things (some of which are kind of mind-bendy) in the same way. Functor is an interface for mapping, and fmap is its method.

    There's even a vaguely 'linguistic' reading: "If you have just an a, just apply f to a"

    @Kian:
    I keep meaning to try Haskell out, but then I see the syntax and put it off 'till later.

    @Gaska:
    Me too.

    Do it. http://learnyouahaskell.com/



  • @asdf said:

    Not a problem if you use smart pointers.

    How? Slicing is still a huge issue. One copy assignment later and BAM, so much for your custom data members. (Though I still don't understand why you would ever want to derive from any string class ever anyway)
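    A minimal illustration of the slicing problem (TaggedString is a made-up example of the derive-from-string pattern being warned against):

    ```cpp
    #include <cassert>
    #include <string>

    // Deriving from std::string to add a data member.
    struct TaggedString : std::string {
        int tag = 7;
    };

    // A function taking the base by value slices away the derived part.
    std::string take_by_value(std::string s) { return s; }

    int main() {
        TaggedString t;
        t.assign("hello");

        std::string sliced = take_by_value(t);  // TaggedString -> std::string copy
        assert(sliced == "hello");              // the string contents survive...
        // ...but `tag` was silently discarded: `sliced` is a plain
        // std::string with no such member at all. Smart pointers don't
        // help here, because the copy happens by value, not through a pointer.
    }
    ```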

