WTF Bites


  • Fake News

    @boomzilla said in WTF Bites:

    @HardwareGeek said in WTF Bites:

    @Gern_Blaanston I still don't know where drummers. Who does?

    There drummers.

    https://www.youtube.com/watch?v=3Smf24hNVjM


  • Discourse touched me in a no-no place

    @Zerosquare said in WTF Bites:

    Here's a real-world example: let's say you want to store the value returned by some sensor measuring a physical quantity.

    You could encode the result like this:
    123.45: quantity measured is 123.45
    +∞: quantity is higher than highest measurable value
    -∞: quantity is lower than lowest measurable value
    NaN: the sensor is disconnected or returning an error condition
    null: the sensor does not exist in the current configuration

    Another example is storing state for debugging purposes. A ∞ / NaN may be exactly what you're looking for.

    I suppose we'll just have to use "Infinity" and "NaN" for them then. Stringly typed values to the rescue! (Bwahahahaha!!!)

    Infinity is a well-defined value, but NaN is an awful thing, indicating that something has gone properly wrong but the math engine has been configured to not throw exceptions. The simplest way of getting one is 0.0 / 0.0, which has no single sane meaning at all. Also, there are multiple bit patterns for NaN; most IEEE math implementations ignore that... but not all. Thanks, HP!
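
    To see the whole zoo from a console (a sketch; the exact NaN bit pattern you get back is engine-dependent):

    console.log(0 / 0);               // NaN: no single sane meaning
    console.log(Infinity - Infinity); // NaN again
    console.log(1 / 0);               // Infinity: well-defined
    // any double with all exponent bits set and a non-zero mantissa is a NaN:
    const view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, 0 / 0);
    console.log(view.getBigUint64(0).toString(16)); // typically 7ff8000000000000
    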


  • Trolleybus Mechanic

    @dkf said in WTF Bites:

    NaN is an awful thing

    I don't think so. It is very much like SQL NULL, which simply means "no data, we don't know the value". Both are not equal to anything, and propagate through expressions and functions. I don't want a simple function calculating an expression to throw, just like I don't want a SUM() query to throw, because one row is null.

    What's awful is the null pointer as used in C or Java, because it breaks strict typing. NaN doesn't.
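
    A quick sketch of that propagation, NULL-style:

    const reading = NaN;                 // "no data"
    console.log(reading * 2 + 1);        // NaN: poisons the whole expression
    console.log(reading === reading);    // false: equal to nothing, itself included
    console.log([1, 2, reading].includes(NaN)); // true: includes() is one of the few NaN-aware operations
    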



  • @dkf said in WTF Bites:

    Also, there are multiple bit patterns for NaN;

    Filed under: "nan tagging"
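
    For the uninitiated: dynamic-language runtimes deliberately exploit those spare bit patterns to smuggle type tags and payloads inside doubles. A toy version (sketch only; engines are allowed to canonicalize NaN payloads, so don't count on the round trip surviving a real engine):

    const buf = new DataView(new ArrayBuffer(8));
    // the quiet-NaN prefix 0x7ff8... leaves ~51 mantissa bits free for a payload
    function box(payload: number): number {
      buf.setBigUint64(0, 0x7ff8_0000_0000_0000n | BigInt(payload >>> 0));
      return buf.getFloat64(0);
    }
    function unbox(d: number): number {
      buf.setFloat64(0, d);
      return Number(buf.getBigUint64(0) & 0xffff_ffffn);
    }
    console.log(Number.isNaN(box(42))); // true: it's "just" a NaN
    console.log(unbox(box(42)));        // 42, on engines that preserve the payload
    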


  • 🚽 Regular

    @boomzilla said in WTF Bites:

    There drummers.

    :rimshot:


  • BINNED

    @sebastian-galczynski

    @sebastian-galczynski said in WTF Bites:

    @dkf said in WTF Bites:

    NaN is an awful thing

    I don't think so. It is very much like SQL NULL, which simply means "no data, we don't know the value". Both are not equal to anything, and propagate through expressions and functions. I don't want a simple function calculating an expression to throw, just like I don't want a SUM() query to throw, because one row is null.

    What's awful is the null pointer as used in C or Java, because it breaks strict typing. NaN doesn't.

    NaN breaks all kinds of things, like sorting.


  • Trolleybus Mechanic

    @topspin said in WTF Bites:

    NaN breaks all kinds of things, like sorting.

    Not really. Depending on what compare function you use, it either goes to the top or to the bottom. Of course, at some point you usually have to discard the NaNs, but the point is that you don't need to handle runtime errors while performing calculations. And in a world without NaN and Inf every operation (not just division) would possibly cause a runtime error.

    More advanced languages (which avoid runtime errors) handle this by providing optional types. The problem is that with floating point, every arithmetic operation would have to return this optional type unless you write conditional logic, so you essentially get back to a NaN. The people at IEEE aren't that stupid.
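
    E.g. a comparator that just picks an end for them (sketch):

    const nanLast = (a: number, b: number): number => {
      if (Number.isNaN(a)) return Number.isNaN(b) ? 0 : 1; // NaN sorts after everything
      if (Number.isNaN(b)) return -1;
      return a - b; // safe now: both operands are real numbers
    };
    console.log([3, NaN, 1, Infinity, -Infinity].sort(nanLast));
    // [ -Infinity, 1, 3, Infinity, NaN ]
    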


  • BINNED

    @sebastian-galczynski yes really. The default comparison function is not reflexive. In JavaScript it’s apparently special-cased if you use the default sort, but if you try to use your own predicate you need to handle it. If you use, for example, C’s qsort or C++’s std::sort with <, you’ll get UB and a potential exploit.
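
    To make the JS part concrete (the C/C++ version of this same comparator is the UB one):

    // a - b is NaN whenever a NaN is involved, and the spec coerces a NaN
    // comparator result to +0 ("equal"), so the sort terminates but the
    // result is inconsistently ordered; exact output is engine-dependent:
    console.log([3, NaN, 1, 2].sort((a, b) => a - b));
    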


  • Discourse touched me in a no-no place

    @sebastian-galczynski said in WTF Bites:

    And in a world without NaN and Inf every operation (not just division) would possibly cause a runtime error.

    Apart from the opinions of a few weird math holdouts (of which Crockford appears to be one), infinity is considered a reasonable math value, especially in IEEE math. But NaN is there only to indicate an error. Languages that throw an exception when they encounter a NaN are behaving reasonably. (Yes, this means that division is a throwing op for some.) We have NaN because we can't always throw exceptions. It poisons the calculations it is involved in. The presence of NaN in the type system means that a failing logic path is marked as having failed.

    A null value is something else, meaning an absence of data (without stating why it is absent). Abusing NaN to mean a null is all sorts of wrong. It's also depressingly common.


  • Trolleybus Mechanic

    @topspin said in WTF Bites:

    The default comparison function is not reflexive.

    Well that sucks, because the documentation explicitly says what the requirements for the comparison function are. Why doesn't the default match them?

    @topspin said in WTF Bites:

    In JavaScript it’s apparently special cased if you use the default sort

    In JavaScript the default sort is lexicographic, which will not sort numbers correctly, but that's another story.
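
    The classic demonstration:

    console.log([10, 9, 1].sort());                // [ 1, 10, 9 ]: compared as strings
    console.log([10, 9, 1].sort((a, b) => a - b)); // [ 1, 9, 10 ]
    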

    What do you propose instead of NaN? Throwing runtime errors?


  • Banned

    And then there's Rust, which simply refuses to let you sort floats unless you provide a custom total ordering. (Floats implement PartialOrd but not Ord. Comparison operators are tied to the former, while sorting is tied to the latter.)


  • Trolleybus Mechanic

    @dkf said in WTF Bites:

    We have NaN because we can't always throw exceptions.

    We shouldn't throw exceptions at all. We should write in languages in which incorrect programs can't be compiled.


  • Banned

    @sebastian-galczynski said in WTF Bites:

    @dkf said in WTF Bites:

    We have NaN because we can't always throw exceptions.

    We shouldn't throw exceptions at all. We should write in languages in which incorrect programs can't be compiled.

    For an extra laugh, I'm pretending your post is compliant with RFC 2119.


  • BINNED

    @sebastian-galczynski said in WTF Bites:

    @topspin said in WTF Bites:

    The default comparison function is not reflexive.

    Well that sucks, because the documentation explicitly says what the requirements for the comparison function are. Why doesn't the default match them?

    Because NaN != NaN, whereas an order relationship requires reflexivity, i.e. x <= x for all x.
    Why is NaN defined not to compare equal to itself? Not quite sure, but if it did there’d probably be just as many error scenarios where that’s the wrong thing to do, too.

    @topspin said in WTF Bites:

    In JavaScript it’s apparently special cased if you use the default sort

    In JavaScript the default sort is lexicographic, which will not sort numbers correctly, but that's another story.

    Really? Ugh, that’s even more annoying. I looked it up w.r.t. NaN and came to a different conclusion, but I’m not a JS guy.

    What do you propose instead of NaN? Throwing runtime errors?

    Not sure. I’m not saying to get rid of the concept, it’s useful. You just have to be aware of the potential pitfalls when working with it.

    ETA: I sloppily mixed equivalence and (non-strict) order relationships w.r.t. reflexivity. Same problems though, and I’m typing on a phone, so whatever.


  • Trolleybus Mechanic

    @Gustav said in WTF Bites:

    For an extra laugh, I'm pretending your post is compliant with RFC 2119.

    RFC:

    there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood

    I wholeheartedly agree.


  • Banned

    @sebastian-galczynski I don't. The IT world is overflowing with old nerds full of themselves who know what they're doing (and I mean actually know), "considered" the options and decided to write something that works perfectly fine and meets all requirements, but is a nightmare to maintain if you're not the original author.

    We shall always do the right thing out of principle rather than only when it doesn't inconvenience us too much.


  • Trolleybus Mechanic

    @Gustav said in WTF Bites:

    We shall always do the right thing out of principle rather than only when it doesn't inconvenience us too much.

    So you're for "must" in place of "should"? I'm ok with that, but I simply can't do this in this environment. And I think at this point that's not due to old nerds (was Dijkstra an old nerd when he warned us?), but due to the 20-something soydev webshits piling layers upon layers of almost-correct stuff.


  • Banned

    @sebastian-galczynski except those webshits only follow what the tutorial told them to do. And the tutorial was written to match the philosophy of the web framework it uses. And the framework was designed mostly by old nerds, who are very fluent at cutting corners when it's not an immediate concern. Think of the evolution of React. There was a moment where they moved away from stateful classes toward "functional components" which are in and of themselves stateless, with all the state living elsewhere, in the state store. It was good design, but a little annoying to work with because suddenly you had to think much more about managing state. So they came up with the useState function, which is now used everywhere, bringing back all the problems of stateful classes, and then some, because it's even more unstructured, meaning it's even harder to know where all that state is coming from. To those old nerds, it's not a problem because they can keep their entire code in their head - but it's a problem for everyone who later inherits that code.
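
    For reference, the hook in question, as a minimal sketch (assuming React 16.8+):

    import { useState } from "react";

    // state no longer lives in a class or a store; it hides in a
    // per-component slot that useState hands back on every render
    function Counter() {
      const [count, setCount] = useState(0);
      return <button onClick={() => setCount(count + 1)}>{count}</button>;
    }
    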

    Ultimately, it's a culture problem. Our industry doesn't have the culture of doing the right thing. This is what makes Rust unique, but also what makes it hated by everyone. Because it slows down development, because it makes things harder than they need to be, because suddenly you need to care about things you'd really rather not care about, because some of the popular design patterns become impossible due to all the extra safety requirements. Most developers care about those things more than they care about handling all the thousand edge cases that will likely never happen.


  • Considered Harmful

    @dkf said in WTF Bites:

    @Zerosquare said in WTF Bites:

    Here's a real-world example: let's say you want to store the value returned by some sensor measuring a physical quantity.

    You could encode the result like this:
    123.45: quantity measured is 123.45
    +∞: quantity is higher than highest measurable value
    -∞: quantity is lower than lowest measurable value
    NaN: the sensor is disconnected or returning an error condition
    null: the sensor does not exist in the current configuration

    Another example is storing state for debugging purposes. A ∞ / NaN may be exactly what you're looking for.

    I suppose we'll just have to use "Infinity" and "NaN" for them then. Stringly typed values to the rescue! (Bwahahahaha!!!)

    Infinity is a well-defined value, but NaN is an awful thing, indicating that something has gone properly wrong but the math engine has been configured to not throw exceptions. The simplest way of getting one is 0.0 / 0.0, which has no single sane meaning at all. Also, there are multiple bit patterns for NaN; most IEEE math implementations ignore that... but not all. Thanks, HP!

    Hey, no agreeing with Crockford.


  • Considered Harmful

    @Gustav said in WTF Bites:

    Most developers care about those things more than they care about handling all the thousand edge cases that will likely never happen.

    As they had better damn'd well be able to. I hope you don't believe this is a bad thing.


  • Trolleybus Mechanic

    @Gustav said in WTF Bites:

    @sebastian-galczynski except those webshits only follow what the tutorial told them to do.

    They can't, because the tutorial only describes how to make a todo list. When you need something more complex, like maybe, I don't know, TWO tables with a foreign key, you're on your own.

    And the tutorial was written to match the philosophy of the web framework it uses. And the framework was designed mostly by old nerds, who are very fluent at cutting corners when it's not an immediate concern.

    Maybe when it comes to React and Redux specifically it's true, but the wider JS ecosystem is mostly composed of frameworks and libraries which are also written by webshits who don't know any other ecosystem. There are no adults in the room anymore. Last year I posted about the ORM libraries in the node ecosystem which all have obvious bugs, can't handle transactions, etc., about npm not locking transitive dependencies for like 10 years, and so on. It's all like this.

    Think of the evolution of React. There was a moment where they moved away from stateful classes toward "functional components" which are in and of themselves stateless, with all the state living elsewhere, in the state store. It was good design, but a little annoying to work with because suddenly you had to think much more about managing state. So they came up with the useState function, which is now used everywhere, bringing back all the problems of stateful classes, and then some, because it's even more unstructured, meaning it's even harder to know where all that state is coming from. To those old nerds, it's not a problem because they can keep their entire code in their head - but it's a problem for everyone who later inherits that code.

    I think the whole web stack is unsalvageable. What you call 'state' is not really the whole state anyway - the inputs have their own state, which is not represented in the DOM, and when you try to bind it to the JS state (so-called controlled inputs) with keyDown events it causes race conditions - you can literally corrupt data by typing too fast.
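
    For the spectators, a "controlled" input in its minimal form: the DOM input is forced to mirror React state on every keystroke (sketch):

    import { useState } from "react";

    function NameField() {
      const [name, setName] = useState("");
      // every keystroke round-trips through setState and a re-render
      return <input value={name} onChange={e => setName(e.target.value)} />;
    }
    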

    Ultimately, it's a culture problem. Our industry doesn't have the culture of doing the right thing. This is what makes Rust unique, but also what makes it hated by everyone. Because it slows down development,

    I don't think these layers of crap we deal with make development any faster. At some point they did, but for the last 8 years or so they've made it even slower than using very rudimentary tools, because the cognitive load of dealing with them is overwhelming. We already reached Tainter's point of zero return on increased complexity, and we're going down the curve.

    What I see is that 10y ago I could crank out a CRUD website with server-side rendering, some raw JS and some backend framework like Django or Symfony. Nowadays they've brought in all this crap with GraphQL, Nest, Next, Apollo, what have you, and none of it plays well with the rest at all.
    The effect is that making a single form takes two guys two weeks, and after they've "finished" it deletes data when you click "Save" too fast (before the data was loaded), or does some other weird unpredictable stuff. It doesn't matter that you have a purely functional React component when the whole stack is composed of buggy, barely documented shit.


  • Banned

    @sebastian-galczynski said in WTF Bites:

    I don't think these layers of crap we deal with make development any faster.

    They do in the short run. By a whole lot. So it's an easy sell. Doing things right saves time in the long run. But you can't show the long run without doing the short run first, and in the short run it's quite a bit slower. People see the short run, become very unimpressed and abandon the idea. Same with thorough unit testing. People never see the benefits of fast, easy refactoring because they never get to the point where fast, easy refactoring is possible. They ditch the effort long before they get near-100% coverage because the costs are big and the intermediate payoffs minuscule. And everyone who tries to explain it gets branded a zealot due to obvious analogies to religious preachers promising paradise at the end of a difficult road.


  • Notification Spam Recipient

    @sebastian-galczynski said in WTF Bites:

    @dkf said in WTF Bites:

    We have NaN because we can't always throw exceptions.

    We shouldn't throw exceptions at all. We should write in languages in which incorrect programs can't be compiled.

    🤔 So how does Golang handle this?


  • BINNED

    @Tsaukpaetra golang doesn’t care for correctness. 🐠


  • Trolleybus Mechanic

    @Gustav said in WTF Bites:

    They do in the short run. By a whole lot.

    I'm not sure I have ever seen this work with an SPA-type frontend. In the project I'm talking about the "short run" simply never happened - they got bogged down before the client even saw any results. And in one previous job around 2017, where there was also an SPA frontend, it was pretty much the same - a guy with a macbook full of stickers and a leetcode account produced maybe one form per week, with no validation.


  • Discourse touched me in a no-no place

    @sebastian-galczynski said in WTF Bites:

    @dkf said in WTF Bites:

    We have NaN because we can't always throw exceptions.

    We shouldn't throw exceptions at all.

    I think I disagree. (Doesn't mean that the C++ implementation of them is right, though; the history of exceptions in that language is a sorry one.) In abstract language semantics, they're much more like how errors are handled as types in Rust, except that you don't have to do explicit unpacking and repacking all over the place. It might not seem so worth it... except when you start to have many types of failure that need different responses.

    The reality is we have many operations that can fail, and that can do so in many ways. There are ways that make sense to handle that, and then there are stupid ways of doing it (I'm looking at you, C, sitting in the corner there and chewing on the third box of crayons today).
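
    A rough TS analogue of the errors-as-types style, with the explicit unpacking I mean (names made up):

    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

    function parsePort(s: string): Result<number, string> {
      const n = Number(s);
      if (!Number.isInteger(n) || n < 0 || n > 65535) {
        return { ok: false, error: `not a port: ${s}` };
      }
      return { ok: true, value: n };
    }

    // every caller unpacks (and usually repacks) by hand; exceptions do
    // this plumbing for you
    const r = parsePort("8080");
    if (!r.ok) throw new Error(r.error);
    console.log(r.value); // 8080
    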


  • Considered Harmful

    @sebastian-galczynski said in WTF Bites:

    @topspin The issue is that NaN is a valid value of type double, as defined by IEEE 754.

    That's stretching the meaning of "valid" quite a bit. Also, if you just dumped all NaNs as NaN (as opposed, say, to some escape character followed by the non-fixed bits of the representation), it wouldn't even round-trip correctly. Then again, doing the bit pattern thing would give you the possibility to remotely crash any program that didn't explicitly handle NaNs in deserialized numeric data.
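
    The stock serializer doesn't even try, by the way: every non-finite value gets flattened to null, which throws away exactly the distinctions the sensor example wanted:

    console.log(JSON.stringify({ x: NaN, y: Infinity, z: -Infinity }));
    // {"x":null,"y":null,"z":null}
    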


  • Trolleybus Mechanic

    @dkf said in WTF Bites:

    The reality is we have many operations that can fail, and that can do so in many ways.

    There are two different broad types of failure - one is related to IO (someone entered wrong data, a server you tried to query died, file not found, no space on disk, etc.), the other is simply errors in your code. This roughly corresponds to checked vs unchecked exceptions in Java.
    I can agree that the former are useful, but the unchecked exceptions (like NullPointerException) are simply the result of your code being wrong. The language should instead enforce correctness in such cases.



  • @sebastian-galczynski said in WTF Bites:

    @dkf said in WTF Bites:

    The reality is we have many operations that can fail, and that can do so in many ways.

    There are two different broad types of failure - one is related to IO (someone entered wrong data, a server you tried to query died, file not found, no space on disk, etc.), the other is simply errors in your code. This roughly corresponds to checked vs unchecked exceptions in Java.
    I can agree that the former are useful, but the unchecked exceptions (like NullPointerException) are simply the result of your code being wrong. The language should instead enforce correctness in such cases.

    For example, introduce a Nonzero type and only define division when the denominator is Nonzero. Any potential zero assignment to a Nonzero (including casts from types that do allow zero) would need compile-time checks to confirm that the assigned value could never, in fact, be zero. Similar type constraints for other domain-constrained operations (e.g. arcsin).
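
    TypeScript can't prove an arbitrary expression non-zero at compile time, but a branded type gets you the API shape: the check happens once, at the boundary, and division simply refuses plain numbers. A sketch (NonZero, toNonZero and safeDiv are made-up names):

    type NonZero = number & { readonly __brand: "NonZero" };

    function toNonZero(n: number): NonZero | null {
      return n !== 0 ? (n as NonZero) : null; // the only place a runtime check happens
    }

    function safeDiv(numerator: number, denominator: NonZero): number {
      return numerator / denominator; // a plain, possibly-zero number can't reach this point
    }

    const d = toNonZero(2);
    if (d !== null) console.log(safeDiv(10, d)); // 5
    // safeDiv(10, 0); // compile error: number is not assignable to NonZero
    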


  • Considered Harmful

    @dkf said in WTF Bites:

    I'm looking at you, C, sitting in the corner there and chewing on the third box of crayons today

    It's your fault for not being able to deal with gotos.


  • Considered Harmful

    @Watson said in WTF Bites:

    @sebastian-galczynski said in WTF Bites:

    @dkf said in WTF Bites:

    The reality is we have many operations that can fail, and that can do so in many ways.

    There are two different broad types of failure - one is related to IO (someone entered wrong data, a server you tried to query died, file not found, no space on disk, etc.), the other is simply errors in your code. This roughly corresponds to checked vs unchecked exceptions in Java.
    I can agree that the former are useful, but the unchecked exceptions (like NullPointerException) are simply the result of your code being wrong. The language should instead enforce correctness in such cases.

    For example, introduce a Nonzero type and only define division when the denominator is Nonzero. Any potential zero assignment to a Nonzero (including casts from types that do allow zero) would need compile-time checks to confirm that the assigned value could never, in fact, be zero. Similar type constraints for other domain-constrained operations (e.g. arcsin).

    This could cause... issues for seminumerical algorithms. As in, "you can't use that because we can't prove it" sorts of issues, when you're doing perfectly cromulent unprovable cheating.



  • @sebastian-galczynski said in WTF Bites:

    @cvi said in WTF Bites:

    FWIW, my limited experience with JSON is that it's needlessly pedantic and inflexible about shit that doesn't matter, while still being hell-bent on being a pain to parse and work with.

    The lack of comments and needless autism about commas are a pain,

    For a serialization format, those are actually sensible.

    For a definition format that is, at least sometimes, written by hand, use something more convenient for that role—toml¹, yaml², jsonnet³, hcl⁴, bicep⁵ … (the first two should cover generic needs; from some point on, each configuration has different needs, leading to different syntaxes)

    as is the lack of integers. While Crockford envisioned JSON outliving IEEE specs, it already crumbles due to improvements in JavaScript:

    Uncaught TypeError: Do not know how to serialize a BigInt
        at JSON.stringify (<anonymous>)
    

    This is a bug in the serializer (and likely the deserializer too). JSON does not put any limit on numbers, so bigint should be serialized as a number, and any number that has no fractional part and an integral part larger than can be represented exactly in a double (2⁵³ ≈ 9×10¹⁵) should be deserialized as a BigInt. It would lose type information, but since BigInts interoperate with numbers, that hardly matters (side note: python 3 uses similar logic when reading numeric literals).
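
    A sketch of that serializer rule (the sentinel prefix is made up, and assumes no real string in the data collides with it):

    const BIG = "§bigint:"; // made-up private marker
    function stringifyWithBigInts(value: unknown): string {
      const json = JSON.stringify(value, (_k, v) =>
        typeof v === "bigint" ? BIG + v.toString() : v);
      // turn the marked strings back into bare JSON number tokens
      return json.replace(new RegExp(`"${BIG}(-?\\d+)"`, "g"), "$1");
    }
    console.log(stringifyWithBigInts({ n: 123456789012345678901234567890n }));
    // {"n":123456789012345678901234567890}

    Going the other way is harder: a plain JSON.parse reviver only ever sees the already-rounded double, so you'd need the raw source text (the TC39 "JSON.parse source text access" proposal exposes it through the reviver's third argument, if your engine has it).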

    @topspin said in WTF Bites:

    Well, if you're on one of those old, crazy machines that don't support IEEE, what else are you gonna do? Otherwise, it's what you'd expect.

    Do such things even exist? That is, machines that have generic floating-point arithmetic but don't follow IEEE 754? My impression is that everybody who bothered with generic floating point converged on IEEE 754 pretty quickly, so they don't.


    ¹ A fully defined generalization of ini format to nested structures.
    ² Unfortunately too flexible for its own good.
    ³ Venturing into the land of factoring out common parts.
    ⁴ Domain-specific language for defining objects that cross-reference each other.
    ⁵ Similar thing for a smaller domain. It notably replaces JSON-extended-with-expressions (written as specifically formatted strings)


  • Trolleybus Mechanic

    @Bulb said in WTF Bites:

    This is a bug in the serializer (and likely the deserializer too). JSON does not put any limit on numbers, so bigint should be serialized as a number, and any number that has no fractional part and an integral part larger than can be represented exactly in a double (2⁵³ ≈ 9×10¹⁵) should be deserialized as a BigInt

    That would make the serialization-deserialization process change not only values, but also types, if you happen to have a BigInt(10) somewhere.

    @Watson said in WTF Bites:

    For example, introduce a Nonzero type and only define division when the denominator is Nonzero. Any potential zero assignment to a Nonzero (including casts from types that do allow zero) would need compile-time checks to confirm that the assigned value could never, in fact, be zero. Similar type constraints for other domain-constrained operations (e.g. arcsin).

    This would work. What about doubly-parametrized mutable Arrays? Is there any language which does this correctly, or are they all covariant (causes runtime errors) or invariant (goddamn useless)? I would like to be able to declare Array<Foo, never> - which means you can get Foo and all its subtypes, but you can't put anything. Of course, Array<Foo, never> would be a supertype of Array<Foo, Foo>. That would solve problems with collection subtyping and unwanted mutability at once. Has anybody thought of this?
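
    TypeScript's built-in Array isn't split this way, but its variance annotations (4.7+) can at least express the shape. A sketch with a made-up Sequence type:

    interface Sequence<out G, in P> {
      get(index: number): G;
      push(value: P): void;
    }

    class Foo {}
    class SubFoo extends Foo {}

    // a plain array fulfills both roles
    function seq<T>(items: T[]): Sequence<T, T> {
      return { get: i => items[i], push: x => { items.push(x); } };
    }

    const subs: Sequence<SubFoo, SubFoo> = seq([new SubFoo()]);
    const readOnly: Sequence<Foo, never> = subs; // ok: G is covariant, P contravariant
    console.log(readOnly.get(0) instanceof Foo); // true
    // readOnly.push(new Foo()); // compile error: nothing is assignable to never
    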



  • @Bulb said in WTF Bites:

    Do such things even exist? That is, machines that have generic floating-point arithmetic but don't follow IEEE 754?

    IIRC, NVIDIA GPUs weren't fully IEEE 754 until fairly late ... 2010 maybe? I don't remember about others off the top of my head, but having weird rounding modes, ignoring denorms and similar stuff used to be fairly common.

    NaN too. It's been a while, so I don't remember the details, but things did behave differently on different archs/vendors.


  • Java Dev

    @cvi In a GPU, I don't know if they implement 64-bit modes. I believe rendering traditionally uses 16-bit floating-point formats, and those aren't in IEEE 754. Additionally, there are probably floating-point units out there which do not support denormalised numbers.



  • @sebastian-galczynski said in WTF Bites:

    @Bulb said in WTF Bites:

    This is a bug in the serializer (and likely the deserializer too). JSON does not put any limit on numbers, so bigint should be serialized as a number, and any number that has no fractional part and an integral part larger than can be represented exactly in a double (2⁵³ ≈ 9×10¹⁵) should be deserialized as a BigInt

    That would make the serialization-deserialization process change not only values, but also types, if you happen to have a BigInt(10) somewhere.

    The value will not change, only the type. BigInt(10) is still equal to 10. It's a different type, but that's a consequence of both the language and the serialization format being untyped, and is no different from how a Date will be parsed back as a string. In fact, it is still better, because at least the fact that it's intended to be a number is kept.

    Every untyped, non-language-specific serialization format ends up having such cases. And JSON is untyped—there is no way to explicitly specify a type, of anything—and not meant to be language-specific—it is intended for sending data between the browser app, until recently written in JavaScript by necessity, and the server side, which is typically something else.

    If you care about exact types, YAML supports tags, so you can have something like !js!BigInt 10. I don't think I've seen any YAML serializer or parser use these tags, though.


  • Trolleybus Mechanic

    @Bulb said in WTF Bites:

    The value will not change, only the type. BigInt(10) is still equal to 10.

    Only in the soft sense. Integers and floats, or numbers of different sizes, are not the same values; languages like C simply implicitly cast them.

    > BigInt(10) === 10
    false
    

    but that's a consequence of both the language and the serialization format being untyped

    JSON is not untyped. It clearly distinguishes between numbers, strings, boolean and null, as well as objects and arrays. If it was untyped, then there would be no difference between "10" and 10, since those two are also "equal". It's only the number type that's underspecified, and that's just a bug, which is why everyone assumes it's just a double. A sane format would accept literals like 1000UL with clearly defined meanings.



  • @PleegWat said in WTF Bites:

    In a GPU, I don't know if they implement 64-bit modes.

    Some do, nowadays. Again, I think that started to appear around the same time as the other IEEE 754 stuff in NVIDIA GPUs. In consumer models (GTX, RTX), 64-bit perf is traditionally nerfed quite badly. Professional models typically had "full" 64-bit performance -- I haven't kept track of it on the current models (who needs double precision?!), but from what I remember, 1:2 or even 1:8 single to double "overhead" was considered very good.

    I believe rendering traditionally uses 16-bit floating-point formats,

    Less than you'd think. 16-bit floats are useful for storing data, but most computations are done in full 32-bit floats (the hardware registers that you have in shaders are all 32-bits and most ops are for 32-bits, with a few exceptions for "packed" data).

    Even for data, half floats are often a bit of a suboptimal choice. Three-channel data doesn't pack nicely with 16-bits/channel, so something like RGB10A2 is tempting (HDR10 formats use that; you can also pack a quaternion into that, for animations or TBN frames). Also, nowadays, going for a custom fixed-point (or whatever) format for input data is probably tempting (if you can). You're doing most of the processing in shaders anyway, so manually decoding the data isn't as big a deal.

    See e.g., Unity's G-buffer format for an example. (They do use half floats for storage in HDR mode, though.)

    those aren't in IEEE 754

    According to Wikipedia, they are. The actual IEEE document is behind a paywall, and I'm too :kneeling_warthog: ATM to deal with that.



  • @cvi said in WTF Bites:

    my limited experience with JSON is that it's needlessly pedantic and inflexible about shit that doesn't matter

    Please invite that Mr/Mrs/whatever JSON to participate in this forum. They will fit in perfectly here!



  • @Gustav said in WTF Bites:

    God fucking dammit English Wikipedia got updated to that awful French layout. It looks absolutely horrendous in 1440p.

    Out of all the recent website/app redesigns, I don't mind this one. The article body stayed pretty much exactly the same as before, and the table of contents got moved to the side, where it's way more useful than sitting on top of the article.


  • Discourse touched me in a no-no place

    @sebastian-galczynski said in WTF Bites:

    This would work. What about doubly-parametrized mutable Arrays? Is there any language which does this correctly, or are they all covariant (causes runtime errors) or invariant (goddamn useless)? I would like to be able to declare Array<Foo, never> - which means you can get Foo and all its subtypes, but you can't put anything. Of course, Array<Foo, never> would be a supertype of Array<Foo, Foo>. That would solve problems with collection subtyping and unwanted mutability at once. Has anybody thought of this?

    You're into the covariance/contravariance mess. That's a pretty deep rabbit hole, as it demonstrates that higher-kinded types generate not-always-obvious rules of their own.


  • Considered Harmful

    @Bulb said in WTF Bites:

    I don't think I've seen any YAML serializer or parser use these tags, though.

    But at least it's supported.



  • @sebastian-galczynski said in WTF Bites:

    @Bulb said in WTF Bites:

    The value will not change, only the type. BigInt(10) is still equal to 10.

    Only in the soft sense. Integers and floats, or numbers of different sizes, are not the same values; languages like C simply implicitly cast them.

    They are the same value, just like 10 and 10.0 are the same value. They are different representations of that value, but they are the same value.

    > BigInt(10) === 10
    false
    

    That's comparing representations, not values.

    > BigInt(10) == 10
    true
    

    but that's a consequence of both the language and the serialization format being untyped

    JSON is not untyped. It clearly distinguishes between numbers, strings, boolean and null, as well as objects and arrays.

    True, it is not completely untyped. But its type system is, intentionally, different from that of JavaScript. So you have to map them.

    If it was untyped, then there would be no difference between "10" and 10, since those two are also "equal". It's only the number type that's underspecified, and that's just a bug, which is why everyone assumes it's just a double.

    It's not a bug. It's a property that is appropriate for some use-cases and inappropriate for others. Say the server side serializes the number from a decimal (C# has such a type; incidentally, it matches the designer's example). Well, the client side, being JavaScript, does not have that type. So it should pick whatever type it can represent the value in most faithfully. If it can't fit it in a double, but can fit it in a BigInt, that's what it should do.

    Of course then there are the cases where you want to serialize the structure and want it to match exactly upon deserialization, including representation. Well, then you can't use JSON. JSON never promised to support it. You can use YAML, which, with carefully crafted serialization settings, supports it.

    A sane format would accept literals like 1000UL with clearly defined meanings.

    The meaning would, necessarily, be language-specific, because some languages do have ‘unsigned long’ and others don't. It's different use-cases, and they require different serialization formats. One size does not fit all, there are no silver bullets, etc. (and JSON is very tarnished).



  • @Gustav said in WTF Bites:

    God fucking dammit English Wikipedia got updated to that awful French layout. It looks absolutely horrendous in 1440p.

    If you're logged in you can pick a different skin.

    I'm not a fan of the "all white with nothing separating parts of the page from each other and the text limited to part of the window" type of design, so I went back to the previous look. Also not a fan of floating bars that cover up what I'm trying to read.

    I guess there's also a right-side sidebar coming. Yay.


  • Banned

    @Parody said in WTF Bites:

    If you're logged in

    🖕


  • Trolleybus Mechanic

    @dkf said in WTF Bites:

    @sebastian-galczynski said in WTF Bites:

    This would work. What about doubly-parametrized mutable Arrays? Is there any language which does this correctly, or are they all covariant (causes runtime errors) or invariant (goddamn useless)? I would like to be able to declare Array<Foo, never> - which means you can get Foo and all its subtypes, but you can't put anything. Of course, Array<Foo, never> would be a supertype of Array<Foo, Foo>. That would solve problems with collection subtyping and unwanted mutability at once. Has anybody thought of this?

    You're into the covariance/contravariance mess. That's a pretty deep rabbit hole, as it demonstrates that higher-kinded types generate not-always-obvious rules of their own.

    I know. I'm just curious whether this solution would work, and if not, why. It seems obvious, but no major language (including Typescript) uses it.


  • Considered Harmful

    @Bulb said in WTF Bites:

    If you care about exact types, YAML supports tags, so you can have something like !js!BigInt 10. I don't think I've seen any YAML serializer or parser use these tags, though.

    Both Perl's YAML and YAML::XS can use them to serialize objects:

    perl -CS -MYAML -MDateTime -E'say Dump(DateTime->now)' | \
    perl -CS -MYAML -MDateTime -E'$YAML::LoadBlessed=1; say Load(do{local$/;<>})->day;'
    23
    

    Not particularly portable, but I'd be surprised if JS parsers didn't support this at all.



  • @LaoC said in WTF Bites:

    Not particularly portable

    The purpose is being able to serialize a data structure and restore it exactly, which can only be done within one language anyway, so it does not need to be particularly portable.

    For sending data between different applications, possibly written in different languages, you need to know what data is expected. That is, you should have some kind of schema. And there you can define which members should be deserialized as which type.


  • Discourse touched me in a no-no place

    @sebastian-galczynski said in WTF Bites:

    @dkf said in WTF Bites:

    You're into the covariance/contravariance mess. That's a pretty deep rabbit hole, as it demonstrates that higher-kinded types generate not-always-obvious rules of their own.

    I know. I'm just curious whether this solution would work, and if not, why. It seems obvious, but no major language (including Typescript) uses it.

    I think nobody uses it because it is relatively well known that it doesn't work. The problem is that you tend to have both sorts of type relation in the same type (because you want to both put values into that array and get them out); Java does do what you're asking for with its arrays and that is widely regarded as a mistake (i.e., you get runtime exceptions for type reasons in some cases despite the type system saying everything is fine; I don't know if that also applies to C# or if that is limited in other ways). That's why generics in those languages allow you express covariance and contravariance, and why generic type parameters have to be matched closely.
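
    TypeScript, for what it's worth, has the same covariance hole as Java arrays, minus even the runtime check. A sketch:

    class Animal { name = "generic"; }
    class Dog extends Animal { bark() { return "woof"; } }

    const dogs: Dog[] = [new Dog()];
    const animals: Animal[] = dogs; // allowed: arrays are treated covariantly
    animals.push(new Animal());     // type-checks, but now dogs[1] is no Dog
    // dogs[1].bark();              // Java would have thrown ArrayStoreException
                                    // at the push; here it just explodes later
    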


  • Trolleybus Mechanic

    @dkf said in WTF Bites:

    Java does do what you're asking for with its arrays and that is widely regarded as a mistake

    I'm not asking for covariant collections, I'm asking for doubly-parametrized ones, so that the subtyping would be strict.
    If you could declare class Array<GetType, PushType>, then you could write generic code on collections and not cause either compile or runtime errors. In particular, you could declare a function parameter as Array<Foo, never> (which is a strict supertype of Array<Foo, Foo>, because functions are contravariant on arguments and covariant on return values) to indicate that you don't want to mutate it, and get covariance when needed. Of course, it should probably be restricted so that PushType extends GetType, unless you want a write-only collection.

