OOP is TRWTF



  • @_P_ said in OOP is TRWTF:

    You're expecting the JS ecosystem to actually learn the history of programming languages? :kneeling_warthog:

    As James Iry put it in his classic "Brief, Incomplete, and Mostly Wrong History of Programming Languages":

    1995 - Brendan Eich reads up on every mistake ever made in designing a programming language, invents a few more, and creates LiveScript. Later, in an effort to cash in on the popularity of Java the language is renamed JavaScript. Later still, in an effort to cash in on the popularity of skin diseases the language is renamed ECMAScript.



  • @Mason_Wheeler He did a very shitty job there. Throughout the series he's basically re-stating everything in the same but more verbose words, wrapping the wordings into the "object-oriented" language, dumping C# code instead of Haskell, then proceeds into "look, these things are actually monads, magic!", and finally goes on to babble about SelectMany, which is much longer than every other part. I don't know why everyone tries to explain monads with the same setup and claims theirs is different from the others, but it certainly doesn't help at all.

    whose principal purpose is to "enhance" a base type with some new property, that can be applied to basically any base type

    The same thing can be said about every other type, so it doesn't explain anything either.



  • @_P_ said in OOP is TRWTF:

    @Mason_Wheeler He did a very shitty job there.

    :crazy:

    Throughout the series he's basically re-stating everything in the same but more verbose words, wrapping the wordings into the "object-oriented" language, dumping C# code instead of Haskell,

    (image)

    then proceeds into "look, these things are actually monads, magic!",

    Umm... did you actually read it? He starts with talking about how these familiar examples are monads, then explains the principles behind what they have in common, how it can be useful, what the hidden gotchas are and how to avoid them, etc.

    This is why his explanation actually works: he gets it, and you obviously don't. If you did, you wouldn't have mentioned Haskell, because nobody gets Haskell. He's not trying to explain this to the 0.00001% of people who actually use Haskell; he's explaining it to normal programmers, using familiar language and examples that make sense to normal programmers. If you don't understand that, it's not Eric's fault.



  • @Mason_Wheeler said in OOP is TRWTF:

    He starts with talking about how these familiar examples are monads, then explains the principles behind what they have in common, how it can be useful, what the hidden gotchas are and how to avoid them, etc.

    I don't see how that's different from the typical Haskell-based monad tutorial:

    1. Describe a monad as some kind of "mysterious wrapper" that does some kind of wrapper jumbo juggling with some kind of bind method (which is why people keep asking whether >>= is doing some kind of wrapper jumbo juggling with the full-of-mysterious-type-symbols type m a -> (a -> m b) -> m b, because the "wrapper" analogy is confusing, period. Changing it to static M<R> ApplySpecialFunction<A, R>(M<A> wrapped, Func<A, M<R>> function) does not make it any clearer; a concrete instance of that signature is sketched after this list)
    2. Let's talk about an example. Maybe! Look at how this is a monad!
    3. "Monads are clearly useful because I said so"! You better know this or you'll be kinkshamed to oblivion, it'll be in the tests!
    4. (When people then ask if Future/Promise is a monad) begin to get triggered vigorously and deny it with full force because the "magical monad laws" have to be fed and satisfied
    5. Profit
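
    For reference, the sketch mentioned in point 1: a minimal, hedged illustration of that bind signature, assuming nothing beyond Haskell's standard Maybe type. It's just a null check:

    bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
    bindMaybe Nothing  _ = Nothing    -- no value, so there is nothing to apply
    bindMaybe (Just x) f = f x        -- have a value, so just apply the function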

    I mean, it's a good way to make people develop the illusion that they understand what a monad is, not to make them actually understand it. It introduces more questions than answers.

    @Mason_Wheeler said in OOP is TRWTF:

    he gets it, and you obviously don't. If you did, you wouldn't have mentioned Haskell, because nobody gets Haskell

    :wtf_owl: Since when did I bring up Haskell when explaining monads in the first place? I was the only person in this entire thread who was trying to get the explanation away from any kind of code or "wrapper" nonsense.

    @Mason_Wheeler said in OOP is TRWTF:

    he's explaining it to normal programmers, using familiar language and examples that make sense to normal programmers

    The only explanation that makes sense to normal programmers would be "well, monads are not really that important despite everyone telling you so; in fact they're just these". Because they are indeed nothing special. TRWTF is that monads have somehow been turned into something everyone thinks makes them very smart to understand. It doesn't.

    For instance, from the other side of the world, aka the OO world, the "Visitor Pattern" has been a big fuss all the time, right? Just look at how many explanations go on using complicated UML diagrams and OO jargon nonsense. They don't explain shit, when in fact people should've just said "oh, it's not a big deal, it's just a type-to-function mapping so you can choose the method to be invoked based on the type".
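
    A minimal sketch of that "type-to-function mapping" reading, assuming a made-up Shape type in Haskell, where plain pattern matching picks the code to run per case:

    data Shape = Circle Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r    -- the "visit Circle" branch
    area (Rect w h) = w * h         -- the "visit Rect" branch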


  • Considered Harmful

    @Mason_Wheeler said in OOP is TRWTF:

    This is why his explanation actually works: he gets it, and you obviously don't.

    And here is your immediate clue that @_P_ is probably right and you are probably wrong - @_P_ does get it, he was one of the functional people previously in the thread. If you claim that a person who actually understands this shit obviously doesn't, it's a good sign that your explanation has surpassed previous levels of shittiness.

    I repeat my previous assertion. The best monad tutorial is to go read the docs for flatMap and SelectMany, and imagine it generalized to a design pattern, and that's a monad.
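
    To illustrate (a minimal sketch, using Haskell's list type, where >>= is just concatMap with the arguments flipped):

    pairs :: [(Int, Int)]
    pairs = [1, 2] >>= \x ->
            [10, 20] >>= \y ->
            return (x, y)    -- [(1,10),(1,20),(2,10),(2,20)]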



  • @pie_flavor said in OOP is TRWTF:

    And here is your immediate clue that @_P_ is probably right and you are probably wrong - @_P_ does get it, he was one of the functional people previously in the thread.

    And here's the immediate clue that you don't understand what I was actually saying. The "it" in question is not "how do I monad?", but rather "how to make monads comprehensible to ordinary programmers?" Eric Lippert gets it. "Functional people" almost universally do not.



  • @Mason_Wheeler said in OOP is TRWTF:

    The "it" in question is not "how do I monad?", but rather "how to make monads comprehensible to ordinary programmers?"

    You do realize the prerequisite for "making monads comprehensible to ordinary programmers" is that your answer be correct, right? Otherwise you might as well consult Uncyclopedia for a definition of monads.



  • @_P_ said in OOP is TRWTF:

    You do realize the prerequisite for "making monads comprehensible to ordinary programmers" is that your answer be correct, right?

    ...huh?

    What prerequisite, specifically, are you referring to? And which answer of mine? I've posted quite a few of them.

    Right now, what you just said is so vague as to be entirely meaningless word salad.


  • ♿ (Parody)

    @_P_ said in OOP is TRWTF:

    I don't see how that's different from the typical Haskell-based monad tutorial:

    Whenever a normie looks at Haskell code he goes, ":wtf: is this line noise?!" And then he gets back to work.



  • @boomzilla said in OOP is TRWTF:

    @_P_ said in OOP is TRWTF:

    I don't see how that's different from the typical Haskell-based monad tutorial:

    Whenever a normie looks at Haskell code he goes, ":wtf: is this line noise?!" And then he gets back to work.

    But somehow they're bothered enough to keep bugging others to explain that M-word to them. Interesting.

    Perhaps the entire deal with explaining monads with nothing but Haskell is exactly to keep normies from knowing what monads are, so that they don't apply them everywhere and make the world run on Haskell or something.



  • @_P_ said in OOP is TRWTF:

    @boomzilla said in OOP is TRWTF:

    @_P_ said in OOP is TRWTF:

    I don't see how that's different from the typical Haskell-based monad tutorial:

    Whenever a normie looks at Haskell code he goes, ":wtf: is this line noise?!" And then he gets back to work.

    But somehow they're bothered enough to keep bugging others to explain that M-word to them. Interesting.

    ...yeah. This is totally a thing that is actually happening. :rolleyes:


  • ♿ (Parody)

    @_P_ said in OOP is TRWTF:

    @boomzilla said in OOP is TRWTF:

    @_P_ said in OOP is TRWTF:

    I don't see how that's different from the typical Haskell-based monad tutorial:

    Whenever a normie looks at Haskell code he goes, ":wtf: is this line noise?!" And then he gets back to work.

    But somehow they're bothered enough to keep bugging others to explain that M-word to them. Interesting.

    Yeah, and somehow they keep trying to explain using Haskell.

    Perhaps the entire deal with explaining monads with nothing but Haskell is exactly to keep normies from knowing what monads are, so that they don't apply them everywhere and make the world run on Haskell or something.

    It's a sound strategy. The world is grateful.


  • BINNED

    @pie_flavor said in OOP is TRWTF:

    @Mason_Wheeler said in OOP is TRWTF:

    This is why his explanation actually works: he gets it, and you obviously don't.

    And here is your immediate clue that @_P_ is probably right and you are probably wrong - @_P_ does get it,

    @_P_ said in OOP is TRWTF:

    Throughout the series he's basically re-stating everything in the same but more verbose words, wrapping the wordings into the "object-oriented" language, dumping C# code instead of Haskell

    No. What @_P_ gets is FP, not why average C# programmers struggle with explanations containing the word endofunctor. That was the point of Lippert's series.



  • @topspin Endofunctors are like stupid easy... An endofunctor is a functor from one category to itself.

    That's what the m part is in the m a type signature, with additional structure that makes the functor into a monad.
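
    In programming terms (a hedged sketch), the endofunctor part is just a type constructor m with an fmap that keeps you inside m:

    doubleInside :: Functor m => m Int -> m Int
    doubleInside = fmap (* 2)

    -- doubleInside (Just 3)  == Just 6
    -- doubleInside [1, 2, 3] == [2, 4, 6]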


  • BINNED

    @Captain monads are like stupid easy, too. They’re a monoid in the category of endofunctors.



  • @topspin True. It's all very easy once you unpack it.

    Unpacking it all is the hard part.

    And you need good notation for that.

    And C# is not it.


  • ♿ (Parody)

    @Captain said in OOP is TRWTF:

    And you need good notation for that.

    And C# is not it.

    Maybe. But it's not worse than Haskell.



  • @boomzilla said in OOP is TRWTF:

    @Captain said in OOP is TRWTF:

    And you need good notation for that.

    And C# is not it.

    Maybe. But it's not worse than Haskell.

    Anyone who tries to explain monads should be automatically disqualified if they have to bring up code snippets from their most familiar language.



  • @boomzilla I disagree, but I know Haskell better than I know C#. Not that that invalidates your experience/opinion.

    SQL might be an easy language to explain it in, since there is the "table" or "select" monad, and there's already a "join" operation, and stuff like that.



  • @Captain said in OOP is TRWTF:

    @topspin Endofunctors are like stupid easy... An endofunctor is a functor from one category to itself.

    This is only "easy" if you happen to have a very specific mathematical background such that "functor" and "category" make sense in this context. For the vast majority of programmers, this is not the case, because programming is not math; it's baking.


  • ♿ (Parody)

    @_P_ said in OOP is TRWTF:

    @boomzilla said in OOP is TRWTF:

    @Captain said in OOP is TRWTF:

    And you need good notation for that.

    And C# is not it.

    Maybe. But it's not worse than Haskell.

    Anyone who tries to explain monads should be automatically disqualified if they have to bring up code snippets from their most familiar language.

    TDEMSYR. In any case, the point is to explain it in a language familiar to the reader.



  • @boomzilla said in OOP is TRWTF:

    In any case, the point is to explain it in a language familiar to the reader.

    ...English?



  • @Mason_Wheeler said in OOP is TRWTF:

    This is only "easy" if you happen to have a very specific mathematical background such that "functor" and "category" make sense in this context. For the vast majority of programmers, this is not the case, because programming is not math; it's baking.

    It's really too bad that all those CS people took Mathematical phrases and misused them to the point where you don't see why a C++ functor is a mathematical functor.


  • ♿ (Parody)

    @Captain said in OOP is TRWTF:

    @boomzilla I disagree, but I know Haskell better than I know C#. Not that that invalidates your experience/opinion.

    That was kind of my point. Honestly, I struggle to see where anything happens with Haskell. It's too terse for me. And not even in a "write only" way like Perl, which requires you to understand a lot of stuff, but it's (usually, or at least possibly) actually there to be read and give you clues. The paradigm of Haskell's syntax is so completely foreign to just about anything else that my mind just sort of freezes up when I look at some Haskell code.

    SQL might be an easy language to explain it in, since there is the "table" or "select" monad, and there's already a "join" operation, and stuff like that.

    SQL being something I use all the time, I'd be interested in reading that.


  • ♿ (Parody)

    @_P_ said in OOP is TRWTF:

    @boomzilla said in OOP is TRWTF:

    In any case, the point is to explain it in a language familiar to the reader.

    ...English?

    Pshaw. Practically no one uses COBOL any more.



  • @topspin said in OOP is TRWTF:

    What @_P_ gets is FP, not why average C# programmers struggle with explanations containing the word endofunctor. That was the point of Lippert's series.

    To be fair, my FP skills are subpar compared to the actual FP fellows.

    However, I also believe that using Haskell/C#/whatever to explain monads complicates matters for no reason, and it only gets worse when every explanation starts with an analogy (a "wrapper") instead of the correct intuition. It's the OO crowd's favourite way of explaining things, apparently.

    Monads stop being arcane bullshit once you stop assuming they're some difficult arcane bullshit that "is the foundation of FP programs", "powers all FP programs", "is the solution to side effects in a paradigm without side effects", blablabla. Try it out!



  • @boomzilla said in OOP is TRWTF:

    @_P_ said in OOP is TRWTF:

    @boomzilla said in OOP is TRWTF:

    In any case, the point is to explain it in a language familiar to the reader.

    ...English?

    Pshaw. Practically no one uses COBOL any more.

    I thought you meant SQL.


  • ♿ (Parody)

    @Captain said in OOP is TRWTF:

    @Mason_Wheeler said in OOP is TRWTF:

    This is only "easy" if you happen to have a very specific mathematical background such that "functor" and "category" make sense in this context. For the vast majority of programmers, this is not the case, because programming is not math; it's baking.

    It's really too bad that all those CS people took Mathematical phrases and misused them to the point where you don't see why a C++ functor is a mathematical functor.

    Eh...I'm a math guy (though it has been a while) who sees writing programs as doing math. Seeing language like that (go endofunct yourself!), even when I was actively studying, wasn't something that I could fluently read. I found that mathematical language, while beautifully precise, required careful reading and parsing to comprehend.

    OK, even reading code in a language I know, I don't read super fluently. There are too many edge cases and weird things that can happen that you have to be actively thinking when you read code (or so I've found). But taking the math and transforming it to programming requires an extra level of cognitive load.

    And then it's like when people start talking about pattern names. Fuck all y'all. I never studied those and can't be arsed to learn the damn names (mostly), even if I use a bunch of them a lot. Not that I really blame people for using concise labels for things to make communication easier. More that my eyes glaze over when they start pontificating and I start hitting page down.



  • Also, just in case we're still making Haskell the scapegoat for the M-word, FP, or anything else again:

    FFS, Haskell is an outdated 80s language with a half-working type system that can't even do any of the modern type-related stuff (aka dependent types), so they have to keep patching it with ugly language extensions that only work on one particular compiler. And that compiler has basically become the de facto language standard. And it's not particularly good for any FP purpose either; if you want to be productive you'd be using OCaml or Clojure instead.

    And the M-word is a Haskell meme that only people who know Haskell would propagate. Change my mind.



  • @boomzilla said in OOP is TRWTF:

    That was kind of my point. Honestly, I struggle to see where anything happens with Haskell. It's too terse for me. And not even in a "write only" way like Perl, which requires you to understand a lot of stuff, but it's (usually, or at least possibly) actually there to be read and give you clues. The paradigm of Haskell's syntax is so completely foreign to just about anything else that my mind just sort of freezes up when I look at some Haskell code.

    Fair enough.

    Part of the reason for that is that the monad code I typically post (i.e., THIS IS HOW YOU DEFINE THE MAYBE MONAD type stuff) is abstract and doesn't do anything but apply functions and pass values around. It does something similar to what an OO language's . operator does.

    So it really isn't doing much at all, except for setting up the plumbing to define a monad. You'd use that monad in your "consumer" code, and it ends up looking quite a lot like C# or python or whatever.
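
    For concreteness, a sketch of the kind of definition being referred to. Maybe's instance already ships with the Prelude, so a renamed copy (Opt) is used here purely so the snippet stands alone:

    data Opt a = None | Some a

    instance Functor Opt where
      fmap _ None     = None
      fmap f (Some x) = Some (f x)

    instance Applicative Opt where
      pure = Some
      None   <*> _ = None
      Some f <*> x = fmap f x

    instance Monad Opt where
      None   >>= _ = None    -- short-circuit when there is no value
      Some x >>= f = f x     -- otherwise just pass the value along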

    SQL might be an easy language to explain it in, since there is the "table" or "select" monad, and there's already a "join" operation, and stuff like that.

    SQL being something I use all the time, I'd be interested in reading that.

    I might have time later for a detailed exposition. The basic idea is that you have one "kind" of object you're interested in -- a list of rows. And you can select a list of rows from a list of rows. This means that select is an endofunctor over the category of lists of rows. And since selecting from a list of rows gives you back a list of rows, you always end up with a type that you can do another select on. This means that select has "monoidal" (additive) structure -- you can build a stack of selects.

    So, to tie it all together, you can force evaluation order by stacking selects (or, as I more typically do when this is even an issue (which it sometimes is), by using left joins).
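
    A hedged sketch of the "stack of selects" idea, modelling a table as a plain Haskell list of rows (Row and the predicates are made up for illustration):

    type Row = (String, Int)    -- (name, age)

    selectWhere :: (Row -> Bool) -> [Row] -> [Row]
    selectWhere p rows = rows >>= \r -> if p r then [r] else []    -- list bind

    -- The output of one select is valid input to the next, so selects stack:
    adultsNamedBob :: [Row] -> [Row]
    adultsNamedBob = selectWhere (\(n, _) -> n == "Bob")
                   . selectWhere (\(_, age) -> age >= 18)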

    OK, even reading code in a language I know, I don't read super fluently. There are too many edge cases and weird things that can happen that you have to be actively thinking when you read code (or so I've found). But taking the math and transforming it to programming requires an extra level of cognitive load.

    Yes, I get that.

    This is one of the things I like about Haskell. There are very few edge cases that end up mattering. Like how we ended up talking about evaluation semantics last time we all participated in this thread. The evaluation semantics (almost) really don't matter.



  • @Captain said in OOP is TRWTF:

    OK, even reading code in a language I know, I don't read super fluently. There are too many edge cases and weird things that can happen that you have to be actively thinking when you read code (or so I've found). But taking the math and transforming it to programming requires an extra level of cognitive load.

    Yes, I get that.
    This is one of the things I like about Haskell. There are very few edge cases that end up mattering. Like how we ended up talking about evaluation semantics last time we all participated in this thread. The evaluation semantics (almost) really don't matter.

    One thing about Haskell culture in general is that lots of people who have touched Haskell tend to start asking questions about its underlying evaluation model (and IO, and the M-word) so thoroughly that they're basically trying to interrogate the language all the way down to what it compiles to, for no particular reason. There are always cranks ("noob" is not the right word here because they aren't self-righteous) who keep arguing about "whether IO is pure" or "what the nature of the M-word is" every day or so. All of this is completely pointless, and I have a feeling that they're suffering from the good old "I can do better than the compiler" or "my language is better than this language" syndrome.


  • Discourse touched me in a no-no place

    @boomzilla said in OOP is TRWTF:

    Whenever a normie looks at Haskell code he goes, ":wtf: is this line noise?!" And then he gets back to work.

    Did you just assume his activity level?



  • @Mason_Wheeler said in OOP is TRWTF:

    normal programmers

    :sideways_owl: I'm not sure it's accurate to describe any programmers as normal.



  • @HardwareGeek said in OOP is TRWTF:

    @Mason_Wheeler said in OOP is TRWTF:

    normal programmers

    :sideways_owl: I'm not sure it's accurate to describe any programmers as normal.

    Depends on the base comparison sample.



  • @Benjamin-Hall said in OOP is TRWTF:

    Depends on the base comparison sample.

    If you compare them to other programmers, they will all look normal 🍹



  • @_P_ said in OOP is TRWTF:

    It introduces more questions than answers.

    So far, all the attempts I've seen to introduce answers have just convinced me that I don't care about the questions.



  • @Captain said in OOP is TRWTF:

    @boomzilla Yeah, I mean, monads are definitely abstract, and probably a new idea to most programmers. Plus there's the difficulty of expressing an abstract idea in a language that is foreign to most programmers.

    I think the difficulty in expressing it is probably because most programmers are not educators. They don't know how to take a concept and turn a jargon-filled explanation into natural language.

    There are some C# tutorials on Monads...

    OOOOOOHHHH!!!!! I think I get it now! The smallest thing can sometimes work wonders. In my case just now, it was the simple fact that the article noted that F# calls monads "Computation Expressions", which you must agree is a far more useful and descriptive name than "monad."

    So then, a monad is an object that separates construction from evaluation by taking an input collection of some type in its constructor and providing a piece of code that takes that collection and a function, performs the function on the collection, then returns the resulting collection (which is most likely of a different type than the input) as the monad's output, which may be fed as the input into another monad.

    This allows chaining monads (one monad taking another monad as input, or in other words, nesting expressions), because each monad is essentially a node for a queue, pushing operations onto the head of the queue, and popping them off the tail and evaluating them as the output from preceding operations become available as inputs.

    It doesn't matter why the evaluation needs to be delayed, because the monad hides the boilerplate code to process the delay. The reason may be chaining access to nullable fields, chaining operations on collections that may vary in sizes, chaining asynchronous operations, or chaining sub-expressions into a closure in order to use a value later in the expression chain. (Incidentally, I think this also helped me to understand closures, which was another concept that's been bugging me a bit.)

    ...Have I broken the "monad curse"?



  • @djls45 Pretty much.

    What you're calling a monad is typically called a "monadic action" though. You can chain monadic actions because they belong to the same monad (which is the abstract data type for a "kind" of monadic action. By "kind", I mean like... the nullable example, or the async example... You can't just mix and match them without some more machinery).
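
    A hedged sketch of the "same kind chains, different kinds need more machinery" point, with made-up lookup tables; both lookups live in Maybe, so >>= chains them, while swapping one of them for, say, a list action would be a type error:

    import qualified Data.Map as Map

    ages :: Map.Map String Int
    ages = Map.fromList [("alice", 30)]

    cities :: Map.Map Int String
    cities = Map.fromList [(30, "Berlin")]

    cityFor :: String -> Maybe String
    cityFor name = Map.lookup name ages >>= \age -> Map.lookup age cities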

    The key is what you write here:

    It doesn't matter why the evaluation needs to be delayed, because the monad hides the boilerplate code to process the delay. The reason may be chaining access to nullable fields, chaining operations on collections that may vary in sizes, chaining asynchronous operations, or chaining sub-expressions into a closure in order to use a value later in the expression chain.

    Ask questions if I made it confusing.



  • @_P_ said in OOP is TRWTF:

    e.g. we say that List is a monad because List<a> for all types a can satisfy the requirements for monads by return v = [v] and (>>=) = concatMap / flat_map

    I think knowing what return v = [v], (>>=) = concatMap, and flat_map are is key to understanding what you're saying.
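
    For what it's worth, a minimal sketch of those definitions in Haskell notation, plus the laws that "the verification" would check:

    returnL :: a -> [a]
    returnL v = [v]

    bindL :: [a] -> (a -> [b]) -> [b]
    bindL xs f = concatMap f xs    -- aka flat_map / flatMap / SelectMany

    -- The three monad laws, which this pair satisfies:
    --   bindL (returnL v) f  ==  f v                               (left identity)
    --   bindL xs returnL     ==  xs                                (right identity)
    --   bindL (bindL xs f) g ==  bindL xs (\x -> bindL (f x) g)    (associativity)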

    (the verification is left as an exercise to the reader).

    This would have been a great place to explain how these key concepts relate to monad requirements, instead of brushing it off.

    The rest of your post is not bad, though I think that it is less clear than it could be by using a bit too much jargon in some places.



  • @djls45 said in OOP is TRWTF:

    @_P_ said in OOP is TRWTF:

    e.g. we say that List is a monad because List<a> for all types a can satisfy the requirements for monads by return v = [v] and (>>=) = concatMap / flat_map

    I think knowing what return v = [v], (>>=) = concatMap, and flat_map are is key to understanding what you're saying.

    (the verification is left as an exercise to the reader).

    This would have been a great place to explain how these key concepts relate to monad requirements, instead of brushing it off.

    The rest of your post is not bad, though I think that it is less clear than it could be by using a bit too much jargon in some places.

    Yeah but you didn't ask me to make a complete, standalone explanation :kneeling_warthog:



  • @djls45 said in OOP is TRWTF:

    F# calls monads "Computation Expressions", which you must agree is a far more useful and descriptive name than "monad."

    It's better than something that sounds like a contraction of "mono-gonad."



  • @Captain said in OOP is TRWTF:

    What you're calling a monad is typically called a "monadic action" though.

    Ah, okay. So "monad" is more like the name of the pattern?

    You can chain monadic actions because they belong to the same monad (which is the abstract data type for a "kind" of monadic action. By "kind", I mean like... the nullable example, or the async example... You can't just mix and match them without some more machinery).

    Well, obviously, you have to match the types. But can't you use monads to chain, say, an async read from a database, a nullable field access from the returned rows, and a lazy-evaluated operation on the fields?



  • @djls45 Hold up there. I don't think you actually do.

    @djls45 said in OOP is TRWTF:

    F# calls monads "Computation Expressions", which you must agree is a far more useful and descriptive name than "monad."

    Calling it "Computation Expressions" would mean it's a very specific kind of monads in the context of continuation. Also even that's incorrect:

    Depending on the kind of computation expression, they can be thought of as a way to express monads, monoids, monad transformers, and applicative functors.

    @djls45 said in OOP is TRWTF:

    a monad is an object that separates construction from evaluation by taking an input collection of some type in its constructor and providing a piece of code that takes that collection and a function, performs the function on the collection, then returns the resulting collection (which is most likely of a different type than the input) as the monad's output, which may be fed as the input into another monad.

    This allows chaining monads (one monad taking another monad as input, or in other words, nesting expressions), because each monad is essentially a node for a queue, pushing operations onto the head of the queue, and popping them off the tail and evaluating them as the output from preceding operations become available as inputs.

    Wat? :thonking: I think you're thinking too much about the bind operator.

    What you described is a different, specific kind of "monad" usage, namely the collection/list. So it's one particular interpretation of the monad interface. Before making interpretations of the monad interface, you need to understand how FP people work: they take things from category theory or first-order logic, translate them into a type signature via the Curry-Howard correspondence, and then investigate whether there are things in the wild that fit said type/pattern. From the beginning they don't care about what a monad is, but rather about "what kinds of objects and operations fit this monad signature", so don't do meaningless jobs for them :trollface:

    (This reminds me of quantum mechanics and all those efforts of trying to interpret it. It's such a massive can of worms...)

    @djls45 said in OOP is TRWTF:

    It doesn't matter why the evaluation needs to be delayed, because the monad hides the boilerplate code to process the delay. The reason may be chaining access to nullable fields, chaining operations on collections that may vary in sizes, chaining asynchronous operations, or chaining sub-expressions into a closure in order to use a value later in the expression chain.

    There is a very big problem there: "monad"s don't do that for you, or magically. A monad is an interface. If you're writing a class, and you think your class can be made an implementation of monad, you need to provide an implementation for its two operations such that they obey the three laws/invariants. Then as an API user you look at the class and see, "hey, this class says it's an instance of monad, which means those 2 monadic operations are available and I can call them through the monad interface". Even if you don't make this class an implementation of the monad interface these methods would still exist; they just aren't exposed through a ubiquitous Haskell interface (because, again, monad is a Haskell meme). We can still live a good, happy life with that. You can also argue whether monad is even a meaningful interface to begin with; for any language that isn't Haskell, the answer is probably "no".
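
    To sketch that "interface with two operations and three laws" concretely (as a standalone Haskell class, so it doesn't clash with the Prelude's own; the laws are conventions, not checked by the compiler):

    class MyMonad m where
      unit :: a -> m a                    -- aka return / pure
      bind :: m a -> (a -> m b) -> m b    -- aka (>>=) / flatMap / SelectMany

    -- Laws every instance is expected to obey:
    --   bind (unit a) f   ==  f a
    --   bind m unit       ==  m
    --   bind (bind m f) g ==  bind m (\x -> bind (f x) g)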

    Also, async/await does not satisfy the requirement of being a monad because IIRC its bind operation is not associative. I suspect the author doesn't quite know what they're saying either.

    @djls45 said in OOP is TRWTF:

    (Incidentally, I think this also helped me to understand closures, which was another concept that's been bugging me a bit.)

    A monad is only one particular use of closures, so I think you're getting it backwards there. And I don't think anyone would, or has, ever explained closures using monads, because it really wouldn't help.

    @djls45 said in OOP is TRWTF:

    ...Have I broken the "monad curse"?

    Nope, but if I have to give a suggestion: knowing what a monad is doesn't get you anything useful either, so 🍹


  • BINNED

    @HardwareGeek said in OOP is TRWTF:

    @djls45 said in OOP is TRWTF:

    F# calls monads "Computation Expressions", which you must agree is a far more useful and descriptive name than "monad."

    It's better than something that sounds like a contraction of "mono-gonad."

    What happens when you trigger the multi-ball power-up?

    NARRATOR: Stay tuned to find out.



  • @topspin said in OOP is TRWTF:

    @HardwareGeek said in OOP is TRWTF:

    @djls45 said in OOP is TRWTF:

    F# calls monads "Computation Expressions", which you must agree is a far more useful and descriptive name than "monad."

    It's better than something that sounds like a contraction of "mono-gonad."

    What happens when you trigger the multi-ball power-up?

    NARRATOR: Stay tuned to find out.

    MonadFail?



  • @topspin said in OOP is TRWTF:

    @HardwareGeek It was meant more as praise than as an insult.

    Also, voiceless dental fricatives and mid-central unrounded vowels? That might have been @djls45, though, not quite sure.

    If you're going for what I think you're going for, the first phoneme would be preceded by a mid-high unrounded vowel, and the second would be a diphthong moving from a high unrounded vowel to a mid rounded vowel – the mid-central unrounded vowel is the wrong pronunciation for the symbol.

    To which I'd respond, "Sticks and stones may break my bones, but words induce a psychological response in my emotional cortex, which generally happens to be quite mild in my case."

    I aspire to be @HardwareGeek's apprentice.


  • BINNED

    @djls45 said in OOP is TRWTF:

    If you're going for what I think you're going for

    I don't remember the exact weird-linguistics-that-make-your-head-spin post I was referring to there (otherwise I would've just looked up who wrote it), but you just reassured me that it was probably yours.

    Good thing we're already in the Cricket thread.

    E: @error_bot !xkcd sticks and stones
    (especially the alt-text)


  • Discourse touched me in a no-no place

    @HardwareGeek said in OOP is TRWTF:

    I'm not sure it's accurate to describe any programmers as normal.

    Some are at 90 degrees to all possible tangents, so yes, normal can apply…


  • BINNED

    @dkf said in OOP is TRWTF:

    @HardwareGeek said in OOP is TRWTF:

    I'm not sure it's accurate to describe any programmers as normal.

    Some are at 90 degrees to all possible tangents, so yes, normal can apply…

    This forum is really good at derailing on all possible tangents. 🏆



  • @error said in OOP is TRWTF:

    @boomzilla said in OOP is TRWTF:

    @dkf said in OOP is TRWTF:

    @DogsB said in OOP is TRWTF:

    Reminds me of HashMaps in java. When you pull iterators out of them you can't guarantee the order of the items. Everywhere I've worked I've had to fix a bug related to that.

    I'll take “what is LinkedHashMap and why would you care?” for 10, Bob.

    I think you misunderstood. I read his comment as similar to @blakeyrat's thing where DBs should return records in random order (at least some of the time) in the absence of an explicit ordering. Because some dev will see the order and make assumptions about it that come back to bite them.

    I seem to recall DirectX initializes newly allocated buffers with noise in debug mode but zeroes in release, to prevent similarly baseless assumptions.

    I think it was the other way around. Release tries to maximize efficiency, and always clearing new memory allocations can get expensive quickly. On the other hand, debugging should have a clean environment between test runs, so zeroing the memory is helpful and the extra time required is an acceptable inefficiency.

