OOP is TRWTF


  • BINNED

    @error said in OOP is TRWTF:

    We tend to refer to it as "reactive" programming

    Toby Fair, with Functional Programming I at least know WTF it means. When you say "reactive programming" my only thought is "it's 3px too large?"


  • Considered Harmful

    @topspin said in OOP is TRWTF:

    When you say "reactive programming" my only thought is "it's 3px too large?"

    It sounds like you're conflating reactive with responsive.


  • ♿ (Parody)

    @error said in OOP is TRWTF:

    So, yes?

    Yeah, I guess your post was an endofunctor of what I said.



  • @boomzilla said in OOP is TRWTF:

    endofunctor

    This sounds like something that happens when you get some Haskell in your butt.


  • BINNED

    @pie_flavor said in OOP is TRWTF:

    The IO monad is effectively an input representing the entire computer (literally - IO a is a typedef to RealWorld -> (a, RealWorld)), and when you call something on it and return it, you get an output different from the input. Therefore, the order of the functions can matter because each function transforms the object further and further.

    @Gąska said in OOP is TRWTF:

    The IO that gets returned from the first call is (conceptually) different from the IO passed to the first call, so the second call gets different argument and therefore cannot be memoized with the first.

    There's still one missing piece for me about the idempotence of pure functions. I'm going to sum up my (probably completely wrong) takeaway like this:

    The imperative code

    main():
      print("Hello")
      print("World")
    

    gets transformed into something like this in functional code:

    main(StateOnEntry : IO):
      StateWithHelloPrinted = print(StateOnEntry, "Hello")
      StateWithHelloWorldPrinted = print(StateWithHelloPrinted, "World")
      return StateWithHelloWorldPrinted
    

    Now, since the different IO / World State variables are immutable, it could still be evaluated (for no apparent good reason other than "fuck you, that's why") without change of semantics as the following:

    main(StateOnEntry : IO):
      StateWithHelloPrinted  = print(StateOnEntry, "Hello")
      StateWithHelloPrinted2 = print(StateOnEntry, "Hello")
      StateWithHelloWorldPrinted  = print(StateWithHelloPrinted, "World")
      StateWithHelloWorldPrinted2 = print(StateWithHelloPrinted, "World")
      StateWithHelloPrinted3 = print(StateOnEntry, "Hello")
      return StateWithHelloWorldPrinted
    

    This is technically valid, and the magic lies in that the return from main is StateWithHelloWorldPrinted, which only has the two outputs applied that we wanted, in the correct sequence.
    There have been some completely unnecessary world states created on the way due to pure function shenanigans, but since they never "return from main", those complete worlds get discarded like an alternate universe and are never transacted to actual side effects. The runtime / implementation of IO makes sure the actual world only sees what should be the final "world state".

    Or: The interpreter is still free to make up additional calls to pure functions, but "world states" thus created and discarded never get transacted to reality.
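
    To make that concrete, here's a runnable toy version of this model (all names made up - real IO is obviously not a list of strings, but the bookkeeping is the same, and laziness means the discarded worlds are never even computed):

    newtype World = World [String]        -- the "world" just records output lines

    type Action a = World -> (a, World)   -- toy stand-in for IO a

    prints :: String -> Action ()
    prints s (World out) = ((), World (out ++ [s]))

    toyMain :: Action ()
    toyMain w0 =
      let (_, w1)  = prints "Hello" w0    -- StateWithHelloPrinted
          (_, w1') = prints "Hello" w0    -- superfluous call; this world goes nowhere
          (_, w2)  = prints "World" w1    -- StateWithHelloWorldPrinted
      in ((), w2)                         -- only w2 escapes; w1' is never even computed

    -- snd (toyMain (World [])) == World ["Hello","World"]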


  • BINNED

    @error said in OOP is TRWTF:

    It sounds like you're conflating have no fucking clue

    :thats_the_joke:



  • @error Yes, and your wave-function-collapse intuition is right too. A monad action is like a black box that results in some "value" when it is "looked at" in the right way.

    @Gąska said in OOP is TRWTF:

    Maybe I would value it if I knew what it was. So what is this other way of thinking? I assume it's something more than just reordering, pre-evaluating, memoizing and every control flow transformation that you can do by exploiting the fact that some computation is pure, which is the single most common optimization method that all imperative compilers and interpreters use.

    OK fair enough. Bear in mind none of this has anything directly to do with IO, but only why we distinguish between evaluation and execution.

    So as I said, Haskell is defined in terms of its evaluation model. There, we have expressions. The expressions are evaluated by the run-time system, and can result in a (bona-fide) value or in a "proto-value" called bottom, which is an equivalence class of errors like non-termination or "you threw the special error function which always typechecks".

    This distinction allows what is called "fast and loose" equational reasoning in the Haskell community.

    Check out the introduction in https://www.cs.ox.ac.uk/jeremy.gibbons/publications/fast+loose.pdf

    Along similar lines, fast and loose reasoning allows you to derive functions from parametric types.

    This can even be automated (and has been).

    Heck, you can even think of a free type variable as a type in its own right, and see that if a value has type value :: a (so that a, a free type variable, is value's type), then value has to be bottom. That's another free theorem.
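
    As code, those free theorems look like this (a sketch; the names are made up):

    -- The only total function of type a -> a is the identity:
    theId :: a -> a
    theId x = x

    -- And a value whose type is a bare type variable can only be bottom:
    absurd' :: a
    absurd' = undefined   -- or an infinite loop; either way, bottom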

    So this dynamic is what connects the Hindley-Milner type system and the Curry-Howard isomorphism, and all their power, to the real world.


  • Banned

    @error said in OOP is TRWTF:

    So far, my muddled understanding is this:

    • in a procedural/OOP language, you pass arguments into functions or methods and they return values to you
    • the invocations are directly tied in some way to what is being passed in and out
    • in FP, you define a set of mappings from input types to output types without ever looking at the values
    • because you don't look at the values, the compiler is free to perform lots of weird optimization tricks with the set of rules you have given it
    • a monad is a defined class of value types that functions can operate on, with its own unique rules about how that data can be handled and operations rearranged

    Am I anywhere in the ballpark?

    No, not really.

    I did more googling and I think I'm pretty close to understanding all this mess.

    • First things first. At its core, FP is all about functions taking input as arguments and producing output as the return value (except there's no return statement, so it's just called the function's value). No hidden magic here; it's exactly the same as the OOP model you described, except you cannot modify the object you get as an argument, and that's what makes it a pure function.
    • FP languages are big on first-class functions, which means treating functions like any ordinary value. Whatever you can do with an int or a string, you can also do with a function as well (except type-specific operations like adding, of course). For example, you can have a function that takes a function as an argument, and produces some output that uses it (sounds pretty obvious nowadays, but it was a big difference between imperative and functional languages a decade ago).
    • Conceptually, the IO monad (forget it's a monad; being a monad is the least important thing in all of this) is nothing more than a type alias for a function that takes RealWorld as an argument and returns a tuple of RealWorld and some other type (in code it would be like type IO a = RealWorld -> (a, RealWorld), where a is a generic argument). RealWorld is a magic value from the depths of the interpreter whose only purpose is to be different each time an interaction with the outside world happens, to prevent memoization and enforce sequencing. A Haskell program starts with just one instance of RealWorld; new instances cannot be created without disposing of the old one, and a new one always gets created whenever any outside interaction happens - that guarantees every function in a chain runs exactly once.
    • So, when you read a function signature like string -> IO int, you should think of it as string -> RealWorld -> (int, RealWorld), i.e. a function that takes 2 arguments, a string and the current RealWorld, and returns an int and the new RealWorld for use in the next function.
    • Operators are functions too. The >> is a higher-order function, i.e. a function that takes functions as arguments. Its signature is IO a -> IO b -> IO b - it takes two functions (each taking RealWorld as argument and returning potentially different types), and produces a function that can be called with some RealWorld instance to run the two functions in order, returning the result of the second function and the RealWorld after the second function executes, so it can be used for other functions. The >>= is very similar, except the result of the first function is passed to the second function (the signature is IO a -> (a -> IO b) -> IO b). Both are sketched below.
    • When I said IO is conceptually a type alias, that's because it actually isn't one - it's defined with some compiler intrinsics, because it must be or else the sequencing wouldn't actually work. It cannot be reimplemented in pure Haskell.
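
    Here's a sketch of that conceptual desugaring (RealWorld here is a dummy stand-in - the real one is opaque compiler magic):

    type RealWorld = ()                        -- stand-in; the real one is opaque

    type IO' a = RealWorld -> (a, RealWorld)

    thenIO :: IO' a -> IO' b -> IO' b          -- conceptual (>>)
    thenIO m k = \w0 -> let (_, w1) = m w0     -- run the first action
                        in k w1                -- thread the new world into the second

    bindIO :: IO' a -> (a -> IO' b) -> IO' b   -- conceptual (>>=)
    bindIO m f = \w0 -> let (a, w1) = m w0     -- run the first action
                        in f a w1              -- pass its result and the new world onward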


  • @_P_ said in OOP is TRWTF:

    My take at moands:

    That's a funnier typo than gonads, you win the thread


  • ♿ (Parody)

    @mott555 said in OOP is TRWTF:

    @boomzilla said in OOP is TRWTF:

    endofunctor

    This sounds like something that happens when you get some Haskell in your butt.

    It's what you use when you get a monoid there, actually.


  • BINNED

    @boomzilla said in OOP is TRWTF:

    @mott555 said in OOP is TRWTF:

    @boomzilla said in OOP is TRWTF:

    endofunctor

    This sounds like something that happens when you get some Haskell in your butt.

    It's what you use when you get a monoid there, actually.

    That sounds categorically false.


  • ♿ (Parody)

    @Gąska said in OOP is TRWTF:

    @error said in OOP is TRWTF:
    "Am I anywhere in the ballpark?"

    No, not really.

    So what we've learned is that our respective explanations are just new entries in the "no one understands anyone else's monad tutorial" genre. Because you effectively restated his thing, just like I did.


  • Banned

    @boomzilla well, he started with "FP is nothing like the good old putting input in arguments and taking output from return value that you see in OOP". Couldn't be farther from the truth than that...


  • ♿ (Parody)

    @Gąska said in OOP is TRWTF:

    @boomzilla well, he started with "FP is nothing like the good old putting input in arguments and taking output from return value that you see in OOP". Couldn't be farther from the truth than that...

    Yes, I guess it was probably that part that threw me off, too. Nevertheless, the rest of it tracks if you throw out that misunderstanding.



  • Trying to read this article. I'm currently halfway through I think, and to me this guy seems to have a bad case of gratuitous factory. Particularly egregious in the InputValidator example: the "refactored" code adds an interface... with one implementation and one factory that always returns an object of the same class, meaning no one ever needed to "abstract" this in the first place. You don't need to make an interface unless you plan to have several different behaviors, and you don't need a factory for those if the choice is entirely caller-controlled in the first place...

    Also, for some reason when he first "debunks" inheritance, he mentions a "child object" and "parent object"... Which shows an impressive lack of understanding of what OOP inheritance is, since there's actually only one object.



  • This however, I will agree with:

    Functional programming, on the other hand, allows us to achieve the same polymorphism in a much more elegant way…by simply passing in a function that defines the desired runtime behavior. What could be simpler than that? No need to define a bunch of overloaded abstract virtual methods in multiple files (and the interface).

    C# delegates are far easier to use than what we had to do in my (admittedly, pre-Java-5) Java classes.

    Edit: However, they'll probably make the blogger scream heresy, since they carry both function and mutable state. And it's a good thing they do - this way you can pass a function that, say, counts stuff into a loop iterator.



  • @anonymous234 said in OOP is TRWTF:

    No one makes any big "shared mutable state for everyone to touch" class,

    I wish you were right



  • @Captain said in OOP is TRWTF:

    @boomzilla Yeah, I mean, monads are definitely abstract, and probably a new idea to most programmers. Plus there's the difficulty of expressing an abstract idea in a language that is foreign to most programmers.

    There are some C# tutorials on Monads...

    This tutorial vaguely gives me an idea about monads as function composition stuff (just to make sure I got it right: The Maybe generic class from the tutorial is a monad for chaining functions with null propagation, and the Future generic class from the tutorial is another monad, this time for chaining functions with Task.ContinueWith? -- and why does OnComplete break the pattern?), but that doesn't translate at all to how one executes functions "in a given monad" or how Haskell's IO monad apparently has more in common with a function library than with composing functions.

    Just to make sure I got the "monad" part right, though, would this be the "unit"/"neutral" monad, that performs no special operation?

    public class Unit<T>
    {
        private readonly T value;
    
        public Unit(T someValue)
        {
            if (someValue == null) { throw new ArgumentNullException(nameof(someValue)); }
            this.value = someValue;
        }
    
        public Unit<U> Bind<U>(Func<T, Unit<U>> func) where U : class
        {
            if (func == null) { throw new ArgumentNullException(nameof(func)); }
            return func(value);
        }
    }
    

    Also, how do I chain functions switching monads halfway through? (like, a Future whose last function returns a Maybe)?



  • @error said in OOP is TRWTF:

    @topspin said in OOP is TRWTF:

    When you say "reactive programming" my only thought is "it's 3px too large?"

    It sounds like you're conflating reactive with responsive.

    And I'm learning declarative at the moment! Just thought I'd add some more -ives to the mix.


  • Considered Harmful

    @Medinoc said in OOP is TRWTF:

    Haskell's IO monad has apparently more in common with a function library than with compositing functions.

    It only seems like that because of some syntactical sugar. Remember, IO a means RealWorld -> (a, RealWorld). It itself is a function type, and all the actual IO functions use the RealWorld instance but operate on the IO instance.



  • @pie_flavor Problem is I have trouble with the Haskell syntax bits posted here, in particular I'm not sure I grasp what IO a actually means. How would you write it in a language I understand? (C, C++, C#)


  • Considered Harmful

    @Medinoc IO<a>, except not quite because generics in Haskell are just functions that return types, which is why the generic parameter syntax looks like the function parameter syntax.



  • @Medinoc said in OOP is TRWTF:

    @Captain said in OOP is TRWTF:

    @boomzilla Yeah, I mean, monads are definitely abstract, and probably a new idea to most programmers. Plus there's the difficulty of expressing an abstract idea in a language that is foreign to most programmers.

    There are some C# tutorials on Monads...

    This tutorial vaguely gives me an idea about monads as function composition stuff (just to make sure I got it right: The Maybe generic class from the tutorial is a monad for chaining functions with null propagation, and the Future generic class from the tutorial is another monad, this time for chaining functions with Task.ContinueWith?

    Yes

    -- and why does OnComplete break the pattern?),

    I didn't read the tutorial, so I'm not sure. Is OnComplete a void function in the C# sense? That would do it.

    but that doesn't translate at all to how one executes functions "in a given monad"

    It depends on the function's type. Typically, fmap, or >>= and return are relevant.

    or how Haskell's IO monad apparently has more in common with a function library than with composing functions.

    From the point of view of the language definition, IO isn't special and composition in the IO monad works exactly the same as the others. It is handled slightly differently by the runtime system.

    Just to make sure I got the "monad" part right, though, would this be the "unit"/"neutral" monad, that performs no special operation?

    public class Unit<T>
    {
        private readonly T value;
    
        public Unit(T someValue)
        {
            if (someValue == null) { throw new ArgumentNullException(nameof(someValue)); }
            this.value = someValue;
        }
    
        public Unit<U> Bind<U>(Func<T, Unit<U>> func) where U : class
        {
            if (func == null) { throw new ArgumentNullException(nameof(func)); }
            return func(value);
        }
    }
    

    Yes. That's the Unit monad in C#, under mild assumptions like "the arguments are never null" etc.

    Also, how do I chain functions switching monads halfway through? (like, a Future whose last function returns a Maybe)?

    You use a special monad that takes a monad as an argument, called a "monad transformer". It lets you "lift" actions of the inner monad into the outer monad, with a typeclass method called lift or similar. This means that you build up a stack of types, like StateT UIState (MaybeT IO) (), to describe an action that can use different kinds of effects.
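
    A minimal sketch with the standard transformers package (promptAge is a made-up example, not anything from a real codebase):

    import Control.Monad.Trans.Class (lift)
    import Control.Monad.Trans.Maybe (MaybeT (..))
    import Text.Read (readMaybe)

    -- MaybeT IO a wraps IO (Maybe a): IO actions that can "fail" with Nothing.
    promptAge :: MaybeT IO Int
    promptAge = do
      lift (putStrLn "Age?")       -- lift a plain IO action into MaybeT IO
      s <- lift getLine
      MaybeT (pure (readMaybe s))  -- inject a Maybe value into the stack

    main :: IO ()
    main = runMaybeT promptAge >>= print   -- unwrap back down to IO (Maybe Int)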

    One extension to this idea is using typeclasses, so that you end up writing code with types like:

    useStateAndMaybeAndIO :: (MonadIO m, MonadState s m, MonadMaybe m) => m ()
    

    Most of the publicly available monads do at least the first, if not the second. You can see what these both look like in:

    This is getting somewhat deep into Haskell program design, which I'm happy to discuss, but maybe I can suggest reading the "Gentle Introduction to Haskell" so you know all the basic language features: https://www.haskell.org/tutorial/


  • Banned

    @pie_flavor said in OOP is TRWTF:

    @Medinoc IO<a>, except not quite because generics in Haskell are just functions that return types, which is why the generic parameter syntax looks like the function parameter syntax.

    Except not quite, because those "type functions" can only have a single simple type definition and nothing else - no conditionals or anything else that you'd want to put in a regular "value" function. Saying that Haskell's IO a is the same as C#'s IO<a> is very accurate.

    Generics (a.k.a. type constructors) are sometimes called type functions because they take some argument and produce a type out of it - just like a function. It works the same in classic, C++-like languages too.
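
    You can even ask GHCi for a type constructor's "signature" with :kind:

    ghci> :kind Maybe
    Maybe :: * -> *
    ghci> :kind Maybe Int
    Maybe Int :: *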



  • Also, I'm a bit put off by the "all is immutable" stuff, because to me it causes every little change in state to balloon up the object graph until you create another, mutated clone of the entire universe (a new "state value" if you will).

    Example: a video game with a bunch of Character objects, each bearing equipment, which includes a Gun, and each gun has an ammoCount.

    If Character Bob fires the gun, I have to create a new Gun with its ammoCount lowered by 1, right? But how do I say Bob's gun is now the cloned gun? Do I need to clone him too, keeping all other fields equal but updating his gun?

    Character::FireGun() {
      return new Character(this.name, this.gun.Fire(), this.bodyArmor, this.boots);
    }
    

    And then, clone the collection of characters containing bob, and so on and so forth?

    Do functional languages at least have syntactic sugar that lets me do something like Character::FireGun() { return this.Clone(gun: this.gun.Fire()); }? That way, when I add a new member such as headGear, I don't have to add it every single time a mutant clone of the Character is created?
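
    For what it's worth, Haskell's record update syntax is pretty much exactly that sugar - a sketch with made-up types:

    data Gun = Gun { ammoCount :: Int } deriving Show
    data Character = Character { name :: String, gun :: Gun } deriving Show

    fire :: Gun -> Gun
    fire g = g { ammoCount = ammoCount g - 1 }  -- copy of g with one field changed

    fireGun :: Character -> Character
    fireGun c = c { gun = fire (gun c) }        -- all other fields carried over as-is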



  • @Gąska said in OOP is TRWTF:

    @pie_flavor said in OOP is TRWTF:

    @Medinoc IO<a>, except not quite because generics in Haskell are just functions that return types, which is why the generic parameter syntax looks like the function parameter syntax.

    Except not quite, because those "type functions" can only have a single simple type definition and nothing else - no conditionals or anything else that you'd want to put in a regular "value" function. Saying that Haskell's IO a is the same as C#'s IO<a> is very accurate.

    Yes.

    But we have to be careful here, because Haskell exposes two different ways to abstract over types. We can wrap types in other types by physically wrapping values of one type into values of another.

    Or we can abstract over types with type classes, which are more comparable in role to C# interfaces. The nice thing is, we can define interfaces for types of any number of arguments and we can even define interfaces with multiple parameters and a bunch of other things. I haven't kept up with the newest GHC, but they are always improving how expressive the typeclass system is, usually in a backwards compatible way. (But maybe I'm confused about the C#...)

    So the "interface" to using monads is defined by the type class:

    class Monad m where
      return :: a -> m a
      (>>=) :: m a -> (a -> m b) -> m b
    

    so when you get around to implementing your own monad, it would look like:

    data Maybe a = Nothing | Just a -- this defines what a `Maybe` value looks like.  So we build up a `Maybe String` by either passing `Nothing` or passing a string to `Just`.
    
    instance Monad Maybe where
      return = Just
      (Just a) >>= f = f a
      Nothing >>= f = Nothing
    

    That defines the plumbing we need to do the sequencing, and automatically keep track of whether the computation has hit Nothing or not.
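
    With that instance, Nothing short-circuits everything after it (halve is a made-up example):

    halve :: Int -> Maybe Int
    halve n = if even n then Just (n `div` 2) else Nothing

    -- Just 20 >>= halve >>= halve            ==> Just 5
    -- Just 20 >>= halve >>= halve >>= halve  ==> Nothing (5 is odd, so the chain stops)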



  • @Medinoc said in OOP is TRWTF:

    Also, I'm a bit put off by the "all is immutable" stuff, because to me it causes every little change in state to balloon up the object graph until you create another, mutated clone of the entire universe (a new "state value" if you will).

    Example: a video game with a bunch of Character objects, each bearing equipment, which includes a Gun, and each gun has an ammoCount.

    If Character Bob fires the gun, I have to create a new Gun with its ammoCount lowered by 1, right? But how do I say Bob's gun is now the cloned gun? Do I need to clone him too, keeping all other fields equal but updating his gun?

    Character::FireGun() {
      return new Character(this.name, this.gun.Fire(), this.bodyArmor, this.boots);
    }
    

    And then, clone the collection of characters containing bob, and so on and so forth?

    Do functional languages at least have syntactic sugar that lets me do something like Character::FireGun() { return this.Clone(gun: this.gun.Fire()); }? That way, when I add a new member such as headGear, I don't have to add it every single time a mutant clone of the Character is created?

    I could be wrong, but I believe both Unreal and Source store ammo counts directly on the player object rather than on the gun.

    Edit: In Source, it's stored in an array using an enum as keys to specify which ammo type it is.

    Filed under: :pendant:


  • Banned

    @Medinoc said in OOP is TRWTF:

    Do functional languages at least have syntactic sugar that lets me do something like Character::FireGun() { return this.Clone(gun: this.gun.fire); } ?

    Usually it's part of a framework, not the language. I'm not aware of any purely functional framework for managing a large number of interacting objects, but the mostly-functional ones I've seen - usually some variant of event-driven ECS - take an approach where each entity (or a component of an entity) has an update function that's triggered by some event (there might be separate functions for each event type, but they work on the same principle). The update function takes the current state of the entity and the event data as parameters, and returns the new state of only that particular entity/component (or some variant of an empty value to indicate no state change), plus new events to trigger.

    The framework driver usually deals with mutable variables for performance reasons, but the users of the framework only deal with immutable objects. Also, to guarantee that all update functions are called with the same, consistent state, at least two copies of the entire application state must exist at all times - one to read the previous state from, and one to write the new state into. So you can't avoid copying everything at least once per step anyway - but on the bright side, this kind of state is usually pretty small - in the video games where it's used the most, it's usually only a few dozen bytes per component, and no more than 10k instances.
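
    The update functions in such a framework have roughly this shape (a sketch, with made-up entity and event types):

    data Event  = Fired | Hit Int
    data Entity = Entity { hp :: Int } deriving Show

    -- pure step: old state and an event in, new state and follow-up events out
    update :: Entity -> Event -> (Entity, [Event])
    update e Fired   = (e, [Hit 10])             -- no state change, emits an event
    update e (Hit n) = (e { hp = hp e - n }, []) -- new state, no events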



  • Obviously when your objects are all mutable and you have a deep class hierarchy you're in a world of pain. Somehow the article didn't seem to move on from that. So I gave up reading after a few paragraphs. You'd think that at some point people should realize their ignorance when they're midway into writing such dross. But no, they publish it all to be embarrassed five years later when they understand things better. I know this because it happens to me frequently.

    The way to avoid embarrassment is to not learn new stuff. Or not to publish your ignorant writings. The sad thing is that these people will likely have to work in OOP languages, and if they keep their ignorance they will not manage to produce a good design. Further solidifying their belief that OOP doesn't work.


  • Discourse touched me in a no-no place

    @topspin said in OOP is TRWTF:

    This is technically valid, and the magic lies in that the return from main is StateWithHelloWorldPrinted, which only has the two outputs applied that we wanted, in the correct sequence.

    You could also think of the StateWithHelloWorldPrinted as being some sort of concatenation of the StateOnEntry with an instruction to print Hello and then another to print World. Adding the instructions doesn't make them happen, and the pure world hasn't blown up on us, but the point where things actually take place has moved to after main provides a result value.

    All of which makes me curious. How does Haskell handle input (from stdin or a file)?


  • Discourse touched me in a no-no place

    @Medinoc said in OOP is TRWTF:

    Also, I'm a bit put off by the "all is immutable" stuff, because to me it causes every little change in state to balloon up the object graph until you create another, mutated clone of the entire universe (a new "state value" if you will).

    Yes. It can most definitely work that way, and the language semantics totally do work in that way. In reality, the compiler can optimize common cases to be smarter (especially the case that is equivalent to assignment into an object that is unshared), but you can't observe that within the language's semantics.

    It's fair to say that this sort of thing is why lazy-evaluating functional programming isn't the best choice for all problems. But that doesn't make it useless. In lots of programs, working this way makes it much easier to create a correct program at all. Assuming you understand what correctness even means for the problem in the first place… 😜



  • @dkf said in OOP is TRWTF:

    All of which makes me curious. How does Haskell handle input (from stdin or a file)?

    Well... the compiler takes the IO action that got built up, desugars it, and compiles it down to CPU instructions, runtime calls, etc.

    Laziness comes into it, too. If you run:

    main = getLine >>= putStrLn `seq` return ()
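    -- seq only forces the action (getLine >>= putStrLn) to WHNF; evaluating an action isn't executing it, so nothing is read or printed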

    You get (), and it doesn't slurp up a line or print. (This is a big reason to consider the IO monad independently from the effects it causes when you interpret it).

    It's been a while since I've been through the GHC internals though.



    @topspin Your mental model of Haskell code is still wrong: Haskell is not imperative, period. It doesn't "run" every line of code in the order they're written. In fact, there are no guarantees about the order in which things are evaluated, unless you specifically use monadic constructs to enforce an order.

    More specifically, IO actions are not denoted by imperative actions, but by functions that return IO actions. They're "pure" in the sense that they return the exact same action every time they're invoked with the same arguments. print("Hello") returns an IO action that gives out no information, and semantically should print "Hello". IO actions can also take other IO actions and do stuff with them. At the bottom of the turtles, main has type IO (), aka it should be one big IO action that doesn't give out any information. Then, after all this, the only guarantee is that the compiled program will execute the IO action known as main.

    Just like Haskell is lazy, IO actions are not invoked until they are actually needed. It's kind of like Promises that, instead of running immediately and having you wait for the result with .then, don't execute unless specifically requested by .then. If nobody has requested to hear an action, it doesn't make a sound.
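
    For instance (a runnable sketch), naming an action doesn't run it - only being reached from main does:

    main :: IO ()
    main = do
      let hello = putStrLn "Hello"  -- hello is just a value of type IO ()
      hello                         -- executed here: prints once
      hello                         -- the same value, executed again: prints twice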

    Honestly, your line of thought feels like it comes from one of those ignorant cranks I encounter all the time: a language is defined by its specs and semantics, so unless the compiler/runtime committed a :wtf: and made a whoopsie, arguments should be based on them. The key part of Haskell's semantics is that it's non-strict. If you don't model your conception of Haskell programs that way, what comes out of your reasoning will be :trwtf: all the time.



  • @gleemonk said in OOP is TRWTF:

    The sad thing is that these people will likely have to work in OOP languages and if they keep their ignorance they will not manage to produce a good design. Further solidifying their belief that OOP doesn't work.

    It's not like there is a lack of people building shit designs in OOP, so they'd not really stand out much, apart from the putting of FP on an altar and worshipping the shit out of it.



  • @Carnage said in OOP is TRWTF:

    It's not like there is a lack of people building shit designs in OOP, so they'd not really stand out much, apart from the putting of FP on an altar and worshipping the shit out of it.

    ...and never producing anything of note with FP so they don't ever acknowledge that good design takes work in both paradigms.

    The first time I wanted to write something longer than a page in Haskell I had no idea how to structure the program. I was paralysed. And I haven't written anything of substance in that language since. Maybe I should go around telling people it's impossible to have a good design in Haskell. But hey, the knowledge did transfer nicely to Elm, so all was not lost.



  • @Medinoc I assume it's a matter of just passing the new Character down to all the functions that need it on every frame of the game, rather than letting them keep a reference.

    But isn't that equivalent to just having an object that only the owner can mutate?



  • @gleemonk said in OOP is TRWTF:

    @Carnage said in OOP is TRWTF:

    It's not like there is a lack of people building shit designs in OOP, so they'd not really stand out much, apart from the putting of FP on an altar and worshipping the shit out of it.

    ...and never producing anything of note with FP so they don't ever acknowledge that good design takes work in both paradigms.

    The first time I wanted to write something longer than a page in Haskell I had no idea how to structure the program. I was paralysed. And I haven't written anything of substance in that language since. Maybe I should go around telling people it's impossible to have a good design in Haskell. But hey, the knowledge did transfer nicely to Elm, so all was not lost.

    To be fair, structuring programs properly in OOP takes skills and effort too. It's just that they're the standard curriculum in every software engineering course.


  • Banned

    @anonymous234 said in OOP is TRWTF:

    But isn't that equivalent to just having an object that only the owner can mutate?

    The question is when the owner can mutate it. If you want a stable, predictable simulation, you don't want the objects to change until all objects have decided what they want to change into. Pretty much the only way to achieve this is to keep two copies of everything - the previous state of everything as immutable input to the update functions, and the new state as the writable, non-readable output. Many game developers choose not to do this and update objects in place immediately, for various reasons (performance, lack of foresight, and :kneeling_warthog: seem the most common) - and that's how we get weird bugs that only happen under very specific circumstances, when two conflicting things happen at the same time.
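
    A sketch of that two-copies discipline (toy types): every update reads the same immutable previous frame, so update order can't cause conflicts:

    data Ball = Ball { pos :: Int, vel :: Int } deriving (Eq, Show)

    stepAll :: [Ball] -> [Ball]
    stepAll prev = map stepOne prev
      where
        -- bounce if any *other* ball occupied our cell in the previous frame
        stepOne b
          | any (\o -> o /= b && pos o == pos b) prev = b { vel = negate (vel b) }
          | otherwise                                 = b { pos = pos b + vel b }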



  • @Medinoc said in OOP is TRWTF:

    Also, how do I chain functions switching monads halfway through? (like, a Future whose last function returns a Maybe)?

    [image]


  • Discourse touched me in a no-no place

    @_P_ said in OOP is TRWTF:

    @gleemonk said in OOP is TRWTF:

    @Carnage said in OOP is TRWTF:

    It's not like there is a lack of people building shit designs in OOP, so they'd not really stand out much, apart from the putting of FP on an altar and worshipping the shit out of it.

    ...and never producing anything of note with FP so they don't ever acknowledge that good design takes work in both paradigms.

    The first time I wanted to write something longer than a page in Haskell I had no idea how to structure the program. I was paralysed. And I haven't written anything of substance in that language since. Maybe I should go around telling people it's impossible to have a good design in Haskell. But hey, the knowledge did transfer nicely to Elm, so all was not lost.

    To be fair, structuring programs properly in OOP takes skills and effort too. It's just that they're the standard curriculum in every software engineering course.

    The basic ideas of software patterns should transfer, though the specific patterns won't. Common patterns are for imperative OO languages with weak classes; I'd expect functional languages like Haskell to be quite different in flavour.

    And yes, I consider both Java and C# to have a weak class system. C++ is even worse; it isn't even single-rooted, the savages!


  • BINNED

    @_P_ said in OOP is TRWTF:

    @topspin Your mental model of Haskell code is still wrong: Haskell is not imperative, period. It doesn't "run" every line of code in the order they're written. In fact, there are no guarantees about the order in which things are evaluated, unless you specifically use monadic constructs to enforce an order.

    Guess what, I know that order is not guaranteed. I might even be the one who originally brought it up (not re-reading everything to check), because of how that conflicts with side-effects.
    That's one of the two questions I had (how side-effects can be sequenced, and how to enforce that they happen exactly once), and the one that got cleared up first.

    More specifically, IO actions are not denoted by imperative actions, but by functions that return IO actions. They're "pure" in the sense that they return the exact same action every time they're invoked with the same arguments. print("Hello") returns an IO action that gives out no information, and semantically should print "Hello". IO actions can also take other IO actions and do stuff with them. At the bottom of the turtles, main has type IO (), aka it should be one big IO action that doesn't give out any information. Then, after all this, the only guarantee is that the compiled program will execute the IO action known as main.

    Just like Haskell is lazy, IO actions are not invoked until they are actually needed. It's kind of like Promises that, instead of running immediately and having you wait for the result with .then, don't execute unless specifically requested by .then. If nobody has requested to hear an action, it doesn't make a sound.

    Really, doesn't sound too different from my description.

    Honestly, your line of thought feels like it comes from one of those ignorant cranks I encounter all the time: a language is defined by its specs and semantics, so unless the compiler/runtime committed a :wtf: and made a whoopsie, arguments should be based on them.

    That's hilarious. You make it sound like I'm just being stubborn about applying imperative reasoning and then getting stuck, when actually the problems I was asking about don't even show up in imperative languages at all.

    I don't care about the compiler / the runtime fucking up. That line of thought was entirely made up by you. It is precisely the semantics of purely functional languages that I had these questions about in the first place.
    Unlike in imperative programs, with pure functions you can reorder things, replace "function calls" with their value (referential transparency), or even make up function calls out of thin air without changing the semantics of the program. And that obviously doesn't work if you have side effects.

    It was the non-FP people who actually gave the relevant pieces of information, and finally @Gąska gave a comprehensible explanation of how this can all work, while adding a detail no one had mentioned before:

    just one instance of RealWorld and new instances cannot be created without disposing of the old one

    That needed explanation, as it is in contrast to how everything else can be copied around at will without observable effect because of immutability of data and pure functions.

    As an aside, while @Gąska's model of how the pureness of the language can be reconciled with side effects is probably closer to how it's actually implemented (or seems easier to implement, at least), as far as I can tell it is effectively the same as my model. (Like it's the Copenhagen interpretation to my Many Worlds. 😝)

    @_P_ said:

    The key part of Haskell's semantics is that it's non-strict. If you don't model your conception of Haskell programs that way, what comes out of your reasoning will be :trwtf: all the time.

    I think that's the first time this has been brought up at all. EDIT: it isn't. (And you also didn't make clear how it's even relevant.)

    Seems to me that for all the talk of "this is all really simple", you FP-people failed spectacularly at giving a simple explanation.


  • BINNED

    @dkf said in OOP is TRWTF:

    C++ is even worse; it isn't even single-rooted, the savages!

    Neither are Java or C#. (Primitives don't derive from Object)



  • @topspin said in OOP is TRWTF:

    Seems to me that for all the talk of "this is all really simple", you FP-people failed spectacularly at giving a simple explanation.

    Well... when a concept like a "monad" has the intrinsic complexity of "imperative languages"... and there are tons of different assumptions one can make about imperative languages which don't necessarily hold in them all... it gets a little tricky.

    Add to that that we're starting in the middle, instead of actually starting with how the language is evaluated...


  • Banned

    @topspin said in OOP is TRWTF:

    It was the non-FP people who actually gave the relevant pieces of information, and finally @Gąska gave a comprehensible explanation of how this can all work, while adding a detail no one had mentioned before:

    just one instance of RealWorld and new instances cannot be created without disposing of the old one

    Um... Funny that... So... You recall that `seq` example that @Captain posted a bit later? Yeah... I wasn't aware of that property earlier. And it's pretty important, because... you see... turns out everything I said there was completely wrong. 😅



  • @Gąska It's okay. You're a Haskell Padawan now that you understand why. :-)


  • Banned

    @Captain except I don't. I understand that it IS wrong. I still have no goddamn clue WHY it's wrong, and what exactly is going on when Haskell sees code.


  • BINNED

    @Gąska said in OOP is TRWTF:

    @topspin said in OOP is TRWTF:

    It was the non-FP people who actually gave the relevant pieces of information, and finally @Gąska gave a comprehensible explanation of how this can all work, while adding a detail no one had mentioned before:

    just one instance of RealWorld and new instances cannot be created without disposing of the old one

    Um... Funny that... So... You recall that `seq` example that @Captain posted a bit later? Yeah... I wasn't aware of that property earlier. And it's pretty important, because... you see... turns out everything I said there was completely wrong. 😅

    Nope, I overlooked that. So seq throws away that side's world state (unless it's undefined/bottom)?
    I guess I could reconcile it with my interpretation, though.




  • Banned

    Okay so here comes another attempt at explaining how I/O works in Haskell. It's probably as wrong as all earlier attempts, but I hope I'm inching ever closer to the truth.

    When I said "completely wrong", it was a bit of an overdramatization. The part about passing RealWorld around and it being different on every call was correct. What wasn't correct was that it changes whenever an interaction with the outside world happens. That's not true, because no interaction happens yet while the expressions are being evaluated. It's just that Haskell internally keeps track of what action is supposed to happen at every step, all the way until the entire main is evaluated and produces the final RealWorld that gets handed back to the runtime. At that point, Haskell runs all the actions that were scheduled during evaluation, but only on the path that produced the final RealWorld - if there was any divergence, everything that didn't end up in the final RealWorld gets discarded, as if it never existed.

    Problem: this model doesn't explain how it's possible for a Haskell program to check whether a file exists, and execute one of two different code paths depending on that. It won't know which of the two possible RealWorlds is the final one until it does I/O, and if it delays all I/O until after the final RealWorld is known... Chicken and egg.

    @Captain, maybe you'll be able to help here? Which part did I get wrong here?


  • Discourse touched me in a no-no place

    @Gąska said in OOP is TRWTF:

    Okay so here comes another attempt at explaining how I/O works in Haskell. It's probably as wrong as all earlier attempts, but I hope I'm inching ever closer to the truth.

    Are you aiming for GTL?

