OOP is TRWTF
-
Hey, I wanted to reply to that deleted post.
According to the Haskell Wiki
seq
is defined as:
⊥ seq b = ⊥
a seq b = b
So, unless the left-hand side is ⊥ it doesn't need to "get evaluated" at all, thus
getLine
above doesn't read a line.
Any "temporary world states not yet made real" are thus discarded, as they never become a part of the world state returned by main. Almost makes sense, so far.
But how does it actually decide that?
Consider that the left-hand side is the moral equivalent of (pseudo-code again):
i = getLine
if i == 0: return 0
return ⊥
How does it know whether the LHS is undefined if that depends on the (potentially not to be executed) input?
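For concreteness, the two equations from the wiki can be exercised directly (plain Prelude; the ⊥ case is left commented out, since evaluating it just crashes):

```haskell
-- Left-hand side is defined: seq just returns the right-hand side.
ex1 :: Int
ex1 = () `seq` 42          -- () is already a constructor, so this is 42

-- Left-hand side is ⊥: the whole expression is ⊥.
-- ex2 = undefined `seq` 42  -- evaluating this throws Prelude.undefined

main :: IO ()
main = print ex1
```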
-
@Gąska said in OOP is TRWTF:
At this point, Haskell runs all the actions that were scheduled during evaluation, but only on the path that produced the final RealWorld - if there was any divergence, everything that didn't end up in the final RealWorld gets discarded, as if it never existed.
Hey, I described it like that.
(You'll disagree but it seems equivalent to me)
-
@Gąska said in OOP is TRWTF:
When I said completely wrong, it was a bit of overdramatization. The part about passing RealWorld around and it being different on every call was correct.
True.
It's just that Haskell internally keeps track of what action is supposed to happen at every step, all the way until the entire main is evaluated.
Evaluation to "weak head normal form" is what prunes branches from the tree (I think) you're describing, based on laziness.
So, seq broke your example because it typechecks and declared that the left branch was defined and prunable.
-
@topspin said in OOP is TRWTF:
So, unless the left-hand side is ⊥ it doesn't need to "get evaluated" at all, thus getLine above doesn't read a line.
Any "temporary world states not yet made real" are thus discarded, as they never become a part of the world state returned by main. Almost makes sense, so far.
The value getLine is defined, as in it's somewhere in the source code. That's all it checks.
It's never actually going to get that line because of laziness -- getLine won't get a line until that resulting line is needed (i.e., it's bound to a variable getting used in a function somewhere). That's part of Weak Head Normal Form.
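The same point can be made without I/O - a minimal laziness sketch (plain Prelude; the names are made up for illustration):

```haskell
-- A bound-but-unused value is never evaluated, so the ⊥ inside is harmless.
unused :: Int
unused = let expensive = undefined in 3

-- Only as much of a structure as is demanded gets built:
-- head forces one cons cell of an infinite list and stops.
firstDoubled :: Int
firstDoubled = head (map (* 2) [1 ..])   -- 2

main :: IO ()
main = print (unused, firstDoubled)
```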
-
@topspin said in OOP is TRWTF:
I don't see how abstraction and encapsulation are the root cause of shared mutable state, though.
I especially like the implication that functional programming doesn't have abstractions or encapsulation.
-
@Captain said in OOP is TRWTF:
@topspin said in OOP is TRWTF:
So, unless the left-hand side is ⊥ it doesn't need to "get evaluated" at all, thus getLine above doesn't read a line.
Any "temporary world states not yet made real" are thus discarded, as they never become a part of the world state returned by main. Almost makes sense, so far.
The value getLine is defined, as in it's somewhere in the source code. That's all it checks.
It's never actually going to get that line because of laziness -- getLine won't get a line until that resulting line is needed. That's part of Weak Head Normal Form.
But what if I make it dependent on the result of getLine whether the expression is defined? Or is that not possible?
The wiki mentions:
if the compiler can statically prove that the first argument is not ⊥, ...
implying that most of the time it can't, which made me believe it can be conditional on the input.
-
@topspin said in OOP is TRWTF:
@dkf said in OOP is TRWTF:
C++ is even worse; it isn't even single-rooted, the savages!
Neither are Java or C#. (Primitives don't derive from Object)
While that's true, C# doesn't allow you to directly deal with the primitive in code (instead going through its ValueType) and the Java compiler indiscriminately boxes primitives to their Number type even when you don't want it to.
-
@topspin The semantics of seq are that it reduces the left-most argument to weak head normal form. In particular, that means that it fully evaluates its top-most data constructor, but does not dig in to evaluate the inner values (it keeps track of what hasn't been evaluated by keeping something called a "thunk" -- these are the "possible" execution paths, which may or may not be needed in the real program).
There are potentially situations where evaluating a top-most data constructor leads to non-termination, etc.
But, for the vast majority of IO actions, no computation is required to get its top-most data constructor.
For a bit more of a specific example, consider:
data Maybe a = Just a | Nothing
Just and Nothing are data constructors. So if we pass in something like:
(Just "String") `seq` Nothing
the runtime system sees the Just and moves on to evaluate the Nothing.
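A runnable version of that example, using the real Maybe from the Prelude rather than redefining it:

```haskell
-- The Just is already in weak head normal form, so seq is satisfied
-- immediately and the whole expression reduces to the right-hand side.
demo :: Maybe ()
demo = Just "String" `seq` Nothing

main :: IO ()
main = print demo
```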
-
@Gąska said in OOP is TRWTF:
Problem: this model doesn't explain how it's possible for a Haskell program to check whether a file exists, and execute one of two different code paths depending on that. It won't know which of the two possible RealWorlds is the final one until it does I/O, and if it delays all I/O until after the final RealWorld is known... Chicken and egg.
That doesn't seem to be a problem, as "both code paths" depend on the file existence check. So your resulting RealWorld description will be dependent on both paths (lazily evaluated) and you can execute the file existence check, then run the corresponding path.
Just how it works together with seq, which would throw away things which can't be undone anymore, I don't understand. But I think @Captain will explain that separately.
-
@Captain So it's only the head that gets considered at all?
⊥ seq b = ⊥
but (Just ⊥) seq b = b?
-
@topspin Yes.
Which understandably makes reasoning about what a computer is doing when evaluating Haskell difficult. A Haskell program is more like a data structure that another program (the runtime) traverses.
-
Wait a second.
main = getLine >>= putStrLn `seq` return ()
The `seq` is between getLine >>= putStrLn and return (). Both of these expressions evaluate to functions. Like, functions themselves, not calls to functions. main wasn't called yet and no RealWorld exists at this point. It's not sequencing I/O actions, it's sequencing evaluations of function values. putStrLn is evaluated before return (), but neither of them is called at this point. And then `seq` does its work, discards putStrLn, and keeps return (). So it's not that Haskell discards an I/O action. The I/O action never existed, because putStrLn wasn't ever actually called before it was discarded.
This changes everything.
It seems like my new theory is much more wrong this time, and the previous one, the one I've just derided so badly, was actually almost correct. Almost, because I've got one thing wrong - RealWorld doesn't disappear after being used. Everything else seems spot on. There is no bookkeeping trickery or backtracking paths going on. main is being evaluated from the expression (which returns a function RealWorld -> ()). Then main gets called with an initial instance of RealWorld. All function calls that survived compilation are now being called in order of binding, and side effects of I/O happen while the functions are being called and evaluated, just like in impure languages. The `seq` was a red herring! Everything makes sense now!
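That reading can be checked with a variant that keeps a pure witness on the right (assuming GHC; forcing the discarded action to WHNF is cheap and performs no I/O):

```haskell
-- Forcing (getLine >>= putStrLn) only peels it down to IO's constructor;
-- the action is never run, so no line is read and nothing is echoed.
kept :: String
kept = (getLine >>= putStrLn) `seq` "forcing the action did no I/O"

main :: IO ()
main = putStrLn kept
```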
-
@Captain And then because functions like getLine are always defined, any "maybe undefined" values dependent on the result of that input are not considered at all, thus there is no "paradoxical" IO to undo?
Got it. (I think. Probably not)
-
@topspin Yes. When you use the IO monad's (>>=), it wraps things up in GHC's internal IO data constructor (just like the Just data constructor I demonstrated). There is real Haskell code to evaluate in there, but it won't happen unless the runtime demands it.
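For reference, this is roughly how GHC defines IO internally (the real definition lives in GHC.Types; writing it down needs the MagicHash and UnboxedTuples extensions). The constructor around the state-passing function is exactly what `seq` stops at:

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}

import GHC.Prim (RealWorld, State#)
import Prelude hiding (IO)

-- An IO action is a function from a RealWorld state token to a new token
-- plus a result, wrapped in a constructor. Reducing an action to WHNF
-- exposes this wrapper and goes no further; the function inside is only
-- applied when the runtime actually runs the action.
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
```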
-
@topspin you might want to read my post above. Unlike the result of getLine, the getLine itself is a concrete value. It's not "maybe uninitialized", it's definitely initialized - it's a function. And because it's definitely initialized, it gets discarded. Now, if it was the result of calling getLine that was on the left of `seq`, Haskell wouldn't be able to prove it's initialized, and it would be evaluated normally and the program would wait for input.
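A sketch of that contrast (assuming GHC; the commented-out variant blocks on stdin, because the value under `seq` then depends on the input):

```haskell
-- Forcing the action itself: getLine is a concrete value, so no input
-- is read and no I/O happens at all.
noRead :: String
noRead = getLine `seq` "forced the action, read nothing"

main :: IO ()
main = putStrLn noRead
-- By contrast, forcing the *result* of running the action does wait for input:
-- main = getLine >>= \s -> s `seq` putStrLn "the input line was demanded"
```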
-
Oh, and then the real story is actually more complicated, because all of this gets compiled down to two intermediate languages lol!
But you get the idea. :-)
-
Ok, I think I have a more solid idea of Haskell than before this thread, bot damn it! I do feel as if I just sat through some people explaining how the scores in a 🦗 match came about.
Now need beers, many beers.
-
@Gąska said in OOP is TRWTF:
@topspin you might want to read my post above. Unlike the result of getLine, the getLine itself is a concrete value. It's not "maybe uninitialized", it's definitely initialized - it's a function. And because it's definitely initialized, it gets discarded. Now, if it was the result of calling getLine that was on the left of `seq`, Haskell wouldn't be able to prove it's initialized, and it would be evaluated normally and the program would wait for input.
Yes, that is what I said above. It is an expression that's always defined, but depending on its result a sub-expression could be either defined or undefined. Like Just ⊥ is defined, even though it's "just" an undefined value.
-
Also...
@topspin said in OOP is TRWTF:
@Captain So it's only the head that gets considered at all?
⊥ seq b = ⊥
but (Just ⊥) seq b = b?
Not quite. The ⊥ cannot actually exist - it's a type without values. Whatever would have produced ⊥ would crash the program or enter an infinite loop before it'd be able to. `seq` isn't the only thing that enforces order - all data dependencies do. And constructing Just ⊥ depends on ⊥, which cannot be created. So while (Just ⊥) seq b ought to eventually evaluate to b, it'll never have a chance to.
Does it make sense? Please tell me it does.
-
@Gąska said in OOP is TRWTF:
Not quite. The ⊥ cannot actually exist - it's a type without values. Whatever would have produced ⊥ would crash the program or enter an infinite loop before it'd be able to. seq isn't the only thing that enforces order - all data dependencies do. And constructing Just ⊥ depends on ⊥, which cannot be created. So while (Just ⊥) seq b ought to eventually evaluate to b, it'll never have a chance to.
No, you're assuming a more traditional "bottom up" propagation.
The run-time doesn't know if the inside of that Just is (potentially) a bottom or not. And it doesn't care, because it doesn't evaluate inside of the Just unless it needs it at all. And it doesn't, because it's on the left side of a seq, which only cares if the head is defined.
The standard defines a bottom called "undefined", defined by the equation "undefined = undefined". If you did (Just undefined) `seq` "boo" you'd get a "boo".
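That one is easy to check (plain Prelude):

```haskell
-- seq forces Just undefined only to its outermost constructor (Just),
-- which exists, so the undefined payload is never touched.
boo :: String
boo = Just undefined `seq` "boo"

main :: IO ()
main = putStrLn boo
-- undefined `seq` "boo"   -- by contrast, this expression is itself ⊥
```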
-
@dfdub said in OOP is TRWTF:
@topspin said in OOP is TRWTF:
I don't see how abstraction and encapsulation are the root cause of shared mutable state, though.
I especially like the implication that functional programming doesn't have abstractions or encapsulation.
Anyone thinking that needs to take a long hard look at Standard ML's use in theorem provers.
Unlike Haskell, ML is eagerly evaluated so the semantics of IO are more like what imperative programmers would understand. It's still based on the same type theories as Haskell though, and definitely has types that do information hiding… which is in fact vital for how it used to be used. In particular, a theorem could only be constructed when all of its arguments were either directly axioms or constructed from them using the logic you were accepting as valid for the purposes of the argument being made (when I encountered this, it was with a very rich but not wholly tractable logic that was equivalent to Turing-complete programming), yet the details of how everything was proved were usually concealed from you. Elegant and powerful abstraction and encapsulation.
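That LCF-style idea can be sketched in a few lines of Haskell (a toy: Prop, Thm, and the two rules here are all made up for illustration; in a real kernel the Thm constructor would not be exported, so the only way to obtain a Thm is through the trusted rules):

```haskell
-- Toy propositions: atoms and implication.
data Prop = Atom String | Imp Prop Prop deriving (Eq, Show)

-- A Thm wraps a Prop. Client code would only see the smart constructors
-- below, so every Thm is built from axioms by valid inference steps.
newtype Thm = Thm Prop deriving Show

axiom :: Prop -> Thm
axiom = Thm

-- Modus ponens: from (a -> b) and a, conclude b.
modusPonens :: Thm -> Thm -> Maybe Thm
modusPonens (Thm (Imp a b)) (Thm a')
  | a == a' = Just (Thm b)
modusPonens _ _ = Nothing

main :: IO ()
main = print (modusPonens (axiom (Imp (Atom "p") (Atom "q")))
                          (axiom (Atom "p")))
```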
-
@Captain said in OOP is TRWTF:
The standard defines a bottom called "undefined", defined by the equation "undefined = undefined".
Oh, so ⊥ is not actually a "never" type and it can actually be constructed at will in code that actually works. Got it.
The more I understand Haskell, the less impressed I am by it.
-
@Gąska It can be "created" by an expression that doesn't terminate. But that's not a problem as long as you don't actually try to evaluate it.
Python syntax:
def f(i):
    if i == 0:
        return 0
    else:
        while True:
            pass
You have two branches, the first results in 0, the second doesn't terminate and is thus ⊥. Since before evaluating the branch you are only dealing with an expression that describes the ⊥ branch, that's just fine.
So, mixing both languages and using f as defined above: Just f(0) is equivalent to Just 0 and Just f(1) is equivalent in some sense to Just ⊥, but Just f(1) seq b sees the Just part of the LHS, considers it defined, and evaluates to b.
At least I hope that's how it works.
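The mixed pseudo-code translates to Haskell directly, and behaves as hoped (f is the same made-up function):

```haskell
-- f 0 terminates; f of anything else recurses forever, i.e. it is ⊥.
f :: Int -> Int
f 0 = 0
f i = f i

-- seq only looks at the outermost constructor: it sees Just and stops,
-- so the looping thunk inside is never entered.
b :: String
b = Just (f 1) `seq` "b"

main :: IO ()
main = putStrLn b
```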
-
Did you expect that they solved the Halting problem? I said non-termination is a bottom. :-)
There are a few others, like error, which takes a string and throws it as an error in the run-time system.
-
@topspin oh, I get it. There's no data dependency between Just and its argument after all. Just can exist and be passed around and transformed without ever evaluating the thing inside it.
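Exactly - a Just ⊥ can even be pattern-matched on and transformed, as long as nothing demands the payload (plain Prelude):

```haskell
-- maybe inspects only the constructor; const throws the payload away,
-- so the undefined inside is never evaluated.
inspected :: Int
inspected = maybe 0 (const 1) (Just undefined)

-- fmap builds a new Just around a new (still unevaluated) thunk.
transformed :: Maybe Int
transformed = fmap (const 7) (Just undefined)

main :: IO ()
main = print (inspected, transformed)
```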
-
@Captain said in OOP is TRWTF:
Did you expect that they solved the Halting problem?
No, I expected them to have a type system that at least has feature parity with TypeScript. Is that too much to ask for?
-
@Gąska said in OOP is TRWTF:
No, I expected them to have a type system that at least has feature parity with TypeScript. Is that too much to ask for?
Haskell isn't total (i.e., it's Turing complete, so therefore there are expressions that don't terminate). Is TypeScript? If not, I promise I can make a non-terminating expression for TypeScript that typechecks. If I knew TypeScript, that is lol. :-D
-
@dkf said in OOP is TRWTF:
@dfdub said in OOP is TRWTF:
@topspin said in OOP is TRWTF:
I don't see how abstraction and encapsulation are the root cause of shared mutable state, though.
I especially like the implication that functional programming doesn't have abstractions or encapsulation.
Anyone thinking that
Is an idiot, no need for further discussion. Maybe you can do without encapsulation, but you're always going to use abstractions one way or another.
Unlike Haskell, ML is eagerly evaluated so the semantics of IO are more like what imperative programmers would understand.
Interestingly, while I never learned Haskell, we did have ML in the first half of freshman term. As it introduced concepts I hadn't seen before, I found it much more "enlightening" than the boring Java stuff that followed later on.
But since you can do simple exercises by just "returning values" to the REPL, we never made it to how to do actual IO though, and thus I never used it afterwards.
-
@Captain I think there's some misunderstanding, not sure who misunderstood who.
In TS, there can never be an existing value of type never. If a function has a return type of never, it GUARANTEES that it will never terminate (except by throwing an exception, maybe). When I first read about Haskell's ⊥, being the bottom of the entire type system and all, I assumed it has all those same properties. But it seems it's not so?
-
@topspin I love a good REPL.
-
@topspin said in OOP is TRWTF:
Interestingly, while I never learned Haskell, we did have ML in the first half of freshman term. As it introduced concepts I hadn't seen before, I found it much more "enlightening" than the boring Java stuff that followed later on.
But since you can do simple exercises by just "returning values" to the REPL, we never made it to how to do actual IO though, and thus I never used it afterwards.
I'm actually writing my thesis project right now in OCaml (basically, a very very very extended version of ML). So far, it's pretty nice, except for the absolute impossibility to organize code sensibly because it lacks proper namespaces. It's also strict-evaluated, so it works very much like imperative languages and not at all like Haskell. It also doesn't care about purity, so I/O works exactly like in C# etc. Also, the standard library sucks - so much that almost the entire community switched over to an alternative, very non-standard standard library developed by a company called Jane Street.
-
@Gąska What was the motivation for your choice, besides maybe curiosity?
-
@Gąska They sound similar enough, but... the Hindley-Milner type system is a little different maybe.
Recall that the "type" for a bottom is a free type variable. (Or rather, the type forall a . a.) It's 'empty' as a type because no value can have the properties of literally every single type.
But like I said, bottom isn't a "value". It's a weird other thing that can happen when there's an error/non-termination while evaluating an expression. Indeed, that's the only way a (the type) can be inhabited.
Haskellers say that "bottoms are indistinguishable" by Haskell. That is to say, as far as Haskell the language is concerned, bottom is an "error". The run-time system may handle special bottoms differently, like error or the real implementation of undefined (which the runtime knows how to catch and report), which makes it handy for stubbing functions while you write.
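The stubbing trick looks like this in practice (the function names are made up; error and undefined are the standard Prelude bottoms):

```haskell
-- A stub: typechecks at any signature you like, crashes only if called.
parseConfig :: String -> [(String, String)]
parseConfig = undefined

-- error carries a message into the runtime's error report.
requirePositive :: Int -> Int
requirePositive n
  | n > 0     = n
  | otherwise = error ("requirePositive: got " ++ show n)

main :: IO ()
main = print (requirePositive 5)   -- parseConfig is never called, so no crash
```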
-
@topspin said in OOP is TRWTF:
@Gąska What was the motivation for your choice, besides maybe curiosity?
OCaml standard library contains an OCaml parser. Why it matters is best summarized as raisins.
-
@Gąska said in OOP is TRWTF:
OCaml standard library contains an OCaml parser.
That's a good enough explanation.
-
@Captain said in OOP is TRWTF:
@Gąska They sound similar enough, but... the Hindley-Milner type system is a little different maybe.
Rust also uses an HM type system and has the ! type, which works exactly like TS's never.
-
@Gąska So they just made the commitment/choice that only non-termination was ever going to fit in to their bottom type? I mean, okay then.
-
@Captain well, it's a strict language. How could you possibly evaluate a value of type that doesn't have any values?
-
@error said in OOP is TRWTF:
We tend to refer to as "reactive" programming, for raisins.
AFAIK, "reactive programming" is basically functional programming within an event loop.
-
@dfdub I think the rebranding was to avoid triggering people who have an allergy to the words "functional programming". That, or it was created by some very smart programmers who somehow never encountered the concept of functional programming in their entire lives, and thought they were inventing something new.
-
@boomzilla said in OOP is TRWTF:
It's more that the compiler / runtime is free to evaluate that stuff lazily and in whatever order it likes (though obviously it needs to evaluate the parameters for a function before the function itself can execute).
Nope, Haskell doesn't evaluate function parameters before calling the function.
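A one-liner makes the point (plain Prelude - the argument is passed as an unevaluated thunk):

```haskell
-- const never looks at its second argument, so the ⊥ passed in
-- is never evaluated; in a strict language this call would crash.
callByNeed :: Int
callByNeed = const 1 undefined

main :: IO ()
main = print callByNeed
```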
-
@Gąska Well, you could also argue that baking the event loop into your programming language / DSL / framework creates a new programming paradigm, so I wouldn't say it's just a way to avoid the label "functional".
-
I don't know if "reactive programming" is new to the mainstream, but "Functional Reactive programming" has been around since 1997.
-
@Gąska said in OOP is TRWTF:
or it was created by some very smart programmers who somehow never encountered the concept of functional programming in their entire lives, and thought they're inventing something new.
You just summed up the entire JS ecosystem.
-
I think what is confusing @topspin and the likes is that Haskell has a very peculiar model of computation, so it takes a while before the idea of how Haskell code works sinks in.
Lots of advanced usage of Haskell doesn't really involve that much functional programming, but leans more on the types: there's the idea of dependent types, which means even types are first-class, and can be computed by a function. Recall the >>= example, where how it behaves depends on what type >>= is resolved to? Which typeclass instance is being invoked depends on what type the typeclass function is resolved to, akin to polymorphism in OO, so actually you need to specify what type >>= is as well as the existence of >>=, as
>>= :: IO a -> (a -> IO b) -> IO b
and
>>= :: [a] -> (a -> [b]) -> [b]
do completely different things (>>= has type Monad m => m a -> (a -> m b) -> m b). In fact even functions are monads if you treat -> as a type constructor, which leads to the natural conclusion that the space of computable functions is an Applicative, with the K and S combinators corresponding to pure and <*> respectively.
Most of the time the type system can infer the types for you so you don't have to explicitly specify every one of them yourself. Sometimes it matters, either because of language restrictions (like the monomorphism restriction) or because you're utilizing the type system in your favor. Turns out this is very important, because you can do some fun things with it, like polyvariadic(!) functions in FP:
>>> printf "%s, %d, %.4f" "hello" 123 pi
hello, 123, 3.1416
Or to keep track of information by types (Data.Proxy is specifically for this purpose. Data.Functor.Const also uses it to do a const for types in a type-safe way):
newtype Count x = Count { getCount :: Nat } deriving (Show, Eq, Ord)

class Countable c where
  count :: Count c

instance Countable Void where count = Count 0
instance Countable ()   where count = Count 1
instance Countable Bool where count = Count 2
instance Countable Nat  where count = Count undefined

>>> (count :: Count Void)
0
>>> (count :: Count ())
1
>>> (count :: Count Bool)
2
>>> (count :: Count Nat)
error "Prelude.undefined"
Oh, and did I mention you can do simple forward automatic differentiation with types in Haskell too? It's like templates in C++ but much safer due to compile-time checking. Very eye-opening.
Once this gets advanced enough you're heading into the territory of theorem proving and formal verification, which is a massive, underrated field that is growing alongside machine learning.
But to be fair, Haskell is an '80s language much like Java: its HM type system is very ancient, and it doesn't really support dependent typing very well - GHC is desperately trying to hack in a half-functional version right now. If you want a more modern and sound language that supports dependent typing properly, there are newer languages like Idris and Agda. Try them!
-
@Gąska said in OOP is TRWTF:
@Captain except I don't. I understand that it IS wrong. I still have no goddamn clue WHY it's wrong, and what exactly is going on when Haskell sees code.
There is no spoon.
-
@Gąska said in OOP is TRWTF:
@Captain except I don't. I understand that it IS wrong. I still have no goddamn clue WHY it's wrong, and what exactly is going on when Haskell sees code.
On a more serious note, to understand what the Haskell compiler does to code, you do need to know some theory on how the compiler works. Otherwise things like this happen after optimization.
It's really not that much different from the "imperative" languages we usually think about, aka C and C++. If you write blatantly dead code or UBs, compilers optimize them away in totally reasonable ways and make you wonder why your code isn't running "line per line" like imperative, interpreted languages should behave. Or sometimes they just straight out do unexpected things. I don't know, you can't really find an up-to-date, widely used language that is strictly imperative (so no built-in OO support and such) and yet doesn't optimize aggressively. The only one I can recall is probably pre-2.0 ActionScript, or some toy languages like those that exist in a programming game/Zachtronics game.
-
@topspin said in OOP is TRWTF:
@boomzilla said in OOP is TRWTF:
@mott555 said in OOP is TRWTF:
@boomzilla said in OOP is TRWTF:
endofunctor
This sounds like something that happens when you get some Haskell in your butt.
It's what you use when you get a monoid there, actually.
That sounds categorically false.
Definitely impure according to the Shafteer-Grey theorem.
-
@Gąska said in OOP is TRWTF:
Haskell program starts with just one instance of RealWorld and new instances cannot be created without disposing of the old one, and a new one always gets created whenever any outside interaction happens
An immutable singleton by any other name would smell as sweet.
-
@LaoC said in OOP is TRWTF:
@topspin said in OOP is TRWTF:
@boomzilla said in OOP is TRWTF:
@mott555 said in OOP is TRWTF:
@boomzilla said in OOP is TRWTF:
endofunctor
This sounds like something that happens when you get some Haskell in your butt.
It's what you use when you get a monoid there, actually.
That sounds categorically false.
Definitely impure according to the Shafteer-Grey theorem.