Git hates UTF-16


  • ♿ (Parody)

    @levicki said in Git hates UTF-16:

    @pie_flavor said in Git hates UTF-16:

    @levicki said in Git hates UTF-16:

    Just answer the fucking question.

    :doing_it_wrong:

    I know he won't answer, he never answers questions which might prove him wrong.

    Just like you know the bytes magically disappear or something when you think dirty character thoughts about them?


  • Banned

    @boomzilla said in Git hates UTF-16:

    @ixvedeusi said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    an actual argument that a sequence of characters is not a sequence of bytes

    You quoted my argument just below this statement. The bytes represent characters, just as that picture of the pipe represents a pipe. And just as the picture isn't a pipe (nor vice-versa), so the bytes aren't a sequence of characters (nor vice-versa).

    If you have a sequence of characters, you also have a sequence of bytes.

    You also have a sequence of electrons, but it doesn't mean bytes are electrons.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @ixvedeusi said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    an actual argument that a sequence of characters is not a sequence of bytes

    You quoted my argument just below this statement. The bytes represent characters, just as that picture of the pipe represents a pipe. And just as the picture isn't a pipe (nor vice-versa), so the bytes aren't a sequence of characters (nor vice-versa).

    If you have a sequence of characters, you also have a sequence of bytes.

    You also have a sequence of electrons, but it doesn't mean bytes are electrons.

    You've gone to a layer where I'm honestly a lot less knowledgeable, in that I don't really know how semiconductors work at the electron level. But yeah, obviously at that point electrons are there somehow, interacting with the rest of it. So they are partly electrons, at least.

    Real-world stuff like that gets messy (and you got distracted by the composition angle before, when I talked about the electrons and protons in your body).

    It really feels like you're saying that bytes aren't any arbitrary electrons, rather than electrons that are being used in a particular way. Again, I believe this is your "A implies B does not mean that B implies A" fallacy, because again, I don't think you believe there are no electrons involved in a computer (ignoring other kinds of computer encoding, like magnetic storage).

    Or, as you said, you've just gotten this overly pedantic definition of "is" in your head, based on philosophy and simply can't let go of it. I guess it wouldn't be the first time for you.


  • Banned

    @boomzilla said in Git hates UTF-16:

    Or, as you said, you've just gotten this overly pedantic definition of "is" in your head, based on philosophy and simply can't let go of it.

    At least I didn't get one that's outright wrong.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    Or, as you said, you've just gotten this overly pedantic definition of "is" in your head, based on philosophy and simply can't let go of it.

    At least I didn't get one that's outright wrong.

    Yeah, you're just holding onto it and ignoring other correct definitions / usages.


  • Banned

    @boomzilla said in Git hates UTF-16:

    It really feels like you're saying that bytes aren't any arbitrary electrons, rather than electrons that are being used in a particular way. Again, I believe this is your "A implies B does not mean that B implies A" fallacy, because again, I don't think you believe there are no electrons involved in a computer (ignoring other kinds of computer encoding, like magnetic storage).

    I'm really trying to understand your point, but I'm just completely lost here. Can you point out exactly which part of what I said has this problem? What is A, what is B?


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    It really feels like you're saying that bytes aren't any arbitrary electrons, rather than electrons that are being used in a particular way. Again, I believe this is your "A implies B does not mean that B implies A" fallacy, because again, I don't think you believe there are no electrons involved in a computer (ignoring other kinds of computer encoding, like magnetic storage).

    I'm really trying to understand your point, but I'm just completely lost here. Can you point out exactly which part of what I said has this problem? What is A, what is B?

    A: sequence of bytes.
    B: sequence of characters.

    Previously you talked about how not all sequences of bytes were sequences of characters. I don't know why you actually said this, but presumably you thought it was an argument against sequences of characters being sequences of bytes. Because that was the point you were arguing.

    Yes, it makes no sense and defies normal logic. 🤷🏿‍♀️
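
As a concrete aside on the point being paraphrased here: the "not all sequences of bytes are sequences of characters" direction is easy to demonstrate. A minimal Python sketch (illustrative, not from the thread):

```python
# Not every sequence of bytes is a valid sequence of characters:
# 0x80 is a UTF-8 continuation byte and cannot start a character.
data = b"\x80\x81\x82"

try:
    data.decode("utf-8")
except UnicodeDecodeError as err:
    print("not valid UTF-8 text:", err)

# The other direction always works: any str can be encoded to bytes.
print("hi \N{PILE OF POO}".encode("utf-8"))
```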


  • Banned

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    Or, as you said, you've just gotten this overly pedantic definition of "is" in your head, based on philosophy and simply can't let go of it.

    At least I didn't get one that's outright wrong.

    Yeah, you're just holding onto it and ignoring other correct definitions / usages.

    None of the definitions here are even close to how you're using it.



  • @boomzilla said in Git hates UTF-16:

    If you have a sequence of characters, you also have a sequence of bytes.

    No you don't. You may decide to structure a sequence of bytes to represent this sequence of characters. To do so, you will have to choose a character encoding. Alternatively, you can decide to draw lines on a sheet of paper using a pencil, to structure the field of light reflected by that sheet of paper so that it represents this sequence of characters.

    Of course, someone else might already have done one or the other of these for you. In that case, you don't have a sequence of characters, you have a sequence of bytes, or a marked sheet of paper. You'll need to have additional information to find out what sequence of characters is represented by these. Either of these is just like Magritte's picture of a pipe: a representation, a "stand-in" for the characters, but not the characters themselves.

    Maybe we just don't agree what "character" means. You seem to believe that the notion of character is specific to computer programs. I consider a character to be a kind of symbol, and thus an abstraction. There is no real object ever that 'is' an abstraction.
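
To make the "you will have to choose a character encoding" step above concrete, here is a minimal Python sketch (illustrative only): the same sequence of characters becomes a different sequence of bytes depending on the encoding chosen.

```python
# One sequence of characters, three different sequences of bytes,
# depending on the encoding chosen to represent it.
s = "Gąska"

for enc in ("utf-8", "utf-16-le", "utf-16"):
    # "utf-16" also prepends a byte-order mark; "utf-16-le" does not.
    print(f"{enc:10} -> {s.encode(enc).hex(' ')}")
```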


  • Banned

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    It really feels like you're saying that bytes aren't any arbitrary electrons, rather than electrons that are being used in a particular way. Again, I believe this is your "A implies B does not mean that B implies A" fallacy, because again, I don't think you believe there are no electrons involved in a computer (ignoring other kinds of computer encoding, like magnetic storage).

    I'm really trying to understand your point, but I'm just completely lost here. Can you point out exactly which part of what I said has this problem? What is A, what is B?

    A: sequence of bytes.
    B: sequence of characters.

    Previously you talked about how not all sequences of bytes were sequences of characters. I don't know why you actually said this, but presumably you thought it was an argument against sequences of characters being sequences of bytes. Because that was the point you were arguing.

    Yes, it makes no sense and defies normal logic. 🤷🏿‍♀️

    I'm pretty sure I didn't say that; though maybe I have once or twice as an off-hand remark and now forgot about it. I very definitely haven't based my argument on it. I said that there are several ways in which characters don't behave like bytes, and that makes it not just unhelpful, but outright incorrect to say characters are bytes.
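
A few of the behavioral differences alluded to here, sketched in Python (illustrative; the exact examples are not from the thread):

```python
# Some ways a sequence of characters doesn't behave like its bytes.
s = "naïve"            # 5 characters
b = s.encode("utf-8")  # 6 bytes: "ï" encodes as 0xc3 0xaf

print(len(s), len(b))  # 5 6 -- the lengths disagree
print(s[2], b[2])      # 'ï' vs the integer 195 (0xc3)
print(s[::-1])         # reversing characters is fine: 'evïan'
# Reversing the bytes tears the two-byte "ï" apart:
print(b[::-1].decode("utf-8", errors="replace"))
```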


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    Or, as you said, you've just gotten this overly pedantic definition of "is" in your head, based on philosophy and simply can't let go of it.

    At least I didn't get one that's outright wrong.

    Yeah, you're just holding onto it and ignoring other correct definitions / usages.

    None of the definitions here are even close to how you're using it.

    I can see it right there in the onebox:

    to constitute the same idea or object as

    🤷🏽‍♂️


  • ♿ (Parody)

    @ixvedeusi said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    If you have a sequence of characters, you also have a sequence of bytes.

    No you don't.

    What? How?

    You may decide to structure a sequence of bytes to represent this sequence of characters. To do so, you will have to choose a character encoding. Alternatively, you can decide to draw lines on a sheet of paper using a pencil, to structure the field of light reflected by that sheet of paper so that it represents this sequence of characters.

    Oh, OK. You probably forgot that I specified this was characters as you find in computer memory. I fully agree that there are other representations of characters in the world but was only talking about the sorts you get in strings.

    Of course, someone else might already have done one or the other of these for you. In that case, you don't have a sequence of characters, you have a sequence of bytes, or a marked sheet of paper. You'll need to have additional information to find out what sequence of characters is represented by these. Either of these is just like Magritte's picture of a pipe: a representation, a "stand-in" for the characters, but not the characters themselves.

    What I find particularly amazing about this is that my argument is more like Magritte's than that of the people who argue my points with examples and then tell me that it proves the opposite. And then you appeared and reminded us of this example and are using it in the wrong way. Of course, the analogy isn't perfect, because I'm not saying that the characters aren't characters. I'm just not denying the reality that you can't have those characters without bytes. That several people continue to argue exactly that is nothing short of stupendous.

    Maybe we just don't agree what "character" means. You seem to believe that the notion of character is specific to computer programs. I consider a character to be a kind of symbol, and thus an abstraction. There is no real object ever that 'is' an abstraction.

    Yes, that's part of the problem with your post, because that was what we were talking about in this particular conversation. That you've talked about different concepts doesn't have any bearing on my argument, which was only meant to deal with the kind of characters that are specific to computer programs.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    It really feels like you're saying that bytes aren't any arbitrary electrons, rather than electrons that are being used in a particular way. Again, I believe this is your "A implies B does not mean that B implies A" fallacy, because again, I don't think you believe there are no electrons involved in a computer (ignoring other kinds of computer encoding, like magnetic storage).

    I'm really trying to understand your point, but I'm just completely lost here. Can you point out exactly which part of what I said has this problem? What is A, what is B?

    A: sequence of bytes.
    B: sequence of characters.

    Previously you talked about how not all sequences of bytes were sequences of characters. I don't know why you actually said this, but presumably you thought it was an argument against sequences of characters being sequences of bytes. Because that was the point you were arguing.

    Yes, it makes no sense and defies normal logic. 🤷🏿‍♀️

    I'm pretty sure I didn't say that; though maybe I have once or twice as an off-hand remark and now forgot about it. I very definitely haven't based my argument on it. I said that there are several ways in which characters don't behave like bytes, and that makes it not just unhelpful, but outright incorrect to say characters are bytes.

    Obviously I agree on unhelpful, but you can't 1984 me into saying they aren't bytes. I'm too smart and too stubborn to fall for that nonsense.


  • Banned

    @boomzilla except characters don't constitute the same object (both are abstract entities, so material form is irrelevant; and they live on different abstraction levels). They very definitely don't constitute the same idea.



  • @Gąska said in Git hates UTF-16:

    You are the timecube guy of strings. That's basically what I'm left with.

    Sorry, I'm too young to get the reference.

    If you want to brush up on your general culture, he apparently got a Wikipedia page. And thanks to the Wayback Machine, you can still experience his site in all its glory:


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla except characters don't constitute the same object (both are abstract entities, so material form is irrelevant; and they live on different abstraction levels). They very definitely don't constitute the same idea.

    It's irrelevant, sure. (Mostly. It might be relevant in some situations like copying it around or serializing it.) But it is. And abstractions don't change the concrete aspects, just how we perceive and use them. But of course, the software working at a lower level is for sure using the bytes as bytes.


  • ♿ (Parody)

    @levicki said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    Does the further abstraction change the strands of DNA? The BYTEs? Or is it the same as it ever was, no matter what you think about it?

    Without this further abstraction both DNA and BYTE sequences are meaningless -- they could represent anything from 🐖 to 🧚‍♀️.

    That's what the original quote was getting at, yes. But the fact that it's all still there is just too tempting for some people, and then we end up with fodder for this site.

    But stuff doesn't have to mean anything. /admiral_p


  • ♿ (Parody)

    @levicki said in Git hates UTF-16:

    @boomzilla 33 c0 c3

    Um...OK?
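
For anyone missing the reference: read as x86 machine code, those three bytes disassemble to xor eax, eax / ret, i.e. a function that returns 0; read as text, they mean whatever the chosen encoding says. A minimal Python sketch of the text side (illustrative):

```python
# The same three bytes under different text interpretations.
# (As x86 machine code: 33 c0 = xor eax, eax; c3 = ret.)
data = bytes.fromhex("33 c0 c3")

for enc in ("latin-1", "cp437", "utf-8"):
    try:
        print(f"{enc:8} -> {data.decode(enc)!r}")
    except UnicodeDecodeError:
        print(f"{enc:8} -> not valid text in this encoding")
```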


  • Banned

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla except characters don't constitute the same object (both are abstract entities, so material form is irrelevant; and they live on different abstraction levels). They very definitely don't constitute the same idea.

    It's irrelevant, sure. (Mostly. It might be relevant in some situations like copying it around or serializing it.)

    By material, I meant physical particles. Copying and serializing are abstract operations.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla except characters don't constitute the same object (both are abstract entities, so material form is irrelevant; and they live on different abstraction levels). They very definitely don't constitute the same idea.

    It's irrelevant, sure. (Mostly. It might be relevant in some situations like copying it around or serializing it.)

    By material, I meant physical particles. Copying and serializing are abstract operations.

    My comments work at any level of abstraction. Higher levels of abstraction don't mean that the lower levels go away, even when you're not abstract at all. But the sort of thinking you're applying is like thinking that the "dark" side of the moon doesn't exist while we're looking at the other side.


  • Banned

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla except characters don't constitute the same object (both are abstract entities, so material form is irrelevant; and they live on different abstraction levels). They very definitely don't constitute the same idea.

    It's irrelevant, sure. (Mostly. It might be relevant in some situations like copying it around or serializing it.)

    By material, I meant physical particles. Copying and serializing are abstract operations.

    My comments work at any level of abstraction. Higher levels of abstraction don't mean that the lower levels go away

    Of course they don't. But they're still not equivalent to the higher level concepts/entities. I'm talking about equivalence because the definition you cited talks about equivalence, not because I'm committing a two-way implication fallacy you love accusing me of.


  • ♿ (Parody)

    @levicki said in Git hates UTF-16:

    The point being -- anything can be represented using bytes, and it can simultaneously mean different things to different people.

    Yes, of course.

    That is why you cannot assume that characters are the same as bytes or that a certain sequence of bytes means only one thing.

    What? All I'm saying is that it's safe to assume that if you have characters then you have bytes because that's how computers work. And I've explicitly said that disregarding the details of how they work as characters was dangerous and likely to lead to problems.

    If you assume that they aren't bytes then you're probably using quantum computers or something.
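
The sub-point both sides accept here, that a given byte sequence doesn't mean only one thing, in a minimal Python sketch (illustrative):

```python
# One byte sequence, two perfectly valid but different readings.
data = "é".encode("utf-8")     # b'\xc3\xa9'

print(data.decode("utf-8"))    # 'é'  (one character)
print(data.decode("latin-1"))  # 'Ã©' (two characters: classic mojibake)
```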


  • Banned

    @boomzilla said in Git hates UTF-16:

    All I'm saying is that it's safe to assume that if you have characters then you have bytes because that's how computers work.

    No - you're saying much, much more. Not all of which is correct.



  • @boomzilla said in Git hates UTF-16:

    Oh, OK. You probably forgot that I specified this was characters as you find in computer memory.

    I didn't forget this fact; way back I had already pointed out to you that there never ever are characters in computer memory, here.

    And now that I've been back to that post, I realize that @Gąska's <del><ins> got lost in the quote, muddying up the message of that post a bit...

    @boomzilla said in Git hates UTF-16:

    Yes, that's part of the problem with your post, because that was what we were talking about in this particular conversation. That you've talked about different concepts doesn't have any bearing on my argument, which was only meant to deal with the kind of characters that are specific to computer programs.

    Which you suddenly somehow introduced out of nowhere, trying to discreetly move the goal posts out of the field. Remember, the whole discussion started with you asking me to clarify my statement that

    @ixvedeusi said in Git hates UTF-16:

    I don't agree that it's true, even

    Which is independent of the domain in which these characters are represented.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla except characters don't constitute the same object (both are abstract entities, so material form is irrelevant; and they live on different abstraction levels). They very definitely don't constitute the same idea.

    It's irrelevant, sure. (Mostly. It might be relevant in some situations like copying it around or serializing it.)

    By material, I meant physical particles. Copying and serializing are abstract operations.

    My comments work at any level of abstraction. Higher levels of abstraction don't mean that the lower levels go away

    Of course they don't. But they're still not equivalent to the higher level concepts/entities. I'm talking about equivalence because the definition you cited talks about equivalence, not because I'm committing a two-way implication fallacy you love accusing me of.

    Dude, I never said they were equivalent. Fucking hell. I've said that if you treat them as equivalent you're probably in for trouble. Because they aren't equivalent. This equivalence bullshit is entirely your shoulder aliens.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    All I'm saying is that it's safe to assume that if you have characters then you have bytes because that's how computers work.

    No - you're saying much, much more. Not all of which is correct.

    Yes, I said more. Specifically that you better respect their "characterness" if you don't want trouble. And you're right, that's not 100% correct, because there are cases where treating them as bytes won't get you into trouble, but that's just being a pedantic dickweed.
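
One concrete example of what respecting "characterness" buys, sketched in Python (illustrative): truncating at a byte count can cut a multi-byte character in half, while truncating at a character count cannot.

```python
# Byte-level truncation can slice a multi-byte character in half.
msg = "Zażółć gęślą jaźń"
raw = msg.encode("utf-8")

clipped = raw[:9]  # ends midway through "ć" (0xc4 0x87)
print(clipped.decode("utf-8", errors="replace"))  # 'Zażół�'

# Character-level truncation respects the abstraction.
print(msg[:10])  # 'Zażółć gęś'
```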


  • Banned

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla except characters don't constitute the same object (both are abstract entities, so material form is irrelevant; and they live on different abstraction levels). They very definitely don't constitute the same idea.

    It's irrelevant, sure. (Mostly. It might be relevant in some situations like copying it around or serializing it.)

    By material, I meant physical particles. Copying and serializing are abstract operations.

    My comments work at any level of abstraction. Higher levels of abstraction don't mean that the lower levels go away

    Of course they don't. But they're still not equivalent to the higher level concepts/entities. I'm talking about equivalence because the definition you cited talks about equivalence, not because I'm committing a two-way implication fallacy you love accusing me of.

    Dude, I never said they were equivalent. Fucking hell. I've said that if you treat them as equivalent you're probably in for trouble. Because they aren't equivalent. This equivalence bullshit is entirely your shoulder aliens.

    Did my post get mangled again for you? Have you completely missed the part where I specifically explained why I'm talking about equivalence there, specifically to avoid the very thing you've just done?


  • ♿ (Parody)

    @ixvedeusi said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    Yes, that's part of the problem with your post, because that was what we were talking about in this particular conversation. That you've talked about different concepts doesn't have any bearing on my argument, which was only meant to deal with the kind of characters that are specific to computer programs.

    Which you suddenly somehow introduced out of nowhere, trying to discreetly move the goal posts out of the field. Remember, the whole discussion started with you asking me to clarify my statement that

    :wtf: That's ridiculous. It was the topic of conversation at the time and was what @dkf was referring to. It's basically the point of this thread, which has "UTF-16" in its title.

    @ixvedeusi said in Git hates UTF-16:

    I don't agree that it's true, even

    Which is independent of the domain in which these characters are represented.

    So then what do you think happens to the bytes when the character data shows up?


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla except characters don't constitute the same object (both are abstract entities, so material form is irrelevant; and they live on different abstraction levels). They very definitely don't constitute the same idea.

    It's irrelevant, sure. (Mostly. It might be relevant in some situations like copying it around or serializing it.)

    By material, I meant physical particles. Copying and serializing are abstract operations.

    My comments work at any level of abstraction. Higher levels of abstraction don't mean that the lower levels go away

    Of course they don't. But they're still not equivalent to the higher level concepts/entities. I'm talking about equivalence because the definition you cited talks about equivalence, not because I'm committing a two-way implication fallacy you love accusing me of.

    Dude, I never said they were equivalent. Fucking hell. I've said that if you treat them as equivalent you're probably in for trouble. Because they aren't equivalent. This equivalence bullshit is entirely your shoulder aliens.

    Did my post get mangled again for you? Have you completely missed the part where I specifically explained why I'm talking about equivalence there, specifically to avoid the very thing you've just done?

    If you were talking about equivalence to point out the second part of the quote (about how misleading and useless it is) then you'd have a point. But you seem to think it applies to the first part (that it's true that sequences of characters are sequences of bytes). I have given up on guessing at motives since it's difficult enough to keep up with your inconsistent logic.


  • Banned

    @boomzilla have you already forgotten the definition of "be" you have picked yourself? This is the equivalence that I was talking about. Unless you want to argue "constitute the same idea or object" isn't about equivalence.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla have you already forgotten the definition of "be" you have picked yourself?

    Did you ever consider it?

    This is the equivalence that I was talking about. Unless you want to argue "constitute the same idea or object" isn't about equivalence.

    So, here's the thing, which will apparently blow your mind, even though you've mentioned the concept. We're talking about multiple levels of abstraction. You seem to think that I'm saying that all the levels are equivalent. But I'm really just pointing at the different levels. And yes, in one sense the sequence of characters is equivalent to a sequence of bytes, if you're willing to go below the "character" level of abstraction, where the thing that you are considering to be a "sequence of characters" also is a sequence of bytes.

    Can you really not perceive that it is both of these things at the same time? Like, your sibling is also your mother's child. At the same time. One of these doesn't stop being true when you're thinking about your brother (feel free to substitute first person pronouns if you have no siblings and find that too confusing).



  • @boomzilla said in Git hates UTF-16:

    That's ridiculous. It was the topic of conversation at the time and was what @dkf was referring to. It's basically the point of this thread, which has "UTF-16" in its title.

    My original statement (this here), from which this ontological mess unfolded, was an attempt to put these things in a wider context. Of course I can argue that a sequence of characters "is a" sequence of bytes in the context of a computer program, just as I can argue that a car "is a" sequence of bytes in that same context. There's strictly no problem with either of these statements as long as you limit yourself entirely to the world of computer programs, because there literally is nothing but bytes. They are convenient mental shortcuts in this specific context.

    But that doesn't make either of these statements true, and assuming any of these is true will lead to big problems if/when you transition into or outside of that specific context. Thus your limitation of "in a computer program" was not only moving goal posts, it also was never relevant, and that's why I continue to ignore it.


  • Banned

    @boomzilla you know what my mother's child and my sibling have in common? They're the same entity. You know what the difference is between a character and a byte? They're different entities. The bytes can represent the characters, but they're still different entities. Just like the linear equation describing a straight line isn't the same entity as that straight line. It only describes that line.


  • ♿ (Parody)

    @ixvedeusi said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    That's ridiculous. It was the topic of conversation at the time and was what @dkf was referring to. It's basically the point of this thread, which has "UTF-16" in its title.

    My original statement (this here), from which this ontological mess unfolded, was an attempt to put these things in a wider context. Of course I can argue that a sequence of characters "is a" sequence of bytes in the context of a computer program, just as I can argue that a car "is a" sequence of bytes in that same context. There's strictly no problem with either of these statements as long as you limit yourself entirely to the world of computer programs, because there literally is nothing but bytes. They are convenient mental shortcuts in this specific context.

    But that doesn't make either of these statements true,

    They are true, though.

    and assuming any of these is true will lead to big problems if/when you transition into or outside of that specific context.

    That is 100% false. Just because you know a thing does not mean you don't know other things. It's failing to abide by those other things that gets you into trouble.

    Thus your limitation of "in a computer program" was not only moving goal posts, it also was never relevant, and that's why I continue to ignore it.

    That's totally wrong on all counts. Amazing.



  • @boomzilla said in Git hates UTF-16:

    They are true, though.

    Let's just agree to disagree on that point.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla you know what my mother's child and my sibling have in common? They're the same entity.

    Yes! Exactly like this.

    You know what the difference is between a character and a byte? They're different entities.

    What? How?

    The bytes can represent the characters, but they're still different entities. Just like the linear equation describing a straight line isn't the same entity as that straight line. It only describes that line.

    This post went so stupid so fast and this analogy doesn't work at all. Your analogy is more about the characters (bytes) in memory vs where they're displayed on a screen.


  • ♿ (Parody)

    @ixvedeusi said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    They are true, though.

    Let's just agree to disagree on that point.

    Yes, I'll leave you and @Gąska and @levicki with your Magic Byte Man mythology.


  • Banned

    @boomzilla said in Git hates UTF-16:

    The bytes can represent the characters, but they're still different entities. Just like the linear equation describing a straight line isn't the same entity as that straight line. It only describes that line.

    This post went so stupid so fast and this analogy doesn't work at all. Your analogy is more about the characters (bytes) in memory vs where they're displayed on a screen.

    I'm 99% certain you have once again misread my posts and thought I'm talking about an actual, physical line drawn on some piece of paper or some display, and not about the mathematical concept known as a straight line, which doesn't exist in the physical world but is a purely abstract concept.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    The bytes can represent the characters, but they're still different entities. Just like the linear equation describing a straight line isn't the same entity as that straight line. It only describes that line.

    This post went so stupid so fast and this analogy doesn't work at all. Your analogy is more about the characters (bytes) in memory vs where they're displayed on a screen.

    I'm 99% certain you have once again misread my posts and thought I'm talking about an actual, physical line drawn on some piece of paper or some display, and not about the mathematical concept known as a straight line, which doesn't exist in the physical world but is a purely abstract concept.

    🎉 I am the 1%! 🎉

    Yes, I get that, and I'm correcting the character side of the analogy to make it like yours, where the things are different things, not just different ways to look at the same things. To use the line analogy (it's not exact, so don't lose your overly literal poop here):

    The equation is the slope and the intercept.
    The equation is mathematical symbols (variables, numbers, operations).

    Different abstraction levels. Same thing. Both true at the same time. No one comes and takes away the mathematical symbols when someone looks at the equation and thinks, "slope." The symbols are all still there, even though someone is thinking about them at a higher level of abstraction.
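
A neutral way to picture the "same thing, two abstraction levels" framing in Python (illustrative; it takes no side on whose ontology is right):

```python
import struct

# The same eight bytes, viewed at two abstraction levels:
# as a double-precision float and as its raw IEEE-754 layout.
x = 1.5
raw = struct.pack("<d", x)  # little-endian 64-bit float

print(raw.hex(" "))                 # 00 00 00 00 00 00 f8 3f
print(struct.unpack("<d", raw)[0])  # 1.5 again
```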



  • @boomzilla said in Git hates UTF-16:

    Yes, I'll leave you and @Gąska and @levicki with your Magic Byte Man mythology.

    Cool! Enjoy that pipe!


  • ♿ (Parody)

    @ixvedeusi said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    Yes, I'll leave you and @Gąska and @levicki with your Magic Byte Man mythology.

    Cool! Enjoy that pipe!

    I love that you're still imagining me to be saying the opposite of what I'm saying. Unless...did he actually grind up a pipe and use that for paint? Is the paint really a pipe and that's the secret joke?


  • Banned

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    The bytes can represent the characters, but they're still different entities. Just like the linear equation describing a straight line isn't the same entity as that straight line. It only describes that line.

    This post went so stupid so fast and this analogy doesn't work at all. Your analogy is more about the characters (bytes) in memory vs where they're displayed on a screen.

    I'm 99% certain you have once again misread my posts and thought I'm talking about an actual, physical line drawn on some piece of paper or some display, and not about the mathematical concept known as a straight line, which doesn't exist in the physical world but is a purely abstract concept.

    🎉 I am the 1%! 🎉

    I was giving you the benefit of the doubt. The 1% means you understood what I said but still got it all wrong.

    Yes, I get that, and I'm correcting the character side of the analogy to make it like yours, where the things are different things, not just different ways to look at the same things.

    BYTES ARE NOT A DIFFERENT WAY TO LOOK AT CHARACTERS. This is even more wrong than saying that characters are bytes. Like, with the latter, you can at least stretch the alternative dictionary definitions to the point of absurdity to make your argument work. But what you said here is just completely wrong, no matter how you look at it. Characters are characters. They're implemented as bytes. Bytes aren't a different conceptual model of text than whatever the standard one is called. They're just representation. Of the same conceptual model. Of the same way of looking at characters.

    The equation ~~is~~ contains the slope and the intercept.
    The equation is written with mathematical symbols (variables, numbers, operations).

    FTFY.

    Different abstraction levels. Same thing.

    Wrong. An equation can be represented with a pair of slope and intercept, but it's a different entity from the pair of the slope and the intercept. An equation can be represented with mathematical symbols, but it's a different entity than the sequence of symbols. Just like the straight line is represented by the equation. It's the exact same situation as your two examples. And the same situation as with characters and bytes.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    The bytes can represent the characters, but they're still different entities. Just like the linear equation describing a straight line isn't the same entity as that straight line. It only describes that line.

    This post went so stupid so fast and this analogy doesn't work at all. Your analogy is more about the characters (bytes) in memory vs where they're displayed on a screen.

    I'm 99% certain you have once again misread my posts and thought I'm talking about an actual, physical line drawn on some piece of paper or some display, and not about the mathematical concept known as a straight line, which doesn't exist in the physical world but is a purely abstract concept.

    🎉 I am the 1%! 🎉

    I was giving you the benefit of the doubt. The 1% means you understood what I said but still got it all wrong.

    Well, someone got it all wrong.

    Yes, I get that, and I'm correcting the character side of the analogy to make it like yours, where the things are different things, not just different ways to look at the same things.

    BYTES ARE NOT A DIFFERENT WAY TO LOOK AT CHARACTERS. This is even more wrong than saying that characters are bytes.

    Huh? The thing that is your characters is also bytes, depending on how you're looking at them. Of course they are. Are you getting stuck inside a single level of abstraction again?

    Like, with the latter, you can at least stretch the alternative dictionary definitions to the point of absurdity to make your argument work. But what you said here is just completely wrong, no matter how you look at it. Characters are characters. They're implemented as bytes. Bytes aren't a different conceptual model of text than whatever the standard one is called. They're just representation. Of the same conceptual model. Of the same way of looking at characters.

    I'm amazed that you write all this with a straight face. Really. You write all that stuff explaining how the characters are bytes and then you conclude that they aren't. It's really amazing. I couldn't keep that up.

    The equation ~~is~~ contains the slope and the intercept.
    The equation is written with mathematical symbols (variables, numbers, operations).

    FTFY.

    Different abstraction levels. Same thing.

    Wrong. An equation can be represented with a pair of slope and intercept, but it's a different entity from the pair of the slope and the intercept. An equation can be represented with mathematical symbols, but it's a different entity than the sequence of symbols. Just like the straight line is represented by the equation. It's the exact same situation as your two examples. And the same situation as with characters and bytes.

    You are truly fucked up, my friend. Get help.


  • Banned

    @boomzilla said in Git hates UTF-16:

    Like, with the latter, you can at least stretch the alternative dictionary definitions to the point of absurdity to make your argument work. But what you said here is just completely wrong, no matter how you look at it. Characters are characters. They're implemented as bytes. Bytes aren't a different conceptual model of text than whatever the standard one is called. They're just representation. Of the same conceptual model. Of the same way of looking at characters.

    I'm amazed that you write all this with a straight face. Really. You write all that stuff explaining how the characters are bytes and then you conclude that they aren't. It's really amazing. I couldn't keep that up.

    It's not my fault you're confusing abstraction, representation, identity, and several other concepts all with each other.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla said in Git hates UTF-16:

    Like, with the latter, you can at least stretch the alternative dictionary definitions to the point of absurdity to make your argument work. But what you said here is just completely wrong, no matter how you look at it. Characters are characters. They're implemented as bytes. Bytes aren't a different conceptual model of text than whatever the standard one is called. They're just representation. Of the same conceptual model. Of the same way of looking at characters.

    I'm amazed that you write all this with a straight face. Really. You write all that stuff explaining how the characters are bytes and then you conclude that they aren't. It's really amazing. I couldn't keep that up.

    It's not my fault you're confusing abstraction, representation, identity, and several other concepts all with each other.

    Finally, you're right! It's no one's fault because it's not happening.


  • Banned

    @boomzilla and you say I'm denying reality.


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla and you say I'm denying reality.

    Only because of the way your posts represent your mental processes.

    No, I think a lightbulb went on a few posts back and you're just getting stuck inside of abstraction layers and can't think past those.


  • Banned

    @boomzilla I'm not even talking about abstraction layers anymore!


  • ♿ (Parody)

    @Gąska said in Git hates UTF-16:

    @boomzilla I'm not even talking about abstraction layers anymore!

    No one said you were. Sheesh, now you can't even tell your current posts from your past posts? Or are your posts no longer your posts? Did all the bytes leave and take the characters with them?


  • Banned

    @boomzilla if your goal was to win the argument by making your post completely undecipherable, you've succeeded. I'm out.

