I, ChatGPT


  • BINNED

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler so then somebody creates this automated tool (who, according to you, is not liable), and somebody else uses it to create hundreds of different exploitable SQL statements and posts them on their website. And now you claim the second is liable, even though he didn’t go out and attack somebody’s server, but some idiot came along, downloaded these statements and executed them without any checks.
    There’s no active attack; the victim is hitting himself.

    This is the booby trap principle. If you deliberately leave something dangerous lying around, you are liable for the damage it causes. It doesn't have to be "active" to be a bona fide attack.

    It’s not a “booby trap” that your firefighter activates after legitimately entering my house. It’s a gun he found in the gun safe: he loaded it with the bullets from the safe, put it into his mouth, and pulled the trigger.
    If it’s “not active”, stop putting the gun in your mouth.

    You are lying through your teeth. Stop it.

    There is nothing in that post that could even qualify as a lie. Your reply is of the “not even wrong” kind.



  • @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    The fact that it's not copyright infringement and never was.

    You didn’t answer the question. Why is it not copyright infringement if it’s done by “AI” but is if the very same thing is done by “not AI”.

    It's not, for the simple reason that the thing being done is not being done by "not AI."

    Wrong on all three counts, but that's beside the point. This is what I mean by "words chosen to obscure it." People are making an argument that is fundamentally about machines being soulless, and then deny that that's what they're talking about because they want it to sound respectable to people who don't believe in such unfashionable notions. But at its core, this is an argument that there is an ephemeral quality inherent to living human beings that makes them capable of doing things that cannot be replicated by anything else.

    I laid it out perfectly clearly for you and yet you manage to get it completely wrong: I didn’t say there’s some inherent ability in humans, in fact the opposite. I said the law treats human minds differently from computers.

    Which law? There's an awful lot of it, after all.

    Define "treats human minds different from." In what context? Definitely not in this context, because this is such a new context that laws for it don't exist yet.

    In this context. You keep repeating that humans are allowed to learn.

    I never said humans. I very deliberately never said humans. I said learning is not, and should not be, subject to copyright, nor to consent.

    I can learn by heart whatever copyrighted book I acquire. Copyright prevents me from copying it into a machine, or another piece of paper, but not into my mind. Which is fundamentally the same thing, but absolutely not the same in front of the law, neither letter nor spirit.

    You keep talking about the law, and you keep demonstrating that you have no idea what you're talking about. Copyright is not absolute. It never has been, and it should not be. (If anything, it should be significantly weaker than it is right now; the current state of copyright is a giant mess that causes a lot of real harm.) Copyright is an exception to the first amendment right to free speech, as embodied in the concept of fair use, and this is about as clear an example of transformative fair use as you can find.



  • @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler so then somebody creates this automated tool (who, according to you, is not liable), and somebody else uses it to create hundreds of different exploitable SQL statements and posts them on their website. And now you claim the second is liable, even though he didn’t go out and attack somebody’s server, but some idiot came along, downloaded these statements and executed them without any checks.
    There’s no active attack; the victim is hitting himself.

    This is the booby trap principle. If you deliberately leave something dangerous lying around, you are liable for the damage it causes. It doesn't have to be "active" to be a bona fide attack.

    It’s not a “booby trap” that your firefighter activates after legitimately entering my house. It’s a gun he found in the gun safe: he loaded it with the bullets from the safe, put it into his mouth, and pulled the trigger.
    If it’s “not active”, stop putting the gun in your mouth.

    You are lying through your teeth. Stop it.

    There is nothing in that post that could even qualify as a lie. Your reply is of the “not even wrong” kind.

    You took an example of something that is disguised as something harmless and left sitting around for someone to poison themselves with, and recast it as an act of deliberate self-harm where the person in question knows exactly what they're dealing with. That is a lie.


  • Notification Spam Recipient

    @Mason_Wheeler said in I, ChatGPT:

    Irrelevant, because no ripping-off of other people's works is happening.

    Alright, I'm checked out. This is literally the point of this whole thread even before it became a self-defying loop.


  • BINNED

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    The fact that it's not copyright infringement and never was.

    You didn’t answer the question. Why is it not copyright infringement if it’s done by “AI” but is if the very same thing is done by “not AI”.

    It's not, for the simple reason that the thing being done is not being done by "not AI."

    Wrong on all three counts, but that's beside the point. This is what I mean by "words chosen to obscure it." People are making an argument that is fundamentally about machines being soulless, and then deny that that's what they're talking about because they want it to sound respectable to people who don't believe in such unfashionable notions. But at its core, this is an argument that there is an ephemeral quality inherent to living human beings that makes them capable of doing things that cannot be replicated by anything else.

    I laid it out perfectly clearly for you and yet you manage to get it completely wrong: I didn’t say there’s some inherent ability in humans, in fact the opposite. I said the law treats human minds differently from computers.

    Which law? There's an awful lot of it, after all.

    Define "treats human minds different from." In what context? Definitely not in this context, because this is such a new context that laws for it don't exist yet.

    In this context. You keep repeating that humans are allowed to learn.

    I never said humans. I very deliberately never said humans. I said learning is not, and should not be, subject to copyright, nor to consent.

    I can learn by heart whatever copyrighted book I acquire. Copyright prevents me from copying it into a machine, or another piece of paper, but not into my mind. Which is fundamentally the same thing, but absolutely not the same in front of the law, neither letter nor spirit.

    You keep talking about the law, and you keep demonstrating that you have no idea what you're talking about.

    :mirror:

    Copyright is not absolute. It never has been, and it should not be. (If anything, it should be significantly weaker than it is right now; the current state of copyright is a giant mess that causes a lot of real harm.)

    Fun fact: I agree with that.
    What I don’t agree with is putting up the barrier of copyright for everyone except for the big players doing it en masse. The massive scale is not an argument that it’s fair use; it’s an argument against it being fair use.
    If you want to keep Big AI in business because it’s not profitable to get a license from everyone, like the common plebs have to do, then fix the fucking copyright laws. But instead you defend them continuing not to get a license where everyone else has to.

    Copyright is an exception to the first amendment right to free speech, as embodied in the concept of fair use, and this is about as clear an example of transformative fair use as you can find.

    It is absolutely not clear, since you refuse to explain how it’s distinct from classical copying. If you can’t explain where the line between learning and copying is, then it’s governed by copyright, because you’re using a computer program to copy content, which falls under copyright for classical programs too.

    If you fixed copyright law, that wouldn’t be an issue, but with the current laws it is. It’s just not enforced, since the big players complain it’s not profitable, while they simultaneously profit from the copyright regime that overburdens everyone else.


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright.

    Ultimately, the entire issue is about copyright. Without copyright, there would be no question whether it's okay to rip off other people's works.

    Irrelevant, because no ripping-off of other people's works is happening.

    That's what you claim. The other side claims the opposite. Nobody would claim anything if copyright didn't exist.

    But no, I didn't mean copyright. I meant whether a warez site should be treated the same as whatever OpenAI is doing, and if not, why not.

    And how is warez not entirely about copyright violation?

    Freeware, abandonware, archiving... There are a bunch of legitimate reasons to use warez beyond copyright violations. I've used them myself to obtain ISOs of my legally owned gaming magazine CDs that are too scratched to work anymore. No, I couldn't re-buy those games. I mean, I did re-buy them, but had to patch in the files from CD to play them in my language.

    Because in the broad sense, the system is learning how to serve new content.

    No, it's not. The system is learning how to produce new content.

    And what if the content it's learning to produce isn't new? Would that change the situation? I'm not asking about the actual workings of an actual system (since it's an unanswerable question and OpenAI will die before ever admitting anything incriminating), just a hypothetical.

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.

    Wait wait wait. Are you saying that you were just playing devil's advocate this whole time and you don't actually believe there's such thing as machine learning?

    No. I'm saying that at the core of the Luddite argument is a bad-faith attempt to disqualify the concept of the existence of machine learning.

    Well, dunno about "the Luddite", whatever it is, but I don't disqualify the concept - nor the existing examples - of machine learning. I'm just drawing a line between human learning and machine learning. The first is a basic human right, and arguably one of the main goals of existence. The latter is just an advanced calculator - and not all of them, just a tiny subset of them. I see no reason to give any human rights to any calculators, even ones capable of learning. And I see no reason to differentiate one advanced calculator from another.


  • Notification Spam Recipient

    @Gustav said in I, ChatGPT:

    And I see no reason to differentiate one advanced calculator from another.

    But da lehrnung is xformitaieve!


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in I, ChatGPT:

    Because consent is irrelevant for learning. Always has been.

    But you've not proved that it is learning. "But someone put the label 'learning' on it!" is not proof.

    And AIs are not persons. They don't have rights. They don't have responsibilities. They don't have legal obligations. They are not alive.

    1d2067b0-79a4-46cb-a54a-fd34f8cb5466-image.png

    Still true.



  • @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    The fact that it's not copyright infringement and never was.

    You didn’t answer the question. Why is it not copyright infringement if it’s done by “AI” but is if the very same thing is done by “not AI”.

    It's not, for the simple reason that the thing being done is not being done by "not AI."

    Wrong on all three counts, but that's beside the point. This is what I mean by "words chosen to obscure it." People are making an argument that is fundamentally about machines being soulless, and then deny that that's what they're talking about because they want it to sound respectable to people who don't believe in such unfashionable notions. But at its core, this is an argument that there is an ephemeral quality inherent to living human beings that makes them capable of doing things that cannot be replicated by anything else.

    I laid it out perfectly clearly for you and yet you manage to get it completely wrong: I didn’t say there’s some inherent ability in humans, in fact the opposite. I said the law treats human minds differently from computers.

    Which law? There's an awful lot of it, after all.

    Define "treats human minds different from." In what context? Definitely not in this context, because this is such a new context that laws for it don't exist yet.

    In this context. You keep repeating that humans are allowed to learn.

    I never said humans. I very deliberately never said humans. I said learning is not, and should not be, subject to copyright, nor to consent.

    I can learn by heart whatever copyrighted book I acquire. Copyright prevents me from copying it into a machine, or another piece of paper, but not into my mind. Which is fundamentally the same thing, but absolutely not the same in front of the law, neither letter nor spirit.

    You keep talking about the law, and you keep demonstrating that you have no idea what you're talking about.

    :mirror:

    Copyright is not absolute. It never has been, and it should not be. (If anything, it should be significantly weaker than it is right now; the current state of copyright is a giant mess that causes a lot of real harm.)

    Fun fact: I agree with that.
    What I don’t agree with is putting up the barrier of copyright for everyone except for the big players doing it en masse. The massive scale is not an argument that it’s fair use; it’s an argument against it being fair use.

    I agree in principle. In practice, though, 1) that's not entirely what's happening, and 2) trying to choke it will only make the problem you're saying you hate even worse.

    Open-source alternatives to the giant corporate AIs were already emerging last year, and getting good surprisingly quickly. Stable Diffusion beats Dall-E in many ways already, for example. And in the long run, open source always wins, provided it's a project that attracts sufficient interest to draw a sizeable community.

    But if you start throwing around copyright precedents to establish gigantic legal barriers to entry, then the only entities that will be capable of doing it at all are "the big players doing it en masse."

    If you want to keep Big AI in business because it’s not profitable to get a license from everyone, like the common plebs have to do, then fix the fucking copyright laws. But instead you defend them continuing not to get a license where everyone else has to.

    Quite the opposite; I don't want "the common plebs" to have to get a license to learn either. I find the very idea intrinsically offensive, no matter who it applies to.

    Copyright is an exception to the first amendment right to free speech, as embodied in the concept of fair use, and this is about as clear an example of transformative fair use as you can find.

    It is absolutely not clear, since you refuse to explain how it’s distinct from classical copying. If you can’t explain where the line between learning and copying is, then it’s governed by copyright, because you’re using a computer program to copy content, which falls under copyright for classical programs too.

    I've explained it over and over: it's transformative because it is not being used for the purpose of producing copies, nor is anyone actually using it in good faith for this purpose. The only people producing copies are people twisting the AIs' arms to get them to produce something that looks like a copy, to manufacture a precedent to apply copyright where it clearly does not and should not apply.

    If you fixed copyright law, that wouldn’t be an issue, but with the current laws it is.

    What has two thumbs and has been arguing for the better part of two decades for the rollback of terrible copyright laws such as the DMCA?


  • Notification Spam Recipient

    @Mason_Wheeler said in I, ChatGPT:

    And in the long run, open source always wins, provided it's a project that attracts sufficient interest to draw a sizeable community.

    Still waiting for Linux on the Desktop. :mlp_smug:



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright.

    Ultimately, the entire issue is about copyright. Without copyright, there would be no question whether it's okay to rip off other people's works.

    Irrelevant, because no ripping-off of other people's works is happening.

    That's what you claim. The other side claims the opposite.

    And yet the facts stubbornly continue to refuse to bear out their claims.

    But no, I didn't mean copyright. I meant whether a warez site should be treated the same as whatever OpenAI is doing, and if not, why not.

    And how is warez not entirely about copyright violation?

    Freeware, abandonware, archiving... There are a bunch of legitimate reasons to use warez beyond copyright violations. I've used them myself to obtain ISOs of my legally owned gaming magazine CDs that are too scratched to work anymore. No, I couldn't re-buy those games. I mean, I did re-buy them, but had to patch in the files from CD to play them in my language.

    OK, we seem to have a terminology dispute here. I've never heard the term "warez" used to mean a site set up for any purpose other than deliberate blatantly-flouting-the-law piracy. Certainly not freeware, and people running abandonware or archival sites don't tend to describe their sites as warez.

    Because in the broad sense, the system is learning how to serve new content.

    No, it's not. The system is learning how to produce new content.

    And what if the content it's learning to produce isn't new? Would that change the situation? I'm not asking about the actual workings of an actual system (since it's an unanswerable question and OpenAI will die before ever admitting anything incriminating), just a hypothetical.

    That's for the courts to decide after it actually happens and stops being hypothetical, just as it is when human artists create something new. People have been litigating over "insufficiently creative" creative works for a long, long time now. One artist even infamously got sued for "ripping off" his own work! If the courts can resolve cases like that, they can resolve generative AI cases.

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.

    Wait wait wait. Are you saying that you were just playing devil's advocate this whole time and you don't actually believe there's such thing as machine learning?

    No. I'm saying that at the core of the Luddite argument is a bad-faith attempt to disqualify the concept of the existence of machine learning.

    Well, dunno about "the Luddite", whatever it is

    A Luddite is someone who tries to impede the progress of technology by force, out of fear the new invention will put him out of work. They've been around ever since the Industrial Revolution, when followers of a guy named Ludd tried to destroy productivity-enhancing machinery. (One popular tactic was throwing hard, wooden shoes, called sabot, into the machinery to grind the gears, thus giving birth to the term sabotage.)

    You may not know about them, but they are at the core of this controversy, pushing the ideas you're repeating. Luddites have always been wrong, and this time is no different.



  • @dkf said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Because consent is irrelevant for learning. Always has been.

    But you've not proved that it is learning. "But someone put the label 'learning' on it!" is not proof.

    The burden of proof is on the accuser. I've had this same conversation in a few different communities, so I'll issue the same challenge here that I have elsewhere. Can you define "learning" in such a way that:

    1. includes what human beings do that is commonly understood as learning
    2. excludes what machines do that is commonly referred to as machine learning
    3. relies solely on objective facts, and not on metaphysical concepts such as "knowledge" or "mind"

    So far, no one has ever even attempted to do so, let alone come up with a real answer.

    And AIs are not persons. They don't have rights. They don't have responsibilities. They don't have legal obligations. They are not alive.

    Agreed. They are tools. This does not, however, mean that they are incapable of learning. Why should it?


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright.

    Ultimately, the entire issue is about copyright. Without copyright, there would be no question whether it's okay to rip off other people's works.

    Irrelevant, because no ripping-off of other people's works is happening.

    That's what you claim. The other side claims the opposite.

    And yet the facts stubbornly continue to refuse to bear out their claims.

    But no, I didn't mean copyright. I meant whether a warez site should be treated the same as whatever OpenAI is doing, and if not, why not.

    And how is warez not entirely about copyright violation?

    Freeware, abandonware, archiving... There are a bunch of legitimate reasons to use warez beyond copyright violations. I've used them myself to obtain ISOs of my legally owned gaming magazine CDs that are too scratched to work anymore. No, I couldn't re-buy those games. I mean, I did re-buy them, but had to patch in the files from CD to play them in my language.

    OK, we seem to have a terminology dispute here. I've never heard the term "warez" used to mean a site set up for any purpose other than deliberate blatantly-flouting-the-law piracy. Certainly not freeware, and people running abandonware or archival sites don't tend to describe their sites as warez.

    What? Warez forums have been flaunting the freeware and abandonware arguments everywhere at least since 1999. That, and educational purposes. My first experience with warez was downloading Pokemon ROMs from a website that put "for educational purposes only, delete after 24h" in big letters on top of their download section. Never mind that no such exception exists in law.

    Because in the broad sense, the system is learning how to serve new content.

    No, it's not. The system is learning how to produce new content.

    And what if the content it's learning to produce isn't new? Would that change the situation? I'm not asking about the actual workings of an actual system (since it's an unanswerable question and OpenAI will die before ever admitting anything incriminating), just a hypothetical.

    That's for the courts to decide after it actually happens and stops being hypothetical, just as it is when human artists create something new.

    You're bringing back copyright into the discussion. From the learning standpoint - is it machine learning if the only thing the machine learns is how to reproduce existing content? Is originality required to call it learning?

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.

    Wait wait wait. Are you saying that you were just playing devil's advocate this whole time and you don't actually believe there's such thing as machine learning?

    No. I'm saying that at the core of the Luddite argument is a bad-faith attempt to disqualify the concept of the existence of machine learning.

    Well, dunno about "the Luddite", whatever it is

    A Luddite is someone who tries to impede the progress of technology by force, out of fear the new invention will put him out of work.

    That's definitely not me.

    Luddites have always been wrong, and this time is no different.

    But are they wrong for the same reason I am wrong? If not, then why am I wrong (as opposed to why luddites are wrong)? Am I wrong to begin with? Do machines have human rights?


  • ♿ (Parody)

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Yes, you really should. Once again, one of the most fundamental principles of the rule of law is that everything which is not forbidden is permitted. A TOS has no legal validity; it's just someone arbitrarily saying "I don't want you doing these things and if you do them anyway I'll be unhappy." It's possible that they may put technical enforcement measures in place that give rise to a CFAA claim if you get the right judge on the right phase of the moon, but that's about it.

    Remember how you started this subthread by claiming that to employ such technical enforcement measures was super illegal according to some law or other that you could never specify? I do.

    No. Citation?

    Of course not.

    Maliciously sabotaging someone else's business is severely illegal. Nightshade is sabotage in the classic sense, not particularly different from throwing wooden shoes into machinery.

    Nightshade is a technical enforcement measure.

    No, because there's nothing to enforce. There is no right to forbid someone from learning from your work.

    "Someone". Notice something? As much as you like to pretend otherwise, the law does make a difference between humans and not-humans. Otherwise I'd be entitled to buy a movie ticket for "someone" (that someone being my video camera) and have him "learn" from the movie and produce a totally-not-the-same artwork from the incomprehensible, not-at-all-equivalent-to-the-celluloid (yeah, it's a French arthouse movie) encoding on that aptly-named "memory card".

    But even setting that issue aside: suppose you were posting information in English. Someone had a business where they read your articles and then acted on that information. But one day you decided to publish in German instead of English because you disagreed with the way they were using your information.

    According to Mason, switching to another language is a tort! You've sabotaged their business. Lol.


  • Banned

    @boomzilla said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Yes, you really should. Once again, one of the most fundamental principles of the rule of law is that everything which is not forbidden is permitted. A TOS has no legal validity; it's just someone arbitrarily saying "I don't want you doing these things and if you do them anyway I'll be unhappy." It's possible that they may put technical enforcement measures in place that give rise to a CFAA claim if you get the right judge on the right phase of the moon, but that's about it.

    Remember how you started this subthread by claiming that to employ such technical enforcement measures was super illegal according to some law or other that you could never specify? I do.

    No. Citation?

    Of course not.

    Maliciously sabotaging someone else's business is severely illegal. Nightshade is sabotage in the classic sense, not particularly different from throwing wooden shoes into machinery.

    Nightshade is a technical enforcement measure.

    No, because there's nothing to enforce. There is no right to forbid someone from learning from your work.

    "Someone". Notice something? As much as you like to pretend otherwise, the law does make a difference between humans and not-humans. Otherwise I'd be entitled to buy a movie ticket for "someone" (that someone being my video camera) and have him "learn" from the movie and produce a totally-not-the-same artwork from the incomprehensible, not-at-all-equivalent-to-the-celluloid (yeah, it's a French arthouse movie) encoding on that aptly-named "memory card".

    But even setting that issue aside: suppose you were posting information in English. Someone had a business where they read your articles and then acted on that information. But one day you decided to publish in German instead of English because you disagreed with the way they were using your information.

    According to Mason, switching to another language is a tort! You've sabotaged their business. Lol.

    This exact thing happened to my employer last year. Our source stopped giving press releases in English and we had to switch to German instead. Combined with their readable-to-humans-but-not-machines PDF shenanigans, I'm suspecting intentional sabotage. Lol.


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    Luddites have always been wrong, and this time is no different.

    I agree with you here. But you're also wrong.


  • BINNED

    @Mason_Wheeler said in I, ChatGPT:

    But if you start throwing around copyright precedents to establish gigantic legal barriers to entry, then the only entities that will be capable of doing it at all are "the big players doing it en masse."

    I've explained it over and over: it's transformative because it is not being used for the purpose of producing copies, nor is anyone actually using it in good faith for this purpose. The only people producing copies are people twisting the AIs' arms to get them to produce something that looks like a copy, to manufacture a precedent to apply copyright where it clearly does not and should not apply.

    You could argue the exact opposite. That the only way to get copyright fixed is to make sure it hurts the big players as much as it has been hurting everyone else for years. But right now, we're allowing them to ignore copyright when scraping millions of images, while everyone else gets fucked for copying a dozen. The only way to fix that is to show the status quo is untenable.

    What most people use it for in good faith doesn’t matter; some people have used it to prove that it makes copies (that’s not bad faith, that’s demonstrating an important point that had hitherto been denied, by you included), and the creators have violated copyright on a massive scale to build it.



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    OK, we seem to have a terminology dispute here. I've never heard the term "warez" used to mean a site set up for any purpose other than deliberate blatantly-flouting-the-law piracy. Certainly not freeware, and people running abandonware or archival sites don't tend to describe their sites as warez.

    What? Warez forums have been flaunting the freeware and abandonware arguments everywhere at least since 1999. That, and educational purposes. My first experience with warez was downloading Pokemon ROMs from a website that put "for educational purposes only, delete after 24h" in big letters on top of their download section. Never mind that no such exception exists in law.

    Yes. But you understand that what they are claiming is not the same as what they actually are, right?

    That's for the courts to decide after it actually happens and stops being hypothetical, just as it is when human artists create something new.

    You're bringing back copyright into the discussion. From the learning standpoint - is it machine learning if the only thing the machine learns is how to reproduce existing content? Is originality required to call it learning?

    In a practical sense, reproduction is required to call it learning. Show me any programmer who didn't start out mindlessly creating copies of existing stuff, and then analyzing it to see how it works. You cannot achieve originality without first going through reproduction along the way.

    A Luddite is someone who tries to impede the progress of technology by force, out of fear the new invention will put him out of work.

    That's definitely not me.

    I know. But they're where this line of dubious reasoning that you're perpetuating originates. If their ideas don't represent you, then please don't represent them and their ideas.


  • Discourse touched me in a no-no place

    @Gustav said in I, ChatGPT:

    This exact thing happened to my employer last year. Our source stopped giving press releases in English and we had to switch to German instead. Combined with their readable-to-humans-but-not-machines PDF shenanigans, I'm suspecting intentional sabotage. Lol.

    Am I now supposed to believe that press releases are readable-to-humans in the first place? 🍹



  • @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    But if you start throwing around copyright precedents to establish gigantic legal barriers to entry, then the only entities that will be capable of doing it at all are "the big players doing it en masse."

    I've explained it over and over: it's transformative because it is not being used for the purpose of producing copies, nor is anyone actually using it in good faith for this purpose. The only people producing copies are people twisting the AIs' arms to get them to produce something that looks like a copy, to manufacture a precedent to apply copyright where it clearly does not and should not apply.

    You could argue the exact opposite. That the only way to get copyright fixed is to make sure it hurts the big players as much as it has been hurting everyone else for years.

    You could argue that. But...

    54956053-09a4-4804-9d29-1bf62fa0e840-image.png


  • BINNED

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    But if you start throwing around copyright precedents to establish gigantic legal barriers to entry, then the only entities that will be capable of doing it at all are "the big players doing it en masse."

    I've explained it over and over: it's transformative because it is not being used for the purpose of producing copies, nor is anyone actually using it in good faith for this purpose. The only people producing copies are people twisting the AIs' arms to get them to produce something that looks like a copy, to manufacture a precedent to apply copyright where it clearly does not and should not apply.

    You could argue the exact opposite. That the only way to get copyright fixed is to make sure it hurts the big players as much as it has been hurting everyone else for years.

    You could argue that. But...

    But what? Steamboat Willie is just an example of copyright getting extended over and over to the detriment of the public because it didn’t hurt businesses; it made them money. And they’d have bought another 20-year extension if they hadn’t caught themselves in a :trolley-garage: culture war with politicians.
    This shit would not have happened if it was as detrimental to specific big businesses as it is to the general public.



  • @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    But if you start throwing around copyright precedents to establish gigantic legal barriers to entry, then the only entities that will be capable of doing it at all are "the big players doing it en masse."

    I've explained it over and over: it's transformative because it is not being used for the purpose of producing copies, nor is anyone actually using it in good faith for this purpose. The only people producing copies are people twisting the AIs' arms to get them to produce something that looks like a copy, to manufacture a precedent to apply copyright where it clearly does not and should not apply.

    You could argue the exact opposite. That the only way to get copyright fixed is to make sure it hurts the big players as much as it has been hurting everyone else for years.

    You could argue that. But...

    But what? Steamboat Willie is just an example of copyright getting extended over and over to the detriment of the public because it didn’t hurt businesses; it made them money. And they’d have bought another 20-year extension if they hadn’t caught themselves in a :trolley-garage: culture war with politicians.

    There I don't agree. The last one — all of the previous ones, really — happened largely under the radar; what's changed is that the Internet found out about copyright abuse and term extensions and has spent a lot of time making the very idea of ever doing it again politically toxic. It was never going to happen, not because we strongarmed Disney into backing down this time, but because enough politicians understood that doing so meant they'd have no future, to the point that Disney didn't even bother asking this time. That decision was made (and discussed publicly) long before all the culture war garbage got started.


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    OK, we seem to have a terminology dispute here. I've never heard the term "warez" used to mean a site set up for any purpose other than deliberate blatantly-flouting-the-law piracy. Certainly not freeware, and people running abandonware or archival sites don't tend to describe their sites as warez.

    What? Warez forums have been flaunting the freeware and abandonware arguments everywhere at least since 1999. That, and educational purposes. My first experience with warez was downloading Pokemon ROMs from a website that put "for educational purposes only, delete after 24h" in big letters on top of their download section. Never mind that no such exception exists in law.

    Yes. But you understand that what they are claiming is not the same as what they actually are, right?

    I know of a few warez sites that exist specifically just for abandonware. There's also plenty of sites that redistribute shareware and freeware with varying levels of malice and adherence to law, although they don't brand themselves as warez. As for archiving, that's the purpose of Archive.org, and I believe it's genuine even though the end result is the largest warez site on Earth.

    That's for the courts to decide after it actually happens and stops being hypothetical, just as it is when human artists create something new.

    You're bringing back copyright into the discussion. From the learning standpoint - is it machine learning if the only thing the machine learns is how to reproduce existing content? Is originality required to call it learning?

    In a practical sense, reproduction is required to call it learning. Show me any programmer who didn't start out mindlessly creating copies of existing stuff, and then analyzing it to see how it works. You cannot achieve originality without first going through reproduction along the way.

    Sooooooo... a warez site is technically machine learning after all?

    A Luddite is someone who tries to impede the progress of technology by force, out of fear the new invention will put him out of work.

    That's definitely not me.

    I know. But they're where this line of dubious reasoning that you're perpetuating originates. If their ideas don't represent you, then please don't represent them and their ideas.

    I don't. I have nothing against progress. I have a lot against giving human rights to robots, however. Mostly because it would massively impede progress.

    So... Any particular reason why machines should have an inherent right to self-development, like humans do?


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in I, ChatGPT:

    The burden of proof is on the accuser. I've had this same conversation in a few different communities, so I'll issue the same challenge here that I have elsewhere. Can you define "learning" in such a way that:

    1. includes what human beings do that is commonly understood as learning
    2. excludes what machines do that is commonly referred to as machine learning
    3. relies solely on objective facts, and not on metaphysical concepts such as "knowledge" or "mind"

    So far, no one has ever even attempted to do so, let alone come up with a real answer.

    Objective fact: Computers are not considered to be "persons" by the law in a very large number of countries and other jurisdictions. (There would be a very large number of consequences were this not so.)

    Objective fact: Things that are not "persons" do not have responsibilities or duties. When their actions get tangled up with responsibilities or duties, "someone" that is legally a "person" has to handle all the responsibilities and duties concerned. (You see this with pets, for example, and nobody would argue that dogs cannot learn.)

    Reasonable projection: If a dog could reproduce an artwork with high enough fidelity to be considered to be a copy or derived work, you bet that there would be a lot of legal cases on the matter. It hasn't happened, mainly because dogs are concerned with other things and aren't equipped with hands for drawing, painting or typing; dogs have their priorities and they're not quite the same as humans'.

    Objective fact: AIs (of the neural network kind) form a statistical model of the world. The amazing thing is that taking some noise and some input allows statistical projection back from the internal model to something that corresponds to an artwork. Why is that good enough? No idea. (How much of human learning and creativity are little more than high-dimensional statistical projection from randomness? A very worrying question that one!)
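
    A rough sketch of that projection idea, with entirely made-up dimensions and random placeholder weights standing in for what training would normally provide (so this illustrates the concept only, not any particular system):

    ```python
    # Toy "statistical model": noise plus conditioning projected into pixel space.
    # All names and sizes here are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    LATENT_DIM = 64          # size of the noise / latent vector (made up)
    COND_DIM = 16            # size of the conditioning input, e.g. a prompt embedding (made up)
    IMAGE_PIXELS = 32 * 32   # a tiny greyscale "artwork" (made up)

    # In a real system these weights are fitted to a huge corpus; here they are random stand-ins.
    W_latent = rng.normal(size=(IMAGE_PIXELS, LATENT_DIM))
    W_cond = rng.normal(size=(IMAGE_PIXELS, COND_DIM))

    def generate(prompt_embedding):
        """Project noise plus conditioning through the model into pixel space."""
        noise = rng.normal(size=LATENT_DIM)                    # "some noise"
        pixels = W_latent @ noise + W_cond @ prompt_embedding  # statistical projection
        return np.tanh(pixels).reshape(32, 32)                 # squash into a displayable range

    sample = generate(rng.normal(size=COND_DIM))
    print(sample.shape)  # (32, 32): image-shaped output computed from the model, not copied out of it
    ```

    The point of the sketch is only that the output is a function of the model's parameters plus fresh noise; nothing in it is looked up from a stored image.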

    Objective fact: AIs are undoubtedly computer programs running under the direction of a legal "person", such as OpenAI, Google, Facebook or Microsoft. If you run one on your own computer hardware, then you're the legal "person" directing it to run.

    Objective fact: An awful lot of case law exists for people getting prosecuted for using copyrighted works for personal commercial advantage in ways that are not within the exceptions defined by law. At no point has "but I used a computer to do it" ever been a defense, except perhaps as evidence towards pleading insanity.

    Objective fact: The overall use of the copyrighted works by OpenAI (for example) is not learning as such, but rather providing a service for generating new works similar to existing ones. (Similar by projection in a suitable hyperspatial model.) The stuff similar to learning is just part of that overall objective. (They'd be able to argue it was just for learning if they were not going on to provide services with the trained models.)

    Objective fact: The law doesn't just concern itself with minutiae but also with what people are doing overall. Very often, that's what the law cares about most of all.


  • Banned

    Anyway, I'm having some fun with ChatGPT myself.

    ee9b39a7-42cf-4b9f-9de6-1d48611fc7ef-image.png

    eac6fb5b-7c14-43a7-8acd-1ce91a37b58b-image.png

    #CleanQueenCrew indeed!



  • @dkf said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    The burden of proof is on the accuser. I've had this same conversation in a few different communities, so I'll issue the same challenge here that I have elsewhere. Can you define "learning" in such a way that:

    1. includes what human beings do that is commonly understood as learning
    2. excludes what machines do that is commonly referred to as machine learning
    3. relies solely on objective facts, and not on metaphysical concepts such as "knowledge" or "mind"

    So far, no one has ever even attempted to do so, let alone come up with a real answer.

    Objective fact: Computers are not considered to be "persons" by the law in a very large number of countries and other jurisdictions. (There would be a very large number of consequences were this not so.)

    Agreed, but how is that relevant to the definition of learning?

    Objective fact: Things that are not "persons" do not have responsibilities or duties. When their actions get tangled up with responsibilities or duties, "someone" that is legally a "person" has to handle all the responsibilities and duties concerned. (You see this with pets, for example, and nobody would argue that dogs cannot learn.)

    Agreed, but how is that relevant to the definition of learning?

    Reasonable projection: If a dog could reproduce an artwork with high enough fidelity to be considered to be a copy or derived work, you bet that there would be a lot of legal cases on the matter. It hasn't happened, mainly because dogs are concerned with other things and aren't equipped with hands for drawing, painting or typing; dogs have their priorities and they're not quite the same as humans'.

    Not a dog, true, but...

    Objective fact: AIs (of the neural network kind) form a statistical model of the world. The amazing thing is that taking some noise and some input allows statistical projection back from the internal model to something that corresponds to an artwork. Why is that good enough? No idea. (How much of human learning and creativity are little more than high-dimensional statistical projection from randomness? A very worrying question that one!)

    So you're agreeing that it's quite possible that what they do is similar to how human learning works?

    Objective fact: AIs are undoubtedly computer programs running under the direction of a legal "person", such as OpenAI, Google, Facebook or Microsoft. If you run one on your own computer hardware, then you're the legal "person" directing it to run.

    Agreed, but how is that relevant to the definition of learning?

    Objective fact: An awful lot of case law exists for people getting prosecuted for using copyrighted works for personal commercial advantage in ways that are not within the exceptions defined by law. At no point has "but I used a computer to do it" ever been a defense, except perhaps as evidence towards pleading insanity.

    Agreed, but how is that relevant to the definition of learning?

    Objective fact: The overall use of the copyrighted works by OpenAI (for example) is not learning as such, but rather providing a service for generating new works similar to existing ones. (Similar by projection in a suitable hyperspatial model.) The stuff similar to learning is just part of that overall objective. (They'd be able to argue it was just for learning if they were not going on to provide services with the trained models.)

    Disagree. The learning is not "just part of the overall objective," but a dependency thereof. Without the learning, the generating cannot happen, and therefore the learning is an integral part of the generating.

    Objective fact: The law doesn't just concern itself with minutiae but also with what people are doing overall. Very often, that's what the law cares about most of all.

    Agreed, but how is that relevant to the definition of learning?



  • @LaoC said in I, ChatGPT:

    Different people reading the same text or looking at the same picture will sometimes learn very different things from it.

    See, for example, many :trolley-garage: discussions. Some people have ... different ... reading comprehension than others.


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    Objective fact: AIs (of the neural network kind) form a statistical model of the world. The amazing thing is that taking some noise and some input allows statistical projection back from the internal model to something that corresponds to an artwork. Why is that good enough? No idea. (How much of human learning and creativity are little more than high-dimensional statistical projection from randomness? A very worrying question that one!)

    So you're agreeing that it's quite possible that what they do is similar to how human learning works?

    Cardiopulmonary bypass does something very similar to how human breathing works, but I'm still skeptical about giving CPBs any human rights.



  • @topspin said in I, ChatGPT:

    Besides, you just used "fair use" as a defense above, which applies to copyright. If there was no copyright question, there was no "fair use" to claim.

    Much of the world doesn't even recognize "fair use" in relation to copyright. It's a US thing. Some other countries have similar exceptions under other names, varying in the details. Others make no exceptions whatsoever, and making any copy of any kind for any reason without the author's permission is a violation.



  • @boomzilla said in I, ChatGPT:

    ruining Netflix's movies

    Impossibru!!!


  • BINNED

    @HardwareGeek said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    Besides, you just used "fair use" as a defense above, which applies to copyright. If there was no copyright question, there was no "fair use" to claim.

    Much of the world doesn't even recognize "fair use" in relation to copyright. It's a US thing. Some other countries have similar exceptions under other names, varying in the details. Others make no exceptions whatsoever, and making any copy of any kind for any reason without the author's permission is a violation.

    And yet others (:trolley-garage:) don’t recognize copyright to begin with. They claim to, for international trade reasons, but just laugh at the idea of not copying whatever they like. Just as much as I’d laugh at “you wouldn’t download a car”.

    On a different note: in Germany we do have a thing called “Schöpfungshöhe”. That means a minimum amount of creative output is required for something to qualify for copyright. If your garbage is just completely uncreative, you don’t get copyright even if it’s your original work. (Some even argued that uncreative run-of-the-mill porn movies didn’t qualify, but I have no idea if that ruling stood the test of time.)



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Objective fact: AIs (of the neural network kind) form a statistical model of the world. The amazing thing is that taking some noise and some input allows statistical projection back from the internal model to something that corresponds to an artwork. Why is that good enough? No idea. (How much of human learning and creativity are little more than high-dimensional statistical projection from randomness? A very worrying question that one!)

    So you're agreeing that it's quite possible that what they do is similar to how human learning works?

    Cardiopulmonary bypass does something very similar to how human breathing works, but I'm still skeptical about giving CPBs any human rights.

    Why do people keep talking about human rights when I specifically said I'm not advocating for anything of the sort?


  • Banned

    @Mason_Wheeler I can't keep up with all posts in this thread, most are boring as hell. So apologies for that.

    Okay, so if not inherent rights, then what is the reason for treating human learning and machine learning the same?



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler I can't keep up with all posts in this thread, most are boring as hell. So apologies for that.

    Okay, so if not inherent rights, then what is the reason for treating human learning and machine learning the same?

    Because learning is learning, which is something fundamentally different in nature from copying. What other reason is needed?


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in I, ChatGPT:

    So you're agreeing that it's quite possible that what they do is similar to how human learning works?

    It's not similar. It's a simulation of a simplification of a model of how neurons were once thought to work, a model that turned out not to be accurate and that leaves out a very large amount of detail, some of which is probably rather important for doing a better job.

    Neurons don't use continuous currents, don't average over time, and absolutely definitely never ever ever send information backwards in time. They also have dynamic connectivity, even in adult brains, and their handling of time is absolutely vital for understanding them. Furthermore, artificial neural networks do not simulate any of the critical functions of sleep, such as forgetting unimportant detail, nor do they incorporate any of the higher computational aspects possible in dendritic trees. ANN models are based on (abstractions of) currents mostly because currents were by far the easiest thing to measure in situ for many years. In reality, currents are just a time-average of what's actually happening, and really only muscles use them (because they necessarily work by integrating the charge/neurotransmitter spikes).
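
    To make the "currents are a time-average of spikes" point concrete, here is a toy comparison (a cartoon sketch, not a claim about any real ANN library or any biologically accurate neuron model): a standard rate-based artificial unit next to a crude leaky integrate-and-fire unit, where time actually matters.

    ```python
    # Toy comparison: rate-based ANN unit vs. a crude spiking (leaky integrate-and-fire) unit.
    import numpy as np

    rng = np.random.default_rng(1)

    def rate_neuron(inputs, weights):
        """Standard ANN unit: weighted sum of continuous 'currents' plus a ReLU nonlinearity."""
        return max(0.0, float(weights @ inputs))

    def lif_neuron(spike_trains, weights, leak=0.9, threshold=1.0):
        """Crude leaky integrate-and-fire unit: integrates weighted incoming spikes over time,
        leaks membrane potential each step, and fires whenever it crosses the threshold."""
        potential, fired = 0.0, 0
        for t in range(spike_trains.shape[1]):                  # step through time
            potential = leak * potential + float(weights @ spike_trains[:, t])
            if potential >= threshold:
                fired += 1
                potential = 0.0                                 # reset after firing
        return fired

    weights = rng.uniform(0.1, 0.5, size=8)
    rates = rng.uniform(0.0, 1.0, size=8)                                 # per-input firing probabilities
    spikes = (rng.uniform(size=(8, 100)) < rates[:, None]).astype(float)  # 8 inputs over 100 time steps

    print(rate_neuron(rates, weights))  # one continuous activation: time has been averaged away
    print(lif_neuron(spikes, weights))  # a spike count: timing and integration actually matter
    ```

    The first function is the abstraction ANNs actually use; the second is only a caricature of a spiking neuron, but it is enough to show where the time dimension, and everything that depends on it, disappears in that abstraction.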

    The fascinating thing is that, despite being a very poor simulation of a real neural network, ANNs do so well anyway. Some of that is because scale really turns out to be important. Some of that is because they're using the right sort of non-linearity. Some of it is because many of the details are less important than we thought. But it definitely raises some very deep philosophical questions: how much of human creativity (things that have been widely lauded for millennia!) is really just randomness and statistical projection? Should we value it as highly as we have done in the past?

    I don't have answers to those questions, BTW. They're fine ones for philosophy classes I suppose...


    Your other responses to my post's points indicate to me that you're completely failing to engage with the substance of everyone else's arguments. You're so caught up in the technical aspects that you've completely forgotten that everyone else is far more concerned about the wider aspects. The wider concerns matter. Really.


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Objective fact: AIs (of the neural network kind) form a statistical model of the world. The amazing thing is that taking some noise and some input allows statistical projection back from the internal model to something that corresponds to an artwork. Why is that good enough? No idea. (How much of human learning and creativity is little more than high-dimensional statistical projection from randomness? A very worrying question, that one!)

    So you're agreeing that it's quite possible that what they do is similar to how human learning works?

    Cardiopulmonary bypass does something very similar to how human breathing works, but I'm still skeptical on giving CPBs any human rights.

    Why do people keep talking about human rights when I specifically said I'm not advocating for anything of the sort?

    I'd say this is a sign that you're missing important aspects of the topic.



  • @Mason_Wheeler said in I, ChatGPT:

    As you can see, asking an AI to produce a copy of even one of the most famous works of art of all time does not produce a copy of it. It produces something that is vaguely similar and recognizable as being inspired by the Mona Lisa, but ~~nothing more~~ a derivative work that is a copyright violation if the original is protected by copyright.

    Obviously, the Mona Lisa is in the public domain, so your derivative isn't violating the copyright that doesn't exist. However, if the AI creates a similarly derivative version of an image that is protected, then the author's copyright has been violated.



  • @HardwareGeek said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    As you can see, asking an AI to produce a copy of even one of the most famous works of art of all time does not produce a copy of it. It produces something that is vaguely similar and recognizable as being inspired by the Mona Lisa, but ~~nothing more~~ a derivative work that is a copyright violation if the original is protected by copyright.

    Obviously, the Mona Lisa is in the public domain, so your derivative isn't violating the copyright that doesn't exist. However, if the AI creates a similarly derivative version of an image that is protected, then the author's copyright has been violated.

    Agreed. But please don't use single-fact syllogisms here. "If someone uses a tool to do something bad that can also be done without the tool, then they have done something bad" conveys exactly as much meaning if all mention of the tool is excised, leaving you with a simple tautology. It provides no rational insight whatsoever about the tool itself.


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler I can't keep up with all posts in this thread, most are boring as hell. So apologies for that.

    Okay, so if not inherent rights, then what is the reason for treating human learning and machine learning the same?

    Because learning is learning, which is something fundamentally different in nature from copying. What other reason is needed?

    Some line of thought that ends with "and therefore machine learning should be a protected activity just like human learning is". For humans, it's justified by it being an irrevocable human right. What's the justification for machines?



  • @Mason_Wheeler said in I, ChatGPT:

    Now that we've established that AIs can't create actual copies in the first place

    Maybe; maybe not, but they sure as hell can create derivative works, and creating derivative works is a right reserved to the author under copyright law (with limited exceptions, in some jurisdictions, for "fair use" such as parody).



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler I can't keep up with all posts in this thread, most are boring as hell. So apologies for that.

    Okay, so if not inherent rights, then what is the reason for treating human learning and machine learning the same?

    Because learning is learning, which is something fundamentally different in nature from copying. What other reason is needed?

    Some line of thought that ends with "and therefore machine learning should be a protected activity just like human learning is". For humans, it's justified by it being an irrevocable human right. What's the justification for machines?

    The rule of law: that which is not prohibited is always permitted. And learning is not prohibited, therefore there is no justification nor consent needed.



  • @HardwareGeek said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Now that we've established that AIs can't create actual copies in the first place

    Maybe; maybe not, but they sure as hell can create derivative works, and creating derivative works is a right reserved to the author under copyright law (with limited exceptions, in some jurisdictions, for "fair use" such as parody).

    Once again, saying that a tool can be used to do something bad that can also be done without that tool tells us nothing useful about the tool.



  • @Mason_Wheeler said in I, ChatGPT:

    And learning is not prohibited, therefore there is no justification nor consent needed.

    Creating a derivative work without permission, however, is prohibited. I'd argue that a network with weights derived from a certain image counts as such.



  • @Gustav said in I, ChatGPT:

    Would it be illegal to knowingly give a wrong answer on StackOverflow?

    Would anyone even notice among all the other wrong answers?



  • @topspin said in I, ChatGPT:

    @Tsaukpaetra said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    like a burglar suing you for eating moldy left-overs in your fridge. Extremely fucking retarded.

    This... happens. And people can actually win those. 😞

    Your fridge violates the Geneva convention. :trollface:

    Well, I threw some of my violations in the garbage this morning.



  • @cvi said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    And learning is not prohibited, therefore there is no justification nor consent needed.

    Creating a derivative work without permission, however, is prohibited. I'd argue that a network with weights derived from a certain image counts as such.

    Once again, no one is doing that, and no one is complaining about doing that, so who cares?

    What people are doing is creating images "derived from" literally billions of examples, not from "a certain image."

    I have a friend on Facebook who's a published author. She's constantly showing off AI artwork she created depicting the characters in her books. Very clearly "a derivative work" of her own work. Derivative of any other artwork? Much less clearly so!
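    To sketch what "derived from literally billions of examples" means mechanically, here is a toy gradient-descent loop with made-up data and a made-up objective (nothing like a real training pipeline): every example nudges the same shared weights a little, so the result is a blend of tiny contributions rather than a stored copy of any one input.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    images = rng.normal(size=(1000, 64))   # toy "dataset": 1000 tiny 64-pixel images
    targets = images.mean(axis=1)          # made-up objective: predict mean brightness
    weights = np.zeros(64)                 # one shared weight vector for the whole model

    lr = 0.01
    for img, target in zip(images, targets):
        pred = weights @ img
        grad = (pred - target) * img       # gradient of the squared error for this image
        weights -= lr * grad               # every single image nudges the SAME weights

    # The final 64 numbers are a blend of small contributions from all 1000 images;
    # no individual image can be reconstructed from them.
    print(weights[:8])
    ```

    Whether such a blend legally counts as a derivative work of any particular input is exactly the question being argued in this thread.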



  • @boomzilla said in I, ChatGPT:

    Yes, you've proved quite resilient to any sort of logic here

    🔧


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler I can't keep up with all posts in this thread, most are boring as hell. So apologies for that.

    Okay, so if not inherent rights, then what is the reason for treating human learning and machine learning the same?

    Because learning is learning, which is something fundamentally different in nature from copying. What other reason is needed?

    Some line of thought that ends with "and therefore machine learning should be a protected activity just like human learning is". For humans, it's justified by it being an irrevocable human right. What's the justification for machines?

    The rule of law: that which is not prohibited is always permitted. And learning is not prohibited, therefore there is no justification nor consent needed.

    Yes, if we ignore the laws that prohibit this particular case of learning, then it's true that laws don't prohibit it. Unfortunately, in the real world we can't ignore laws like that (unless we're a billion-dollar corporation).

    Teachers are allowed to do what they do because they are expressly exempt from the normal copyright law (yes, I know, we're back to copyright, sorry). They are exempt from copyright law on the grounds that it's a human right to learn. AI systems don't have the same inherent right, therefore this exemption doesn't apply, and the normal law takes over, and the normal law says you cannot make copies. If OpenAI deleted the ML model and replaced it with a Pajeet, it would suddenly become legal.

    The usual counterargument is that the model doesn't contain any copies, but that's not the counterargument you are using here.



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla Counterfeiting is also illegal. Not sure how it's relevant to the conversation, though.

    Yes, you've proved quite resilient to any sort of logic here and instead of finding the useful part of an analogy you look for any reason to dismiss it, including by adding information like "counterfeiting."

    What do you mean? You asked about someone putting out "something that looks like real money but isn't actually valuable" for other people to find. What is that, if not counterfeit money?

    ~~Monopoly~~ Canadian money?

    :trollface:


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    @HardwareGeek said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    As you can see, asking an AI to produce a copy of even one of the most famous works of art of all time does not produce a copy of it. It produces something that is vaguely similar and recognizable as being inspired by the Mona Lisa, but ~~nothing more~~ a derivative work that is a copyright violation if the original is protected by copyright.

    Obviously, the Mona Lisa is in the public domain, so your derivative isn't violating the copyright that doesn't exist. However, if the AI creates a similarly derivative version of an image that is protected, then the author's copyright has been violated.

    Agreed. But please don't use single-fact syllogisms here. "If someone uses a tool to do something bad that can also be done without the tool, then they have done something bad" conveys exactly as much meaning if all mention of the tool is excised, leaving you with a simple tautology. It provides no rational insight whatsoever about the tool itself.

    You might believe that, but people often don't realize it.

