I, ChatGPT


  • BINNED

    @Mason_Wheeler so if I put up a website with explanations (and examples) of SQL injection attacks, I’m responsible for you running harmful code. How do you expect people to teach?
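    A minimal sketch of the kind of teaching example in question: a query built by string splicing versus a parameterized one, using Python's built-in sqlite3 module against a throwaway in-memory database (illustrative only; names and data are made up).

    ```python
    import sqlite3

    # Throwaway in-memory database with one user table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")
    conn.commit()

    malicious = "nobody' OR '1'='1"

    # Vulnerable: the input is spliced directly into the SQL string,
    # so the OR '1'='1' clause makes the WHERE match every row.
    unsafe = conn.execute(
        f"SELECT name FROM users WHERE name = '{malicious}'"
    ).fetchall()
    print(unsafe)  # [('alice',)] -- the filter was bypassed

    # Safe: a parameterized query treats the input as a literal value.
    safe = conn.execute(
        "SELECT name FROM users WHERE name = ?", (malicious,)
    ).fetchall()
    print(safe)  # [] -- no user is literally named "nobody' OR '1'='1"
    ```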



  • @Mason_Wheeler said in I, ChatGPT:

    No, the point is that in its natural state, it is valuable, but then someone takes an additional, active step to adulterate it in order to make it poisonous rather than valuable.

    That pre-supposes the only value in an image is to be consumed by the tools.

    An image is still valuable even in its 'poisoned' state: as I understand it, the poisoning is hard or impossible for the human eye to detect, so in that respect the image retains its value.

    It's not just exploitable for AI purposes.



  • @topspin First, there's a pretty wide chasm between explaining what a SQL injection attack is and creating an automated SQL injection generation tool and advertising it, "hey, everyone who doesn't like SQL databases, use this tool to screw them up!"

    Second, I am mindful of this. If you look at what I wrote, at no point did I say that the creator of Nightshade is likely to incur legal liability. (I think it's possible, but I'm not going to say so with any high degree of confidence.) What I said, repeatedly, is that people who use it are likely to incur legal liability. Just like SQL injection attacks.



  • @Arantor said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    No, the point is that in its natural state, it is valuable, but then someone takes an additional, active step to adulterate it in order to make it poisonous rather than valuable.

    That pre-supposes the only value in an image is to be consumed by the tools.

    https://www.youtube.com/watch?v=Kjnasl1Kyuo


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    So back to the question you didn't answer.

    That was my answer: it's still a booby trap even if you have a secure fence around it.

    Okay so why did you even mention registration and paywall if neither registration nor paywall changes anything?

    Neither registration nor paywall changes anything about sabotage or tortious interference. What they do change is providing a clear barrier that, if circumvented, can give rise to a CFAA claim, as I explained.

    Does the free website have an obligation to continue providing service to users it knows it has a CFAA claim against? And if not... is it forbidden from silently redirecting them to a different service that's markedly worse?

    This actually poses an interesting question for F2P games. Is it against the law for matchmaking systems to pit known hackers against other known hackers (I forgot what this technique is called but it's used by most of the popular games nowadays)?

    I'd be leery of doing so, because this hypothetical is exactly why the legal system exists in the first place: things you "know" might not, in fact, be true. We strongly discourage people — most especially the aggrieved parties themselves — from taking the enforcement of the law into their own hands, preferring to hash it out in an adversarial process before an objective, unbiased third party, for a variety of reasons, but above all else for the protection of the accused against false accusations.

    But isn't ALL anti-bot defense, by its very nature, taking the law into your own hands? Is ALL anti-bot defense automatically illegal?



  • @Gustav said in I, ChatGPT:

    Is ALL anti-bot defense automatically illegal?

    Automatically? No. In a risky, gray area that I would personally prefer to stay out of? Yes.


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    The point is that there's something out there that someone thinks is valuable to their business, but then it's not.

    No, the point is that in its natural state, it is valuable, but then someone takes an additional, active step to adulterate it in order to make it poisonous rather than valuable. That is crossing a line into active, malicious sabotage, and no number of weird analogies makes it not crossing that line.

    No, it absolutely isn't sabotage. They've done nothing to the scraper or anything owned by the scraper no matter how intensely you hallucinate otherwise.

    What are you talking about? "Doing something to" the training run by the scraper is the entire reason for Nightshade's existence. It is specifically advertised as existing for that exact purpose: "run your work through this to screw up AIs."

    So maybe don't do that? Like, if someone puts ethanol in gasoline but the engine you're using can't handle that....maybe use something else. The people who put the ethanol in that didn't put their fuel in your tank. You did that to yourself.



  • @boomzilla said in I, ChatGPT:

    So maybe don't do that? Like, if someone puts ethanol in gasoline but the engine you're using can't handle that....maybe use something else. The people who put the ethanol in that didn't put their fuel in your tank. You did that to yourself.

    :wat-girl: :wtf_owl: :wat:

    What are you even talking about here and how does that in any way relate to the topic at hand?


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    @topspin First, there's a pretty wide chasm between explaining what a SQL injection attack is and creating an automated SQL injection generation tool and advertising it, "hey, everyone who doesn't like SQL databases, use this tool to screw them up!"

    And a similar difference between using Nightshade and deliberately feeding those images to an AI. Why do you insist on picking up all the horseshit?


  • BINNED

    @Mason_Wheeler so then somebody creates this automated tool (who, according to you, is not liable), and somebody else uses it to create hundreds of different exploitable SQL statements and posts them on their website. And now you claim the second person is liable, even though he didn't go out and attack anybody's server; some idiot came along, downloaded these statements, and executed them without any checks.
    There's no active attack; the victim is hitting himself.


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    :wat-girl: :wtf_owl: :wat:

    What are you even talking about here and how does that in any way relate to the topic at hand?

    I'm trying to explain to you that it's not the image sharers who are using these images to train the AIs. The trainers can choose to avoid these images and then everyone is happy.



  • @topspin said in I, ChatGPT:

    @Mason_Wheeler so then somebody creates this automated tool (who, according to you, is not liable), and somebody else uses it to create hundreds of different exploitable SQL statements and posts them on their website. And now you claim the second person is liable, even though he didn't go out and attack anybody's server; some idiot came along, downloaded these statements, and executed them without any checks.
    There's no active attack; the victim is hitting himself.

    This is the booby trap principle. If you deliberately leave something dangerous lying around, you are liable for the damage it causes. It doesn't have to be "active" to be a bona fide attack.



  • @boomzilla said in I, ChatGPT:

    I'm trying to explain to you that it's not the image sharers who are using these images to train the AIs. The trainers can choose to avoid these images and then everyone is happy.

    How? How do you choose to avoid something that is deliberately designed to be undetectable? I will accept as valid any answer that is not logically equivalent to "throw the entire business model out the window."


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    How? How do you choose to avoid something that is deliberately designed to be undetectable? I will accept as valid any answer that is not logically equivalent to "throw the entire business model out the window."

    You could do something revolutionary like ask the people providing the images. Just because someone came up with a business plan doesn't mean that other people need to support it.



  • @boomzilla said in I, ChatGPT:

    You could do something revolutionary like ask the people providing the images. Just because someone came up with a business plan doesn't mean that other people need to support it.

    Again with the conflating of deliberate action vs. inaction. People don't need to take active action to "support it," but taking active action to interfere with it is tortious interference.



  • @boomzilla said in I, ChatGPT:

    You could do something revolutionary like ask the people providing the images. Just because someone came up with a business plan doesn't mean that other people need to support it.

    Stop making sense and being reasonable.


  • Considered Harmful

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Once again, the question no one wants to answer: where does any requirement exist to obtain a license for learning?

    In the fucking license agreement god fucking dammit. Did you even read it?

    pie_flavor👼


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    Is ALL anti-bot defense automatically illegal?

    Automatically? No. In a risky, gray area that I would personally prefer to stay out of? Yes.

    The way you presented it, I don't see anything grey about it. Either service providers are not allowed to mark any users as troublemakers without a court order, period; or they are allowed to mark troublemakers, and the grey area doesn't exist for the opposite reason.


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    Again with the conflating of deliberate action vs. inaction. People don't need to take active action to "support it," but taking active action to interfere with it is tortious interference.

    Depends on the action. Offering a more attractive service to compete with the business is active action to interfere, too. Unless you can show that these people are obligated to provide images in a certain format, this isn't, either.



  • @Arantor said in I, ChatGPT:

    Stop making sense and being reasonable.

    There is nothing reasonable about requiring consent for learning. That's insane in fact.


  • Banned

    @Mason_Wheeler define "learning". Is there any physical person who's learning?


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in I, ChatGPT:

    How? How do you choose to avoid something that is deliberately designed to be undetectable?

    Why is it right for the AI companies to decide that they can have the right to derive commercial advantage from the use of that specific image when the creator of the image did not explicitly grant them permission to do so? Other commercial users need to secure a license first. That things are laundered through a system of programs doesn't change the basic legal question; AIs (with current technology levels at least) are very much not legal persons in any jurisdiction I've heard of, so it is their owners who necessarily carry the legal can for the AIs' actions.


  • BINNED

    @Mason_Wheeler said in I, ChatGPT:

    There is nothing reasonable about requiring consent for learning. That's insane in fact.

    And yet you’re allowed to post factually incorrect information.
    Which this isn’t. It’s more like information that’s useful for your intended audience and not useful for people you don’t want on your site. But even if it were incorrect, posting it doesn’t make you liable if somebody learns it wrong.



  • @boomzilla said in I, ChatGPT:

    Again with the conflating of deliberate action vs. inaction. People don't need to take active action to "support it," but taking active action to interfere with it is tortious interference.

    Depends on the action. Offering a more attractive service to compete with the business is active action to interfere, too.

    That's competition, which is not occurring here. And even when competing, actively sabotaging the operation of your competitors is tortious interference. The classic textbook example is attempting to disrupt supplies flowing into your competitor's business.

    Unless you can show that these people are obligated to provide images in a certain format, this isn't, either.

    No. Stop conflating "obligated to do X" with "forbidden to do Y."



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information (i.e., without being directly modified in some way), subject now demonstrates the ability to do X.

    Is there any physical person who's learning?

    Why does that matter?


  • Considered Harmful

    @Mason_Wheeler said in I, ChatGPT:

    Yes, you really should. Once again, one of the most fundamental principles of the rule of law is that everything which is not forbidden is permitted. A TOS has no legal validity; it's just someone arbitrarily saying "I don't want you doing these things and if you do them anyway I'll be unhappy." It's possible that they may put technical enforcement measures in place that give rise to a CFAA claim if you get the right judge on the right phase of the moon, but that's about it.

    Remember how you started this subthread by claiming that to employ such technical enforcement measures was super illegal according to some law or other that you could never specify? I do.


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    The point is that there's something out there that someone thinks is valuable to their business, but then it's not.

    No, the point is that in its natural state, it is valuable, but then someone takes an additional, active step to adulterate it in order to make it poisonous rather than valuable. That is crossing a line into active, malicious sabotage, and no number of weird analogies makes it not crossing that line.

    No, it absolutely isn't sabotage. They've done nothing to the scraper or anything owned by the scraper no matter how intensely you hallucinate otherwise.

    What are you talking about? "Doing something to" the training run by the scraper is the entire reason for Nightshade's existence. It is specifically advertised as existing for that exact purpose: "run your work through this to screw up AIs."

    So maybe don't do that? Like, if someone puts ethanol in gasoline but the engine you're using can't handle that... maybe use something else. The people who put the ethanol in it didn't put their fuel in your tank. You did that to yourself.

    :wat-girl: :wtf_owl: :wat:

    What are you even talking about here and how does that in any way relate to the topic at hand?

    I'm trying to explain to you that it's not the image sharers who are using these images to train the AIs. The trainers can choose to avoid these images and then everyone is happy.

    How? How do you choose to avoid something that is deliberately designed to be undetectable? I will accept as valid any answer that is not logically equivalent to "throw the entire business model out the window."

    You could do something revolutionary like ask the people providing the images. Just because someone came up with a business plan doesn't mean that other people need to support it.

    Again with the conflating of deliberate action vs. inaction. People don't need to take active action to "support it," but taking active action to interfere with it is tortious interference.

    Depends on the action. Offering a more attractive service to compete with the business is active action to interfere, too.

    That's competition, which is not occurring here. And even when competing, actively sabotaging the operation of your competitors is tortious interference. The classic textbook example is attempting to disrupt supplies flowing into your competitor's business.

    Ah, "supplies." Except...they're not paying for these. It's like a photographer complaining that every time he goes in public to take pictures of people they frown at him and he can't make money off pictures that don't have smiling people.

    Unless you can show where these people are obligated to provide images in a certain format, then this isn't, either.

    No. Stop conflating "obligated to do X" with "forbidden to do Y."

    It's not as cut and dried as you seem to think it is. There is nothing, except some imagined obligation you continue to insist is real, forbidding this action.



  • @dkf said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    How? How do you choose to avoid something that is deliberately designed to be undetectable?

    Why is it right for the AI companies to decide that they can have the right to derive commercial advantage from the use of that specific image when the creator of the image did not explicitly grant them permission to do so?

    Because consent is irrelevant for learning. Always has been. A bunch of Luddites freaking out about the latest new thing does not change that.



  • @Mason_Wheeler Consent is required if you are going to take material you do not legally own and do something with it that isn't explicitly listed as permitted.



  • @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Yes, you really should. Once again, one of the most fundamental principles of the rule of law is that everything which is not forbidden is permitted. A TOS has no legal validity; it's just someone arbitrarily saying "I don't want you doing these things and if you do them anyway I'll be unhappy." It's possible that they may put technical enforcement measures in place that give rise to a CFAA claim if you get the right judge on the right phase of the moon, but that's about it.

    Remember how you started this subthread by claiming that to employ such technical enforcement measures was super illegal according to some law or other that you could never specify? I do.

    No. Citation?


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    @dkf said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    How? How do you choose to avoid something that is deliberately designed to be undetectable?

    Why is it right for the AI companies to decide that they can have the right to derive commercial advantage from the use of that specific image when the creator of the image did not explicitly grant them permission to do so?

    Because consent is irrelevant for learning. Always has been. A bunch of Luddites freaking out about the latest new thing does not change that.

    So they should learn how to properly analyze Nightshaded images?



  • @boomzilla said in I, ChatGPT:

    That's competition, which is not occurring here. And even when competing, actively sabotaging the operation of your competitors is tortious interference. The classic textbook example is attempting to disrupt supplies flowing into your competitor's business.

    Ah, "supplies." Except...they're not paying for these.

    Never said they were. Why is that relevant?

    It's like a photographer complaining that every time he goes in public to take pictures of people they frown at him and he can't make money off pictures that don't have smiling people.

    :wat:

    No. Stop conflating "obligated to do X" with "forbidden to do Y."

    It's not as cut and dried as you seem to think it is. There is nothing, except some imagined obligation you continue to insist is real, forbidding this action.

    You were saying?



  • @Arantor said in I, ChatGPT:

    @Mason_Wheeler Consent is required if you are going to take material you do not legally own and do something with it that isn't explicitly listed as permitted.

    No, it's not. Copyright is extremely narrow, by design, specifically to prevent exactly this sort of abuse.


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    That's competition, which is not occurring here. And even when competing, actively sabotaging the operation of your competitors is tortious interference. The classic textbook example is attempting to disrupt supplies flowing into your competitor's business.

    Ah, "supplies." Except...they're not paying for these.

    Never said they were. Why is that relevant?

    So what sort of guarantee of purpose is being violated? Where's the obligation to provide something suitable for AI training? The answer is: "In @Mason-Wheeler's head."

    It's like a photographer complaining that every time he goes in public to take pictures of people they frown at him and he can't make money off pictures that don't have smiling people.

    :wat:

    No. Stop conflating "obligated to do X" with "forbidden to do Y."

    It's not as cut and dried as you seem to think it is. There is nothing, except some imagined obligation you continue to insist is real, forbidding this action.

    You were saying?

    Did you read this link? Which part of it do you suppose supports your claims?


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    In fact, that's almost exactly what OpenAI does, except there's a series of neural tubes in the middle.

    Are both of these learning and therefore should be allowed by law? Is neither of these learning? If one is and the other isn't, where lies the difference?

    Is there any physical person who's learning?

    Why does that matter?

    Why does that not matter? I agree it's absurd to forbid humans from learning, but I absolutely do not see why the same line of reasoning should extend to soulless automata executing program instructions.



  • @Mason_Wheeler said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    @Mason_Wheeler Consent is required if you are going to take material you do not legally own and do something with it that isn't explicitly listed as permitted.

    No, it's not. Copyright is extremely narrow, by design, specifically to prevent exactly this sort of abuse.

    Do you understand why open source projects need to specify a licence? It's precisely because copyright is not extremely narrow at all, and the licence on open source exists to grant the rights that copyright doesn't.

    Jesus fucking Christ. It's comical how wrong you are about this. Even if you may, possibly, have a moral argument, the rest is so rooted in an alternative reality that there's no hope of debating you sensibly on the subject.


  • Considered Harmful

    @Mason_Wheeler said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Yes, you really should. Once again, one of the most fundamental principles of the rule of law is that everything which is not forbidden is permitted. A TOS has no legal validity; it's just someone arbitrarily saying "I don't want you doing these things and if you do them anyway I'll be unhappy." It's possible that they may put technical enforcement measures in place that give rise to a CFAA claim if you get the right judge on the right phase of the moon, but that's about it.

    Remember how you started this subthread by claiming that to employ such technical enforcement measures was super illegal according to some law or other that you could never specify? I do.

    No. Citation?

    Of course not.

    Maliciously sabotaging someone else's business is severely illegal. Nightshade is sabotage in the classic sense, not particularly different from throwing wooden shoes into machinery.

    Nightshade is a technical enforcement measure. I put up a sign saying "stay off my lawn, this stuff is only for people I like". You're free to ignore the sign and take whatever you find, but if what looked like gold to you turns out to be shit, that's your problem. In any case it's not like a booby trap where you're provably lacking an arm that was there before you stuck it into my stuff. You ingested an ephemeral copy that you may or may not have been entitled to and maybe your big heap of hashes got better at whatever ill-defined thing you had in mind for it, maybe it got worse, who knows? It's not like any of my may-have-been-shit can actually be found in it, is it?



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright. And worse still, making analogies to copyright rather than talking about it directly. :rolleyes:

    Is there any physical person who's learning?

    Why does that matter?

    Why does that not matter? I agree it's absurd to forbid humans from learning, but I absolutely do not see why the same line of reasoning should extend to soulless automata executing program instructions.

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.



  • @Arantor said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    @Mason_Wheeler Consent is required if you are going to take material you do not legally own and do something with it that isn't explicitly listed as permitted.

    No, it's not. Copyright is extremely narrow, by design, specifically to prevent exactly this sort of abuse.

    Do you understand why open source projects need to specify a licence? It's precisely because copyright is not extremely narrow at all and that the licence on open source exists to grant all the rights that copyright doesn't.

    It's because open source licensing is built on top of the existing copyright system.

    Learning is not.



  • @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Yes, you really should. Once again, one of the most fundamental principles of the rule of law is that everything which is not forbidden is permitted. A TOS has no legal validity; it's just someone arbitrarily saying "I don't want you doing these things and if you do them anyway I'll be unhappy." It's possible that they may put technical enforcement measures in place that give rise to a CFAA claim if you get the right judge on the right phase of the moon, but that's about it.

    Remember how you started this subthread by claiming that to employ such technical enforcement measures was super illegal according to some law or other that you could never specify? I do.

    No. Citation?

    Of course not.

    Maliciously sabotaging someone else's business is severely illegal. Nightshade is sabotage in the classic sense, not particularly different from throwing wooden shoes into machinery.

    Nightshade is a technical enforcement measure.

    No, because there's nothing to enforce. There is no right to forbid someone from learning from your work.



  • OK, Mason Wheeler has convinced me of two things.

    One, to process every single goddamn image I ever make in Nightshade; and two, to never listen to a word he has to say again.

    I think this is a win on both counts.


  • BINNED

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright. And worse still, making analogies to copyright rather than talking about it directly. :rolleyes:

    Analogies were allowed when you made them, right?

    And you avoid the answer at the point where you have to explain why copyright infringement is fine for AI and not fine for classical programs. What makes AI programs different, besides the moniker?

    Is there any physical person who's learning?

    Why does that matter?

    Why does that not matter? I agree it's absurd to forbid humans from learning, but I absolutely do not see why the same line of reasoning should extend to soulless automata executing program instructions.

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.

    No, that’s not the problem. (Obviously, because you can’t even define what a soul is, since it doesn’t exist.)
    It’s a fact that the legal system treats human minds differently from computing machines. Whether you think there is good reason for that or not, it’s a legal fact. There’s a clear legal distinction between humans and computer programs. There is no legal distinction between classical programs, whose distributors need to obey copyright, and “AI” programs. They’re the same thing anyway, other than that we don’t know how to make the latter reliable. (We don’t know how to make the former reliable either, but we’re much closer.) And you said yourself you don’t want to define where the line is.


  • ♿ (Parody)

    @Mason_Wheeler said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    @Mason_Wheeler Consent is required if you are going to take material you do not legally own and do something with it that isn't explicitly listed as permitted.

    No, it's not. Copyright is extremely narrow, by design, specifically to prevent exactly this sort of abuse.

    Do you understand why open source projects need to specify a licence? It's precisely because copyright is not extremely narrow at all and that the licence on open source exists to grant all the rights that copyright doesn't.

    It's because open source licensing is built on top of the existing copyright system.

    Learning is not.

    But your complaint is not that someone is preventing learning; it's that their own learning activity is learning something in a way they don't want it to. So maybe they should stop that.


  • Considered Harmful

    @Mason_Wheeler said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    Yes, you really should. Once again, one of the most fundamental principles of the rule of law is that everything which is not forbidden is permitted. A TOS has no legal validity; it's just someone arbitrarily saying "I don't want you doing these things and if you do them anyway I'll be unhappy." It's possible that they may put technical enforcement measures in place that give rise to a CFAA claim if you get the right judge on the right phase of the moon, but that's about it.

    Remember how you started this subthread by claiming that to employ such technical enforcement measures was super illegal according to some law or other that you could never specify? I do.

    No. Citation?

    Of course not.

    Maliciously sabotaging someone else's business is severely illegal. Nightshade is sabotage in the classic sense, not particularly different from throwing wooden shoes into machinery.

    Nightshade is a technical enforcement measure.

    No, because there's nothing to enforce. There is no right to forbid someone from learning from your work.

    "Someone". Notice something? As much as you like to pretend otherwise, the law does make a difference between humans and not-humans. Otherwise I'd be entitled to buy a movie ticket for "someone" (that someone being my video camera) and have him "learn" from the movie and produce a totally-not-the-same artwork from the incomprehensible and not at all equivalent to the celluloid (yeah, it's a French arthouse movie) encoding on that aptly-named "memory card".


  • Banned

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright.

    Ultimately, the entire issue is about copyright. Without copyright, there would be no question whether it's okay to rip off other people's works.

    But no, I didn't mean copyright. I meant whether a warez site should be treated the same as whatever OpenAI is doing, and if not, why not. Because in the broad sense, the system is learning how to serve new content.

    Is there any physical person who's learning?

    Why does that matter?

    Why does that not matter? I agree it's absurd to forbid humans from learning, but I absolutely do not see why the same line of reasoning should extend to soulless automata executing program instructions.

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.

    Wait wait wait. Are you saying that you were just playing devil's advocate this whole time and you don't actually believe there's such thing as machine learning?


  • BINNED

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler so then somebody creates this automated tool, who is not liable according to you, and somebody else uses it to create hundreds of different exploitable SQL statements and posts them on their website. And now you claim the second is liable, even though he didn’t go out and attack somebody’s server, but some idiot came along, downloaded these statements and executed them without any checks.
    There’s no active attack; the victim is hitting himself.

    This is the booby trap principle. If you deliberately leave something dangerous lying around, you are liable for the damage it causes. It doesn't have to be "active" to be a bona fide attack.

    It’s not a “booby trap” that your firefighter activates after legitimately entering my house. It’s a gun that he found in the gun safe, loaded with the bullets from the safe, put into his mouth, and fired.
    If it’s “not active”, stop putting the gun in your mouth.



  • @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright. And worse still, making analogies to copyright rather than talking about it directly. :rolleyes:

    Analogies were allowed when you made them, right?

    And you avoid the answer at the point where you have to explain why copyright infringement is fine for AI and not fine for classical programs. What makes AI programs different, besides the moniker?

    The fact that it's not copyright infringement and never was.

    Is there any physical person who's learning?

    Why does that matter?

    Why does that not matter? I agree it's absurd to forbid humans from learning, but I absolutely do not see why the same line of reasoning should extend to soulless automata executing program instructions.

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.

    No, that’s not the problem. (Obviously, because you can’t even define what a soul is, since it doesn’t exist.)

    Wrong on all three counts, but that's beside the point. This is what I mean by "words chosen to obscure it." People are making an argument that is fundamentally about machines being soulless, and then deny that that's what they're talking about because they want it to sound respectable to people who don't believe in such unfashionable notions. But at its core, this is an argument that there is an ephemeral quality inherent to living human beings that makes them capable of doing things that cannot be replicated by anything else.

    It’s a fact that the legal system treats human minds differently from computing machines. Whether you think there is good reason for that or not, it’s a legal fact.

    Define "treats human minds different from." In what context? Definitely not in this context, because this is such a new context that laws for it don't exist yet.



  • @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright.

    Ultimately, the entire issue is about copyright. Without copyright, there would be no question whether it's okay to rip off other people's works.

    Irrelevant, because no ripping-off of other people's works is happening.

    But no, I didn't mean copyright. I meant whether a warez site should be treated the same as whatever OpenAI is doing, and if not, why not.

    And how is warez not entirely about copyright violation?

    Because in the broad sense, the system is learning how to serve new content.

    No, it's not. The system is learning how to produce new content.

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.

    Wait wait wait. Are you saying that you were just playing devil's advocate this whole time and you don't actually believe there's such thing as machine learning?

    No. I'm saying that at the core of the Luddite argument is a bad-faith attempt to disqualify the concept of the existence of machine learning.



  • @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler so then somebody creates this automated tool, who is not liable according to you, and somebody else uses it to create hundreds of different exploitable SQL statements and posts them on their website. And now you claim the second is liable, even though he didn’t go out and attack somebody’s server, but some idiot came along, downloaded these statements and executed them without any checks.
    There’s no active attack; the victim is hitting himself.

    This is the booby trap principle. If you deliberately leave something dangerous lying around, you are liable for the damage it causes. It doesn't have to be "active" to be a bona fide attack.

    It’s not a “booby trap” that your firefighter activates after legitimately entering my house. It’s a gun that he found in the gun safe, loaded with the bullets from the safe, put into his mouth, and fired.
    If it’s “not active”, stop putting the gun in your mouth.

    You are lying through your teeth. Stop it.


  • BINNED

    @Mason_Wheeler said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Mason_Wheeler define "learning".

    1. Subject demonstrates lack of ability to do X.
    2. Subject is presented with new information on doing X.
    3. Solely on the basis of receiving and processing this new information — ie without being directly modified in some way — subject now demonstrates the ability to do X.

    Using this definition, uploading a pirated movie to a warez site so that the warez site can "learn" how to serve the pirated movie counts as learning.

    So now you're back to talking about copyright. And worse still, making analogies to copyright rather than talking about it directly. :rolleyes:

    Analogies were allowed when you made them, right?

    And you avoid the answer at the point where you have to explain why copyright infringement is fine for AI and not fine for classical programs. What makes AI programs different, besides the moniker?

    The fact that it's not copyright infringement and never was.

    You didn’t answer the question: why is it not copyright infringement if it’s done by “AI” but is if the very same thing is done by “not AI”?

    Is there any physical person who's learning?

    Why does that matter?

    Why does that not matter? I agree it's absurd to forbid humans from learning, but I absolutely do not see why the same line of reasoning should extend to soulless automata executing program instructions.

    And now we get to the crux of the matter! This isn't a rational argument at all; it is — and always has been, no matter which words are chosen to obscure it — at its core a metaphysical argument that "machine learning" a priori cannot be a real, legitimate thing because they don't have a soul.

    No, that’s not the problem. (Obviously, because you can’t even define what a soul is, since it doesn’t exist.)

    Wrong on all three counts, but that's beside the point. This is what I mean by "words chosen to obscure it." People are making an argument that is fundamentally about machines being soulless, and then deny that that's what they're talking about because they want it to sound respectable to people who don't believe in such unfashionable notions. But at its core, this is an argument that there is an ephemeral quality inherent to living human beings that makes them capable of doing things that cannot be replicated by anything else.

    I laid it out perfectly clearly for you and yet you manage to get it completely wrong: I didn’t say there’s some inherent ability in humans; in fact, the opposite. I said the law treats human minds differently from computers.

    It’s a fact that the legal system treats human minds differently from computing machines. Whether you think there is good reason for that or not, it’s a legal fact.

    Define "treats human minds different from." In what context? Definitely not in this context, because this is such a new context that laws for it don't exist yet.

    In this context. You keep repeating that humans are allowed to learn. I can learn by heart whatever copyrighted book I acquire. Copyright prevents me from copying it into a machine, or onto another piece of paper, but not into my mind. Which is fundamentally the same thing, but absolutely not the same before the law, in neither letter nor spirit.

    ETA: there’s nothing new about this. Copyright has existed for centuries and applied to computers for decades. Human minds have existed even longer.


Log in to reply