I, ChatGPT



  • @Watson said in I, ChatGPT:

    @sockpuppet7 said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    your soul is where the 'art' comes from and AIs don't have souls.

    this is why this discussion won't ever be settled, it's religion

    It will be fun to see religious people trying to explain souls if AI gets to the same level as our brains

    Next you'll be saying there is no Silicon Heaven. Preposterous! Where would all the calculators go?

    Did you see the series Altered Carbon?

    I'm glad they just omitted the religious debate that would happen if that mind-backup technology existed. In the real world that would be an endless discussion.


  • đŸšœ Regular

    @sockpuppet7 said in I, ChatGPT:

    It will be fun to see religious people trying to explain souls if AI gets to the same level as our brains

    As fun as the Crusades, Jihad, etc.


  • Discourse touched me in a no-no place

    @Arantor said in I, ChatGPT:

    The semi-smart in-betweening is the part AI is kind of missing at the moment.

    Oh, that's what the interpolation in the high-dimension abstract vector space is doing; producing things that likely fit the pattern established by the context contents and the random source. Fingers weren't right because the dimensionality needed to be higher (by about one level of abstraction). Getting text right seems to need an even higher level, or maybe splitting things into a part that does the "art" and another that does the lettering. (There are very many AIs that know what letters are shaped like; it's a standard training set.)
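    The interpolation point above can be shown with a toy sketch (not any real model's internals; the vectors, names, and dimensionality are all invented for the demo): blend two known embedding vectors and decode the result by nearest neighbour among the points you already have, so the output is always a remix of the training points.

```python
# Toy illustration of "generation as interpolation in an embedding space".
# All vectors and labels are made up; real models use far higher dimensions.
import math

embeddings = {          # hypothetical 3-d "concept" vectors
    "cat":   [1.0, 0.1, 0.0],
    "dog":   [0.9, 0.3, 0.1],
    "truck": [0.0, 0.9, 0.8],
}

def lerp(a, b, t):
    """Point a fraction t of the way from vector a to vector b."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def nearest(vec):
    """Decode a vector back to the closest known concept (Euclidean)."""
    return min(embeddings, key=lambda k: math.dist(embeddings[k], vec))

blend = lerp(embeddings["cat"], embeddings["truck"], 0.2)
print(nearest(blend))  # prints "dog": the blend lands nearest an in-between pattern
```

    Note the cat/truck blend decodes to "dog": interpolation lands on whatever known pattern is closest, which is the sense in which these systems produce likely-fitting remixes rather than anything outside their training distribution.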


  • Discourse touched me in a no-no place

    @Gustav said in I, ChatGPT:

    It's incredibly hard to actually prove humans actually possess the ability to have genuinely original thoughts

    Most of the time, people don't have original thoughts. Having original thoughts is difficult, and people tend to avoid things that are difficult. That is why getting a PhD is such a major personal achievement; it requires at least some original thought.


  • Discourse touched me in a no-no place

    @Arantor said in I, ChatGPT:

    @Gustav In which case you have the concept of reasoning, of observation and implementation. Still not a thing AI can do.

    It requires embodiment. The AI needs to be connected to the world (however real you think it is, it's more real than any simulation we can make) and to be able to make changes whose results it can observe and learn from.

    We're not quite there with robotics yet. Surprisingly close though (except for the control systems, which are horrible because most roboticists can't write decent code). We're further off with learning algorithms, at least as far as mainstream AI is concerned. There's stuff in labs that does better, but doesn't have the scale (because scaling up is very expensive when it doesn't fit the model of a cloud service).


  • ♿ (Parody)

    @dkf said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    The semi-smart in-betweening is the part AI is kind of missing at the moment.

    Oh, that's what the interpolation in the high-dimension abstract vector space is doing; producing things that likely fit the pattern established by the context contents and the random source. Fingers weren't right because the dimensionality needed to be higher (by about one level of abstraction). Getting text right seems to need an even higher level, or maybe splitting things into a part that does the "art" and another that does the lettering. (There are very many AIs that know what letters are shaped like; it's a standard training set.)

    40fcca66-bd03-4738-8052-68cc265f772a-image.png



  • @Gustav said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    @Gustav In which case you have the concept of reasoning, of observation and implementation. Still not a thing AI can do.

    Reasoning is just about the first thing they made AI do, back when they called it expert systems. Observation is nearly a solved problem by now for the most part thanks to deep learning. Implementation boils down to taking known things and mixing them together.

    Expert systems don’t themselves do the reasoning; the humans did that in constructing the rulesets. The machine just tries to solve for the criteria it has, and won’t spontaneously think of something out of left field that could be adapted.



  • @Zecc said in I, ChatGPT:

    @sockpuppet7 said in I, ChatGPT:

    It will be fun to see religious people trying to explain souls if AI gets to the same level as our brains

    As fun as the Crusades, Butlerian Jihad, etc.



  • @boomzilla Cubism is a great example. Humans can represent their world in a different way if they choose. Few humans were or ever could be Picasso, but currently no AI can be the next Picasso, only imitate him.

    Ditto the next Douglas Adams or the next Mozart or the next… it doesn’t matter. AI currently doesn’t have enough capacity to do anything beyond strictly remixing what it already has, and what it has is not comparable to what we have - yet. Even if you want to argue that we too are machines etc. - well, we are, and if we build neural networks the same size as ours, with as much data fed into them as we ourselves receive, maybe we’ll get actual creativity from them for the same reasons we ourselves are creative.

    Or maybe we won’t. The scientist in me says there’s no soul and that reproducing human creativity should be doable with a big enough model (which the current AIs aren’t), the artist in me hopes that never happens.

    I want to believe there’s still a little magic in the air.


  • BINNED

    I think this whole "can AI be creative" part is completely irrelevant to the copyright discussion.

    The point is, for humans there's this concept called plagiarism. Even if you don't copy the code one-to-one, it could still be a derivative work. If you just take the linux kernel and run it through a refactoring tool to change some variable names and stuff, it'll still be copyrighted and you can't just do what you please with it. Humans know when they do this stuff intentionally. They may or may not get caught, and when someone finds out, they may or may not be able to prove it. But usually there is still a clear distinction between "I wrote this code myself, obviously inspired by all the things I learned during my life" and "I blatantly copied something and obfuscated the changes".

    With the AI stuff, it takes in all the available open source code, massive amounts, and remixes it. Then it produces something that might be "creative" and thus a "fair use" of prior knowledge in the same sense that humans read other works¹, or something that's basically just refactoring, making it a derivative, or even a 1:1 copy. The difference has to be judged on a case-by-case basis. The user most probably will have no idea which it is, since the tool can't tell them what it did, and the user doesn't know all the stuff it used as input. Given how little people care about just copy-pasta-ing stuff from Stack Overflow, they will not bother to find out either. The people whose work is potentially being used will have no idea either, so it's unlikely they can go and complain in those cases where they have a rightful complaint.

    And to this end, the whole "AI" machinery will be used just as a mechanism for ~~money~~ copyright laundering, where copyrighted code goes in, code "without" copyright (in the pie_flavor sense) comes out, and nobody will be able to do anything about it because they have Microsoft "guaranteeing" that this code doesn't violate any copyrights.


    I'd be interested in hearing Microsoft's position if somebody were to feed stolen Windows source code into these systems. Sure, using that stolen code is illegal, but if you just train the AI on it and then "delete" it afterwards, then MS has no grounds to complain about the resulting model.
    (Going even further, this is in essence how facebook, google, et al. loophole their way out of their illegal data collection / privacy intrusion.)


    ¹ Although some people very intentionally do not read OSS code when they want to implement something, so as not to taint themselves. This can be important for either clean room reverse engineering or just implementing open standards.



  • @Arantor said in I, ChatGPT:

    @boomzilla Cubism is a great example. Humans can represent their world in a different way if they choose. Few humans were or ever could be Picasso, but currently no AI can be the next Picasso, only imitate him.

    Ditto the next Douglas Adams or the next Mozart or the next… it doesn’t matter. AI currently doesn’t have enough capacity to do anything beyond strictly remixing what it already has, and what it has is not comparable to what we have - yet. Even if you want to argue that we too are machines etc. - well, we are, and if we build neural networks the same size as ours, with as much data fed into them as we ourselves receive, maybe we’ll get actual creativity from them for the same reasons we ourselves are creative.

    Or maybe we won’t. The scientist in me says there’s no soul and that reproducing human creativity should be doable with a big enough model (which the current AIs aren’t), the artist in me hopes that never happens.

    I want to believe there’s still a little magic in the air.

    I agree with you here. That's bad; I like to argue things.


  • BINNED

    @sockpuppet7 said in I, ChatGPT:

    I agree with you here. That's bad, I like to argue things

    One of us! One of us!


  • ♿ (Parody)

    @Arantor said in I, ChatGPT:

    I want to believe there’s still a little magic in the air.

    Me, too, but I can't justify arbitrary and contrary to reason claims of copyright infringement based on vague intuition about the nature of creativity.


  • Banned

    @Arantor said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    @Gustav In which case you have the concept of reasoning, of observation and implementation. Still not a thing AI can do.

    Reasoning is just about the first thing they made AI do, back when they called it expert systems. Observation is nearly a solved problem by now for the most part thanks to deep learning. Implementation boils down to taking known things and mixing them together.

    Expert systems don’t themselves do the reasoning, the humans did that in constructing the rulesets.

    Depends on your opinion of the logic programming paradigm. It's only the facts and rules of induction that the humans construct; the actual path of reasoning that gives such and such a result is constructed 100% algorithmically. Wouldn't facts and rules of induction be part of observation, rather than reasoning?

    The machine just tries to solve for the criteria it has, and won’t spontaneously think of something out of left field that could be adapted.

    And that was a few decades ago. Nowadays classifying AIs are perfectly capable of inventing their own facts and inductions, to the point that humans have a hard time following the line of reasoning, but they still give the correct results with very high accuracy.
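    The division of labour described above can be sketched concretely (a toy demo; the facts, rules, and names are all invented): in a forward-chaining system the humans author only the facts and rules, while the chain of inference, which rules fire and in what order, is found entirely by the algorithm.

```python
# Minimal forward-chaining "expert system" sketch. Humans supply facts and
# rules; the derivation itself is constructed purely algorithmically.
facts = {"has_fur", "gives_milk"}

# Each rule: (set of premises, conclusion). All names are made up for the demo.
rules = [
    ({"has_fur"}, "mammal"),
    ({"gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises hold until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Includes "mammal"; "carnivore" never fires because no "eats_meat" fact exists.
print(forward_chain(facts, rules))
```

    The point being made in the thread holds here too: the engine can only ever reach conclusions entailed by the ruleset it was given; nothing "out of left field" can appear.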



  • @topspin said in I, ChatGPT:

    copyrighted code goes in, code "without" copyright (in the pie_flavor sense) comes out

    In my limited understanding of copyright law that would work about as well as the pie_flavor defense against EULAs. To determine if there's copyright infringement, the courts will look at the works in question and decide if they are similar enough to constitute infringement. What tools were used to produce these works is entirely irrelevant to this question.

    From my skimming of the court case linked above I got the impression that this is exactly what happened there and the reason the cases were dismissed: The plaintiffs could not point to any specific works of theirs that were being infringed, so there was nothing to compare.

    What this means, and what should IMHO be obvious, is that it makes no sense to talk about AIs infringing copyrights, it's the publishers of any AI-generated works who risk infringement.


  • BINNED

    @ixvedeusi said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    copyrighted code goes in, code "without" copyright (in the pie_flavor sense) comes out

    What this means, and what should IMHO be obvious, is that it makes no sense to talk about AIs infringing copyrights, it's the publishers of any AI-generated works who risk infringement.

    Agreed, but ...

    In my limited understanding of copyright law that would work about as well as the pie_flavor defense against EULAs. To determine if there's copyright infringement, the courts will look at the works in question and decide if they are similar enough to constitute infringement. What tools were used to produce these works is entirely irrelevant to this question.

    The problem with that is what I stated above. They produce a massive amount of stuff, sourced from an even more massive amount of stuff. Said "publishers of any AI-generated works" don't even have the resources to check whether they have just infringed copyright; they just trust that nobody will notice and they can get away with it. (And they have MS's blessing that it doesn't, which also relies solely on MS assuming they get away with it on a massive scale and having more lawyers.)
    If the users don't even check all of their code, as with Stack Overflow, who's going to have the resources to have the courts check all of the code?



  • @topspin said in I, ChatGPT:

    Said "publishers of any AI-generated works" don't even have the resources to check if they have just infringed copyright, they just trust nobody will notice and they can get away with it.

    And that is new how, exactly? The internet has been so full of stuff for ages that nobody bothers checking everything for copyright. I think you're overstating the relevance of "but AI!" here. Anyone can browse GitHub for some GPL code, copy it into their code base, and very probably get away with it.

    The question is: will we get some high-profile cases where some people get seriously burned? If so, I'd think that might put a bit of a damper on the AI craze. I also think that in the future we will see much more stuff like YouTube's copyright bots and automatic take-downs, but here again AI is only marginally relevant, mostly as a convenient excuse.


  • BINNED

    @ixvedeusi said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    Said "publishers of any AI-generated works" don't even have the resources to check if they have just infringed copyright, they just trust nobody will notice and they can get away with it.

    And that is new how, exactly? The internet has been so full of stuff for ages that nobody bothers checking everything for copyright. I think you're overstating the relevance of "but AI!" here. Anyone can browse GitHub for some GPL code, copy it into their code base, and very probably get away with it.

    There's a massive difference of intent and awareness here, leading to a difference of scale. Fewer people do it intentionally than will do it when they're not even aware that the thing they just produced is copyright infringement, especially if half of their job is done by the tool.



  • @boomzilla said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    I want to believe there’s still a little magic in the air.

    Me, too, but I can't justify arbitrary and contrary to reason claims of copyright infringement based on vague intuition about the nature of creativity.

    Sure, there’s the “what’s legally enforceable” and there’s “what’s ethically outrageous”, the former is not even close to being a thing.

    The latter is complicated because you get a ton of “creative” people (and for some I use this term loosely) whose own work is only half a step up from what an AI would churn out being the most vocally offended (“how dare you replace the human element”), through people like me who aspire to be better (having, for example, multiple complete novels and dozens of short stories that are, to the best of my knowledge, not consciously drawn from any identifiable set of sources) and have some reservations about it both legally and ethically, to people who are sufficiently creative of their own volition and simply give no fucks.



  • @topspin contributions to Stack Overflow are explicitly tagged CC-BY-SA, versions differ over time but it’s all theoretically Creative Commons.

    It’s basically enforced by trust for the most part, and that the snippets of code are generally too short to be distinctive.


  • Discourse touched me in a no-no place

    @Gustav said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    Expert systems don’t themselves do the reasoning, the humans did that in constructing the rulesets.

    Depends on your opinion of the logic programming paradigm. It's only the facts and rules of induction that the humans construct; the actual path of reasoning that gives such and such a result is constructed 100% algorithmically. Wouldn't facts and rules of induction be part of observation, rather than reasoning?

    Check out theorem proving systems. They're usually structured as theorem proving assistants, tracking the open terms remaining to be proved and the axiom schema, but the user has to say which strategy to apply at a particular step. Some of that can be automated (a lot is) but not all of it, at least once you get beyond first order logic or simple modal logic (those two reach different parts of the tractability frontier; the union of them isn't tractable at all). Things like the axiom of choice make automated proofs very hard; I guess an AI could make progress there by steering the proof assistant... but it isn't an area where there's been much work.

    There has been work on ensembles of AI systems. Apparently that's a very good approach, where the weaknesses of one model are compensated by the strengths of another.

    And that was a few decades ago. Nowadays classifying AIs are perfectly capable of inventing their own facts and inductions, to the point that humans have a hard time following the line of reasoning, but they still give the correct results with very high accuracy.

    They're pretty much always interpolations and/or projections, but often in a crazily high-dimensional vector space. It is comprehending the meaning of that vector space that is hard.
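    The ensembles point a couple of paragraphs up can be shown with a deliberately silly sketch (hand-written "models" on an invented task, nothing like a real ensemble pipeline): each weak model errs in a different region, and a majority vote covers each one's blind spot.

```python
# Toy ensemble: majority voting over three weak classifiers for the concept
# "x is positive". Each model is wrong on a different range of inputs.
def model_a(x):
    return x > 0            # always right on this task

def model_b(x):
    return x > -2           # wrong on x in {-1, 0}

def model_c(x):
    return 0 < x < 100      # wrong on x >= 100

models = [model_a, model_b, model_c]

def vote(models, x):
    """Majority vote over the ensemble's individual predictions."""
    votes = sum(m(x) for m in models)
    return votes * 2 > len(models)

print(vote(models, -1))   # False: a and c outvote b's mistake
print(vote(models, 150))  # True: a and b outvote c's mistake
```

    No single model is right everywhere, but the ensemble is, because the individual weaknesses don't overlap, which is the usual argument for combining models.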


  • BINNED

    @Arantor said in I, ChatGPT:

    @topspin contributions to Stack Overflow are explicitly tagged CC-BY-SA, versions differ over time but it’s all theoretically Creative Commons.

    The Stack Overflow reference isn't in regard to copyright but in regard to copy-pasta'ing without understanding it.



  • @topspin well there’s that too, but I’d hope that’s the sort of thing that code review should catch


    Reality is that it doesn’t matter - if the answer is findable online, whether on SO or a blog or random forum, it’s going to get copy-pasta’d and often without proper comprehension. SO just pours the accelerant on this.



  • @ixvedeusi said in I, ChatGPT:

    The question is: will we get some high-profile cases where some people get seriously burned? If so, I'd think that might put a bit of a damper on the AI craze.

    It's bound to happen. One could ask the AI to draw Mickey, for example.

    My opinion on how it should be: asking an AI to draw something should be treated the same as if I asked a human to do it. If I ask a human to draw Mickey, my human won't get the copyright to use this Mickey.

    If I ask a human to draw a boomzilla/Mickey hybrid it could fit as parody or whatever, and the same should apply to AI.



  • @HardwareGeek said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    reasoning ... not a thing AI can do.

    Have you met any humans?

    I don't know... stupid people are incredibly ingenious. Not in a good way, but still...



  • @sockpuppet7 said in I, ChatGPT:

    It will be fun to see religious people trying to explain souls if AI gets to the same level as our brains

    We invented AI. We're GODS!

    And since AI is then at the same level as us, that "problem" is quickly solved by exterminating all humans.



  • @dcon So in a few years when AI is self aware and asks us if we're the gods for creating it, whoever answers must have seen Ghostbusters.

    c1549bc9-5930-4c54-9e48-2f524c479fdb-Untitled.jpg


  • Notification Spam Recipient

    @DogsB said in I, ChatGPT:

    Isn’t that nice. He’s found a new word for choose-your-own-adventure books. They were supposed to bury the ‘passive read mode’ experience too.

    It's kinda funny. The fiction site I frequent actually bans CYOA-style works. I don't remember why...



  • @Arantor said in I, ChatGPT:

    @dcon So in a few years when AI is self aware and asks us if we're the gods for creating it, whoever answers must have seen Ghostbusters.

    c1549bc9-5930-4c54-9e48-2f524c479fdb-Untitled.jpg

    https://www.youtube.com/watch?v=He2G7TOZBtM


  • Notification Spam Recipient

    @remi said in I, ChatGPT:

    it's one of those things that is talked about much, much, much more than it is done.

    Like edible underwear!

    @remi said in I, ChatGPT:

    that includes "homemade" which can be read as anything from "I rubbed on a pillow once!"

    "I wore clothes once!" 😏


  • Notification Spam Recipient

    @Arantor said in I, ChatGPT:

    Consider the approach of attaching such a device to a willing participant, streaming this, and setting it up such that paying a micro transaction will activate the device for a specified period of time.

    Yes, that exists. 😉



    @Tsaukpaetra I know. I wish I didn't, but I do, and I ensured that I would share the bounty of my knowledge, because if I have to know this exists, so do others.


  • Notification Spam Recipient


  • Notification Spam Recipient

    @Benjamin-Hall said in I, ChatGPT:

    why I'm not particularly concerned about killer AI/AI singularity.

    Well, at least not one headed by today's idea of "AI" in any case.


  • Notification Spam Recipient

    @Gustav said in I, ChatGPT:

    Nowadays classifying AIs are perfectly capable of inventing their own facts and inductions, to the point that humans have a hard time following the line of reasoning, but they still give the correct results with very high accuracy.

    Ooh ooh! Can I play with it?


  • BINNED

    @Tsaukpaetra said in I, ChatGPT:

    @sockpuppet7 said in I, ChatGPT:

    https://wibble.fbmac.net/content/fur-tunately-timeless-the-next-doctor-who-barks-in-style

    The first pic is kinda hot. The rest... frighten me.

    “Hot” is absolutely the wrong, completely inappropriate term here. But I’m kinda jealous of that sharp suit.




  • Notification Spam Recipient


  • Notification Spam Recipient

    I think this is actually a pretty good use of AI. Wouldn't surprise me if EA is looking into this for commentary in their sports games. Wouldn't be surprised if the next big open-world game has some kind of newspaper telling you what you got up to in the last mission and inventing puff pieces about the NPCs you killed.

    Why the fuck do AI voice people act like hiring voice actors is some kind of arcane ritual? "We could have that in hours instead of months." Bro, just send us an email and we will work with you. I've knocked out entire games' worth of audio in a two-hour session.

    That's because you don't work corporate. Getting you to do two hours' work is three weeks of some mope filling out forms and performing archaic incantations before you're even sent an email.



  • @DogsB There's also the Screen Actors Guild, which adds another layer of paperwork over anything that isn't covered by a simple Cameo request if you want anyone recognizable.




  • Notification Spam Recipient

    @sockpuppet7 said in I, ChatGPT:

    @Tsaukpaetra what about a pony doctor?

    https://wibble.fbmac.net/content/from-gallifrey-to-equestria-doctor-who-steeds-ahead-in-new-form

    2218b343-923b-4a67-ac7d-270399214d8f-d145891b-eb28-40fe-8078-c85fc69109ab.jpg

    1. That's a horse
    2. He already has canonical representation:

    9c5874719be13d77b049507d113b3cc5-doctor-whooves-dr-whooves.jpg

    Might be somewhat close but it's too dark to tell. :mlp_shrug:



  • @sockpuppet7 that pic nails Tennant’s hair from the old days really well.



  • I am also a bad person, I was using DALL-E 3 last night to try to come up with a logo for my side project (the one I tell myself I’m going to finish) and was quite impressed with the variety it came up with despite the really shit brief I gave it because I have NFI how to explain what I want (or indeed, what I want, because I don’t know) and the most relevant terms are delightfully hard to depict.



  • @dkf said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    It's incredibly hard to actually prove humans actually possess the ability to have genuinely original thoughts

    Most of the time, people don't have original thoughts. Having original thoughts is difficult, and people tend to avoid things that are difficult. That is why getting a PhD is such a major personal achievement; it requires at least some original thought.

    :laugh-harder:
    Here in đŸ‡©đŸ‡Ș we had a few little scandals where the PhD theses of politicians were looked at and found to be mere collections of newspaper articles copied together, but not marked as citations. I.e. plagiarism.

    And, decades before that, other people joked about the "DĂŒnnbrettbohrer in Bonn" (roughly, "thin-plank borers"; i.e. in the era before the annexation of the GDR), where they quoted from the theses of chancellor Kohl and other PhD-holding politicians. But at least it may have been real original thought by those old-era politicians, rather than the plagiarism of today; just their level was ... funny.


  • BINNED

    @BernieTheBernie said in I, ChatGPT:

    @dkf said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    It's incredibly hard to actually prove humans actually possess the ability to have genuinely original thoughts

    Most of the time, people don't have original thoughts. Having original thoughts is difficult, and people tend to avoid things that are difficult. That is why getting a PhD is such a major personal achievement; it requires at least some original thought.

    :laugh-harder:
    Here in đŸ‡©đŸ‡Ș we had a few little scandals where the PhD theses of politicians were looked at and found to be mere collections of newspaper articles copied together, but not marked as citations. I.e. plagiarism.

    And, decades before that, other people joked about the "DĂŒnnbrettbohrer in Bonn" (roughly, "thin-plank borers"; i.e. in the era before the annexation of the GDR), where they quoted from the theses of chancellor Kohl and other PhD-holding politicians. But at least it may have been real original thought by those old-era politicians, rather than the plagiarism of today; just their level was ... funny.

    Well, those are really not much more than very long, boring, elaborate essays. They contain no shred of original research. That's why half of them are written by a ghostwriter, and ChatGPT really could do just as good a job writing them.



  • @Tsaukpaetra said in I, ChatGPT:

    1. That's a horse

    1. You can't know that from such a close shot.
    2. That's a distinction without a difference anyway.

  • Discourse touched me in a no-no place

    @topspin said in I, ChatGPT:

    @BernieTheBernie said in I, ChatGPT:

    @dkf said in I, ChatGPT:

    @Gustav said in I, ChatGPT:

    It's incredibly hard to actually prove humans actually possess the ability to have genuinely original thoughts

    Most of the time, people don't have original thoughts. Having original thoughts is difficult, and people tend to avoid things that are difficult. That is why getting a PhD is such a major personal achievement; it requires at least some original thought.

    :laugh-harder:
    Here in đŸ‡©đŸ‡Ș we had a few little scandals where the PhD theses of politicians were looked at and found to be mere collections of newspaper articles copied together, but not marked as citations. I.e. plagiarism.

    And, decades before that, other people joked about the "DĂŒnnbrettbohrer in Bonn" (roughly, "thin-plank borers"; i.e. in the era before the annexation of the GDR), where they quoted from the theses of chancellor Kohl and other PhD-holding politicians. But at least it may have been real original thought by those old-era politicians, rather than the plagiarism of today; just their level was ... funny.

    Well, those are really not much more than very long, boring, and elaborate essays. They contain no shred of original research. That's why half of them are written by a ghost writer, and ChatGPT really could do just as good a job writing these.

    See also: Boris Johnson. Except that wasn't for a PhD but rather a (putative) book for commercial sale. That wasn't an "autobiography" either.



  • @topspin said in I, ChatGPT:

    They contain no shred of original research.

    It might be for the ~~best~~ least worst. Have you ever seen what happens when a politician tries to have original ideas?


  • BINNED

    @Zerosquare said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    They contain no shred of original research.

    It might be for the ~~best~~ least worst. Have you ever seen what happens when a politician tries to have original ideas?

    :trolley-garage: - 🃏

    [Embedded video: "Time to exterminate the weak" (Yuri sound clip, RA2 Yuri's Revenge, by AsmR IsCool)]

