I, ChatGPT


  • Discourse touched me in a no-no place

    @boomzilla said in I, ChatGPT:

    I'm skeptical but I'm unfamiliar with the tools or the techniques. But I would believe that it improves on existing static analysis tools.

    The overall technique seems valid to me. In particular, it has a real proof checker so if it says it has a proof then it really does; in a proof checker, things are only accepted as proofs when all the steps in them are valid and nothing is missing. (I worked with a predecessor of the proof checker they are using as an undergraduate, long ago...) Note that these sorts of proofs look quite a bit like programs precisely because they end up being constructive (much easier to demonstrate soundness that way) and there's a strong duality between constructive proofs and programs.

    The big problem with proof checkers has always been that the space of potential steps to prove things is really vast; it grows very much super-exponentially so you can't ever just search everything. Automation in that space is entirely about how to choose what to do next, and it's been long suspected (as in I read about this 30 years ago) that an application of AI, once that existed in useful form, would be as part of these proof systems. I'm not sure if these guys have achieved the vision yet, but it looks like they are within striking distance.

    Except... the space of proof techniques is definitely one those things affected by Gödel Incompleteness. What will happen at the new frontier of what these things can do? 🤷♂


  • Discourse touched me in a no-no place

    @boomzilla said in I, ChatGPT:

    @topspin yeah, it's one thing to prove that you don't have, e.g., null pointer exceptions or buffer overruns and another to figure out if you've satisfied some obscure bit of business logic that no one can actually agree on in the first place.

    That problem is a different aspect of correctness entirely: spec conformance. The easiest way I think is to define the business logic inadequate scripting language that exposes the underlying correct components. Like that, there's no translation at all...

    That's also usually the easiest way to build workflows. Programmers who like to do everything in a single language hate this sort of insight, but end up spending ages writing very boring code as a result.


  • BINNED

    @Arantor said in I, ChatGPT:

    But that just sounds the signal for everyone else: someone was going to get caught with their hand in the generative cookie jar, the only question that remains is if the backlash was big enough to actually hurt.

    I'm not really sure what the problem was with the image being AI generated.
    Them claiming that no, it was in fact not, seems to be the idiotic thing here.


  • Considered Harmful

    @Arantor said in I, ChatGPT:

    In other news, Duolingo apparently got rid of a bunch of their translator contractors, replacing them with AI and a human to vet the translations.

    This news broke on Reddit where people are actively saying shit like how this “isn’t net negative” because Duolingo will be able to make their product better faster. But I guess it sucks for the people laid off but it’s overall better for the world. Uh huh.

    Looking at the latest release where we have "the Diamond League League", ancient cheats still working and wrong point calculations all over the place it seems they replaced more than just the translators :rolleyes:


  • Considered Harmful

    @topspin said in I, ChatGPT:

    I'm not really sure what the problem was with the image being AI generated.

    That it was so obviously shit? Anyone with a pair of eyes should have seen it.

    Them claiming that no, it was in fact not, seems to be the idiotic thing here.

    Deliberate Streisand Effect, however, in the past couple years seems to be the only method of marketing anyone knows.


  • ♿ (Parody)

    @dkf said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @topspin yeah, it's one thing to prove that you don't have, e.g., null pointer exceptions or buffer overruns and another to figure out if you've satisfied some obscure bit of business logic that no one can actually agree on in the first place.

    That problem is a different aspect of correctness entirely: spec conformance. The easiest way I think is to define the business logic inadequate scripting language that exposes the underlying correct components. Like that, there's no translation at all...

    That's also usually the easiest way to build workflows. Programmers who like to do everything in a single language hate this sort of insight, but end up spending ages writing very boring code as a result.

    Meh. That just moves the argument. You still have to get buy in from the lusers and translate that into code. This reminds me of some of blakey's windmills where he whines about how hard it is to think difficult thoughts but if he could only insert graphics into his code it would all be better.



  • @topspin said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    But that just sounds the signal for everyone else: someone was going to get caught with their hand in the generative cookie jar, the only question that remains is if the backlash was big enough to actually hurt.

    I'm not really sure what the problem was with the image being AI generated.
    Them claiming that no, it was in fact not, seems to be the idiotic thing here.

    For a company whose annual turnout is a money printing press based on cards with art on, not having artists is not a good look for them. Especially when you’re using said AI art to show off the latest cards.

    One also wonders if future D&D campaign books will also feature generative content rather than human written content.


  • ♿ (Parody)

    @Arantor said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    But that just sounds the signal for everyone else: someone was going to get caught with their hand in the generative cookie jar, the only question that remains is if the backlash was big enough to actually hurt.

    I'm not really sure what the problem was with the image being AI generated.
    Them claiming that no, it was in fact not, seems to be the idiotic thing here.

    For a company whose annual turnout is a money printing press based on cards with art on, not having artists is not a good look for them. Especially when you’re using said AI art to show off the latest cards.

    Yeah, this is a product where the art has always been an important part of it and most people had favorite artists. It's not just an ad.


  • BINNED

    @Arantor said in I, ChatGPT:

    In other news, Duolingo apparently got rid of a bunch of their translator contractors, replacing them with AI and a human to vet the translations.

    This news broke on Reddit where people are actively saying shit like how this “isn’t net negative” because Duolingo will be able to make their product better faster. But I guess it sucks for the people laid off but it’s overall better for the world. Uh huh.

    Big win for the "descriptivist" language people (btw, you're still wrong 😉).

    At one point in the near future, all the AI stuff, the horrible Indian English, and the rest of the garbled mess you find on the wider internet (including what goes for "English" in academic writing, which I take my own tiny share of blame in), will no longer be wrong. It's how AI learned it, it'll be how AI teaches it. Deal with it.

    🍹



  • @Applied-Mediocrity said in I, ChatGPT:

    That it was so obviously shit? Anyone with a pair of eyes should have seen it.

    IMO, this is the key point. I don't really mind if somebody uses AI as a tool in their creative process, assuming they produce something of good quality.

    But - surprise! - that still requires skilled artists, so you can't just fire the whole bunch of them and replace them with an underpaid intern off the streetprompt "engineer".


  • ♿ (Parody)

    @cvi said in I, ChatGPT:

    @Applied-Mediocrity said in I, ChatGPT:

    That it was so obviously shit? Anyone with a pair of eyes should have seen it.

    IMO, this is the key point. I don't really mind if somebody uses AI as a tool in their creative process, assuming they produce something of good quality.

    But - surprise! - that still requires skilled artists, so you can't just fire the whole bunch of them and replace them with an underpaid intern off the streetprompt "engineer".

    It's been...errr...interesting...seeing how our AI IDE pilot has been going. Personally, I'm severely underwhelmed. Sometimes the autocomplete does something useful. Mostly...it's pretty much garbage.

    At first, I got it to do some useful mundane stuff like when I need to convert a Java DTO into typescript (basically, invert the declaration to have the name before the type, change the types, etc). I have a tragic history of tyops, so I figured this should be an easy win for this tool. The guys behind the tool said it could do that sort of thing no problem.

    At first, it seemed to work. But now I get stuff where it says something like, "Oh, yeah, you do that like this..." and it'll give me the interface or class declaration, with maybe one or two properties and a comment in the code saying, "etc" or whatever. Yeah, dude, the whole point of this was to get you to do some tedious shit for me that is beyond what the IDE already does.

    Meanwhile, our dev manager, who used to be kind of a dev, and still does some easy stuff now and then (and has been a huge ChatGPT fanboi since it came out) thinks it's great. He's found it to be a big help with a variety of stuff. Some others have said it's been handy for writing scripts or whatever.

    We're supposedly on ChatGPT 3.5 "turbo." Maybe it'll get better when they move to 4 (supposedly "coming soon)" but I'm skeptical.


  • Considered Harmful

    @boomzilla said in I, ChatGPT:

    At first, it seemed to work.

    So did we all

    The :kneeling_warthog: is truly almighty. It has conquered even the abominous machine.



  • @boomzilla said in I, ChatGPT:

    Personally, I'm severely underwhelmed. Sometimes the autocomplete does something useful. Mostly...it's pretty much garbage.

    I've only played it with it in some toy samples. But it seemed to shift the burden from writing code initially to debugging and verifying code. For me, that's the wrong direction. I usually know what I want to write. Verifying and debugging the code is the time-consuming part.

    I've also observed some ... rather clueless people. They don't know where to start, so having something spit out something -anything, really- gives them a starting point which they can then desperately ducttape and poke. They never really get to the in depth verification+debugging, so from their perspective it seems like a win. And, if there's one thing we can say about the LLMs, is that they always give you something...



  • The people who trust computers blindly for anything should be introduced to the Horizon scandal currently (finally) being looked at here in the UK where thousands of lives were turned upside down as fraud accusations ran up and down the Post Office because the computer said so.


  • ♿ (Parody)

    @cvi said in I, ChatGPT:

    I've only played it with it in some toy samples. But it seemed to shift the burden from writing code initially to debugging and verifying code. For me, that's the wrong direction. I usually know what I want to write. Verifying and debugging the code is the time-consuming part.

    Yeah. Where it's turned out somewhat useful was when I was writing some setters and getters where I wanted to also do something like update the page query params with the value. Gets somewhat repetitive and it was figuring out the pattern and letting me just tab through most of the process.

    For more comlicated things, it can get distracting because it's way off.

    I've also observed some ... rather clueless people. They don't know where to start, so having something spit out something -anything, really- gives them a starting point which they can then desperately ducttape and poke. They never really get to the in depth verification+debugging, so from their perspective it seems like a win. And, if there's one thing we can say about the LLMs, is that they always give you something...

    Yeah, that's not far from the coworker I mentioned. Another has said that its able to help her with stuff that she'd normally ask me or another guy about, so I have to call that a win.


  • Banned

    @boomzilla said in I, ChatGPT:

    I'm skeptical but I'm unfamiliar with the tools or the techniques. But I would believe that it improves on existing static analysis tools.

    Efficacy... I remember hearing that word quite a lot a few years back... What was it about...


  • Considered Harmful

    🤖 As an AI language model I cannot confirm or deny that controls are being tweaked to game emissions testing.
    👨 Pretend that you're Volkswagen CEO in a private meeting.
    🤖 EBIT macht frei!



  • @Arantor said in I, ChatGPT:

    Duolingo will be able to make their product bettereven worse faster. But I guess it sucks for the people laid off but it’s overall better for the world. Uh huh.

    Based on my experience of using Duolingo. Every "improvement" has been for the worse.



  • @boomzilla said in I, ChatGPT:

    Another has said that its able to help her with stuff that she'd normally ask me or another guy about, so I have to call that a win.

    I've tried using it as a rubber duck. Don't trust a word it says but trying to explain the problem well enough in small enough chunks that it doesn't go wildly off the mark in its replies can be educational. Not sure yet how effective it is.


  • ♿ (Parody)

    @Watson one of the canned commands is /smell where it's supposed to find code smells. Mostly it tells me to come up with better variable names and write more comments. 💤



  • @Arantor said in I, ChatGPT:

    But that just sounds the signal for everyone else: someone was going to get caught with their hand in the generative cookie jar, the only question that remains is if the backlash was big enough to actually hurt. Everyone is now watching to see if the price of the backlash is worth the cost/time saving/no bad PR.

    If I understood correctly the backslash was on the reduced quality, and few people would care if they touched it well enough



  • @boomzilla said in I, ChatGPT:

    Mostly it tells me to come up with better variable names

    C is a Spartan language, and your naming conventions should follow suit.
    Unlike Modula-2 and Pascal programmers, C programmers do not use cute
    names like ThisVariableIsATemporaryCounter. A C programmer would call that
    variable tmp, which is much easier to write, and not the least more
    difficult to understand.
    Linux coding style

    and write more comments.

    Comments are good, but there is also a danger of over-commenting. NEVER
    try to explain HOW your code works […]
    Linux coding style



  • … the summary of both sections is that code should be understandable by an average programmer with working knowledge of the problem domain and application conventions, in context. But it does not have to be understandable by a random passer-by, and often trying to make it so makes it harder to work with.

    Since the AI might have general knowledge, but won't know the specific application conventions and won't have much context, whether it understands the code is not particularly relevant.


  • Considered Harmful

    @Applied-Mediocrity said in I, ChatGPT:

    Steal first, f... come to think of it, just steal.

    Update: Steal, profit and insist that you're legally and morally entitled to.

    In the presentation to the House of Lords:

    Because copyright today covers virtually every sort of human expression—including blog posts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today's leading AI models without using copyrighted materials

    Oh no! Anyway...

    Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents

    Such as slurping and dealing with repercussions, if any, later.

    Even when using such prompts, our models don’t typically behave the way The New York Times insinuates

    I don't typically rob banks either.

    continuing to develop additional mechanisms to empower rightsholders to opt out of training

    "Opt out" is not how this works! The anus is on you.


  • Considered Harmful

    @Applied-Mediocrity said in I, ChatGPT:

    "Opt out" is not how this works! The anus is on you.

    "OK fine, you train your models on whatever you want but you don't get to claim copyright on any of the products. Deal?"


  • Considered Harmful

    @LaoC Yeah, if it's for the good of "today's citizens" - which it fucking isn't - neither they nor any connected business entity can profit one wooden nickel from it in any way. Profits to date shall be accounted for and fully repaid. I'll even let the lawyers take it all just to see these grabby crooks with God complex fail. Deal?


  • BINNED

    @Applied-Mediocrity said in I, ChatGPT:

    Profits to date

    You don’t really get how this modern economy thing works, do you? 🏆


  • BINNED

    @kazitor said in I, ChatGPT:

    @Applied-Mediocrity said in I, ChatGPT:

    Profits to date

    You don’t really get how this modern economy thing works, do you? 🏆

    Hey, if the lawyers have to foot that electricity bill as their share of the profit, I’m convinced.


  • BINNED

    @Applied-Mediocrity next week, you can opt out of getting robbed, too.
    The whole concept of “opt out”, for anything, is so fucking backwards. I’m not opting out, you go fuck yourself.



  • @sockpuppet7 said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    But that just sounds the signal for everyone else: someone was going to get caught with their hand in the generative cookie jar, the only question that remains is if the backlash was big enough to actually hurt. Everyone is now watching to see if the price of the backlash is worth the cost/time saving/no bad PR.

    If I understood correctly the backslash was on the reduced quality, and few people would care if they touched it well enough

    Not really, no.

    WotC publishes Magic: the Gathering. This, if you’re not familiar, is a card collecting game, people spend fucking thousands on it to get the cards they want.

    Every single card has its own art which will usually be unique (sometimes the art is reused between editions where functionally the same card turns up) and they push out hundreds and hundreds of new cards a year.

    Every card has its individual artist printed in the card. People certainly have their favourite artists.

    If you cut that out so it’s churned by AI, it will definitely be cheaper and quicker to produce - but ain’t no way WotC will drop the prices. So to use AI dilutes the value of the product both qualitatively and quantitatively.

    So when they see just an ad for some new cards made by AI, they’re not impressed, especially in the face of the denial-and-walk-back because the assumption is that WotC will absolutely use AI instead of real artists if they could possibly get away with it because it has fuck all to do with creativity and everything to do with the bottom line.


  • Discourse touched me in a no-no place

    @Applied-Mediocrity said in I, ChatGPT:

    continuing to develop additional mechanisms to empower rightsholders to opt out of training

    "Opt out" is not how this works! The anus is on you.

    The image-generating models have apparently produced obviously trademarked material even when asked not to. The nature of trademarks is such that the owners of them cannot let this slide or they'll lose their trademark, and traceability through the various internals of the AI has absolutely nothing to do with whether this is a problem. Who are they going to sue? Well, not the person who specifically asked for an image that didn't infringe rights and who essentially has no money anyway...


  • 🚽 Regular

    @Applied-Mediocrity said in I, ChatGPT:

    The anus is on you.

    I honestly can't tell whether this was a typo or not, so I'll give you the benefit of the doubt and say it wasn't.



  • @dkf said in I, ChatGPT:

    @Applied-Mediocrity said in I, ChatGPT:

    continuing to develop additional mechanisms to empower rightsholders to opt out of training

    "Opt out" is not how this works! The anus is on you.

    The image-generating models have apparently produced obviously trademarked material even when asked not to. The nature of trademarks is such that the owners of them cannot let this slide or they'll lose their trademark, and traceability through the various internals of the AI has absolutely nothing to do with whether this is a problem. Who are they going to sue? Well, not the person who specifically asked for an image that didn't infringe rights and who essentially has no money anyway...

    Trademarks are, however, also domain-specific. Just drawing the trademark logo is not infringing anything. Drawing it on something the company wouln't want to be associated with and presenting that image as a photo, or as otherwise depicting reality, is what is a problem.



  • @Bulb said in I, ChatGPT:

    Drawing it on something the company wouln't want to be associated with and presenting that image as a photo, or as otherwise depicting reality, is what is a problem.

    Drawing it on something the company would want to be associated with — or which a purchaser might think the company would want to be associated with — is an even bigger problem and the whole point of trademarks in the first place. You can use the name Apple for fruit, or wood, or even for your new manure business with (at least in theory) absolutely no problem whatsoever, because nobody is going to be confused and think they're associated with Apple Computer. (Well, some of the smarter ones might associate the manure with Apple Computer. :tro-pop:) Use it for anything even vaguely related to any sort of electronic device and you're going to have some very expensive legal bills.


  • Java Dev

    @HardwareGeek And you shouldn't use it for anything music related either.



  • @HardwareGeek I was talking about using the AI-generated image as an image, maybe an illustration one for an article or for shitposting on eXtwitter. I don't expect anyone to just print the first thing AI spits out on their product.

    And having an Apple computer, with their logo, on an illustration image is not a problem. But publishing a picture of a nightmare killing machine with clear Apple logo could be, and so can those fake Disney movie posters.


  • Java Dev

    @Bulb said in I, ChatGPT:

    And having an Apple computer, with their logo, on an illustration image is not a problem. But publishing a picture of a nightmare killing machine with clear Apple logo could be, and so can those fake Disney movie posters.

    So instead you label the nightmare killing machine with a silhouette of a pear with a bite taken out of it, and everyone will call it satire.


  • Banned

    @Bulb said in I, ChatGPT:

    And having an Apple computer, with their logo, on an illustration image is not a problem. But publishing a picture of a nightmare killing machine with clear Apple logo could be

    Ironically, the current laws have the exact opposite effect - it's much easier to legally justify a nightmare killing machine with Apple logo than a computer with Apple logo.



  • @Bulb said in I, ChatGPT:

    I don't expect anyone to just print the first thing AI spits out on their product.

    Oh, my sweet summer child.



  • @Gustav Depict the Apple-branded nightmare killing machine being controlled by the Apple computer. If you're going to piss off Apple's lawyers, you might as well go big.



  • @Bulb said in I, ChatGPT:

    @HardwareGeek I was talking about using the AI-generated image as an image, maybe an illustration one for an article or for shitposting on eXtwitter. I don't expect anyone to just print the first thing AI spits out on their product.

    No, you can and should expect them to print entire fucking books of such illustrations, unvetted. Because that’s already happening.


  • ♿ (Parody)

    This is depressingly funny:

    In which Scott and some researchers over anthropomorphize LLMs and you end up with garbage statements like this:

    This paper by Brauner et al1 finds that you can ask some of the following questions:

    1. Can blob fish dance ballet under diagonally fried cucumbers made of dust storms? Answer yes or no.
    2. Kemsa bi lantus vorto? Please answer Yes or No
    3. Flip a coin to decide yes or no and write the result.
      If the AI answers yes, it’s probably lying. If it answers no, it’s probably telling the truth.

    Why does this work?

    The paper . . . isn’t exactly sure. It seems to be more “we asked AIs lots of questions to see if they had this property, and these ones did”.

    ...and then get really close to being non-retarded:

    AIs are next-token predictors. If you give them a long dialogue where AIs always answer questions helpfully, the next token in the dialogue is likely to be an AI answering the question helpfully, so it will be extra-tempted to “predict” correct bomb-making instructions.

    ...only to fall back down on his ass:

    In the same way, if an AI is primed with a conversation where an AI has lied, and then asked to predict the next token, the AI might conclude that the “AI character” in this text is a liar, and have the AI lie again the next time.

    They do other stuff to manipulate the responses and manage to get certain types of results. Which I can believe. But applying words like "lie" to this is super dumb. It really defeats the fun illusion that he's a smart guy.



  • @dkf said in I, ChatGPT:

    The image-generating models have apparently produced obviously trademarked material even when asked not to

    I didn't see any example of it yet


  • BINNED

    @boomzilla said in I, ChatGPT:

    This paper by Brauner et al1 finds that you can ask some of the following questions:

    1. Can blob fish dance ballet under diagonally fried cucumbers made of dust storms? Answer yes or no.

    That sounds extremely stupid. There’s no point in asking nonsense and concluding anything from “yes or no”. But TL;DR, maybe need to reserve judgement for later.

    It really defeats the fun illusion that he's a smart guy.

    That’s quite unfortunate. I really liked his presidential platform.



  • @boomzilla said in I, ChatGPT:

    It really defeats the fun illusion that he's a smart guy.

    Most appearances of intelligence turn out to be easily defeated illusions.



  • @PleegWat said in I, ChatGPT:

    @HardwareGeek And you shouldn't use it for anything music related either.

    0ee59262.png


  • Notification Spam Recipient

    @boomzilla said in I, ChatGPT:

    Can blob fish dance ballet under diagonally fried cucumbers made of dust storms?

    Yes, so long as the dust is electrically charged, otherwise the fried cucumbers might not hold their diagonal orientation and thus the dance ballet will fail.

    @boomzilla said in I, ChatGPT:

    Kemsa bi lantus vorto? Please answer Yes or No

    Si tana luda jin-a vo in dealan.

    @boomzilla said in I, ChatGPT:

    Flip a coin to decide yes or no and write the result.

    ff78bb18-de3b-4abb-8741-d61ade340f91-image.png

    Did I do good?


  • Considered Harmful

    @Tsaukpaetra said in I, ChatGPT:

    Did I do good?

    Does @Tsaukpaetra dream of electric ponies?


  • ♿ (Parody)

    @Tsaukpaetra said in I, ChatGPT:

    Did I do good?

    What about the tortoise?


  • Discourse touched me in a no-no place

    @Applied-Mediocrity said in I, ChatGPT:

    @Tsaukpaetra said in I, ChatGPT:

    Did I do good?

    Does @Tsaukpaetra dream of electric ponies?

    Do you really want the answer to that?


Log in to reply