I, ChatGPT



  • Meanwhile, anyone who resigned from OpenAI during the CEO musical chairs shuffle was verbally offered a deal to go work for Salesforce by the Salesforce CEO.

    Several snarky comments around “the company that builds AGI is not the company that builds Tableau”.

    And of course, the OpenAI CEO musical chairs game appears to have reached its conclusion with the interim CEO having asked why Altman was fired and not getting an answer, before Altman himself returned and wiped out the board that fired him.

    Such strange politics.



  • @topspin said in I, ChatGPT:

    Of course the research article isn't open access. :rolleyes:

    And it doesn't seem to be on https://sci-hub.st either.

    I assume this means they decided beforehand what checks to run, rather than doing it on the fly. Good practice.

    Or at least, this is what they want you to believe... :tinfoil-hat:

    But numbers ending in 7 or 8? There's no good reason why the AI would generate that kind of data. That's assuming the data wasn't directly generated by an LLM (which might be biased with all kinds of shit), but that the LLM instead produced Python code to generate the data, as is hinted in the first paragraph.
    So I'm wondering if what they found here actually isn't a sign of fake data but just random chance of testing too many green jelly beans.

    Yeah, that sounds a bit fishy. I guess they could have run their analysis against known "good" datasets (maybe they did? it's not like I'm going to try and read TFA...) to see if none of those had this kind of issue. If so, while it's not obvious why an AI-generated dataset would have it, this would still be a... orange flag?

    This could still be an unintended side-effect of the AI-generated code that creates the data, for instance if there's a bug in there (e.g. a rounding error that compounds across further operations and ends up at these specific values).


  • Notification Spam Recipient

    @Arantor said in I, ChatGPT:

    Meanwhile, anyone who resigned from OpenAI during the CEO musical chairs shuffle was verbally offered a deal to go work for Salesforce by the Salesforce CEO.

    Several snarky comments around “the company that builds AGI is not the company that builds Tableau”.

    And of course, the OpenAI CEO musical chairs game appears to have reached its conclusion with the interim CEO having asked why Altman was fired and not getting an answer, before Altman himself returned and wiped out the board that fired him.

    Such strange politics.

    I miss when CEOs were people in suits who didn't talk to the media. I'm going to have to listen to more of his drivel now.

    Have they ever elaborated on what the potential human extinction event that spawned this idiocy was?


  • BINNED

    @DogsB said in I, ChatGPT:

    Have they ever elaborated on what the potential human extinction event that spawned this idiocy was?

    Probably decreased ad revenue. 🍹



  • @DogsB it seems like there's a conflict between the non-profit and for-profit sides of the business: the folks who ousted him don't really care about profit as long as they can keep building a beneficial AGI for humanity, while he's been playing corporate shill a bit too much for their liking.

    But Microsoft owns a decent stake (I believe it’s 49%) and essentially funds the hardware they run on, so when they heard, they were none too impressed.


  • Banned

    @DogsB said in I, ChatGPT:

    He's lucky. There was a 50/50 chance the title would reference the other popular movie of this year.




  • Considered Harmful

    @Gustav said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    He's lucky. There was a 50/50 chance the title would reference the other popular movie of this year.

    When being compared to Oppenheimer is the lucky option 💀



  • @remi said in I, ChatGPT:

    I guess they could have run their analysis against known "good" datasets (maybe they did? it's not like I'm going to try and read TFA...) to see if none of those had this kind of issue.

    For context:
    https://en.wikipedia.org/wiki/Benford's_law#Applications


  • BINNED

    @Zerosquare not really. To quote some random parts of the wiki article:

    Benford's law tends to apply most accurately to data that span several orders of magnitude. As a rule of thumb, the more orders of magnitude that the data evenly covers, the more accurately Benford's law applies.

    Ages are single or double digits, with the odd 100+ ones being rare enough to not affect the outcome. Besides, Benford’s law is about leading digits, not trailing ones.
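    To put numbers on that, here's a quick sketch (illustrative only, made-up distributions) comparing leading-digit frequencies for data spanning several orders of magnitude, where Benford's law holds, against two-digit ages, where it doesn't:

```python
import math
import random
from collections import Counter

random.seed(42)

def leading_digit(x):
    """First significant digit of a positive number."""
    return int(x / 10 ** math.floor(math.log10(x)))

# Log-uniform data spans six orders of magnitude: Benford territory.
wide = [10 ** random.uniform(0, 6) for _ in range(100_000)]

# Ages are one or two digits: Benford's law has nothing to say here.
ages = [random.randint(18, 90) for _ in range(100_000)]

for name, data in (("log-uniform", wide), ("ages 18-90", ages)):
    counts = Counter(leading_digit(x) for x in data)
    print(f"{name}: P(leading digit = 1) = {counts[1] / len(data):.3f}")

print(f"Benford predicts: {math.log10(2):.3f}")
```

    The first distribution comes out near Benford's 0.301 for a leading 1; uniform ages 18–90 come out near 2/73 ≈ 0.027. And, as said above, the paper's anomaly is about trailing digits anyway.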



  • @topspin said in I, ChatGPT:

    a disproportionate number of participants whose age values ended with 7 or 8

    ChatGPT used a random algorithm similar to the famous xkcd random algorithm.
    How could ChatGPT have known that xkcd was mocking the geniuses among developers?
    :mlp_shrug:



  • @topspin I agree. I thought about Benford's law (I had forgotten the name, but not its existence) and also decided it probably wouldn't apply here. The green jelly beans effect is more likely.

    Ages are single or double digits

    I missed the fact that those weird numbers were ages. Now I wonder if this could be a consequence of the children/adult distinction, if the code "decided" that adulthood starts at 18. I can imagine weird code (i.e. written by AI) that first picks whether a person is an adult (this may be a flag in the dataset?), then picks the age randomly, but draws from the full interval and then clamps. So any kid with an age >= 18 ends up being 17, any adult with an age < 18 ends up being 18. And you get your over-represented end digits.

    But 1) that's a complete WAG which just serves to prove that this anomaly could possibly be somehow related to the AI-generated code (but certainly not that it is related, nor that there isn't another perfectly normal explanation for that) and 2) in this specific scenario I would assume that, before checking the frequency of the last digit, the authors would have checked the distribution of ages and would thus have seen a spike at 17/18.
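    For the fun of it, the WAG above is easy to simulate. Everything below (the 80/20 adult split, the 1–90 draw, the clamp itself) is invented purely to illustrate the hypothetical bug, not taken from the paper:

```python
import random
from collections import Counter

random.seed(0)

def buggy_age(is_adult):
    """Hypothetical bug: draw from the FULL age range first, then clamp
    to match the adult/child flag instead of redrawing."""
    age = random.randint(1, 90)   # ignores the flag entirely
    if is_adult and age < 18:
        return 18                 # every under-age adult draw piles up here
    if not is_adult and age >= 18:
        return 17                 # every over-age child draw piles up here
    return age

# 80% adults is an arbitrary illustrative split.
ages = [buggy_age(random.random() < 0.8) for _ in range(100_000)]
last_digits = Counter(age % 10 for age in ages)
for d in sorted(last_digits):
    print(d, last_digits[d])
```

    With this bug, trailing 7s and 8s come out roughly three times as frequent as the other digits, because 17 and 18 absorb every out-of-range draw. It would also leave a huge, obvious spike in the age histogram itself, per point 2.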


  • BINNED

    Directly from their own home page:

    This is how ChatGPT explains nuBuilder Forte...

    "nuBuilder Forte is an open-source web application that allows users to create and manage databases and applications without extensive coding knowledge. It is often used by individuals or small businesses who require custom database solutions for various purposes, such as managing information, tracking data, and automating workflows.

    nuBuilder Forte is particularly appealing to those who want to develop their own applications without having to start from scratch or rely on complex programming. It provides a user-friendly interface for creating forms, managing data, and designing applications using a drag-and-drop approach. This means that people who may not have in-depth programming skills can still create functional database applications tailored to their specific needs."

    :wtf:



  • AKA: "You thought Microsoft Access was bad? You ain't seen nothing yet."



  • @Zerosquare I read it more as "Look, we reinvented Visual FoxPro!"



  • According to a Guy On Twitter ™ the reason that ChatGPT sucks for helping me as a developer is that I'm using it wrong, and that it can do the hard stuff I deal with if only I explained the problem to it as if I was explaining it to another developer.

    And it would then find hidden traps during design for me.

    Never mind what problems he thinks I think are hard or need solving, mind. No, that's the problem, I'm not explaining it properly to the AI machine.

    And if it's covered with an NDA? Just train my own on my machine so I'm not feeding OpenAI with new data, obviously.


  • Discourse touched me in a no-no place

    @Arantor Yep. People like that are the current iteration of hipster techbros. A bunch of people who think that the way they do things is the way that everyone does them (or wants to do them) and just need to be told that they're Doing It Wrong™ if the poor ordinary human claims there's some reason to do anything else. This time it's AI, before that it was Crypto/NFTs, then before that it was... cloud? Or did I miss one?

    Some of these techs turn out to be actually useful, but that isn't why the techbro is pushing it.



  • @Arantor said in I, ChatGPT:

    if only I explained the problem to it as if I was explaining it to another developer.

    Yeah, that sounds like much more work than just solving the problem.



  • @cvi said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    if only I explained the problem to it as if I was explaining it to another developer.

    Yeah, that sounds like much more work than just solving the problem.

    Especially as there’s a chance the other developer might comprehend something if I explain it.


  • Notification Spam Recipient

    @Arantor said in I, ChatGPT:

    According to a Guy On Twitter ™ the reason that ChatGPT sucks for helping me as a developer is that I'm using it wrong, and that it can do the hard stuff I deal with if only I explained the problem to it as if I was explaining it to another developer.

    And it would then find hidden traps during design for me.

    Never mind what problems he thinks I think are hard or need solving, mind. No, that's the problem, I'm not explaining it properly to the AI machine.

    And if it's covered with an NDA? Just train my own on my machine so I'm not feeding OpenAI with new data, obviously.

    I use it a fair bit at work but it's definitely not capable of anything overly complex in the timespans I have. It makes stupid mistakes all the time. Like using lambdas when I specified java 7 or that accursed xml library that was removed in higher versions of java. What happens frequently is it pulls in libraries to do something that is rarely of any use. It would require a library upgrade/addition and I'm sure as shit not spending three weeks on security meetings to get it approved.

    I'm honestly curious about how they got it to do anything more complex than code samples in a reasonable amount of time. I could spend half a day coaxing something out of it or just do it myself in less time.

    Quite handy for finding css stuff and explaining specific things to me but I'm dubious about it replacing us anytime soon. The shit it spits out sometimes is worse than useless.


  • Notification Spam Recipient

    @Arantor said in I, ChatGPT:

    @cvi said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    if only I explained the problem to it as if I was explaining it to another developer.

    Yeah, that sounds like much more work than just solving the problem.

    Especially as there’s a chance the other developer might comprehend something if I explain it.

    Well if you aren't little miss optimistic today.


  • Discourse touched me in a no-no place

    @DogsB said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    According to a Guy On Twitter ™ the reason that ChatGPT sucks for helping me as a developer is that I'm using it wrong, and that it can do the hard stuff I deal with if only I explained the problem to it as if I was explaining it to another developer.

    And it would then find hidden traps during design for me.

    Never mind what problems he thinks I think are hard or need solving, mind. No, that's the problem, I'm not explaining it properly to the AI machine.

    And if it's covered with an NDA? Just train my own on my machine so I'm not feeding OpenAI with new data, obviously.

    I use it a fair bit at work but it's definitely not capable of anything overly complex in the timespans I have. It makes stupid mistakes all the time. Like using lambdas when I specified java 7 or that accursed xml library that was removed in higher versions of java. What happens frequently is it pulls in libraries to do something that is rarely of any use.

    So it's no worse than a lot of offshore developers?


  • Notification Spam Recipient

    @loopback0 said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    According to a Guy On Twitter ™ the reason that ChatGPT sucks for helping me as a developer is that I'm using it wrong, and that it can do the hard stuff I deal with if only I explained the problem to it as if I was explaining it to another developer.

    And it would then find hidden traps during design for me.

    Never mind what problems he thinks I think are hard or need solving, mind. No, that's the problem, I'm not explaining it properly to the AI machine.

    And if it's covered with an NDA? Just train my own on my machine so I'm not feeding OpenAI with new data, obviously.

    I use it a fair bit at work but it's definitely not capable of anything overly complex in the timespans I have. It makes stupid mistakes all the time. Like using lambdas when I specified java 7 or that accursed xml library that was removed in higher versions of java. What happens frequently is it pulls in libraries to do something that is rarely of any use.

    So it's no worse than a lot of offshore developers?

    Yes.



  • @DogsB said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    @cvi said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    if only I explained the problem to it as if I was explaining it to another developer.

    Yeah, that sounds like much more work than just solving the problem.

    Especially as there’s a chance the other developer might comprehend something if I explain it.

    Well if you aren't little miss optimistic today.

    Given my track record at explaining things to other people, yes.



  • @Arantor said in I, ChatGPT:

    According to a Guy On Twitter the reason that ChatGPT sucks for helping me as a developer is that I'm using it wrong, and that it can do the hard stuff I deal with if only I explained the problem to it as if I was explaining it to another developer.

    The case where it really helped me is when I'm using a tool I'm not familiar with. Think of a programming language you've never used. As you already know programming, it gets you up to speed a lot faster.

    The hard to spot generated bugs are real, though.

    If the code fails, I take the library and classes it used and read the docs, falling back to the old ways. But just knowing which classes and libraries to look for, plus an example, is good.

    The worst is when it makes up functions that don't exist. When that happens it usually means the AI won't be helpful with the task; it's something it doesn't know.



  • @dkf said in I, ChatGPT:

    @Arantor Yep. People like that are the current iteration of hipster techbros. A bunch of people who think that the way they do things is the way that everyone does them (or wants to do them) and just need to be told that they're Doing It Wrong™ if the poor ordinary human claims there's some reason to do anything else. This time it's AI, before that it was Crypto/NFTs, then before that it was... cloud? Or did I miss one?

    Some of these techs turn out to be actually useful, but that isn't why the techbro is pushing it.

    While it's immensely inflated like crypto, it has some use, so this comparison is a bit unfair.


  • Discourse touched me in a no-no place

    @sockpuppet7 said in I, ChatGPT:

    While it's immensely inflated like crypto, it has some use, so this comparison is a bit unfair.

    I also listed Cloud in that, and that's also been massively over-inflated yet is actually useful when you know WTF it's really doing. Whereas crypto... has anyone found a use for that yet that wouldn't be better done with a git repo somewhere?


  • Notification Spam Recipient

    @dkf said in I, ChatGPT:

    @sockpuppet7 said in I, ChatGPT:

    While it's immensely inflated like crypto, it has some use, so this comparison is a bit unfair.

    I also listed Cloud in that, and that's also been massively over-inflated yet is actually useful when you know WTF it's really doing. Whereas crypto... has anyone found a use for that yet that wouldn't be better done with a git repo somewhere?

    It's a pity blockchain got caught up in that. It's an interesting concept but I still have no idea why everyone wanted to replace DBs with it.

    👨🏿 : we should put this data on the blockchain.
    👴 : It's relational data! Fucking why?


  • ♿ (Parody)

    @dkf said in I, ChatGPT:

    Whereas crypto... has anyone found a use for that yet that wouldn't be better done with a git repo somewhere?

    Much easier to get people to part with actual money for crypto than a git repo.


  • BINNED

    @DogsB said in I, ChatGPT:

    It's a pity blockchain got caught up in that. It's an interesting concept but I still have no idea why everyone wanted to replace DBs with it.

    Even ignoring cryptocurrencies, I don't see a point in blockchain, either. What exactly are you supposed to do with it where git isn't literally a signed chain of blocks?
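    For the record, the shared idea really is that small: each block's id covers its parent's id, so rewriting history anywhere changes every id downstream. A toy sketch of that property (SHA-1 like git, no signing or consensus, names invented for the demo):

```python
import hashlib

def block_id(parent_id, payload):
    """A block's id covers its parent's id, so changing any earlier
    block changes every id after it -- the property git commits and
    blockchain blocks share."""
    return hashlib.sha1(f"{parent_id}\n{payload}".encode()).hexdigest()

# Three "commits" chained together.
a = block_id("0" * 40, "initial commit")
b = block_id(a, "add feature")
c = block_id(b, "fix bug")

# Rewriting history at the root gives every descendant a new id,
# which is exactly how tampering gets detected.
a2 = block_id("0" * 40, "initial commit, rewritten")
b2 = block_id(a2, "add feature")
print(b != b2, c != block_id(b2, "fix bug"))  # True True
```

    Everything a blockchain adds on top of that (proof-of-work, consensus between mutually distrusting nodes) only matters when there's no party everyone trusts, which is the part that rarely applies.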


  • Notification Spam Recipient

    @topspin said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    It's a pity blockchain got caught up in that. It's an interesting concept but I still have no idea why everyone wanted to replace DBs with it.

    Even ignoring cryptocurrencies, I don't see a point in blockchain, either. What exactly are you supposed to do with it where git isn't literally a signed chain of blocks?

    I think the idea is that multiple users can receive notifications and build their own blockchains so that multiple people can compare and contrast and then weed out people that are trying to fabricate data.

    narrator: Who would put resources into that?
    DogsB: I said it was interesting, not plausible.



  • @DogsB said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    @cvi said in I, ChatGPT:

    @Arantor said in I, ChatGPT:

    if only I explained the problem to it as if I was explaining it to another developer.

    Yeah, that sounds like much more work than just solving the problem.

    Especially as there’s a chance the other developer might comprehend something if I explain it.

    Well if you aren't little miss optimistic today.

    :technically-correct: One in a million is a chance.


  • ♿ (Parody)

    @DogsB said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    It's a pity blockchain got caught up in that. It's an interesting concept but I still have no idea why everyone wanted to replace DBs with it.

    Even ignoring cryptocurrencies, I don't see a point in blockchain, either. What exactly are you supposed to do with it where git isn't literally a signed chain of blocks?

    I think the idea is that multiple users can receive notifications and build their own blockchains so that multiple people can compare and contrast and then weed out people that are trying to fabricate data.

    narrator: Who would put resources into that?
    DogsB: I said it was interesting, not plausible.

    Yeah, in theory having the storage decentralized could be useful for transparency and trust. In practice I don't see it though.


  • Notification Spam Recipient

    @boomzilla said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    @topspin said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    It's a pity blockchain got caught up in that. It's an interesting concept but I still have no idea why everyone wanted to replace DBs with it.

    Even ignoring cryptocurrencies, I don't see a point in blockchain, either. What exactly are you supposed to do with it where git isn't literally a signed chain of blocks?

    I think the idea is that multiple users can receive notifications and build their own blockchains so that multiple people can compare and contrast and then weed out people that are trying to fabricate data.

    narrator: Who would put resources into that?
    DogsB: I said it was interesting, not plausible.

    Yeah, in theory having the storage decentralized could be useful for transparency and trust. In practice I don't see it though.

    I only got interested in this recently because my mother and sister were buying a house. The skullfuckery that goes on in property is unbelievable. Blockchain would bring a lot of transparency to that industry, but no one sane would sign up for it.



  • Meanwhile this is happening.

    (screenshot of a tweet boasting about an SEO "heist": siphoning millions of visits of a competitor's search traffic using AI-generated articles)

    Welcome to the brave new world. Everyone gets 15 minutes of fame but none of it matters because it'll be utterly drowned in everything else.



  • @Arantor What are they doing with that traffic? Serving ads?

    I sort-of remember some PhD student doing something like this maybe 10 years ago for their research (ostensibly security). I think they got into trouble? (And they were also offering this as a service and making some money off it.)

    Tangential thought: we'll soon have a market for organic free-range websites, with some sort of search engine/directory that indexes only those. (I'm not even sure I wouldn't pay extra for that.)



  • @cvi I don't quite know but stealing marketing traffic from your competitor (that they're probably paying for in the first place) seems like it's effective.


  • BINNED

    The only thing this really proves (which everybody already knew) is that Google results have become absolutely useless with the content farm bullshit.



  • @topspin they were already heading that way, this just runs round pouring all the accelerant on the dumpster fire.



  • @Arantor Yeah. I guess I'm mostly wondering about the why and the scope of it. Is it just to damage the competitor or do they actually offer a competing service? Given that they created a bunch of AI articles, I strongly suspect the former, rather than the latter. And how "durable" is their thing? The tweet mentions 18 months, but also says 3.6M total and 500K monthly. But at 500K/month, that only covers 7 months, so clearly it hasn't been 500K every month. (They're also tweeting about it, which leads me to guess that it maybe has stopped working... still, 18 months is a long time.)

    That said, the ball is in the search engine's court (and, really, Google's) to deal with this. (And quite frankly, if they figure out what company pulled that shit, basically eradicate them from the internet.)



  • @cvi said in I, ChatGPT:

    But at 500K/month, that only covers 7 months, so clearly it hasn't been 500K every month.

    I'd guess it probably started at 0 and has increased over that 18 month span to 500k/month, with a cumulative total of 3.6M.

    eradicate them from the internet

    This. With thermonuclear fire.
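    The ramp guess checks out arithmetically, too: a linear climb from 0 to 500K/month totals 18 × 500K / 2 = 4.5M, which overshoots, so growth must have been back-loaded. A quadratic ramp (pure assumption, just to show the shape fits) lands in the right ballpark:

```python
# Sanity-check the tweet's numbers: 3.6M total over 18 months with a
# current rate of 500K/month. Linear growth would total 4.5M; a
# convex (here: quadratic) ramp comes out near the claimed total.
months = 18
peak = 500_000
monthly = [peak * (m / months) ** 2 for m in range(1, months + 1)]
print(f"final month: {monthly[-1]:,.0f}/month")
print(f"cumulative:  {sum(monthly):,.0f}")
```

    That prints a final month of 500,000 and a cumulative total of roughly 3.3M, close enough to 3.6M that nothing about the claimed figures is internally inconsistent.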



  • @HardwareGeek said in I, ChatGPT:

    I'd guess it probably started at 0 and has increased over that 18 month span to 500k/month, with a cumulative total of 3.6M.

    I was wondering if 500K/month was a peak that has since passed, or if it's going somewhat steady at that rate.



  • @cvi 🦉 👃 ?


  • Notification Spam Recipient

    @da-Doctah said in Random Thought of the Day:

    It's just a shame Stan's no longer around to discuss these ideas.

    I'm rather surprised there hasn't been news of people training GPT models based on specific people for this purpose. They were so fast to do it with voices, after all....


  • Considered Harmful

    @HardwareGeek said in I, ChatGPT:

    @cvi said in I, ChatGPT:

    eradicate them from the internet

    This. With thermonuclear fire.

    There's absolutely no way leaving this to Google could go wrong.



  • @LaoC not as long as people are shovelling money at Google to be seen.


  • BINNED

    @LaoC said in I, ChatGPT:

    @HardwareGeek said in I, ChatGPT:

    @cvi said in I, ChatGPT:

    eradicate them from the internet

    This. With thermonuclear fire.

    There's absolutely no way leaving this to Google could go wrong.

    When the real Judgement Day comes, Google won't launch thermonuclear missiles but will physically make us watch ads 24/7. With toothpicks between our eyelids, or that thing from A Clockwork Orange. And maybe shove in Captain Marvel for temporary release.



  • @topspin the only question in that outcome is who's paying for the ads if we're all watching them 24/7, leaving us unable to do anything else?

