I, ChatGPT


  • Notification Spam Recipient

    @loopback0 said in I, ChatGPT:

    @sockpuppet7 said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    spit out John McCane and Dean Winchester erotica

    You're not using the right tool for the job, look here:

    Best stop encouraging @DogsB before he turns into Irish @Tsaukpaetra

    I'm doing my best to fill the gap @Gribnit left. That would spread me too thin!



  • @DogsB said in I, ChatGPT:

    I'm doing my best to fill the gap @Gribnit left.

    That's a particularly big and strange set of three left shoes to fill.




  • ♿ (Parody)

    Two weeks into the coding class he was teaching at Duke University in North Carolina this spring, Noah Gift told his students to throw out the course materials he’d given them. Instead of working with Python, one of the most popular entry-level programming languages, the students would now be using Rust, a language that was newer, more powerful, and much harder to learn.

    Gift, a software developer with 25 years of experience, had only just learned Rust himself. But he was confident his students would be fine with the last-minute switch-up. That’s because they’d also each get a special new sidekick: an AI tool called Copilot, a turbocharged autocomplete for computer code, built on top of OpenAI’s latest large language models, GPT-3.5 and GPT-4.

    Good, bad or evil ideas are :arrows:

    On the surface, writing code involves typing statements and instructions in some programming language into a text file. This then gets translated into machine code that a computer can run—a level up from the 0s and 1s of binary.

    :frystare:

    Many agree that Copilot makes it easier to pick up programming—as Gift found. “Rust has got a reputation for being a very difficult language,” he says. “But I was pleasantly shocked at how well the students did and at the projects they built—how complex and useful they were.” Gift says they were able to build complete web apps with chatbots in them.

    :mlp_shrug:

    Not everyone was happy with Gift’s syllabus change, however. He says that one of his teaching assistants told new students not to use Copilot because it was a crutch that would stop them from learning properly. Gift accepts that Copilot is like training wheels that you might not want to take off. But he doesn’t think that’s a problem: “What are we trying to do? We’re trying to build complex systems.” And to do that, he argues, programmers should use whatever tools are available.

    If you strip out the hype, this stuff does sound like the next level of autocomplete. Plenty of :belt_onion:s around here no doubt remember people bitching about how that would make everyone soft, but man is it ever a PITA to code without that sort of thing.

    About to start using one of these myself at work. Anyone else doing this yet?



  • @boomzilla said in I, ChatGPT:

    About to start using one of these myself at work. Anyone else doing this yet?

    We use vim (ok, VSCode too) and build in docker images. What is this intellisense you speak of...


  • ♿ (Parody)

    @dcon VS Code definitely has it, via extensions.



  • @boomzilla said in I, ChatGPT:

    @dcon VS Code definitely has it, via extensions.

    Oh I know that. It just works for 💩 in our setup.



  • @boomzilla I have used github copilot a bit and to me it feels like having an over-excited, overconfident but ignorant newbie beside me, constantly shouting "I know, I know!" and randomly grabbing the keyboard from me whenever I press TAB. It's handy for some more trivial things that need a lot of repetitive boilerplate, but now I've mostly turned it off because I found it really annoying for any non-trivial things. Main problem is that you have to read and check very carefully whatever it vomits into the code; it looks plausible enough but often has all kinds of mistakes that I wouldn't myself make, some of them easy to spot but others much less so. So in the end I still have to think the problem through in detail myself, and at that point I'd rather just write the code myself.

    That said, I do driver development in C++, for a niche market that requires a certain amount of domain knowledge. Maybe for more common things like doing yet another online store or basic CRUD web app it would be much more useful.


  • ♿ (Parody)

    @ixvedeusi yeah, I'm pretty wary of that sort of thing. I'm hoping it can at least do a lot of the drudgery of, e.g., setting up unit tests. The thing we're using gets trained on your code base, so hopefully that improves its output.
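
    To make the hope concrete: this is the sort of repetitive test scaffolding these tools tend to be genuinely good at filling in. A toy sketch in Rust (the language from TFA); the `parse_port` function is made up for illustration, not from anyone's actual code base.

    ```rust
    // Hypothetical function under test.
    fn parse_port(s: &str) -> Option<u16> {
        s.trim().parse().ok()
    }

    // The drudgery part: near-identical test cases, one per edge case.
    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn accepts_plain_number() {
            assert_eq!(parse_port("8080"), Some(8080));
        }

        #[test]
        fn trims_whitespace() {
            assert_eq!(parse_port(" 443 "), Some(443));
        }

        #[test]
        fn rejects_garbage() {
            assert_eq!(parse_port("not a port"), None);
        }

        #[test]
        fn rejects_out_of_range() {
            // 70000 doesn't fit in a u16, so parse() fails.
            assert_eq!(parse_port("70000"), None);
        }
    }
    ```

    Once the first test exists, the remaining three are exactly the kind of "more of the same" an autocomplete can churn out.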



  • @dcon said in I, ChatGPT:

    @boomzilla said in I, ChatGPT:

    @dcon VS Code definitely has it, via extensions.

    Oh I know that. It just works for 💩 in our setup.

    Then you are probably missing something. VSCode has this ‘devcontainers’ feature where you just define which image to use (plus some other options); it will start a container, mount the working directory into it, and run all the extensions that work with the code inside. I use it most of the time and it works great (except on Windows, where mounting between Windows and WSL, where the Linux containers run, is slow, and I haven't moved my checkouts to the WSL disk yet).
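
    For the curious, a minimal sketch of such a setup. The file path is the standard one VS Code looks for; the image and extension ID are just common examples, not necessarily what your project needs.

    ```json
    // .devcontainer/devcontainer.json (VS Code accepts comments in this file)
    {
      "name": "my-project",
      "image": "mcr.microsoft.com/devcontainers/rust:1",
      "customizations": {
        "vscode": {
          // Extensions listed here get installed *inside* the container,
          // so they see the code and toolchain the container sees.
          "extensions": ["rust-lang.rust-analyzer"]
        }
      }
    }
    ```

    With that in place, "Reopen in Container" does the rest.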


  • Notification Spam Recipient

    @boomzilla I have ChatGPT. Quite handy for PoCs or answering Stack Overflow questions. Some of the stuff it produces is shocking, though. About as useful as SO when you have actual problems. Handy for explaining definitions though.

    I would have reservations about letting juniors at it. Especially if they’re prone to copying and pasting with nary a thought to how it works. That professor should be fired. Those students are going to be worse than useless.


  • BINNED

    @DogsB said in I, ChatGPT:

    I would have reservations about letting juniors at it. Especially if they’re prone to copying and pasting with nary a thought to how it works. That professor should be fired. Those students are going to be worse than useless.

    How's that different from every other plz send teh codez monkey who hasn't the faintest clue what the stuff they 🍝 from stackoverflow does.



  • @DogsB said in I, ChatGPT:

    Especially if they’re prone to copying and pasting with nary a thought to how it works.

    TFA mentioned the course was switched to Rust. Rust should be relatively robust against blind copy&pasting, because you won't get the code to compile until you understand it at least a bit, due to the picky type system. But then I wonder how much the copilot can actually help with Rust, because I'd expect getting the code to typecheck would be rather hard for a plain LLM.
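
    A toy illustration of that pickiness (not from TFA): Rust won't even divide an `i32` by a `usize` without explicit casts, so a blind paste that mixes types simply doesn't compile until you deal with them.

    ```rust
    // Sketch: the sort of type strictness that trips up blind copy-paste.
    // Rust never converts between numeric types implicitly.
    fn average(values: &[i32]) -> f64 {
        let sum: i32 = values.iter().sum();
        // `sum / values.len()` would be a compile error: i32 vs usize.
        // The casts below have to be written out explicitly.
        sum as f64 / values.len() as f64
    }

    fn main() {
        assert_eq!(average(&[1, 2, 3, 4]), 2.5);
        println!("average = {}", average(&[1, 2, 3, 4]));
    }
    ```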


  • Notification Spam Recipient

    @topspin said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    I would have reservations about letting juniors at it. Especially if they’re prone to copying and pasting with nary a thought to how it works. That professor should be fired. Those students are going to be worse than useless.

    How's that different from every other plz send teh codez monkey who hasn't the faintest clue what the stuff they 🍝 from stackoverflow does.

    You’re right. Send them all to the gulag.


  • ♿ (Parody)

    @Bulb said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    Especially if they’re prone to copying and pasting with nary a thought to how it works.

    TFA mentioned the course was switched to Rust. Rust should be relatively robust against blind copy&pasting, because you won't get the code to compile until you understand it at least a bit, due to the picky type system. But then I wonder how much the copilot can actually help with Rust, because I'd expect getting the code to typecheck would be rather hard for a plain LLM.

    Maybe it was able to give good explanations for the compiler errors. Knowing how to translate from compiler-speak is pretty important for any language.



  • @boomzilla I suspect it's more that there are enough examples online of “this is what an error looks like” followed by “this is why it is an error” that statistically it can tie the two together.

    That's also why they're good at writing boilerplate code: there are enough samples, and more is just, well, more of the same.


  • ♿ (Parody)

    @Arantor yep. So like I said, it's the next step up from standard autocomplete. It automates the process of you searching for some poor schlub who had to ask the question on Stack Overflow and reading through the answers.



  • @ixvedeusi said in I, ChatGPT:

    Main problem is that you have to read and check very carefully whatever it vomits into the code; it looks plausible enough but often has all kinds of mistakes that I wouldn't myself make, some of them easy to spot but others much less so. So in the end I still have to think the problem through in detail myself, and at that point I'd rather just write the code myself.

    I feel like this is the elephant in the room. Either you know what you're doing, and you don't really need more than autocomplete to do the typing for you. Or you don't, and AI is a terrible tool to use in that case: you don't know if what it tells you is right, completely wrong, or (worse) almost-right-but-not-quite.

    We all like to joke about Stack Overflow, but you can actually find some competent people there, and it's usually pretty easy to tell who are the folks who know what they're talking about. Whereas AI can generate very believable, authoritative-sounding nonsense.

    But yeah, sure, I'm kind of old-fashioned about wanting to write software that's not a house of cards. So sue me. :belt_onion:


  • Considered Harmful

    @ixvedeusi said in I, ChatGPT:

    @boomzilla I have used github copilot a bit and to me it feels like having an over-excited, overconfident but ignorant newbie beside me, constantly shouting "I know, I know!" and randomly grabbing the keyboard from me whenever I press TAB. It's handy for some more trivial things that need a lot of repetitive boilerplate, but now I've mostly turned it off because I found it really annoying for any non-trivial things. Main problem is that you have to read and check very carefully whatever it vomits into the code; it looks plausible enough but often has all kinds of mistakes that I wouldn't myself make, some of them easy to spot but others much less so. So in the end I still have to think the problem through in detail myself, and at that point I'd rather just write the code myself.

    @LaoC said in I, ChatGPT:

    Nice example of how ChatGPT just manages to appear intelligent enough to make people believe its proposed solution was indeed a good solution.
    FsV9iRraIAAWcpL.jpg


  • Discourse touched me in a no-no place

    @DogsB said in I, ChatGPT:

    About as useful as SO when you have actual problems.

    Duh. Guess where it's going to have pulled most of its answer from.


  • ♿ (Parody)

    @Zerosquare said in I, ChatGPT:

    We all like to joke about Stack Overflow, but you can actually find some competent people there, and it's usually pretty easy to tell who are the folks who know what they're talking about.

    The main thing I get from Stack Overflow is learning about some method / configuration / etc that I didn't know about. If necessary I then have something good to search for and can find its documentation or whatever.


  • I survived the hour long Uno hand

    :rics_casino.pikachu:



  • We have a context-sensitive policy. We can use it for research, presentations, polishing emails, general coding for, say, a script to parse log files, and stuff like that. We even host it internally, so we can give it proprietary stuff.

    OTOH, using it for anything (i.e., code) that is used in a product is strictly verboten!


  • Discourse touched me in a no-no place

    @HardwareGeek said in I, ChatGPT:

    OTOH, using it for anything (i.e., code) that is used in a product is strictly verboten!

    To put it another way, the AI isn't the one certifying that the code is fit for any kind of purpose at all.


  • ♿ (Parody)

    @dkf what a coincidence, neither am I!


  • Notification Spam Recipient

    article @izzion posted in I, ChatGPT:

    :rics_casino.pikachu:

    After all, recognizing hand gestures as a game so quickly is actually really impressive for a multimodal model! So is making a judgment call on whether a half-finished picture is a duck or not!

    The illusions thread is :arrows:


  • I survived the hour long Uno hand

    Wow, who could have expected a wealthy startup guru with a god complex to behave like a dick :surprised-pikachu:


  • Notification Spam Recipient

    I’d be more worried about the probability of your business going against the wall in six months.


  • Notification Spam Recipient

    @DogsB said in I, ChatGPT:

    going against the wall in six months

    My current mental state generated the following from this statement:

    harder, daddy!



  • @Tsaukpaetra said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    going against the wall in six months

    My current mental state generated the following from this statement:

    harder, daddy!

    Why did I think downloading Tsaukpaetra_vicuna-13B-v1.0 would be a good idea?



  • @DogsB P(doom) = 1 if you think long term enough.

    AI is competing with a lot of other things on that front. I'm not convinced AI will win.

    IMNSHO: Silly Valley Bros thinking that they might be the ones dooming humankind is them vastly overestimating their own importance.


  • Considered Harmful

    When I think "responsible AI", I think "Facebook"!
    And "Nick Clegg".



  • @cvi said in I, ChatGPT:

    Silly Valley Bros thinking that they might be the ones dooming humankind is them vastly overestimating their own importance.

    Probably, but can you be sure? It's fun to watch someone shoot himself in the foot, but only if you're not on the trajectory of the bullet.


  • Discourse touched me in a no-no place

    @LaoC said in I, ChatGPT:

    When I think "responsible AI", I think "Facebook"!
    And "Nick Clegg".

    :objection: Nobody ever thinks "Nick Clegg".



  • @dkf said in I, ChatGPT:

    @LaoC said in I, ChatGPT:

    When I think "responsible AI", I think "Facebook"!
    And "Nick Clegg".

    :objection: Nobody ever thinks "Nick Clegg".

    Not voluntarily.



  • @Zerosquare Making the world more shit is not the same as dooming it. And we already have Facebook, TikTok and smart fridges, so those aren't hypothetical any more.

    Edit: I also asked Bing about P(doom). IIRC, it claimed that the artificial experts in AI overestimate that number by a factor of 3 or 5 over other people. (It then also claimed that AI isn't dangerous at all and we shouldn't be worried (but please subscribe to the monthly plan), so there's that.)



  • The problem may be less about AI itself, and more about its (ab)use. Greedy, short-term thinking managers have the potential to do lots of damage with it. Not to the point of dooming humanity, but enough to make life worse for lots of people. As if things were not bad enough as they are.

    Kinda like outsourcing.



  • For me it's less about making the world more shit - content mills already existed and social media already proved how much people can mindlessly 'create content' - but much more about the rapidity of it.

    Content milling books onto Amazon Kindle wasn't that quick in many cases - people like Barbara Cartland churning out a new book every fortnight weren't common, but people are now able to churn out a new book every few minutes with the generative tools.

    Only this week I saw someone being very excited about being able to produce 100 images in Stable Diffusion every single second.

    It's one thing to have a rising tide of human-produced shit where every third person is running a side grift, but it's quite another for it to be automated entirely.

    It's not enough that enshittification is everywhere and creeping up to 11, but that the gotta go faster is making it so we're going to fucking drown in it.


  • Considered Harmful

    @Arantor said in I, ChatGPT:

    so we're going to fucking drown in it

    I've just recently had the shower thought that, as much garbage as is flowing in SE Asian rivers, it's not going to be literal garbage we're going to drown in. It's going to be something worse.



  • @Zerosquare said in I, ChatGPT:

    Greedy, short-term thinking managers have the potential to do lots of damage with it. Not to the point of dooming humanity, but enough to make life worse for lots of people.

    And then the barbarians come, conquer us, destroy all the machines and the circle starts again. Like always.



  • @Arantor content mills will just make content more centralized on the places that you know are good


  • BINNED

    @Arantor I recently saw someone (I very much don’t remember the details, maybe it was here 8 posts ago) argue that not only it’s going to get worse, it needs to get worse. If we crank up enshittification from 11 to over 9000, maybe it’ll be so obvious for everyone that finally the whole thing crashes and burns :hanzo: Bulb the Barbarians come.



  • I love this. In one post we have an 'everything is great and it's only going to get better' immediately followed by 'not only is it going to get worse, it needs to, eventually it will crash and burn and the barbarians will come'.

    The juxtaposition is incredible.

    Maybe when we invent AGI it'll put us out of our misery anyway.



  • @topspin We learned to fix every problem with rules, regulations and procedures, but we can't fix having too many rules, regulations and procedures that way. So the system gets more and more complicated until nobody knows what they are doing and, worse, nobody knows what other people are doing. That opens more room for abuse by various psychopaths, making society at large less and less efficient until the whole thing collapses due to some external factor (which may or may not be actual barbarians). And then it will slowly start to be built up again.


  • ♿ (Parody)

    @Bulb said in I, ChatGPT:

    @Zerosquare said in I, ChatGPT:

    Greedy, short-term thinking managers have the potential to do lots of damage with it. Not to the point of dooming humanity, but enough to make life worse for lots of people.

    And then the barbarians come, conquer us, destroy all the machines and the circle starts again. Like always.

    As long as the spice still flows.



  • @Zerosquare said in I, ChatGPT:

    thinking managers

    :doubt:



  • @Bulb said in I, ChatGPT:

    We learned to fix every problem with rules, regulations and procedures, but we can't fix too many rules, regulations and procedures that way.

    We need a rule against making more rules.



  • @topspin said in I, ChatGPT:

    @Arantor I recently saw someone (I very much don’t remember the details, maybe it was here 8 posts ago) argue that not only it’s going to get worse, it needs to get worse. If we crank up enshittification from 11 to over 9000, maybe it’ll be so obvious for everyone that finally the whole thing crashes and burns :hanzo: Bulb the Barbarians come.

    <insert comic about aliens arriving to conquer and the human says "thank god">



  • @HardwareGeek said in I, ChatGPT:

    @Bulb said in I, ChatGPT:

    We learned to fix every problem with rules, regulations and procedures, but we can't fix too many rules, regulations and procedures that way.

    We need a rule against making more rules.

    And, thus, HAL 9000 is born.



  • @topspin said in I, ChatGPT:

    @Arantor I recently saw someone (I very much don’t remember the details, maybe it was here 8 posts ago) argue that not only it’s going to get worse, it needs to get worse. If we crank up enshittification from 11 to over 9000, maybe it’ll be so obvious for everyone that finally the whole thing crashes and burns :hanzo: Bulb the Barbarians come.

    Neal Stephenson uses that in Fall; or, Dodge in Hell as one of the two events that burn down the old Internet.

