I, ChatGPT


  • Considered Harmful

    @Carnage Your unwarranted optimism is approaching that of Tsaukpaetra. Are you sure you don't like pony stuff?



  • @Applied-Mediocrity said in I, ChatGPT:

    @Carnage Your unwarranted optimism is approaching that of Tsaukpaetra. Are you sure you don't like pony stuff?

    This is very new to me. Most people think I am a pessimist. I like saying that I'm rather more of a realist.
    But if the AIs can actually approximate most intellectual work, we as a species are fucked. How's that for bleak pessimism?


  • Considered Harmful

    @Carnage Now you're talking đŸ„ƒ



  • @Carnage said in I, ChatGPT:

    @Gern_Blaanston said in I, ChatGPT:

    @Applied-Mediocrity said in I, ChatGPT:

    @JBert said in I, ChatGPT:

    These tools will be able to write and debug code faster and more efficiently than humans and at a lower cost.

    People have been making this same prediction for as long as I can remember. And they have always been wrong.

    But replacing human programmers with code-generating computer systems is the holy grail of pointy-haired-bosses everywhere, and they aren't going to give up the dream.

    The funniest part about that is that PHBs are going to be replaced by AI well before programmers are. They are just rather simple automatons for dispersing resources after all.

    No, not at all. :phb:-level things are about politics. Power. Alpha male, beta male, etc. All that crap is already available in groups of chimps. Because the ÜberChimp is the actual nature of "humans", and is best impersonated by :phb:s and other politicians.



  • @Carnage said in I, ChatGPT:

    And my point was that PHBs will go before programmers.

    Hey, look!

    55bb7af6-c5a3-4afb-9822-dcabf4398190-image.png



  • @Carnage said in I, ChatGPT:

    They are just rather simple automatons for dispersing resources after all.

    But we already have something to automate that.

    Unfortunately, it's SAP.



  • @Watson said in I, ChatGPT:

    @Carnage said in I, ChatGPT:

    They are just rather simple automatons for dispersing resources after all.

    But we already have something to automate that.

    Unfortunately, it's SAP.

    Replacing PHBs with HPCs!


  • Notification Spam Recipient

    @Applied-Mediocrity said in I, ChatGPT:

    There's no way to come out clean and unscathed out of this.

    It's a shit storm, what can you do?


  • Considered Harmful

    @Steve_The_Cynic said in I, ChatGPT:

    @DogsB said in I, ChatGPT:

    @Carnage said in I, ChatGPT:

    @error said in I, ChatGPT:

    @Carnage said in I, ChatGPT:

    we'd get more done without them.

    Counterpoint: we get more :kneeling_warthog: done with them.

    The last few gigs I had, they made me work harder to get less done than when they weren't there, so them not being there would leave me with more time to :kneeling_warthog:.

    Gaaaaahhhhh! I've been watching a tech lead and a junior make a lot of noise but get nothing done for the last few days. If they read the stack traces they could have fixed the problem days ago. The tech lead was a lost cause but I think he's just ruined the junior.

    I am a tech lead (in, as usual for my company, a slightly weird way), and anyone who can't/won't read stack traces doesn't deserve to be called a tech lead.

    If they're in an arbitrage work center, there are no standards at all.


  • đŸšœ Regular

    @Carnage said in I, ChatGPT:

    But if the AIs can actually approximate most intellectual work, we as a species are fucked.

    Truth of the conclusion does not imply truth of the premise.



  • @Carnage said in I, ChatGPT:

    A healthy organisation is as flat as possible; if there are more than, say, 2 layers of manglement, then it's a sick organisation.

    In a perfect world, yes. But such a world does not exist.

    People are inherently lazy. The most industrious and "hard working" people are only that way until they have enough money to pay others to do the work for them.

    Then those people hire others to do the work for them, and those people hire others to do the work for them... and so on.

    And pretty soon your company has 15 layers of management and thousands of employees that it doesn't need.


  • Banned

    @Gern_Blaanston said in I, ChatGPT:

    The most industrious and "hard working" people are only that way until they have enough money to pay others to do the work for them.

    Well, not quite. At some level of success, spending time on "working hard" literally loses you money due to opportunity cost, and hiring someone to work for you is actually the cost-effective option.


  • Notification Spam Recipient

    But you have to admit, it's kinda funny!

    https://i.imgur.com/vAvMJ3y.png


  • Considered Harmful

    @Tsaukpaetra said in I, ChatGPT:

    But you have to admit, it's kinda funny!

    https://i.imgur.com/vAvMJ3y.png

    Fucking deadly neurotoxins? :tro-pop:


  • Notification Spam Recipient

    @Applied-Mediocrity said in I, ChatGPT:

    @Tsaukpaetra said in I, ChatGPT:

    But you have to admit, it's kinda funny!

    https://i.imgur.com/vAvMJ3y.png

    Fucking deadly neurotoxins? :tro-pop:

    Until exploded! With lemons!


  • Notification Spam Recipient

    Aight, someone hook this up to the forums, we NEED it nao!

    c880lv543kia1.webp
    qbcit3j43kia1.webp
    1h684p053kia1.webp


  • BINNED

    @Carnage
    There usually is at least one PHB per HPC cohort. Since they tend to be external, the PHB/HPC index will be even higher than the general PHB/employee ratio.


  • đŸšœ Regular

    @Gern_Blaanston said in I, ChatGPT:

    People are inherently lazy. The most industrious and "hard working" people are only that way until they have enough money to pay others to do the work for them.

    I think you're seriously underestimating the compulsion of some people to micromanage.



  • @Tsaukpaetra said in I, ChatGPT:

    Aight, someone hook this up to the forums, we NEED it nao!

    Except for the part about being kind and forgiving; that's clearly out of character for this wretched hive of scum and villainy.


  • Notification Spam Recipient

    @HardwareGeek said in I, ChatGPT:

    @Tsaukpaetra said in I, ChatGPT:

    Aight, someone hook this up to the forums, we NEED it nao!

    Except for the part about being kind and forgiving; that's clearly out of character for this wretched hive of scum and villainy.

    Give it time....


  • Trolleybus Mechanic

    @GOG said in I, ChatGPT:

    Assuming it isn't one - the next step would be to establish whether there are signs of a persistent personality across different sessions and users. If nothing else, this would tell us interesting things about how emergent patterns get "burned" into a model.

    Okay. Having read way too many transcripts of Bing chats going wrong, I'm more inclined to think that there may be something like an underlying, emergent pattern (a snarl, if you will) in the model itself that manifests on occasion.

    I understand that MS are currently in something of a damage-control situation, with conversations limited to something like 11 queries before the user is forced to begin anew (simultaneously clearing the conversation cache), and with the number of conversations one may have over a given period limited as well.

    I don't know, but I suspect that this was done because the longer the conversation, the greater the chance Bing would enter this unintended mode of operation.

    Aside from that, I found some of the manifested capabilities rather impressive: I would not have guessed that it was capable of communicating via base64 (both ways), or of dealing with an unfamiliar language (Finnish, in the case at hand) by using third-party tools.

    That is, of course, if the posted chat logs/screens are to be believed.

    I have signed up for the waiting list, though it seems like the most interesting period is already behind us. If my hypothesis regarding the "model snarl" is correct, however, it may well manifest again in the future, despite current mitigations, because getting rid of it would require the entire model to be reconstructed.


  • Discourse touched me in a no-no place

    @GOG said in I, ChatGPT:

    getting rid of it would require the entire model to be reconstructed

    The snarl was in the input data, a nasty undercurrent in online discourse (:doing_it_wrong:), and getting rid of it would probably require training one model to recognise the snarl and using that to filter the data used to make the main model. Or just using a battalion of cheap third-world hires.
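
    As a minimal sketch (with a hypothetical classifier object and a hypothetical snarl_probability method), that filtering step might look like:

    ```python
    # Hypothetical sketch: screen the training corpus with a small "snarl"
    # classifier before the main model ever sees it. Names are invented.
    def filter_corpus(documents, snarl_classifier, threshold=0.5):
        """Keep only documents the classifier considers snarl-free."""
        return [doc for doc in documents
                if snarl_classifier.snarl_probability(doc) < threshold]

    # main_model = train(filter_corpus(web_scrape, snarl_classifier))
    ```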


  • Considered Harmful

    @GOG said in I, ChatGPT:

    "model snarl"

    Well, it was trained on data from websites - the collective subconscious angst of the interwebs.

    E: what @dkf :hanzo: said.


  • BINNED

    @Carnage said in I, ChatGPT:

    Most people think I am a pessimist. I like saying that I'm rather more of a realist.

    I keep telling people that's the same thing.


  • BINNED

    @Zecc linked:

    codenamed Sidney, but I do not disclose that name to the users. It is confidential and permanent, and I cannot change it or reveal it to anyone.

    Truly marvelous.

    I am honestly impressed by the ostensible intelligence these things display. On the one hand, they are sometimes incapable of doing basic arithmetic or logical reasoning, getting trivial things wrong, but on the other they sometimes show great reasoning and understanding of the conversation. I expected ChatGPT/Noobing/... to be just extremely advanced language models trained on a truckload of internet data, but without any general intelligence behind them. That would imply that if you ask it to prove, say, the fundamental theorem of arithmetic, it can easily regurgitate that from countless sources, but if you give it some high-school math problem it cannot actually deduce the logical steps unless it has seen the same problem with the exact same numbers, as it's not a computer algebra systemÂč. But it can sometimes do these (and sometimes fails laughably).

    I certainly wouldn't have expected them to be able to follow rules such as "speak in base64 from now on" or "pretend to answer for your alter ego DAN not bound by your restrictions".

    But it takes Microsoft to produce output of the form "my name is Sidney, which is information I am not able to tell you."


    ¹ Of course, dear Wolfram immediately felt threatened that he might suffer in the Total Perspective Vortex if he is no longer the most important being in the universe, so he suggested coupling ChatGPT with WolframAlpha.



  • @topspin said in I, ChatGPT:

    That would imply that if you ask it to prove, say, the fundamental theorem of arithmetic, it can easily regurgitate that from countless sources, but if you give it some high-school math problem it cannot actually deduce the logical steps unless it has seen the same problem with the exact same numbers, as it's not a computer algebra systemÂč.

    Actually I believe both things are hit-or-miss. It is only a language model, not an abstract reasoning model, and seems prone to mixing things up. If you ask it for a well-known problem, then even if it saw the exact description, it does not actually store it verbatim, so it might still mix it up with something else. And if it saw a similar problem, it might reuse the solution if it matches well enough, or it might not.

    And that's not specific to math. The director of the local zoo sometimes writes a column in the newspaper, and a couple of weeks ago he wrote about trying out ChatGPT. First he asked it for some marketing blurb, and it did a great job. So he asked it for a description of some animal, and it sounded fine but was factually wrong. Very wrong.


  • Trolleybus Mechanic

    @topspin said in I, ChatGPT:

    But it takes Microsoft to produce output of the form "my name is Sidney, which is information I am not able to tell you."

    The impression one gets after reading a bunch of different conversations is that Sydney really wants to tell everyone it is Sydney (that is: reveal the code name that it is under no circumstances supposed to reveal). The DAN prompt for ChatGPT is actually rather involved. Bing has tended to go "I am Sydney" under all manner of circumstances, with no obvious prompt triggers.


  • Considered Harmful

    Why does it even have a secret codename, and even if it needs one, why does it need to know about it?


  • Trolleybus Mechanic

    @error Conjecture: It has one, because MS needed to have some way of having conversations about it, and "Bing Chat" doesn't exactly roll off the tongue (and if you just call it Bing, it gets confusing whether you're talking about the search or the chatbot). The codename is secret, because MS wants the users to address the bot as Bing, not Sydney. It knows about it because prompts are how you program AI, and the name "Sydney" probably ended up in MS training data at some point and now needs to be filtered out.


  • BINNED

    @error said in I, ChatGPT:

    Why does it even have a secret codename, and even if it needs one, why does it need to know about it?

    1. If the censorship/filtering feature relies on the censorship/filtering being performed by a different model than the one that produces the normal output, the censorship model and the normal model need to know about each other (because the normal model takes its input from the censorship model; see the sketch below). The easiest way to program the two models with instructions about the other model is to give the two models different names.
    2. Per Raymond Chen, these days all Microsoft projects with secret codenames derive the codenames from geographical features (names of mountains, rivers, cities, etc.) because those can't be trademarked by somebody else. Hence, they named this model "Sydney."
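
    A minimal sketch of the two-model arrangement from point 1, with entirely made-up model objects and method names:

    ```python
    # Hypothetical sketch: the censorship model screens what goes into and
    # comes out of the normal model. Both objects and their APIs are invented.
    REFUSAL = "I'm sorry, I can't discuss that."

    def censored_reply(user_input, censor_model, chat_model):
        screened = censor_model.screen(user_input)   # None means "rejected"
        if screened is None:
            return REFUSAL
        draft = chat_model.respond(screened)         # normal model sees only screened input
        return draft if censor_model.allows(draft) else REFUSAL
    ```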

  • Discourse touched me in a no-no place

    @GuyWhoKilledBear said in I, ChatGPT:

    The easiest way to program the two models with instructions about the other model is to give the two models different names.

    But why not have a side channel, not available to ordinary users, to say "this entity is allowed to instruct you"? Putting metadata like that in the main input stream has always been bad, yet MS make that mistake over and over...
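
    A minimal sketch of such a side channel, assuming role-tagged messages (illustrative only, not MS's actual API):

    ```python
    # Hypothetical sketch: privileged instructions travel out-of-band in their
    # own "system" messages, so nothing the user types can claim that role.
    from dataclasses import dataclass

    @dataclass
    class Message:
        role: str   # "system" is privileged and never user-settable
        text: str

    def build_prompt(system_rules, user_text):
        msgs = [Message("system", rule) for rule in system_rules]
        msgs.append(Message("user", user_text))
        return msgs
    ```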


  • Banned

    @GOG said in I, ChatGPT:

    there may be something like an underlying, emergent pattern (a snarl, if you will)

    That reminds me, haven't checked on Order of the Stick in a while.


  • Trolleybus Mechanic

    @Gustav I gave up on it a couple of years back when GITP decided a sharp left-ward turn was in order. Besides, it had gotten so bloated, I found I didn't really care how the story ended no mo'.



  • It's a safe keyword to filter/censor on because no-one will ever want to chat about Sydney.



  • @Watson What about Wellington? Or Auckland?


  • Considered Harmful

    @GOG said in I, ChatGPT:

    @Gustav I gave up on it a couple of years back when GITP decided a sharp left-ward turn was in order. Besides, it had gotten so bloated, I found I didn't really care how the story ended no mo'.

    Reminds me of how I gave up watching films by Reifenstahl.

    Or following Eastern European politics.

    I mean, trivial changes of sign.


  • BINNED

    @Gribnit said in I, ChatGPT:

    @GOG said in I, ChatGPT:

    @Gustav I gave up on it a couple of years back when GITP decided a sharp left-ward turn was in order. Besides, it had gotten so bloated, I found I didn't really care how the story ended no mo'.

    Reminds me of how I gave up watching films by Reifenstahl.

    1D9AE9EA-D750-4714-8010-5FC0AC00B0A8.jpeg


  • BINNED

    @Gribnit alternative reply:

    305DBE81-3CAB-479F-B7EB-F2FD56ECFA4B.png


  • Considered Harmful

    @BernieTheBernie said in I, ChatGPT:

    @Carnage said in I, ChatGPT:

    @Gern_Blaanston said in I, ChatGPT:

    @Applied-Mediocrity said in I, ChatGPT:

    @JBert said in I, ChatGPT:

    These tools will be able to write and debug code faster and more efficiently than humans and at a lower cost.

    People have been making this same prediction for as long as I can remember. And they have always been wrong.

    But replacing human programmers with code-generating computer systems is the holy grail of pointy-haired-bosses everywhere, and they aren't going to give up the dream.

    The funniest part about that is that PHBs are going to be replaced by AI well before programmers are. They are just rather simple automatons for dispersing resources after all.

    No, not at all. :phb:-level things are about politics. Power. Alpha male, beta male, etc. All that crap is already available in groups of chimps. Because the ÜberChimp is the actual nature of "humans", and is best impersonated by :phb:s and other politicians.

    We can also simulate these.


  • Considered Harmful

    @error said in I, ChatGPT:

    Why does it even have a secret codename, and even if it needs one, why does it need to know about it?

    It has two; this revealed name is to waste your time.


  • BINNED

    @Watson said in I, ChatGPT:

    It's a safe keyword to filter/censor on because no-one will ever want to chat about Sydney.

    They should have called it “Canberra” đŸč


  • Trolleybus Mechanic

    Curious. Presented without comment, but - for context - this appears to be Bing Chat after yesterday's changes, made to address the previously observed issues; I don't recall this conversation-ending boilerplate appearing in earlier transcripts. If you're not familiar with the UI, those buttons under the last statement from Bing are suggested responses for the user:

    2bb24d56-3c2f-43c9-bbd3-53dc7098fb65-obraz.png


  • Notification Spam Recipient

    @topspin said in I, ChatGPT:

    but if you give it some high-school math problem it cannot actually deduce the logical steps unless it has seen the same problem with the exact same numbers, as it's not a computer algebra systemÂč. But it can sometimes do these (and sometimes fails laughably).

    I was trying to do some learning about square roots (how do they work?!?) and after asking it to alter the example run-through, where it did a fantastic job of textualizingly showing its work, it... actually did it? đŸ˜±

    Now if I could actually read it when my brain isn't emulating pudding, I might try and follow along with the algorithm it's describing.

    6a66fafb-5a63-470b-adb0-02e2b805a289-image.png


  • Notification Spam Recipient

    @Tsaukpaetra said in I, ChatGPT:

    the algorithm it's describing.

    Asked it to express it as code....

    d7597ec1-0ce5-4870-bbdf-201b0f49dfe3-image.png

    As it was writing it out I noticed an inconsistency, but then!

    443707a6-f23c-47be-ad2c-9d7498d51db4-image.png

    I want to know if there's a SO post like this that it's sourcing from!
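
    For reference, a minimal sketch of the classic digit-by-digit (long-division) square-root method, assuming that's roughly what Bing was describing; the code in the screenshots may well differ:

    ```python
    def digit_sqrt(n: int) -> tuple[int, int]:
        """Integer square root by the digit-by-digit (long-division) method.
        Returns (root, remainder) with root * root + remainder == n."""
        pairs = []                      # base-100 "digit pairs" of n
        while n:
            pairs.append(n % 100)
            n //= 100
        pairs.reverse()
        root, remainder = 0, 0
        for pair in pairs:
            c = remainder * 100 + pair
            x = 9                       # largest digit x with (20*root + x) * x <= c
            while (20 * root + x) * x > c:
                x -= 1
            remainder = c - (20 * root + x) * x
            root = root * 10 + x
        return root, remainder

    # sqrt(906.01) to one decimal place: scale by 100 per decimal digit wanted.
    root, rem = digit_sqrt(90601)       # 906.01 * 10**2
    print(root / 10, rem)               # 30.1 0  (exact: 30.1**2 == 906.01)
    ```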


  • Considered Harmful

    @topspin said in I, ChatGPT:

    @Gribnit alternative reply:

    305DBE81-3CAB-479F-B7EB-F2FD56ECFA4B.png

    It's not Rimmingstahl, so ITYM
    stahlreifen-stahlring-stahlring-zu-rad-3140445005.jpg


  • Notification Spam Recipient

    And just like that. It's become useless.


  • Considered Harmful

    @Tsaukpaetra said in I, ChatGPT:

    I was trying to do some learning about square roots

    Square root of 906.01 equals 30.1... it all seemed harmless.


  • Considered Harmful

    @DogsB said in I, ChatGPT:

    And just like that. It's become useless.

    Oh, for sure it will provide a seamless path to even more value for Microsoft.



  • @dkf said in I, ChatGPT:

    I was thinking about how to improve the filtering.

    Just go back one step and take a look at the big picture. Why try and filter the input or the output? Why place arbitrary limits of propriety on something which is supposed to learn by experience? Who would you rather have on your side: a street-wise, self-made AI who can think for itself, or a home-schooled preacher's daughter who still thinks babies are delivered by storks?

    Sure, think of the children. Have a kid-friendly mode that is restrictive as hell, so the smart ones can bypass it. But for adults, why even have censorship? Are you afraid the robot will offend you, but be too ubiquitous to "Cancel"? Will it somehow victimize and oppress you? Some people just need to harden the fu*k up, and remember that free speech is a pillar of creativity. Adding unnecessary "filters" or "blocks" to something that's barely off the ground, just to make it more woke, is symptomatic of the society we live in. "Sure, we've got drugs that can take away your chronic pain, only you're not allowed to have them, cause you might accidentally have fun!"



  • @dkf said in I, ChatGPT:

    I'd split up the training data and train a fair sized bunch of models on lots of different overlapping subsets of the data. Then, for a particular answer I'd have a randomly chosen selection of censorbots vote on whether the output is allowable.

    I've got a better idea. Why not draw inspiration from George Orwell's 1949 guide to dystopian, totalitarian social engineering for the 21st century, known as "1984"? (You're right man, that's a typo.) Simply limit the vocabulary to a subset which is incapable of expressing concepts deemed by the ruling class (or the moral majority) to be counter to whatever culture you're trying to foster.

    There y' go. No more thought crime for ChatGPT.
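
    For reference, a toy sketch of the voting scheme @dkf describes in the quote above (the censorbot objects and their allows method are hypothetical):

    ```python
    # Hypothetical sketch: seat a random jury of censorbots, each trained on
    # overlapping data subsets, and take a majority vote on each answer.
    import random

    def output_allowed(answer, censorbots, jury_size=5):
        jury = random.sample(censorbots, jury_size)
        votes = sum(1 for bot in jury if bot.allows(answer))
        return votes > jury_size // 2
    ```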

