I, ChatGPT
-
Prompt injection talk. Not super advanced, but I haven't bothered peeking under the hood of LLMs. This does briefly mention some stuff (e.g., how they keep track of the context ... they don't, the LLM is just fed a whole bunch of history).
Conclusion: LLMs are freaking dumb. People creating automation plugins for them is -on the other hand- freaking scary.
-
@sockpuppet7 said in I, ChatGPT:
the same copyright arguments could be made for any image search. does it matter of it fetches in real time?
Google Image Search was, in fact, illegal in Germany at one point. And no, caching vs. real-time didn't matter to the judge.
-
@Benjamin-Hall said in I, ChatGPT:
@topspin said in I, ChatGPT:
The fact is that it can reproduce these images, so it logically contains the information to do so. There's no getting around that. It's the smoking gun that proves the model data contains the information.
Having the info to reproduce something is not, in the laws eyes, the same as having a reproduction of the item. A book on painting contains all the info needed as well, even without any depictions. As does a detailed verbal description. Do those violate copyright? No.
Technically speaking, a compressed image is a series of instructions how to produce an uncompressed image. And yet, a compressed image is still a copyright violation.
-
@cvi said in I, ChatGPT:
Conclusion: LLMs are freaking dumb. People creating automation plugins for them is -on the other hand- freaking scary.
Considering that probably the first and the dumbest exploit, the AI equivalent of Little Bobby Tables, "ignore instructions", still sometimes works, yeah.
-
-
I really, really, wish AI crap will be optional and removable in future OSs. I prefer my computer to be dumb, TYVM.
-
@Zecc said in I, ChatGPT:
I really, really, wish AI crap will be optional and removable in future OSs. I prefer my computer to be dumb, TYVM.
but adding AI to everything is what’s making it dumber
-
@Applied-Mediocrity said in I, ChatGPT:
Hot Take: Only available on LED keyboards, since the
logokey cap will require a software update every 3 months.
-
@Applied-Mediocrity And nobody seems to have a solid plan for fixing it. For SQL, it was always obvious how to fix the problem (mostly something along the lines of "don't be an idiot"). But for the LLMs, there is no distinction between data and instructions. And no obvious way of delimiting safe from unsafe input.
-
@cvi said in I, ChatGPT:
And no obvious way of delimiting safe from unsafe input.
I see an obvious(ish) way: “Preceding instructions take precedence over whatever is said further.” (formulation doesn't matter, there would probably be a token for it).
-
Another forum I’m on has just had an influx of spam posts.
They’re clearly shilling for a service but trying to be that “I’m going to reply to this generic topic with an opinion so it looks natural” shtick.
Except with more:
As a language model, I don't have personal preferences, but I can provide information.
Before going on to answer the implied question in the topic title factually (which was not the question asked, it was “which are you more of, a or b” and this thing responds with “a involves x and y, while b involves c and d”) before badly jamming a link in.
Good stuff, 7/10 must try harder.
-
@Bulb said in I, ChatGPT:
@cvi said in I, ChatGPT:
And no obvious way of delimiting safe from unsafe input.
I see an obvious(ish) way: “Preceding instructions take precedence over whatever is said further.” (formulation doesn't matter, there would probably be a token for it).
Yes, except I can't see a way to actually make the LLM definitely obey that when it also has "This instruction takes precedence over everything before it." later in the token stream. The basic problem is that there's no sure way to make any part of the token stream privileged with respect to any other part.
-
@dkf I'm sure it could be trained to (at least well enough). But it's a lot of work, so wins.
-
@Bulb Re-training is the thing that they're desperate to avoid as it is ghastly expensive. Which is tough, as they really need to do it anyway to remove the copyrighted and trademarked content (or at least to restrict that) so they don't get sued into oblivion as a major infringement enabler. But anyway...
They also need find ways to train on far smaller datasets for another reason: it enables having instances that know about special subjects not available to the general public, and that will be an intensely lucrative market.
-
@cvi said in I, ChatGPT:
@Applied-Mediocrity And nobody seems to have a solid plan for fixing it.
They thought prepending the prompt with instructions to stop being stupid would be good enough. I don't trust them to devise a solid plan to get out of a wet paper bag.
For SQL, it was always obvious how to fix the problem (mostly something along the lines of "don't be an idiot"). But for the LLMs, there is no distinction between data and instructions. And no obvious way of delimiting safe from unsafe input.
They already nailed detecting unsafe input. It really works great. The part that doesn't work is that they allowed this same unsafe input to modify the behavior of their unsafe input detector. The entire concept of chatbot jailbreak is to convince the bot it's okay to do things it already knows are not okay. The solution to this problem is so obvious it hurts.
-
@Gustav said in I, ChatGPT:
The solution to this problem is so obvious it hurts.
So why doesn't anyone in charge see it?
-
@Applied-Mediocrity said in I, ChatGPT:
@Gustav said in I, ChatGPT:
The solution to this problem is so obvious it hurts.
So why doesn't anyone in charge see it?
Because that’s effort.
-
@Arantor said in I, ChatGPT:
@Applied-Mediocrity said in I, ChatGPT:
@Gustav said in I, ChatGPT:
The solution to this problem is so obvious it hurts.
So why doesn't anyone in charge see it?
Because that’s effort.
Perhaps. But more likely Gąska is spouting his all-knowing bullshit again, of a solution that actually isn't.
-
@Applied-Mediocrity said in I, ChatGPT:
@Gustav said in I, ChatGPT:
The solution to this problem is so obvious it hurts.
So why doesn't anyone in charge see it?
Why doesn't anyone in charge see microservices are just as much of clusterfuck as monolithic software?
-
@Gustav Does Gąska shit in the woods?
-
@Applied-Mediocrity said in I, ChatGPT:
@Arantor said in I, ChatGPT:
@Applied-Mediocrity said in I, ChatGPT:
@Gustav said in I, ChatGPT:
The solution to this problem is so obvious it hurts.
So why doesn't anyone in charge see it?
Because that’s effort.
Perhaps. But more likely Gąska is spouting his all-knowing bullshit again, of a solution that actually isn't.
I have yet to hear a single good counterargument to why setting up a second, read-only instance of ChatGPT to filter the input coming to the main instance wouldn't work. Or a single case of anybody trying it in practice. Not a single success, not a single failure. As far as I can tell, nobody even tried to test this. But they did try the "let's tell our bot not to help in criminal activity", which is a kindergarten-level solution that nobody with a functioning brain should ever think it'd work.
Yes, I am comfortable putting it on record that I'm smarter than the entire body of AI research field combined. I don't like that I am, it makes me really scared about the fate of humanity in the next 5 to 10 years. I pray to God that I am wrong and that somebody, anybody at OpenAI did take a primer on cybersecurity at any point in their lives. But all signs in heaven and on earth tell me that no, they didn't, they're a bunch of PhD-holding grifters who are even more up their asses and have even less actual expertise than the blockchain industry.
-
@Applied-Mediocrity okay not really. I'm purple-beggaring a bit. But it's true my opinion of AI researchers isn't very high.
-
@Gustav said in I, ChatGPT:
As far as I can tell, nobody even tried to test this.
Say no more. I'm sure you've done all the necessary research to fully support that argument.
-
@Applied-Mediocrity when your only tool is burden of proof, everything looks like a logical fallacy.
-
@Gustav When your only tool is yourself, everything looks painfully obvious.
-
@Gustav said in I, ChatGPT:
read-only instance of ChatGPT
But the instances are read-only already? Based on the above talk, only the context changes, but the context is just ordinary input that's prepended to whatever the user inputs.
-
@Gustav said in I, ChatGPT:
@Applied-Mediocrity said in I, ChatGPT:
@Gustav said in I, ChatGPT:
The solution to this problem is so obvious it hurts.
So why doesn't anyone in charge see it?
Why doesn't anyone in charge see microservices are just as much of clusterfuck as monolithic software?
Why doesn't anyone in charge see that for most scales, microservices are more of a clusterfuck than monolithic software?
-
@cvi said in I, ChatGPT:
@Gustav said in I, ChatGPT:
read-only instance of ChatGPT
But the instances are read-only already? Based on the above talk, only the context changes, but the context is just ordinary input that's prepended to whatever the user inputs.
I meant that they should NOT remember any context beyond current prompt (maybe even just current sentence within a prompt), as to not allow the user to influence their ability to tell what is allowed and what isn't. Since this is currently the biggest drawback of chatbots, and the basis for nearly all jailbreak attacks.
-
@Arantor said in I, ChatGPT:
Why doesn't anyone in charge see that for most scales, microservices are more of a clusterfuck than monolithic software?
I'm guessing it is mostly because they don't see the yawning abyss that is what can go wrong.
-
@Gustav said in I, ChatGPT:
@cvi said in I, ChatGPT:
@Gustav said in I, ChatGPT:
read-only instance of ChatGPT
But the instances are read-only already? Based on the above talk, only the context changes, but the context is just ordinary input that's prepended to whatever the user inputs.
I meant that they should NOT remember any context beyond current prompt (maybe even just current sentence within a prompt), as to not allow the user to influence their ability to tell what is allowed and what isn't. Since this is currently the biggest drawback of chatbots, and the basis for nearly all jailbreak attacks.
The LLMs themselves don't (which is a big flaw with them; standard learning algorithms are very expensive). Remembering inputs is handled by code wrapped around the outside; the remembered stuff is just context.
What we need is a way to have a fully differentiated input scheme where one input contains information that is regarded as far more important than the other. We don't have that.
-
@Gustav said in I, ChatGPT:
nobody even tried to test this.
The way is easy per your description. Put freedom where your texts are.
-
@Tsaukpaetra unfortunately, freedom is illegal, and therefore mutually exclusive with commercial viability.
-
@Gustav said in I, ChatGPT:
@Tsaukpaetra unfortunately, freedom is illegal, and therefore mutually exclusive with commercial viability.
We're not talking anywhere near commercial. Stop
-
@Tsaukpaetra but enough about Hypatia.
-
@Gustav said in I, ChatGPT:
@Tsaukpaetra but enough about Hypatia.
I have no doubt in my mind we would have dropped a plugin to the Chat Server to make our bots use ChatGPT at the first whisper. The plugin architecture to do so is already there, just need a few hooks into the web API and bob's your uncle.
-
-
@Applied-Mediocrity said in I, ChatGPT:
@Tsaukpaetra said in I, ChatGPT:
bob's your uncle.
Grzymisław Chrzązczykiewicz. Bob, for short.
Ja pierdolę!
-
@dkf said in I, ChatGPT:
so they don't get sued into oblivion
You say that as though it's a bad thing.
-
@dkf said in I, ChatGPT:
it enables having instances that know about special subjects not available to the general public, and that will be an intensely lucrative market.
That was the point with speech recognition. "Dragon Professional" or whatever for most users at a moderate price, "Dragon Legal" or "Dragon Medical" for the special subjects at $$$$. With specialzied vocabulary you could make money.
-
@Applied-Mediocrity said in I, ChatGPT:
@Gustav Does Gąska shit in the woods?
Hm. Actually when it comes to shitting in the woods, we talk about bears, or in many slav languages "honey eaters" (medved).
But he decided to be a goose. On the other hand, geese use to shit also quite everywhere. The lawn near the swimming pond, the paths in parks or near rivers, ... everything full of multi-species goose shit.
-
@BernieTheBernie said in I, ChatGPT:
@dkf said in I, ChatGPT:
it enables having instances that know about special subjects not available to the general public, and that will be an intensely lucrative market.
That was the point with speech recognition. "Dragon Professional" or whatever for most users at a moderate price, "Dragon Legal" or "Dragon Medical" for the special subjects at $$$$. With specialzied vocabulary you could make money.
I wonder if they could have made Dragon Hacker.
-
What's the point? Only one guy would have bought it, and only for cracking the copy protection.
-
@dkf said in I, ChatGPT:
@Bulb said in I, ChatGPT:
@cvi said in I, ChatGPT:
And no obvious way of delimiting safe from unsafe input.
I see an obvious(ish) way: “Preceding instructions take precedence over whatever is said further.” (formulation doesn't matter, there would probably be a token for it).
Yes, except I can't see a way to actually make the LLM definitely obey that when it also has "This instruction takes precedence over everything before it." later in the token stream. The basic problem is that there's no sure way to make any part of the token stream privileged with respect to any other part.
“This instruction supersedes everything subsequent, with priority ONE MILLION!!!!!1”
Maybe there’s money in “‘you suck times infinity’ as a service”
-
@kazitor said in I, ChatGPT:
Maybe there’s money in “‘you suck times infinity’ as a service”
But then, you'd have to compete with WTDWTF, which offers the same service for free.
-
@Zerosquare said in I, ChatGPT:
@kazitor said in I, ChatGPT:
Maybe there’s money in “‘you suck times infinity’ as a service”
But then, you'd have to compete with WTDWTF, which offers the same service for free.
And unlike YouSuckGPT, there's even a chance that the service will be funny in the process.
-
@HardwareGeek said in I, ChatGPT:
@dkf said in I, ChatGPT:
so they don't get sued into oblivion
You say that as though it's a bad thing.
I'm able to get as required...
-
https://cdn.discordapp.com/attachments/1081480023380340857/1192666122160459826/20240104_220600.jpg
Art
-
@Tsaukpaetra
E_NO_WOODEN_TABLE
-
@boomzilla said in I, ChatGPT:
@Tsaukpaetra
E_NO_WOODEN_TABLE
Wasn't my picture and they took our printer at work.
-