@Applied-Mediocrity said in I, ChatGPT:
AI is being shoehorned everywhere and treated as a general solution to all problems. Legal? Probably not. But in other domains it has been and absolutely will be considered airtight (with disastrous results; one with cancer patients, I believe), because people just don't care and want to trust the machine. Pray tell, Mr. Babbage, and all that.
Fair enough, but I was talking about this specific application (reading T&Cs). I agree that "AI" (which doesn't have much "I" and is about as "A" as anything else that runs on a computer) is sometimes seen as a solution to everything. But the fact that this is wrong does not mean that AI can't be a solution to some things. And reading text seems to me very much what an LLM is designed to do, so that seems a rather good application of it.
With the crucial difference being that once you implement an algorithm, for all datasets it will return the only possible answer, and unexpected inputs will either crash the program or get discarded early. It may be wrong, but every step of the way can be formally verified.
Meh. You've probably not dealt with a lot of complex scientific (or, maybe, I should say "scientific") models. A lot of them are not built from first principles and equations, but include a lot of empirical knowledge (aka heuristics aka fudge factors...). So "formally verifying" them... yeah, good luck with that.
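To give a concrete flavor of what I mean by "empirical knowledge" (this is a textbook example, not one from my own field): the Dittus-Boelter correlation for turbulent heat transfer in pipes. The constants 0.023, 0.8 and 0.4 were fitted to experimental data, not derived from first principles:

```python
def nusselt_dittus_boelter(reynolds: float, prandtl: float) -> float:
    """Nusselt number for turbulent flow in a smooth pipe (fluid being heated).

    The coefficients 0.023, 0.8 and 0.4 are purely empirical -- fitted
    to measurements, i.e. exactly the kind of "fudge factors" that make
    formal verification of the model as a whole meaningless.
    """
    return 0.023 * reynolds**0.8 * prandtl**0.4

# Typical turbulent air flow: Re = 1e5, Pr = 0.7
nu = nusselt_dittus_boelter(reynolds=1e5, prandtl=0.7)
```

You can formally verify that the code computes that formula correctly, sure. But the formula itself is only "correct" in the sense that it matched the experiments it was calibrated on, within some error band.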
Using LLMs is basically admitting that rather than trying to describe the steps of solving complex problems, we - fuck it - will relax the conditions and accept impure results.
That is not specific to LLMs, at all. Before we all talked about LLMs the hype was about AI in general, and before that ML (machine learning), and before that Big Data. But at their core, "big data" and "machine learning" can be nothing more than doing a linear regression through data -- and pretty much any real-world model includes a linear regression somewhere. Sure, LLMs go further than that. But non-LLM models also go further than that. My point is that using a model that we know is imprecise and calibrated only on real data (as opposed to derived from some theoretical background) is something that has been done since... forever. LLMs aren't that different in that regard.
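A minimal sketch of that "core" (synthetic data, made up for illustration): "machine learning" stripped down to an ordinary least-squares fit, calibrated purely on observed data rather than any theory:

```python
import numpy as np

# Synthetic "real world" observations: a linear trend plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Fit y ~ slope * x + intercept by least squares. The model "knows"
# nothing about why the data looks like this; it is only calibrated to it.
slope, intercept = np.polyfit(x, y, deg=1)
```

The fitted coefficients are imprecise and data-dependent by construction -- which is precisely the kind of model we've been happily using for decades.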
It may be acceptable for toy problems and toy purposes, but it sets a dangerous precedent. People want to use technology. Outputs will be fed to other data systems, some of which may also be LLMs. GIGO all the way down.
Yeah, lolz... if you knew the "models" that go into my day-to-day work, and what parts of our society rely on them, maybe you wouldn't be so afraid of LLMs being misused.