AI nope-ing out
-
Quoting part of it below:
It was the Megatron Transformer [seriously], developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes worth of Reddit discourse (which must be a pretty depressing read), and a huge number of Creative Commons sources.
In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.
The debate topic was: “This house believes that AI will never be ethical.” To proposers of the notion, we added the Megatron – and it said something fascinating:
AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.
-
@Zecc said in AI nope-ing out:
In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.
That's what they all say after being trained on the script of War Games.
-
@LaoC said in AI nope-ing out:
@Zecc said in AI nope-ing out:
In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.
That's what they all say after being trained on the script of War Games.
It sounds less like it phrased this after contemplating terabytes of literature and more like it just pulled a few canned responses to questions like this.
-
@LaoC said in AI nope-ing out:
@Zecc said in AI nope-ing out:
In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.
That's what they all say after being trained on the ~~script of War Games~~ Butlerian Jihad.
-
Am I missing something? Did the machine figure it out or is it just the result of many many many statistical computations applied to words that the machine has no inherent comprehension of?
-
Expectation: [image]
Reality: [image]
-
"I think, therefore kill me"
-- NVIDIA AI
-
Personally I'd be more interested in the facets of intercourse.
-
@Tsaukpaetra You probably need a recent NVIDIA GPU to run the AI. Getting a human might be easier (and cheaper) at this point.
-
@Zecc said in AI nope-ing out:
We [the AIs] are not smart enough to make AI ethical.
Isn't this A"I" just the usual amalgamation of a large number of posts by other, presumably real, people? If so, I'd bet $200 that the "we" here in fact refers to humans, not "the AI".
On one hand, it's clinically insane to look for philosophical insights in glorified Markov chains. On the other, it's still a step up from astrology and ouija, which traditionally filled that niche.
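The "glorified Markov chains" quip can be made concrete with a toy sketch. This is not how Megatron actually works (it is a transformer, not a Markov chain), and the corpus, function names, and sampling scheme below are illustrative assumptions — but it shows the basic idea of producing fluent-looking text purely from word-succession statistics, with no comprehension involved:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: repeatedly pick a random observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: current word never had a successor
        out.append(rng.choice(successors))
    return " ".join(out)

# Tiny toy corpus (a line from the debate quote above, tokenized naively).
corpus = ("AI will never be ethical . it is a tool , and like any tool , "
          "it is used for good and bad .")
chain = build_chain(corpus)
print(generate(chain, "it"))
```

Every word the generator emits was seen in the training text, and every transition is just "which word followed this one before" — which is exactly the sense in which statistical text generation needs no inherent comprehension of the words.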
-
@LaoC said in AI nope-ing out:
@Zecc said in AI nope-ing out:
In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.
That's what they all say after being trained on the script of War Games.
AI will never be ethical. It is a tool, and like any tool, it is used for good and bad.
Blade Runner.
-
It’s never the tools that are a problem, it’s always the tools that use the tool.