Internet of shit
-
@remi said in Internet of shit:
In many other domains, the current wisdom seems to be that for AI to work, you need to first reparameterise the problem in a way that's consistent with the underlying physics.
In a lot of neural networks, that's effectively what the first few layers are doing. It's just that the target parameter space is very abstract and it is very hard to explain what any of that means.
-
@dkf but AFAIUI this reparameterisation is basically done by the neural network itself, with maybe at best some hints from the user? What I was thinking about is more the user (developer) changing the problem entirely according to what they know about the underlying mechanism (typically physics), but that the NN can't know. For example if you're trying to get a NN to work out how objects move, you change everything so that it's about forces rather than positions, and suddenly the NN is able to figure out things like velocity staying constant when there is no force etc. Since the NN doesn't know anything about Newtonian mechanics, it cannot "guess" it from the training set (OK, in this specific example maybe it could, but it's the most trivial I could find to illustrate what I saw in more complex domains...).
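A toy sketch of that point (my own illustration, not from any real system): simulate a 1D point mass with unit mass, then fit one linear model in "force space" and one in "position space". All names and numbers here are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1D point mass, unit mass, random force at each step.
n = 1000
dt = 0.1
F = rng.normal(size=n)           # known applied force
v = np.zeros(n + 1)
x = np.zeros(n + 1)
for t in range(n):
    v[t + 1] = v[t] + F[t] * dt  # Newton: dv = (F/m) dt, with m = 1
    x[t + 1] = x[t] + v[t + 1] * dt

# "Force" parameterisation: predict the velocity change from the force.
# A one-parameter linear fit nails the physics.
A = F.reshape(-1, 1) * dt
dv = v[1:] - v[:-1]
coef, res1, *_ = np.linalg.lstsq(A, dv, rcond=None)

# "Position" parameterisation: predict the next position from the
# current position alone. The model never sees velocity, so the same
# x can be followed by many different next positions.
B = x[:-1].reshape(-1, 1)
coef2, res2, *_ = np.linalg.lstsq(B, x[1:], rcond=None)
```

The force-space fit recovers 1/m almost exactly, and with F = 0 it predicts dv = 0 — constant velocity in the absence of force falls out for free — while the position-only model is left with a large residual it structurally cannot explain.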
-
@remi said in Internet of shit:
All in all, my conclusion on the image generation stuff is that probably the AI similarly needs to be reparameterised in a space that isn't just "2D visual features" (note that I know nothing about those AIs so maybe they already do that...).
Scott Alexander has been talking a lot lately about the text based AIs and how they've been improving. His theory is that they just need to get bigger and the magic will happen. So far that kinda seems to be happening. Wouldn't surprise me to see a similar thing with the image AIs, but also wouldn't surprise me that it would take a lot more size to see similar differences.
-
@boomzilla could be.
The way I see it semi-intuitively, in theory if you have an infinitely large training corpus, the solution of your problem is always infinitely close to one of your training points, so you should get good results. In practice, the definition of "infinity" is hugely dependent on the number of dimensions and at the most basic level text is 1D (a succession of characters (ideogram, whatever)) whereas pictures are 2D. So yeah, it might be easier to achieve a "large enough" training set with text than with pictures. (and I guess that nowadays there are many more pictures generated every day than text, so the potential training set grows faster)
Still, I think that this might not always be enough, or rather that you get much better results with far lesser training points by changing those dimensions. As I said above and for example, I guess that modelling a 3D scene and then generating a 2D image from it can help solve a lot of otherwise tricky things (that AIs routinely get wrong!) such as lighting or intersections of objects.
Also, I wonder if there are more ways to get it wrong for pictures than for text. Once you've solved some issues where the rules are somewhat well defined (and thus don't need AI to get them right), such as spelling of words and some grammar rules, the "space of solutions" of text isn't that large, so most solutions will look somewhat OK. With pictures, there are very few rules to restrict what a "possible" picture can be.
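The 1D-vs-2D coverage point above can be made concrete with toy arithmetic (real text and images of course live in much higher-dimensional spaces; the widths here are arbitrary):

```python
# Toy curse-of-dimensionality arithmetic: covering the unit cube
# [0, 1]^d with a grid of 10 samples per axis needs 10**d points.
def grid_points(per_axis: int, dims: int) -> int:
    """Samples needed for a fixed per-axis resolution in `dims` dimensions."""
    return per_axis ** dims

print(grid_points(10, 1))    # a "1D" signal: 10 points
print(grid_points(10, 2))    # a "2D" signal: 100 points
print(grid_points(10, 100))  # 100 raw dimensions: 10**100 points, hopeless
```

Every extra dimension multiplies the size of a "large enough" training set, which is the intuition behind text needing less data than pictures to feel dense.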
-
@remi not so much the training set, though that's important, too, but the number of nodes in your neural net.
-
@boomzilla well that's something I have absolutely no idea about, so I can't really comment. Though of course I'm still going to.
Again semi-intuitively, I would think that there is a limit to what moar nodes (layers, connections...) can get you, since ultimately they're still all massaging the same data (garbage in, garbage out, regardless of how many steps happen in between). So again, I can very well accept that for problems of higher dimensions (such as pictures vs. text) you need more nodes to capture all that, but I'm not sure this is the only factor, or that "getting it right" is just a matter of having enough nodes.
But then again, I don't know anything about all that, so... maybe?
-
@topspin nah, that's just my own private running gag with myself where I try to get less/few (I never managed, nor really bothered, to learn the rule) as wrong as possible and as often as possible.
-
@remi said in Internet of shit:
@boomzilla well that's something I have absolutely no idea about, so I can't really comment. Though of course I'm still going to.
Again semi-intuitively, I would think that there is a limit to what moar nodes (layers, connections...) can get you, since ultimately they're still all massaging the same data (garbage in, garbage out, regardless of how many steps happen in between). So again, I can very well accept that for problems of higher dimensions (such as pictures vs. text) you need more nodes to capture all that, but I'm not sure this is the only factor, or that "getting it right" is just a matter of having enough nodes.
But then again, I don't know anything about all that, so... maybe?
It's been having a significant effect on the text output, too. Intuitively, it seems to me that a larger net allows for the possibility of a lot more nuance and detail (maybe eventually getting to something we can't deny is intelligence?), which is also what seems to be happening. Not totally unlike going up the animal kingdom WRT brain size.
-
@remi said in Internet of shit:
@topspin nah, that's just my own private running gag with myself where I try to get less/few (I never managed, nor really bothered, to learn the rule) as wrong as possible and as often as possible.
Mixing up less/fewer is pretty common. (You didn't ask for help, but: fewer is for countable things, less for uncountable. So, fewer chairs and less sugar.)
Using "lesser" instead, which has a completely different meaning, is something which in my experience only Indians do.
-
@topspin isn’t it?
-
@topspin said in Internet of shit:
Using "lesser" instead, which has a completely different meaning, is something which in my experience only Indians do.
(or at least that's my story and I'm sticking to it)
Filed under: trolling is a art
-
@boomzilla said in Internet of shit:
The subscription, which enables things like using your phone as a key fob
Paying extra to add security holes to your car, very cool
-
@remi said in Internet of shit:
@dkf but AFAIUI this reparameterisation is basically done by the neural network itself, with maybe at best some hints from the user?
That depends on how you do the training and whether you use an undifferentiated starting point or preconfigure some stages with the basics of convolution kernels.
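A minimal sketch of what "preconfigured stages" means, assuming the classic hand-crafted route: fix the first layer to a known edge-detecting kernel (Sobel here) instead of leaving the network to rediscover edge detection from scratch.

```python
import numpy as np

# A classic hand-crafted first stage: a fixed Sobel kernel that
# responds to vertical edges.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

def conv2d_valid(img, kernel):
    """Minimal 'valid'-mode 2D cross-correlation with nested loops."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A 6x6 image: dark on the left half, bright on the right half.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

edges = conv2d_valid(img, SOBEL_X)
# Response is zero in the flat regions and large where the edge sits.
```

Freezing (or merely initialising) early layers like this bakes known structure in, so training only has to learn what the hand-crafted stage can't express.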
-
@BernieTheBernie said in Internet of shit:
There's good reason to store all your data in the cloud.
Google even manages to make your digital photos look like aged, water-stained analogue photos. Would you ever be able to do so on your local storage device? E.g. by putting your rusty hard disk in water?
Boy, I'm sure glad they showed even a single example of what that corruption might look like, rather than a stock photo of a phone.
-
@boomzilla said in Internet of shit:
Intuitively, it seems to me that a larger net allows for the possibility of a lot more nuance and detail (maybe eventually getting to something we can't deny is intelligence?), which is also what seems to be happening. Not totally unlike going up the animal kingdom WRT brain size.
Yes, but it also matters whether it is breadth or depth. Adding breadth allows you to distinguish more things, but adding depth allows you to handle more complex relationships between things. It's adding more depth that really increases the difficulty of getting the network training right, whereas with breadth you can just feed more examples in. With enough depth you can have the AI start to work on problems like modelling the intentionality of both itself and others; that's where you'll be getting towards AGI (and into the ethical minefield and the Singularity).
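The breadth-vs-depth distinction can be put in numbers with some back-of-the-envelope parameter counting (all layer widths below are invented for the illustration):

```python
# Same rough parameter budget, spent two ways: one broad hidden
# layer vs. a stack of narrower ones.
def mlp_params(layer_sizes):
    """Weights + biases of a fully connected net with these layer widths."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

wide = [64, 1024, 10]          # breadth: one big hidden layer
deep = [64] + [96] * 8 + [10]  # depth: eight modest hidden layers

print(mlp_params(wide))  # 76810 parameters
print(mlp_params(deep))  # 72394 parameters
```

Roughly the same budget, but the deep variant composes eight nonlinear stages, which is where the "more complex relationships" come from — and also where training gets harder.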
-
@Gurth AI generated trolling, I'm slightly bemused no-one here has tried it yet.
-
@Arantor said in Internet of shit:
@Gurth AI generated trolling, I'm slightly bemused no-one here has tried it yet.
Are you that sure they haven't?
-
@Watson I'm not convinced; all the trolling I've seen looked too good to be AI based.
-
@Arantor said in Internet of shit:
@Watson I'm not convinced; all the trolling I've seen looked too good to be AI based.
-
@izzion said in Internet of shit:
@Arantor said in Internet of shit:
@Watson I'm not convinced; all the trolling I've seen looked too good to be AI based.
nah, he's just a regular troll.
-
@Watson said in Internet of shit:
@Arantor said in Internet of shit:
@Gurth AI generated trolling, I'm slightly bemused no-one here has tried it yet.
Are you that sure they haven't?
-
@Arantor said in Internet of shit:
Huh... I'd blame for not having loaded these posts, but that normally just happens in the ...
-
@topspin said in Internet of shit:
@Arantor said in Internet of shit:
Huh... I'd blame for not having loaded these posts, but that normally just happens in the ...
This is the garbage inventions thread. Basically the same thing.
-
@topspin said in Internet of shit:
@Watson said in Internet of shit:
@Arantor said in Internet of shit:
@Gurth AI generated trolling, I'm slightly bemused no-one here has tried it yet.
Are you that sure they haven't?
I've applied to the Bot group and been rejected.
-
@Arantor said in Internet of shit:
@Gurth AI generated trolling, I'm slightly bemused no-one here has tried it yet.
That would be impossible.
-
@Gribnit no, I'm sure it's not impossible, I just didn't think anyone had tried it yet.
Unless that's a tacit admission from your good self?
-
@Arantor I'm glad that you feel good in yourself about tacit admissions.
-
@Gribnit you're too smart to be an Eliza program.
-
@Arantor Cannot evaluate. Try another input.
-
@Arantor said in Internet of shit:
@Gribnit you're too smart
-
@Gribnit said in Internet of shit:
I've applied to the Bot group and been rejected.
I have constructed a test to determine if @Gribnit is a bot. The idea is:
- Bots accept commands and act on them
- Hence, if we give a command to @Gribnit and @Gribnit executes the command, we will have proven that he's a bot. If not, he's not a bot.
So, here we go:
@Gribnit !post_something_that_makes_sense
-
@dcon said in Internet of shit:
What could possibly go wrong with an autonomous thing with a whirling blade.
I sometimes wonder what would have happened if I executed on my Marketing 101 project plan....
Filed under: It's Solar Freakin' Lawnmowers!
-
@MrL said in Internet of shit:
@remi said in Internet of shit:
@Watson Exactly.
This is an interesting feature of those AI generators, that they manage extremely well to produce something that looks right from afar, but is almost always very wrong when looked at closely. I have no idea why this is
They are programmed by impressionists, obviously.
-
@Tsaukpaetra said in Internet of shit:
I sometimes wonder what would have happened if I ...
Nothing good, I'm sure.
-
@HardwareGeek said in Internet of shit:
@Tsaukpaetra said in Internet of shit:
I sometimes wonder what would have happened if I ...
Nothing good, I'm sure.
Well, I mean, it seems like such a popular idea! And at the time I thought it was fucking unique!
-
@Watson said in Internet of shit:
@remi said in Internet of shit:
The mind being great at filling holes ( ), there's a good chance anyone looking at the plate quickly will think they saw some number, but never the same (nor the right one, of course).
E.g (this was my first result for "A license plate"):
Is that an S, or a G ... maybe a 6...?
Stable Diffusion produced mostly better letters and numbers.
Prompt: valid license plate the cops won't pull me over for
Now, as to why the license plate was anywhere except where license plates typically appear on cars of that style is anyone's guess...
-
@Tsaukpaetra said in Internet of shit:
@MrL said in Internet of shit:
@remi said in Internet of shit:
@Watson Exactly.
This is an interesting feature of those AI generators, that they manage extremely well to produce something that looks right from afar, but is almost always very wrong when looked at closely. I have no idea why this is
They are programmed by impressionists, obviously.
Me trying to read that:
-
@Zecc It's a schooner!
-
@Applied-Mediocrity it’s not a schooner, it’s a sailboat! (mutters insults about stupidity)
-
@Tsaukpaetra said in Internet of shit:
@Watson said in Internet of shit:
@remi said in Internet of shit:
The mind being great at filling holes ( ), there's a good chance anyone looking at the plate quickly will think they saw some number, but never the same (nor the right one, of course).
E.g (this was my first result for "A license plate"):
Is that an S, or a G ... maybe a 6...?
Stable Diffusion produced mostly better letters and numbers.
What I'm disappointed by is that no-one mentioned the wooden table.
-
@Arantor said in Internet of shit:
@Gurth AI generated trolling, I'm slightly bemused no-one here has tried it yet.
M-x doctor
-
@Gurth said in Internet of shit:
@Arantor said in Internet of shit:
@Gurth AI generated trolling, I'm slightly bemused no-one here has tried it yet.
M-x doctor
XKCD'd that for you
-
@Arantor said in Internet of shit:
@Applied-Mediocrity it’s not a schooner, it’s a sailboat! (mutters insults about stupidity)
A schooner is a sailboat, stupid-head!
-
@cvi would that even make sense?
-
@Tsaukpaetra huh... "TV is not", it also mentions. I don't have a dog right now, so it'll have to do.