Random thought of the day


  • Banned

    Inspired by recent garage discussion...

    If ad networks know your habits enough to show you products you are likely to buy, they also know your habits enough to know how good you are at computer security. If they wanted to, they could inject malware in their ads, but only to those who are likely to fall for it - maximizing infections and minimizing detection.

    Do they want to?



  • Interesting idea, and they probably thought about it, but I don't think there's much point. Ad networks already make a huge amount of money using legal and illegal-but-rarely-punished methods. The extra money they could make by using definitely criminal methods probably isn't worth the risk.

    Note: I'm talking about major ad networks here. For smaller ones located in countries that don't care about such things, it may be different.



  • @Gąska I would think no. Do any ad networks intentionally distribute malware? How do they profit from doing so? Is it not the ad content creators who slip malware past the networks' protections, if any? ISTM the networks would be, at worst, indifferent to malware delivery, and most would make at least token attempts to stop it, because they don't want even more people to block ads.

    OTOH, if the malware was adware that displayed more ads from their own network, either by displaying ads outside the browser (high risk of detection), bypassing the user's ad blocker (medium risk?), or replacing a competitor's ads with their own (lowest risk?), that might provide enough additional revenue to be tempting to the less scrupulous ad networks. But I'm just speculating; I don't know how ad execs (read: non-sapient slime molds) think.


  • Banned

    @HardwareGeek said in Random thought of the day:

    Do any ad networks intentionally distribute malware?

    Does Twitter intentionally block Trump supporters?

    How do they profit from doing so?

    Botnets, and everything that botnets do. Including cryptomining, back when cryptomining was a thing.



  • @HardwareGeek said in Random thought of the day:

    @Gąska I would think no. Do any ad networks intentionally distribute malware? How do they profit from doing so?

    They intentionally look the other way as their customers distribute it, and they intentionally allow ads to be distributed that contain JavaScript. A simple "no JS in any ad for any reason" rule would shut down ad-distributed malware virtually overnight, but they intentionally choose not to do it.
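
    To give an idea of how little enforcement that would take: a rough sketch (hypothetical, not any network's actual review pipeline) of a creative check that rejects anything containing JS:

    ```python
    # Rough sketch (hypothetical): reject ad creatives containing any JavaScript,
    # whether as <script> tags, inline event handlers, or javascript: URLs.
    from html.parser import HTMLParser

    class NoJsChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.violations = []

        def handle_starttag(self, tag, attrs):
            if tag == "script":
                self.violations.append("<script> tag")
            for name, value in attrs:
                if name.startswith("on"):  # onclick, onerror, onload, ...
                    self.violations.append(f"inline handler: {name}")
                if value and value.strip().lower().startswith("javascript:"):
                    self.violations.append(f"javascript: URL in {name}")

    def accept_creative(html: str) -> bool:
        checker = NoJsChecker()
        checker.feed(html)
        return not checker.violations

    print(accept_creative('<img src="banner.png" alt="Buy stuff">'))  # True
    print(accept_creative('<img src="x.png" onerror="alert(1)">'))    # False
    ```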


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in Random thought of the day:

    They intentionally look the other way as their customers distribute it, and they intentionally allow ads to be distributed that contain JavaScript. A simple "no JS in any ad for any reason" rule would shut down ad-distributed malware virtually overnight, but they intentionally choose not to do it.

    There are people who are desperate for attention and will do anything to get it. Absolutely anything. In former times, they'd be standing in markets shouting at the top of their voice, and it's only because the local populace would beat them to death that they didn't do exactly the same thing in church as well. Then they switched to broadcast media and used all the tricks of that. Now? Well, the desperate slime molds haven't gone away.


  • Considered Harmful

    @Gąska said in Random thought of the day:

    they also know your habits enough

    I'd think they don't. "You browse a lot of midget porn and have bought a midget or two in the past from Amazon (frequently bought together with XPoint SSDs), so here's a newer model in case yours broke or something" is about the best level of statistical analysis they can have. Ads on the internets is one of those things everybody does, so you also have to, regardless of how dumbfuckjuice the entire premise is.


  • Banned

    @Applied-Mediocrity I read a story a few years ago about a grocery chain correctly guessing that a college girl was pregnant before her parents did. Based just on her shopping habits.


  • BINNED

    @Gąska said in Random thought of the day:

    @Applied-Mediocrity I read a story a few years ago about a grocery chain correctly guessing that a college girl was pregnant before her parents did. Based just on her shopping habits.

    Yes, they do that. It can be quite scary how much they can know or expose about you. And at the same time they also fail spectacularly.
    One second I get advertisements for stupidly expensive male fashion crap they determine I'm in the target group for, and the next second I get ads for pregnancy tests.


  • Considered Harmful

    @Gąska By that notion Walmart, for example, should've been able to determine who's got teh covidz without any testing taking place.

    @topspin said in Random thought of the day:

    they also fail spectacularly.

    It is my conviction, one that you may possibly share, if foolishly, that a product's quality ought to be determined not only by how well it works, but also by how well it fails.



  • @topspin If they fail and show you the wrong ad, the negative effects are almost negligible (you might laugh at an ad for a product that you wouldn't have bought anyway, or you might laugh at the ad network, but that won't significantly change your perception of the next ad they deliver to you). If it doesn't fail and you get correctly targeted, though, it's a positive and gets seen as such (you might click on it, etc.).

    Which is why AI-ML-NN-buzzword-soup works so well with ads, since they only need some loose positive correlation to get some return. And which is why AI-ML-NN-buzzword-soup does not live up to its promises (read: hype) in many other domains (driving...) where, contrary to ads (and human stuff in general), failure is not zero-cost.
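
    (a back-of-the-envelope version of that asymmetry, with completely made-up numbers:)

    ```python
    # Toy numbers, purely illustrative: a weak classifier is fine when a miss
    # costs (almost) nothing, and useless when a miss is catastrophic.
    def expected_value(success_rate, win, loss_per_failure):
        return success_rate * win - (1 - success_rate) * loss_per_failure

    # Ads: 2% success rate, 0.50 per "hit", a miss costs essentially nothing.
    print(expected_value(0.02, 0.50, 0.0))        # > 0, so even weak targeting pays

    # Driving: 99.9% success rate, but one failure is catastrophically expensive.
    print(expected_value(0.999, 1.0, 1_000_000))  # hugely negative
    ```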

    (that was my RTotD)


  • Banned

    @Applied-Mediocrity said in Random thought of the day:

    @Gąska By that notion Walmart, for example, should've been able to determine who's got teh covidz without any testing taking place.

    Once you figure out the correlation between shopping habits and COVID infections, sure. The pregnancy case works because the correlation between pregnancy and a sudden change in eating habits is very well documented, as is the correlation between security consciousness and searching for solutions to problems with Linux.



  • @Gąska and not just eating habits. Buying pregnancy tests, especially when there's a change in patterns, is one big clue.


  • Banned

    @remi said in Random thought of the day:

    And which is why AI-ML-NN-buzzword-soup does not live up to its promises (read: hype) in many other domains (driving...) where, contrary to ads (and human stuff in general), failure is not zero-cost.

    I don't even understand why they're even trying to stuff AI-ML-NN-buzzword-soup into cars in the first place. All you need is proximity sensors in each direction, a couple hundred hardcoded road signs, preferably an accurate GPS map with speed limits, and you're almost done.



  • @Gąska said in Random thought of the day:

    @remi said in Random thought of the day:

    And which is why AI-ML-NN-buzzword-soup does not live up to its promises (read: hype) in many other domains (driving...) where, contrary to ads (and human stuff in general), failure is not zero-cost.

    I don't even understand why they're even trying to stuff AI-ML-NN-buzzword-soup into cars in the first place. All you need is proximity sensors in each direction, a couple hundred hardcoded road signs, preferably an accurate GPS map with speed limits, and you're almost done.

    One of the reasons is probably your last bit (which ties-in with what I said about non-zero-cost failures): "almost" is not enough for driving since human lives are at risk. And even getting better-than-humans is not enough, for some weird psychological reason we humans have more confidence in another human (e.g. a taxi driver) than in a machine, even if shown that the machine has a lower failure rate... you really need the difference in failure rate between humans and machine to be huge for us to accept an imperfect machine. It is weird, but that's how we work...

    I would guess that another reason is that AI-ML-etc. seems to give such good results initially (i.e. you get 90% image recognition in a couple of clicks, whereas with a rules-based system you'd still be trying to define what a "road sign" even is) that it's very hard not to be tempted to use it. You can also call that "hype", and you wouldn't be wrong, but it's a very understandable one. By the time you realise that you can't get above that 90% rate, you've invested so much in AI-ML-etc. that it's hard to walk it back (sunk cost and so on).

    Ultimately, I suspect that self-driving will be solved when we get truly self-thinking machines (i.e. never?). On the surface it looks like a technical problem where you just need to learn the rules (keep the car between the lines and so on), but it's actually a fully human problem (i.e. involving the whole range of physical, mental and emotional capabilities of a human being), so unless you can fully emulate a human, you won't be able to do it without one in the general case. Maybe the correct approach is thus to restrict it to cases where it can be reduced to a technical problem, such as closed environments, industrial parks, maybe vehicles with fewer controls (trains)...


  • BINNED

    @Gąska said in Random thought of the day:

    All you need is proximity sensors in each direction, a couple hundred hardcoded road signs

    All of that is called "AI", though. The pattern recognition for the road signs is legitimate ML stuff; some other things are probably just called that because of marketing hype.


  • Banned

    @remi said in Random thought of the day:

    @Gąska said in Random thought of the day:

    @remi said in Random thought of the day:

    And which is why AI-ML-NN-buzzword-soup does not live up to its promises (read: hype) in many other domains (driving...) where, contrary to ads (and human stuff in general), failure is not zero-cost.

    I don't even understand why they're even trying to stuff AI-ML-NN-buzzword-soup into cars in the first place. All you need is proximity sensors in each direction, a couple hundred hardcoded road signs, preferably an accurate GPS map with speed limits, and you're almost done.

    One of the reasons is probably your last bit (which ties-in with what I said about non-zero-cost failures): "almost" is not enough for driving since human lives are at risk.

    You don't need AI for that. Yes, as @topspin pointed out, image recognition is definitely a part where AI might do some good. But finding a hardcoded object in a scene is a slightly different kind of AI than determining what a given object means. And that's pretty much where the AI element ends. The "almost" is a couple dozen linear equations for determining the safe speed to go at given the movement of surrounding objects, plus some additional details that don't need AI either.
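
    To illustrate the kind of equation I mean (a toy sketch with made-up numbers and an assumed constant braking deceleration; nothing AI about it):

    ```python
    import math

    # Toy example of the non-AI part: the highest speed (m/s) at which we can
    # still stop within `gap` metres, given a reaction time and a braking
    # deceleration. Numbers are illustrative, not from any real controller.
    def max_safe_speed(gap_m, reaction_s=0.5, decel_mps2=6.0):
        # solve gap = v*reaction + v^2 / (2*decel) for v
        a = 1.0 / (2.0 * decel_mps2)
        b = reaction_s
        c = -gap_m
        return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

    print(max_safe_speed(30.0))  # ~16 m/s, i.e. ~58 km/h for a 30 m gap
    ```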

    And even getting better-than-humans is not enough, for some weird psychological reason we humans have more confidence in another human (e.g. a taxi driver) than in a machine, even if shown that the machine has a lower failure rate...

    It's all about branding. While it used to be different, nowadays people put more confidence in electronic payments than in a cashier correctly counting cash. Or think about how people approached GPS in the early 2000s.



  • @Gąska said in Random thought of the day:

    The "almost" is a couple dozen linear equations for determining the safe speed to go at given the movement of surrounding objects, plus some additional details that don't need AI either.

    You might be right, although I suspect that those "additional details" are going to grow and grow as you get further in your program (and remember, not handling a single case correctly is not acceptable here...). Which you can address by adding more and more rules and equations, or by switching to some sort of AI. IIRC, that's about how image recognition went: what started as a simple set of rules (there is the (apocryphal?) story of the AI conference soon after WW2 where people talked about how long different bits would take and image recognition was slated as an almost-trivial problem, I think?) grew more and more complicated, with dubious results, until researchers switched to NN and suddenly the problem was (almost) solved.

    So I wouldn't rule out some sort of AI to solve some of those bits.

    Also, remember that a lot of "AI" is just regression, so basically figuring out the coefficients of your linear equations automatically. It's a worthwhile shortcut when the physics behind the equations is a bit too hard to get consistently right.
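
    For example (a toy sketch with synthetic data, just to show what "it's just a regression" means in practice; the braking-distance model is invented for illustration):

    ```python
    import numpy as np

    # Fit the coefficient k in braking_distance ≈ k * speed^2 from noisy
    # observations, instead of deriving k from physics. Data is synthetic.
    rng = np.random.default_rng(0)
    speeds = np.linspace(5, 40, 50)        # m/s
    true_k = 1.0 / (2 * 7.0)               # pretend braking constant
    distances = true_k * speeds**2 + rng.normal(0, 1.0, speeds.size)

    # least squares on the single coefficient k
    k_hat, *_ = np.linalg.lstsq(np.column_stack([speeds**2]), distances, rcond=None)
    print(k_hat[0], true_k)                # estimated vs "true" coefficient
    ```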

    And even getting better-than-humans is not enough, for some weird psychological reason we humans have more confidence in another human (e.g. a taxi driver) than in a machine, even if shown that the machine has a lower failure rate...

    It's all about branding. While it used to be different, nowadays people put more confidence in electronic payments than in a cashier correctly counting cash. Or think about how people approached GPS in the early 2000s.

    True, although none of those involved human life. To disprove my own point, I'm trying to find a system where there is a direct danger to life and where humans have been replaced by machines. The closest I can find is autopilot on planes (or maybe some very niche system like power plant control software, which society as a whole isn't really even aware exists), but most of us are not in direct contact with it, plus there is always a human next to the autopilot, in part to make it acceptable. So I'm not sure whether it's because I'm just not finding one, or because my point is valid...


  • Banned

    @remi said in Random thought of the day:

    @Gąska said in Random thought of the day:

    @remi said in Random thought of the day:

    @Gąska said in Random thought of the day:

    @remi said in Random thought of the day:

    And which is why AI-ML-NN-buzzword-soup does not live up to its promises (read: hype) in many other domains (driving...) where, contrary to ads (and human stuff in general), failure is not zero-cost.

    I don't even understand why they're even trying to stuff AI-ML-NN-buzzword-soup into cars in the first place. All you need is proximity sensors in each direction, a couple hundred hardcoded road signs, preferably an accurate GPS map with speed limits, and you're almost done.

    One of the reasons is probably your last bit (which ties-in with what I said about non-zero-cost failures): "almost" is not enough for driving since human lives are at risk.

    You don't need AI for that. Yes, as @topspin pointed out, image recognition is definitely a part where AI might do some good. But finding a hardcoded object in a scene is a slightly different kind of AI than determining what a given object means. And that's pretty much where the AI element ends.

    The "almost" is a couple dozen linear equations for determining the safe speed to go at given the movement of surrounding objects, plus some additional details that don't need AI either.

    You might be right, although I suspect that those "additional details" are going to grow and grow as you get further in your program (and remember, not handling a single case correctly is not acceptable here...).

    Of course. But AI (specifically AI-ML-NN-buzzword-soup) is very good at solving one very particular kind of problem: where the path to the solution is itself unknown. There aren't many problems like that when driving (I can't think of any, in fact, except visual recognition). I mean, look at video games. They solved autonomous driving ages ago, without a single neural network. Yes, real life is more complicated, but once you take care of external sensors, uncertain and variable grip, and the potential failure of every component inside a car, the rest is almost exactly like a video game.

    there is the (apocryphal?) story of the AI conference soon after WW2 where people talked about how long different bits would take and image recognition was slated as an almost-trivial problem, I think?

    I heard a story of some postgrads in the 60s getting a summer assignment of making a program that can recognize the object a camera points at.

    Also, remember that a lot of "AI" is just regression, so basically figuring out the coefficients of your linear equations automatically.

    Except the coefficients of the linear equations describing collision courses are all known right from the start. It's less algebra and more geometry. Deep learning adds exactly 0 value here.
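
    The kind of geometry I mean, as a toy sketch (made-up numbers, nothing from a real system):

    ```python
    import numpy as np

    # Given the relative position and velocity of another object (straight from
    # the sensors), when is the closest approach and how close does it get?
    def closest_approach(rel_pos, rel_vel):
        rel_pos = np.asarray(rel_pos, dtype=float)
        rel_vel = np.asarray(rel_vel, dtype=float)
        speed_sq = rel_vel @ rel_vel
        t = 0.0 if speed_sq == 0 else max(0.0, -(rel_pos @ rel_vel) / speed_sq)
        return t, float(np.linalg.norm(rel_pos + t * rel_vel))

    # Another car 20 m ahead and 3 m to the side, closing at 5 m/s:
    t, d = closest_approach([20.0, 3.0], [-5.0, 0.0])
    print(t, d)  # closest approach in 4 s, at 3 m lateral distance
    ```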

    And even getting better-than-humans is not enough, for some weird psychological reason we humans have more confidence in another human (e.g. a taxi driver) than in a machine, even if shown that the machine has a lower failure rate...

    It's all about branding. While it used to be different, nowadays people put more confidence in electronic payments than in a cashier correctly counting cash. Or think about how people approached GPS in the early 2000s.

    True, although none of those involved human life. To disprove my own point, I'm trying to find a system where there is a direct danger to life and where humans have been replaced by machines. The closest I can find is autopilot on planes (or maybe some very niche system like power plant control software, which society as a whole isn't really even aware exists), but most of us are not in direct contact with it, plus there is always a human next to the autopilot, in part to make it acceptable. So I'm not sure whether it's because I'm just not finding one, or because my point is valid...

    Elevators.

    And the human pilot next to the autopilot (I just remembered that scene from Airplane!), aside from the psychological effect, is also there for liability purposes. Being safe is important, but for many it's even more important to pinpoint the culprit.



  • @Gąska said in Random thought of the day:

    Of course. But AI (specifically AI-ML-NN-buzzword-soup) is very good at solving one very particular kind of problem: where the path to the solution is itself unknown. There aren't many problems like that when driving (I can't think of any, in fact, except visual recognition).

    I'm not convinced there aren't, but I don't really have any counterexample to offer, so... maybe? Still, I wouldn't dismiss AI as totally useless for driving (ignoring visual recognition).

    (some ass-pull example: maybe it would be a worthwhile feature to have some AI noise-recognition as well, as you can imagine either something weird happening on the car itself (something stuck on it?), or in the immediate environment (someone screaming as they run towards the car?) that other sensors might not detect, but you could arguably just say it's an extension of "visual" recognition to other senses... and no one does that (that I know of), so it's really just an ass-pull, but take it as an illustration that there might be tricky things we're just not thinking of right now)

    Also, remember that a lot of "AI" is just regression, so basically figuring out the coefficients of your linear equations automatically.

    Except the coefficients of the linear equations describing collision courses are all known right from the start. It's less algebra and more geometry. Deep learning adds exactly 0 value here.

    Are they? How accurately do you know the mass of an object coming towards you? Its speed? Its wind resistance? The wind speed that might affect it, for that matter? The friction on the road? The will of the brain inside the cranium of that object (if it's an animal (incl. a human))? It's easy in a video game because the system can know almost all of the relevant parameters. Not so much in the real world, where the same lorry might be empty with good tires or full with worn tires, and will behave very differently according to those things, yet you don't have any external clue about it. So of course the car is going to recalculate things continuously according to what it measures, but I find it a bit overenthusiastic to say that everything is known from the start and AI is useless.

    Again, I wouldn't rule out some sort of AI to automatically and on-the-fly find the best values of those coefficients. I'm not saying it is the right approach, but I'm saying that excluding AI from the set of tools to solve that might be a bad idea.
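
    (a deliberately dumb sketch of what "finding the coefficients on the fly" could look like; the numbers and the update rule are made up for illustration:)

    ```python
    # Keep a running estimate of one unknown coefficient (say, effective grip)
    # from each new observation, instead of fixing it in advance.
    def update_estimate(current, observed, learning_rate=0.3):
        # nudge the estimate towards what was just measured
        return current + learning_rate * (observed - current)

    grip = 0.8                                  # initial guess (dry road)
    for measured in (0.55, 0.60, 0.58, 0.57):   # made-up wet-road measurements
        grip = update_estimate(grip, measured)
    print(grip)  # ~0.63, moving from the 0.8 guess towards the measured ~0.6
    ```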

    True, although none of those involved human life. To disprove my own point, I'm trying to find a system where there is a direct danger to life and where humans have been replaced by machines. The closest I can find is autopilot on planes (or maybe some very niche system like power plant control software, which society as a whole isn't really even aware exists), but most of us are not in direct contact with it, plus there is always a human next to the autopilot, in part to make it acceptable. So I'm not sure whether it's because I'm just not finding one, or because my point is valid...

    Elevators.

    Sorry, I can't accept that answer. The controls of an elevator don't really put humans at risk. The maintenance and design of it, very much, but that's a different thing.

    (elevators are a good example of a system where we accept a machine with a risk to life in exchange for some convenience -- and so are cars, for that matter -- but the part where humans were replaced by machines, in elevators, only affects controlling their operation, not the existence of the machine itself)

    And the human pilot next to the autopilot (I just remembered that scene from Airplane!), aside from the psychological effect, is also there for liability purposes. Being safe is important, but for many it's even more important to pinpoint the culprit.

    True, but that's just one aspect of my initial point about human psychology (wanting to be able to point to someone as the culprit is also a psychological trait -- that has given rise to a whole field of law, yes, but only because we wanted it to). Whatever the reason, and there may be several, the machine still doesn't operate without a human even though technically it probably could.



  • Rémi: driverless metro cars. For example, the metro system in Lille was designed from the start to be fully automatic and to run safely without relying on human intervention. It started operating in 1983 and was initially free, to get people to try it (people didn't trust driverless transportation at the time).

    Of course, it doesn't rely on buzzword technologies: it uses hardcoded rules and formal verification. And the environment is much more controlled than what a self-driving car encounters.



  • Cent, sent, and scent are all pronounced the same way. Therefore, is the S or the C silent in scent?



  • @Mason_Wheeler Yes, of course.


  • Discourse touched me in a no-no place

    @remi said in Random thought of the day:

    and no one does that (that I know of)

    That's because auditory processing is extremely closely linked to time, and most people doing NN training systems try to avoid that; their systems are essentially atemporal, and that's to be expected as training materials are virtually always static images. Getting good temporal handling requires a very different approach (event-based instead of level-based) and that's still very much research grade stuff. For one thing, it's the area that my team is working on (I believe one of our PhD students is trying to build a full model of the auditory pathway from ear to cortex, and I totally wish her the best of luck because I know it's a hard problem even given that she can use previous work that models the early stages of that). The good thing is that event-based coding schemes are far more information dense than level-based coding schemes, and should be able to react to stimuli far more rapidly.
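
    (A crude illustration of the density difference, nothing to do with real neural coding schemes: a level-based encoding transmits every sample, an event-based one only transmits the changes.)

    ```python
    # Toy contrast between level-based and event-based coding of the same signal.
    signal = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]   # made-up sampled stimulus

    level_based = list(signal)                 # transmit every sample
    event_based = [(i, v) for i, (prev, v) in enumerate(zip([None] + signal, signal))
                   if v != prev]               # transmit only on change

    print(len(level_based), level_based)       # 10 values
    print(len(event_based), event_based)       # 3 events: [(0, 0), (3, 1), (7, 0)]
    ```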


  • 🚽 Regular

    @HardwareGeek said in Random thought of the day:

    @Mason_Wheeler Yes, of course.

    I disagree.



  • (image)



  • @Zecc said in Random thought of the day:

    @HardwareGeek said in Random thought of the day:

    @Mason_Wheeler Yes, of course.

    I disagree.

    Your response, that neither the S nor the C is silent, is more interesting than what you were disagreeing with, namely that at least one of them is silent.


  • Banned

    @jinpa maybe he disputes the "of course" part. Whether it's S or C, it's not obvious that this particular letter is silent.



  • @Mason_Wheeler said in Random thought of the day:

    (image)

    I'll allow it in this case, but remember that there are other qualifications for a female character under the Disney umbrella to be a "Disney princess". Tinker Bell was removed from the canon; the only non-human actually on the official list is Ariel (Nala, Lady, and Bambi's mother are all excluded); and there needs to be something akin to royal blood or at least marriage (Pocahontas is okay because her dad's a chief, and something like that also applies to Merida, but they don't let Megara or Esmeralda join the fun because they're just too damn common, and Mulan's membership is kind of iffy).


  • Banned

    @da-Doctah said in Random thought of the day:

    Tinker Bell was removed from the canon

    I've spent the last 30 minutes in the Google weeds because I misunderstood what you meant.



  • @Zerosquare said in Random thought of the day:

    Rémi: driverless metro cars.

    Good point. Though I do remember that each time a driverless metro line opened, there was a huge public discussion about safety, and initially many people approached it (according to news reports) with some sort of trepidation, the thrill of the unknown, etc. That wasn't just due to the novelty of it (some of it was; I know I always wanted to ride in the very first car so I could see the tunnel in front of the train!), but also to the feeling of "no human can prevent us from dying", regardless of how safe those trains actually are.

    (driverless trains in e.g. airports attract less attention, but they are usually just tiny lines with a couple of stops, so somehow the human mind sees them as less dangerous than "true" train lines, whatever "true" means here...)

    I think part of the acceptance of driverless trains is that we usually don't see the train driver (it's a bit easier to see them in a metro, where the train is shorter and the driver is at platform level, but even so, most people never really see them). In some cases there isn't even a driver in the train itself, like in some (most?) mountain cable cars. So while we easily understand why a driver isn't needed in a cable car, we probably mentally extend that to smaller trains (airport shuttles) and then to metros and the like, thinking (correctly or not) that they don't require much more in the way of in-train controls, and thus accept more readily that they can be automated. I don't really know, I'm just speculating.


  • Banned

    Random thought: the first fully autonomous civilian airplanes will be cargo planes.


    Second random thought: the first fully autonomous aircraft were homing rockets.


  • 🚽 Regular

    @Gąska "Delivered right to your doorstep!"



  • Current thought: @da-Doctah knows too much about Disney princesses...


  • Banned

    @Zerosquare and I can list the first 52 episodes of My Little Pony: Friendship is Magic from memory, from best to worst. No kink shaming!


  • Banned

    Wait, did I just say Disney princesses are @da-Doctah's kink?



  • @Gąska said in Random thought of the day:

    I can list the first 52 episodes of My Little Pony: Friendship is Magic from memory, from best to worst. No kink shaming!

    Are you trying to get a date with @Tsaukpaetra?


  • Banned

    @Zerosquare it's just the first thing that came to mind that's arguably more embarrassing than knowing every historical official lineup of Disney princesses.

    In other news, TIL there's an official list of who is a Disney princess.



  • @Gąska said in Random thought of the day:

    @Zerosquare it's just the first thing that came to mind that's arguably more embarrassing than knowing every historical official lineup of Disney princesses.

    In other news, TIL there's an official list of who is a Disney princess.

    I read up on that list because I figured it'd eventually come in handy on pub trivia night. (I had already found my familiarity with the Powerpuff Girls useful in that context; at some point I really need to knuckle down and memorize the labors of Hercules.)



  • Sure, sure 😉


  • Banned

    @da-Doctah said in Random thought of the day:

    I really need to knuckle down and memorize the labors of Hercules

    Let's see... Throwing a spear around the Earth, defeating an undefeated champion, surviving hypnosis, eating everything at the all-you-can-eat, getting form A-38, going to paradise and back, taming a pack of hungry lions... Meh, just 7 out of 12. And probably out of order. And I watched that movie so many times!


  • BINNED

    @Gąska beating the fastest sprinter, sleeping in the field of the dead, and picking the right fabric softener.
    Now I want to watch that movie again.



  • ...I had no idea that movie was popular abroad.


  • BINNED

    @Zerosquare abroad??

    It’s like a tank-ride from zerosquare-ville to gaska-town. 🚎


  • Banned

    @topspin said in Random thought of the day:

    @Gąska beating the fastest sprinter, sleeping in the field of the dead, and picking the right fabric softener.

    I was thinking about the first one but wasn't sure, since the spearman did a lot of running too. The other two I only vaguely remember.

    There was also something about climbing a mountain? To talk to an old guy whose name in the Polish version was Czcigodny ze Szczytu ("the Venerable from the Summit"), which is hilariously almost-normal.



  • @topspin said in Random thought of the day:

    @Zerosquare abroad??

    I just checked, and it was a French-British joint production. I did not know that.


  • BINNED

    @Gąska they climbed the mountain to face a trial, which was the hilarious-because-trivial task of picking the right fabric softener, IIRC.


  • Banned

    @Zerosquare said in Random thought of the day:

    @topspin said in Random thought of the day:

    @Zerosquare abroad??

    I just checked, and it was a French-British joint production. I did not know that.

    Asterix is weirdly popular in Poland for some reason. Probably because we have our own comic series about a small skinny smartass and a big fat dummy, living in a village led by a slightly narcissistic coward, under constant threat of a neighboring empire, all of which - and you're not gonna believe it - is purely accidental. "Kajko i Kokosz" by Janusz Christa.

    @topspin said in Random thought of the day:

    @Gąska they climbed the mountain to face a trial, which was the hilarious-because-trivial task of picking the right fabric softener, IIRC.

    Oh, right. The commercial spoof. Now I remember.


  • Banned

    Random thought: There are some Polish memes that I will never ever be able to adequately explain to anyone foreign.


  • Trolleybus Mechanic

    @Gąska said in Random thought of the day:

    Random thought: There are some Polish memes that I will never ever be able to adequately explain to anyone foreign.

    Honestly, I have no idea what's going on here either.

