@Zerosquare exactly. I got into that specific line of thinking because I remembered a former coworker who had worked at a company building helicopters, in particular on predictive maintenance. Basically, record (with various sensors, including microphones) all sorts of noise and vibrations, feed that into a big heap of machine learning, stir the heap, and hope that the output tells you when a chopper is going to go boom, preferably long enough before it does that you can send it to the repair shop in time.
Posts made by remi
-
RE: Driving Anti-Patterns - Necro Edition
-
RE: Aviation Antipatterns Thread
@dcon the Programming Confessions thread is over there, though I'm not quite sure how confession-worthy the situation has to be for this to count as a confession:
-
RE: Help Bites
Does anyone here know gRPC / protobuf?
I'm passing large-ish chunks of data between my own client and server and have implemented streaming because the overall dataset to pass is larger than the max message size (default is 4 MB, I know I could increase that but probably not to the point where it would cover all my use cases, so streaming it is in any case).
The issue I have is finding out what size of messages to send in my streaming implementation.
Searching the interwebz, I can find tons of discussions on how to set the maximum message size when starting the server, but that's not what I want. What I want is to query an existing server to find out what that maximum size is. Either my google-fu is weak, or nobody ever discusses that?
Currently I need this in two places and in one I've hard-coded a 4 MB (minus a small margin for headers etc.) limit. In the other one I've been smarter and implemented a horrible hack where I parse the string from the error message (!!!) to read the maximum size.
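For what it's worth, gRPC doesn't seem to offer a standard way to ask a server for its configured max message size, so a common workaround is to negotiate it yourself (e.g. a small hand-rolled "get config" RPC) and then chunk on the client. Here's a minimal Python sketch of just the chunking half; the 4 MB default and the header margin are assumptions, and the function name is made up:

```python
def chunk_payload(data: bytes, max_message_size: int = 4 * 1024 * 1024,
                  overhead_margin: int = 1024):
    """Split a payload into chunks that fit under a gRPC message size limit.

    The limit here is an assumption (gRPC's 4 MB default) minus a margin
    for protobuf/framing overhead; ideally you'd fetch the server's actual
    max_receive_message_length via your own config RPC before streaming.
    """
    chunk_size = max_message_size - overhead_margin
    if chunk_size <= 0:
        raise ValueError("margin leaves no room for payload")
    for offset in range(0, len(data), chunk_size):
        # Each yielded slice becomes one message in the client stream.
        yield data[offset:offset + chunk_size]
```

The server side would then reassemble the chunks from a `stream`-typed field in the .proto, which at least avoids both the hard-coded constant and the error-message parsing.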
-
RE: Driving Anti-Patterns - Necro Edition
@Carnage it makes noise. Ergo, it does something. Ergo, that something can be optimised, once an objective function has been defined.
I'm an engineer with nothing to keep my mind busy. Of course I'm pondering how to optimise stuff, which means pondering various possible objective functions and their relative fitness-for-purpose, which means pondering purposes and going off on weird mental tangents.
Such as whether a helicopter (generating its own noise) would be able to detect, by flying above a congested highway, what proportion of cars have their engine stopped. And that's probably the saner of those tangents.
-
RE: Driving Anti-Patterns - Necro Edition
@Carnage said in Driving Anti-Patterns - Necro Edition:
Fun bit is, keeping space to the cars in front actually helps solve congestion since you dissipate shockwave congestion that way so if more people kept a bit of space to the car in front, traffic flow would be greatly improved.
Exactly! Which is part of why I'm doing it, and also why I tend to prefer staying on the "slow" lane in this setup, because that's where trucks are, and trucks tend to also do this. As a result, the whole lane tends to actually move a bit faster than the "fast" lane (though in practice that depends on a lot of other things such as the layout of the intersection a couple of miles ahead (!) that causes the congestion in the first place...).
Though now that I have a car with start/stop, I'm also wondering about how to optimise that (filed under: engineers keeping themselves busy...), because this slow-but-continuous mode means I'm never stopped and thus stop & start never gets an opportunity to kick in. Then again, unless the traffic is so slow that I stay stopped for at least 10s or so, stop & start is counter-productive (of course I can always manually turn it off but where's the fun in replacing some complicated algorithm that only works half of the time, and we're not even sure of that, by a simple user-action!?? ). Then again again, maybe the slow first gear driving is actually worse (depending on the flow of traffic, again). Then again again again, there is an argument to be made that stop & start is actually useless overall. Then again...
-
RE: Driving Anti-Patterns - Necro Edition
Moron of the day:
2-lane highway, congested as usual. I'm in the right lane; I tend to drive smoothly in this kind of situation, letting gaps (of, say, 2-3 car lengths) appear in front of me and then driving slowly to fill them, rather than accelerating and braking a lot to stay glued to the car in front of me.
Moron behind me doesn't like this and tailgates me, swerving right and left as if to spot a gap to pass me. Then he suddenly decides he's had enough, and swerves onto the hard shoulder to pass me!
Now that's a bold move (and an illegal one). But it's not the first time I've seen people decide that the hard shoulder is a lane especially for them, so that doesn't make this moron a very original moron.
But then! After having passed me, he decides to go back in lane, just in front of me. And stays there for the rest of the congestion. So apparently I was somehow driving in such an insufferable manner that he had to break the law just for me.
Not quite sure if I should be flattered, or annoyed, or anything else. Though for sure I was impressed by such a display of... I don't even know what?!?
-
RE: Random but Not Dumb Videos Thread
Thanks though, the follow-up(s) clarify it. A bit.
-
RE: Driving Anti-Patterns - Necro Edition
@HardwareGeek true, of course.
But distractions will always happen, even to the most careful of drivers. OTOH, "don't run a red light" or even "don't go crazy fast" are rules a driver can follow at all times, and what's more, breaking them is systematically punished even if no accident happens!
So I would still put those two as the top items to hammer into drivers' minds. Because as this video shows, otherwise it is a ton of metal that risks being hammered into their heads.
-
RE: Driving Anti-Patterns - Necro Edition
@Carnage looks like at least half of them are running a red light or a stop sign (hint: don't do that...). And almost all of them would have been avoided by not going way too fast (sometimes that applies to the victims as well as the driver who caused the accident!).
Loss of control of vehicle (skidding in the rain etc.) and other things seem a distant 3rd to those two.
Dunno if that's a bias of the video (and of relying on dashcams).
-
RE: Today in reading the headlines...
@kazitor in one class at university, the professor gave a lecture where he announced the results of the test (I don't remember why there was a lecture after the test; maybe that professor was giving another class, or it was a partial test, whatever).
He first put up a slide with everyone's score, ranging roughly from 2 to 8 (on a scale from 0 to 20 where 20 is "perfect"), meaning everyone had failed (<10).
Then, after letting the class sigh and moan for a few seconds, he said "for [random fake reason I don't remember], I've decided to adjust the scores a bit" and showed a second slide where the scores were basically the previous ones, times 2 or 2.5 (so most of the class passed, which was the norm).
We all nervously laughed at the "joke" and at how kind the professor was to us, but rumour was that he pulled a similar trick every year, and no one, professor included, ever questioned whether, since this happened every year, the fault lay with the professor rather than with the students...
-
RE: Nope
@DogsB when something is literally orders of magnitude (10x or 100x) more expensive than another variety of the same thing, you can be almost certain that it is not that many times better.
At that stage, the price difference is purely a marker of "I can afford it" rather than quality.
-
RE: The Official Funny Stuff Thread™
@HardwareGeek said in The Official Funny Stuff Thread™:
I blame the French.
Bullies always hit on the same target.
But what I really wanted to post: I recently read that in old Norman (and apparently that sort-of continued for centuries, only disappearing when regional accents mostly disappeared a century or so ago), the French "ch" was spoken as a hard "k" rather than, uh... "ch" (or "sh").
This (partly) explains, for example, how English got "cat" where the French ended up with "chat" or "cart" from "char."
-
RE: The unofficial offical bad pun of the day thread
@Gern_Blaanston said in The unofficial offical bad pun of the day thread:
I broke up with my girlfriend Ruth, so now I'm just living my life ruthlessly.
In the children's classic "Swallows and Amazons," one of the girls who plays at being a pirate is called Ruth, but insists on everyone calling her Nancy, because "pirates are ruthless."
-
RE: I, ChatGPT
Heard this morning on the radio (couldn't find a link after a ~~very thorough~~ semi-random web search): an undercover report on the condition of women in Iran or Afghanistan, with the voices of the interviewed persons filtered through AI to make them unrecognisable.
On one hand, that seems a not-totally-stupid use of AI? On the other, journalists have been using various kinds of sound filters (or the "the words are spoken by an actor" disclaimer) to do that for ages, so why use AI? On the third hand, all those filters were always obvious and sort-of broke the flow of the report, so... why not?
-
RE: WTF is happening with Windows 10? And nothing else
@Arantor Right.
So next you're going to get angry at being forced to stick to Win10, before telling yourself that maybe you can live with it if only you can fix a few things, before falling into a deep sadness about your lost productivity, and finally concluding that, after all, Win10 isn't that bad.
Filed under: the five stages of Windows
-
RE: WTF is happening with Windows 10? And nothing else
@Arantor said in WTF is happening with Windows 10? And nothing else:
the thing is, Win10 bucked that trend in a nasty way.
Au contraire, I think it's pretty much following the trend, as you illustrate since you're now clinging to it. That you do that because of the shittiness of Win11 rather than any love of Win10 doesn't change that.
-
RE: WTF is happening with Windows 10? And nothing else
@DogsB said in WTF is happening with Windows 10? And nothing else:
@Arantor said in WTF is happening with Windows 10? And nothing else:
I would love to go back to Windows 7.
I hate to concede it but windows 10 wasn’t god awful until about 2022.
It feels like every version of Windows is first hated, then grudgingly accepted, then desperately clung to as a newer version appears.
The kind interpretation is that every version starts shittier than the previous, mature one, and gradually improves during its life cycle.
The less kind interpretation is the same but stops before "and."
-
RE: The Official Good Ideas Thread™
@Arantor said in The Official Good Ideas Thread™:
But also I still have some of my DOS era big boxes.
Same here. I probably even have a couple of boxes with floppies inside (not 5 1/4; I may be old, but not (yet) that old).
I don't have any working floppy drive anywhere in the house (though... there may be one on an old box from my wife's childhood that's buried at the back of a cupboard, but I have no idea if the machine still works, let alone the floppy drive), and even if I had it is somewhat unlikely the floppies would still work. But I still have them.
-
RE: The Official Good Ideas Thread™
@Arantor do video games still come in physical boxes?
-
RE: The Official Good Ideas Thread™
@acrow my brother has a large collection of board games and has a source of good-quality rubber bands that work perfectly. They aren't flat, either. They're longer than the usual office supplies (though he has several sizes; some are smaller), and also sturdier, but they are neither significantly more expensive nor exceptionally difficult to find; he just knows which brand to buy. I have a handful of them that he gave me at various points, just because he has tons of them (he buys them in bags of 100 or more).
So OK, it's not quite a "normal office rubber band," but it isn't "artisan hand-crafted by a monk in the Himalayas from vegan organic hevea trees harvested during a full moon, with a 6-month order delay" either.
Basically, just buy long not shitty rubber bands and you'll be fine.
@ixvedeusi said in The Official Good Ideas Thread™:
A good game box has some features that can be difficult to get with improvised boxes:
Sure, but aside from the fact that few boxes are "good," many games don't have so many components that it's a hassle to spend 2 minutes manually sorting them at the beginning of a game (if they're all mixed together).
So if you "only" threw out bad game boxes, plus boxes for games without that many components, you'd still be throwing out the vast majority of boxes (not all boxes are ridiculously over-sized, though, so for many there wouldn't be a point in throwing them away...).
-
RE: The Official Good Ideas Thread™
@acrow a rubber band works as well, you don't need a (more expensive) strap. And in many cases, the box lid is snug enough that you don't even need that.
But, as to the OP's point, even stored upright, there is so much variance in boxes' sizes that you almost always end up with weird stacking patterns. Unless you've got a large-enough collection of games that you can first pre-sort them by (rough) box size, but at that point the space needed to store the whole collection is probably more of an issue than the stacking itself.
Related: some boxes are also annoyingly full of empty space (or sometimes a plastic shell that amounts to the same). Sometimes it's because the box size is dictated by the largest element (usually the board), but I suspect in some cases it's purely to make the game appear larger (and thus worth more money?). Some people are willing to repack things into smaller boxes (or store several (related) games together) and throw away some of the boxes, but most people don't like to do that -- which is a weird psychological thing, since the box itself is (usually) completely useless and the game is just as fun if you only have the components, and yet people will keep the box even while complaining that it's annoying to do so.
-
RE: WTF Bites
@Bulb this is where I'll mention (once more) one of the main French ISPs/mobile phones network, called... "Free."
Though tbf, they are so large here that (I assume) Google and the like have optimised their search results (in France) so that "free [anything vaguely related to ISPs/mobile phones]" tends to include at least one result from their site, not too far down.
Still a pain to search for anything.
-
RE: Random Question of the Day
I knew there was at least one such logo that I was semi-familiar with, but couldn't put my finger on it. Thanks!
Now I'm pretty sure there are other(s), based on how many close-but-not-quite matches any search for e.g. "square logo" returns, including a lot of stock images (so basically "hey business owner without any graphical skills, here's simple stuff you could use"). Any other suggestions?
-
RE: Random Question of the Day
RQotD: can you point me to companies (in any industry, the larger the better) whose corporate logo is a square with 3 rounded corners?
Something a bit like this (ideally with a bit less rounded corners), except just one square, not 4 (doesn't really matter which corner is not-rounded, nor which colour the whole thing is):
-
RE: The Official Funny Stuff Thread™
@topspin said in The Official Funny Stuff Thread™:
Then again, I'm almost certain that some administrations must tag everything as "not for public release" unless it was explicitly decided that it was for public release.
So that search must return so much crap that it's basically security by obscurity.
BEST PRACTISE!
-
RE: The abhorrent 🔥 rites of C
: Why?? But why?!?
: Because the C++ Standard, paragraph 42.6.7, says so, that's why.
Filed under: whose law is it that every thread will end up discussing the C++ Standard?
-
RE: The abhorrent 🔥 rites of C
@dkf Necroing an 8-year-old thread to quote one of your own posts (from deep in the middle of the original thread), and to mention "Especially" a "hidden pitfall" of C++.
Come on, we all know you're dying to tell us a good story about `bool` in C++. Stop teasing us!
-
RE: Random Question of the Day
@BernieTheBernie said in Random Question of the Day:
The noise is loud enough to be heard miles away
-
RE: Random Question of the Day
@BernieTheBernie you know, I would actually understand that if the revving happened anywhere close to other people -- friends, potential mates, etc.
But in this case, that happens even when there isn't anyone (except neighbours ) around. So why??
-
RE: The Official Funny Stuff Thread™
@cheong the only part of the trick that they missed is that the device inside the bus should have looked like this:
Of course, in this case it would have been powered by , not deuterium/tritium, but .
-
RE: Nope, you eat it
@topspin said in Nope, you eat it:
@HardwareGeek the Johnny Cash cover is really good.
I don't know if the guy in the repair shop was named Johnny, and I wouldn't really say he ended up "covered" in cash (*) but yeah, he got some.
(the "Puns so Bad they don't even hurt" thread is nowhere, and that's a good thing)
(*) 20 euros, about what I was expecting. Amusingly he asked us first whether we were paying cash before telling us the price...
-
RE: D&D thread
My brother has a table a bit like this (with the sunken middle) though he uses it for board games, not RPGs. There is a nice wooden top that slots on top of the sunken part, so you can actually use it as a regular (dinner) table while still leaving a game fully laid out below. There are also some add-ons that clip to the side, such as drink holders (so you don't balance your drink on the edge, or put it in the middle) and tablets (if you need e.g. to write on a character sheet), but those are a bit gimmicky and not much used.
It's a bespoke thing that cost him a small fortune, but it is a really nice piece of solid wood furniture, and it sees enough use to make it worth it (for him)...
-
RE: Nope, you eat it
@Carnage said in Nope, you eat it:
Preferrably dill flavored white spirits.
You're just rephrasing my previous post.
-
RE: Nope, you eat it
@Carnage and chase it down with a shot of flavourless alcohol.
-
RE: Nope, you eat it
@Carnage said in Nope, you eat it:
And for the topic, here's a picture of lutfisk:
I actually tried that once. It was exactly like you describe it, and very much "meh" and not "nope" at all.
At least its (lack of) flavour didn't spoil the (lack of) flavour of the accompaniment (boiled potatoes).
-
RE: (are (arguments for (using lisp)) (still valid?))
@Kamil-Podlesak said in (are (arguments for (using lisp)) (still valid?)):
You actually can do it in C++, because it does have function overloading, although only at compile time.
Ergo, you can't do it. Read above (have we reached Remi's point, by which new posts can be answered by pointing to previous ones?): it's all about doing this multiple dispatch at runtime.
If you want to do it at compile time, C++ also has templates that perfectly fit the bill, except, again, it's at compile time.
As said above - this `dynamic` part should not be in the language at all, as it is basically just a .NET implementation-internal thing. IMHO it is exposed only to make C# the "ultimate" .NET language that is a superset of every other .NET language.
I guess that's a reasonable (?) explanation. At least to me, it looks much more like a hack than a feature (due to this burden-on-the-caller thing).
-
RE: Nope, you eat it
@Zerosquare said in Nope, you eat it:
This weekend's mystery was "how a nail longer than my hand ended up in my tyre, with just the head still visible?"
It was longer than the height of the tyre so it necessarily had to be at an angle. But who the hell uses nails that long, how did one end up on the road and then in my tyre?
Thankfully the hole was neat enough that it could be patched easily (and cheaply!).
-
RE: Random Question of the Day
Why do teenagers with dirt bikes feel the need to fiddle with the engine for hours, without going anywhere? (source: a neighbour a couple of houses down the street )
I understand having a noisy engine and going to play on dirt tracks. I understand going up and down the street if you can't go on tracks. I even understand playing tricks in your backyard if you can't go outside. I also understand tuning the engine if you've changed something.
What I don't understand is turning the engine on, letting it idle for a minute or so, revving it for half a minute, letting it idle again for a minute, then turning it off. And repeating that every 5 minutes. For hours. Every single fucking day you're at home.
-
RE: Things that remind you of WDTWTF members
@Zerosquare maybe he meant this?
-
RE: I, ChatGPT
@boomzilla said in I, ChatGPT:
It is certainly possible to train smaller LLMs on more focused topics and get useful things out of them. I know that's been going on at my company.
Yeah, some people in my company have also done that, IIRC. Their problem was that in our field we sometimes get data reports that are decades old and of course don't follow any sort of standard -- and a lot, if not most, of the type of data we're dealing with isn't numerical; it's more maps and human interpretations, so it's not just pulling numbers out of an old print-out.
Apparently training some sort of AI on that yielded great results in that they were able to query the mush of data and get meaningful answers, including asking a text-based question (duh) and getting pictures and maps as part of the result.
-
RE: I, ChatGPT
@boomzilla this is how I decided to read the title of your link.
That would fit well within what their T&C allows them to do.
-
RE: I, ChatGPT
@Applied-Mediocrity you know, I'm sometimes amazed that, with all the approximations and "models" we use, there are still some Nice Things coming out at the end of the chain.
I'm also amazed that people pay me to do that. I'm more worried when they ask me to justify my salary, but so far I've managed to survive that.
-
RE: I, ChatGPT
@Applied-Mediocrity said in I, ChatGPT:
AI is being shoehorned everywhere and treated as a general solution to all problems. Legal? Probably not. But in other domains it has been and absolutely will be considered airtight (with disastrous results; one with cancer patients, I believe), because people just don't care and want to trust the machine. Pray tell, Mr. Babbage, and all that.
Fair enough, but I was talking about this specific application (reading T&Cs). I agree that "AI" (which doesn't have much "I" and is about as "A" as anything that runs on a computer) is sometimes seen as a solution to everything. But the fact that this is wrong does not mean that AI can't be a solution to some things. And reading text seems to me very much what an LLM is designed to do, so this seems a rather good application of it.
With the crucial difference being that once you implement an algorithm, for all datasets it will return the only possible answer, and unexpected inputs will either crash the program or get discarded early. It may be wrong, but every step of the way can be formally verified.
Meh. You've probably not dealt with a lot of complex scientific (or, maybe I should say, "scientific") models. A lot of them are not built from first principles and equations but include a lot of empirical knowledge (aka heuristics, aka fudge factors...). So "formally verifying" them... yeah, good luck with that.
Using LLMs is basically admitting that rather than trying to describe the steps of solving complex problems, we - fuck it - will relax the conditions and accept impure results.
That is not specific to LLMs, at all. Before we all talked about LLMs the hype was about AI in general, and before that ML (machine learning) and before that Big Data. But at their core, "big data" and "machine learning" can be nothing more than doing a linear regression through data -- and pretty much any real world model includes a linear regression. Sure, LLMs go further than that. But non-LLM-models also go further than that. My point is that using a model that we know is imprecise and only calibrated through real data (as opposed to some theoretical background) is something that has been done since... forever. LLMs aren't that different in that regard.
It may be acceptable for toy problems and toy purposes, but it sets a dangerous precedent. People want to use technology. Outputs will be fed to other data systems, some of which may also be LLMs. GIGO all the way down.
Yeah, lolz... if you knew the "models" that go into my day work and what part of our society relies on it, maybe you wouldn't be so afraid of LLMs being misused
-
RE: I, ChatGPT
@topspin said in I, ChatGPT:
I interpreted @remi’s post as “this is a property you want” (unlike low false positives, which is nice to have but not important), not as “this is a property LLMs have”.
Re-reading the OP on that, my phrasing was indeed ambiguous and could be taken as meaning either. But you're right, I meant the former, not the latter.
I have no idea whether this is a property that LLMs do have, though since it's all about playing with language (rather than, say, playing Doom), I have more hope for this one than for "being able to play Doom."
-
RE: I, ChatGPT
@Arantor said in I, ChatGPT:
find things in the docs that aren't reliant on word matching?
Not sure what you mean by that? Is that "searching the doc without having to type the exact words of the section you're looking for?" Like a Google search returning terms related to your query, not necessarily those exact words? If so, I'm guessing this would be a thing that LLMs are absolutely perfectly suited to do, this being basically how they're built ("statistically, these words happen in close relation to those words"). And I would be extremely surprised if Google and Bing do not already use that kind of technique internally (maybe not when you type your query, maybe that's part of their indexing).
But is that what you meant?
-
RE: (are (arguments for (using lisp)) (still valid?))
@sockpuppet7 I'm not sure I get what you're saying. But I suspect you're a bit too hung up on the specific example given above, which can be fully and more simply expressed by `virtual` functions (and classes) in C++.
But the subthread was about the more complex idea of dynamically dispatching a call to various functions based on the types of several arguments (e.g. `Inspect(int, int)` is not the same as `Inspect(int, string)` is not the same as `Inspect(string, int)` etc., but all of those can be invoked by simply calling `Inspect(var1, var2)`). I found this wiki page yesterday that explains it pretty well.
You can't do that natively in C++. You can emulate it with various workarounds (some of them are described in the Wiki page). But they're not direct features of the language, like it apparently is in C#. Those workarounds are going to be either more (boilerplate or weird) code, or worse performance. Likely both at the same time.
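To make the idea concrete, here's a tiny sketch (in Python, just because it's compact) of the dispatch-table workaround: a single `inspect` entry point that picks an implementation based on the runtime types of both arguments. All the names are invented for illustration:

```python
# Hypothetical sketch of runtime multiple dispatch via a lookup table,
# one of the workaround families described on the wiki page.
_impls = {}

def overload(*types):
    """Register a function as the implementation for a tuple of argument types."""
    def deco(fn):
        _impls[types] = fn
        return fn
    return deco

@overload(int, int)
def _(a, b):
    return "two ints"

@overload(int, str)
def _(a, b):
    return "int and string"

def inspect(a, b):
    # Unlike C++ overload resolution, this looks at the *runtime* types
    # of both arguments, not the static types known at the call site.
    fn = _impls.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no overload for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)
```

The boilerplate lives in one place (the table and the entry point) instead of at every call site, which is roughly the burden that C#'s `dynamic` keyword pushes back onto the caller.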
But, back to the C# thing, the part that bothers me is that it is up to the caller to say "hey, this variable is `dynamic`, don't forget to dispatch based on it," which sounds wrong to me. The workaround (shown in this subthread and in the Wiki page) is to use another wrapper function around it, but that's ugly boilerplate, which makes the feature less attractive.
-
RE: I, ChatGPT
@Applied-Mediocrity said in I, ChatGPT:
How would you know they're low?
You don't, if you just look at the result on one specific set of T&Cs. But if you (or some other (group of) people you trust) feed it enough T&Cs, some of which you already know contain fishy stuff because it's been found by humans before, you can check that, on the whole, the rate of false negatives is low. That's work, yes, but that's how you validate anything in the real world.
Now you can't guarantee that a low rate on known documents will necessarily mean a low rate on a new unknown document. But that's the same for any model you're applying on any kind of data, so there isn't anything new here. And yes, I'm calling the output of the LLM a "model" of the full T&C because that's what it is: a simplification of the thing that allows you to better understand it.
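As a sketch of what that validation amounts to (names are hypothetical; this assumes you have, per document, a "flagged by the model" boolean and a "known to be fishy" label from prior human review):

```python
def false_negative_rate(flagged, known_fishy):
    """Share of documents known to contain fishy clauses that the model
    failed to flag. A plain holdout-style check, nothing LLM-specific."""
    misses = sum(1 for pred, truth in zip(flagged, known_fishy)
                 if truth and not pred)
    positives = sum(1 for truth in known_fishy if truth)
    # If there are no known-fishy documents, there is nothing to miss.
    return misses / positives if positives else 0.0
```

The point is just that "low false negatives" is something you measure on labelled documents, not something you assume about the model.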
Besides, I don't think anyone would consider this LLM analysis a full, airtight legal interpretation (if anything, because there's no such thing; it always depends on the judge etc.). But as a way to quickly flag whether those 900 pages of T&C are just the usual bullshit, or whether they are likely trying to find new and creative ways to shaft you, why not?
In my mind, false negatives are far worse than hallucinations.
I agree, which is why I said a low rate of them was required. Hallucinations are kind of like false positives: you can spot and discard them fairly easily, and as long as they don't make the output longer to read than the input, you've still gained some time overall. Though, humans being humans, they will still erode your confidence in the result, so keeping them low is likely a good thing. But less so than keeping false negatives low.