WTF Bites
-
@Applied-Mediocrity said in WTF Bites:
It still guzzles current like hell. [translated from Russian]
Damn, that random data corruption bug is still not fixed...
-
@Luhmann there's something about the human brain that makes "forty hundredths" far easier to comprehend than "forty percent". It's dumb and illogical, but it is what it is.
Might be time to just start translating “percent”—it literally means “in a hundred”.
I'm all for that! But people, especially people in finance, are dead set on using % everywhere they can.
-
get a technically accurate picture?
That. The overall number only remedies your misplaced curiosity. By all means, keep it there if you want. Just like progress bars.
On the other hand, if you have actual concerns about CPU usage, detailed picture is the right starting point.
-
@Polygeekery said in WTF Bites:
VDI
That reminds me of our ongoing battle with one of our clients' IT departments. The particular application we're talking about is a heavy desktop app that we ship as a VMWare Thinstall package to satisfy some strange, long-standing client requirements; we essentially build the required runtime into the app. The runtime is a turd: it's stupid about data handling (forcing you to make big selects), handles locks terribly, and handles disconnects even worse. Hey ... it replaces a mainframe app, so there is an upside.
This rather large customer was/is using VDIs for certain stuff ... no biggie, it just works, even when, for previously unforeseeable reasons, workforces started shifting from campus to home. The heavy lifting is all done inside the VDI ... tons of badly queried data probably gets sloshed around, but the VDI makes it fast for the end user because only the screen actually goes over the wire.
Now they've taken away these people's VDIs and handed them laptops. Now we are fielding complaints about how terribly slow the app has become because of their own choices ...
Running this fucker on VDI/Remote Desktop/Citrix has been our advice for decades when running this app over slow connections.
-
1.07E-6 football fields.
soccer or US Brain Smash style?
I am pretty sure that many manufacturers would actually use the table football ones. Some of them might even declare that (in very fine print).
-
@Applied-Mediocrity said in WTF Bites:
get a technically accurate picture?
That. The overall number only remedies your misplaced curiosity. By all means, keep it there if you want. Just like progress bars.
On the other hand, if you have actual concerns about CPU usage, detailed picture is the right starting point.
The thing is, nobody ever has actual concerns about CPU usage. While "does this thing monopolize the entire core just for itself, never yielding?" and "wait, how many cores does it monopolize!?" are questions I actually do happen to ask from time to time.
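For what it's worth, those questions are easy enough to answer with a quick script. A rough sketch using psutil (third-party library, assumed installed; the 90% threshold is just something I pulled out of the air), not a claim about how Task Manager does it:

```python
import psutil

# Prime per-process CPU counters so the next read returns a real delta.
procs = list(psutil.process_iter(["name"]))
for p in procs:
    try:
        p.cpu_percent(interval=None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

# Per-core load over a one-second window; a core sitting near 100% is being hogged.
per_core = psutil.cpu_percent(interval=1.0, percpu=True)
print("per-core %:", per_core)

# Per-process usage over roughly the same window. On a multi-core box this can
# exceed 100: ~100 means one full core, ~400 means four cores, and so on.
for p in procs:
    try:
        usage = p.cpu_percent(interval=None)
        name = p.info["name"]
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
    if usage >= 90:
        print(f"{name} (pid {p.pid}): ~{usage:.0f}% of a core")
```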
-
Best you can do is trust the engineers at Intel
I do trust the engineers. I don't trust that it's the engineers who get to decide which of all the evaluated approaches will eventually end up being produced, packaged, and sold.
-
@Applied-Mediocrity but they do. CPU architecture is one of the few areas where executive meddling isn't happening. They may have arbitrary, asspulled performance goals that have nothing to do with reality, but they don't get any orders from above on how to achieve them.
-
In conclusion - ¯\_(ツ)_/¯. The best you can do is trust the engineers at Intel to know what they're doing. And before you question their competence - remember it works both ways; they were the ones who had the very brillant idea to introduce OoOE to x86 in the first place.
It's things like drivers that really need very low latency; when a hardware buffer fills up you've got a very short window to empty it out or the hardware either stalls or starts overwriting the stuff that's already in there (which you get depends on configuration). Neither of those are particularly great options; data loss is visible to user code, and stalling spreads the trouble to other systems, which tends to be even worse.
So you keep drivers running on their own core if you can. It doesn't really need to be all that fast — it'll probably delegate most copies to DMAs — but it needs to be very available with predictable performance. Most application code is a lot less latency sensitive, yet more demanding of raw CPU power for at least some of the time; that can go on higher-speed cores that get their power regulated up and down a lot more.
(In my current project we don't do this, but that's because the deployed hardware only has one speed of core available. OTOH, we do have the OS on its own core; application code interacts with it via messaging, and the OS does most of the funky hardware handling. Our next-gen hardware — we've got test chips and test boards — can have different core speeds set up. We've not got the cooling to run everything at full.)
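(For the curious: on a plain Linux box you can fake the same split in userspace with CPU affinity. A rough sketch, Linux-only, with the core numbers and the toy workload invented purely for illustration; it has nothing to do with the actual hardware described above:

```python
import os
import time

LATENCY_CORE = {0}       # reserve core 0 for the "always available" loop
BULK_CORES = {1, 2, 3}   # everything else goes here (made-up core numbers)

pid = os.fork()
if pid == 0:
    # Child: stand-in for the latency-sensitive driver-ish loop.
    os.sched_setaffinity(0, LATENCY_CORE)
    for _ in range(1000):
        time.sleep(0.001)  # pretend to poll and drain a hardware buffer
    os._exit(0)
else:
    # Parent: bulk, throughput-oriented work, kept off the reserved core.
    os.sched_setaffinity(0, BULK_CORES)
    total = sum(i * i for i in range(10_000_000))  # pretend heavy lifting
    os.waitpid(pid, 0)
    print("bulk result:", total)
```

The real thing obviously does the reservation at the OS/scheduler level, not per process like this.)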
-
The thing is, nobody ever has actual concerns about CPU usage.
Except for detecting cryptominers. A few CPU-minutes of usage mostly isn't a big deal, but lots of CPU-hours points to an issue, as that starts to cost real money in electricity.
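If you wanted to automate that check, here's a crude sketch with psutil (assumed installed; the one-CPU-hour threshold is arbitrary, not a recommendation):

```python
import psutil

THRESHOLD_SECONDS = 3600  # one accumulated CPU-hour, an arbitrary cut-off

# Flag processes that have burned a suspicious amount of accumulated CPU time.
for proc in psutil.process_iter(["name", "cpu_times"]):
    t = proc.info["cpu_times"]
    if t is None:  # no permission to read this one
        continue
    burned = t.user + t.system
    if burned > THRESHOLD_SECONDS:
        print(f"{proc.info['name']} (pid {proc.pid}): {burned / 3600:.1f} CPU-hours")
```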
-
@Applied-Mediocrity said in WTF Bites:
get a technically accurate picture?
That. The overall number only remedies your misplaced curiosity. By all means, keep it there if you want. Just like progress bars.
On the other hand, if you have actual concerns about CPU usage, detailed picture is the right starting point.
The thing is, nobody ever has actual concerns about CPU usage. While "does this thing monopolize the entire core just for itself, never yielding?" and "wait, how many cores does it monopolize!?" are questions I actually do happen to ask from time to time.
The thing is indeed. Wouldn't it be nice if there was this detailed per-core picture to answer those questions in an instant? Oh, there is!
Wait, Twitcher 3 runs better with HT off, right?
We've not got the cooling to run everything at full
Neither does Intel's latest (or three)
-
Best you can do is trust the engineers at Intel to know what they're doing.
Um, you know who works at Intel, right?
Also, I used to work there, too, a long time ago, and no, I don't trust me, either.
-
@HardwareGeek are you doing this on purpose?
-
@Gąska I don't know; maybe. What does this mean?
-
Running this fucker on VDI/Remote Desktop/Citrix has been our advise for decades when running this app over slow connections.
As recently as a decade ago there were areas in our downtown that had no reasonable internet connections. In the middle of a city of over a million people, right in the heart of downtown, there were pockets that had no better options for internet than shitty DSL.
As bad as DSL is for supporting an office of any size, it is even more shit for running a VPN. I would have to run the numbers to see the exact figures, but it would not take that many concurrent VPN connections to saturate a ~2010 DSL connection's bandwidth just with tunnel encapsulation traffic, let alone the data moving across it.
So that was one area that we focused on. If you don't have to move the files over the internet, and instead only serve the presentation layer and do all the heavy lifting on the LAN, even a 3 Mbps DSL connection can support several people simultaneously.
Besides that market, those who use LOB apps that do not play well with VPN latency, such as apps developed by idiots that are built on frameworks built for and by idiots, were our other market. There are still lots of those out there.
-
@HardwareGeek let me quote what I said a few weeks ago when you last did it.
@Gąska said in John Wickonsin memes:
WHAT'S WITH PEOPLE CUTTING OFF THE QUOTE EXACTLY AT THE POINT BEFORE THE PART THAT CHANGES THE WHOLE MEANING‽‽‽‽‽‽
-
@Gąska The rest of the quote was irrelevant to poking a little fun at our resident (known) Intel employee (actually contractor, I think, but that's also irrelevant to the joke).
-
@HardwareGeek anyway, I'd appreciate if you avoided selectively quoting my posts in a way that implies I trust Intel engineers.
-
@Polygeekery said in WTF Bites:
apps developed by idiots that are built on frameworks built for and by idiots were our other market. There are still lots of those out there.
-
@Applied-Mediocrity Nah, he thinks (and posts) incessantly about "contacting" but never actually does.
-
@HardwareGeek said in WTF Bites:
@Applied-Mediocrity Nah, he thinks (and posts) incessantly about "contacting" but never actually does.
Oi! I'm a real contractor now. I expensed WinRAR and everything. I'm just waiting for it to show up in the post to take a photo of it.
-
* Personally I'd rather we as a civilization got rid of percents altogether and expressed everything in simple fractions, but oh well.
Good idea. Since we're used to dealing with numbers in base 10, a fraction is simpler when it's in that base. But expressing stuff as x/10 still lacks a bit of flexibility, especially for small-ish fractions.
So I suggest we get rid of percents and instead express everything as simple fractions of x/100. :appropriate_emoji:
100 is so arbitrary. Let's take a universal constant like... e or π. Or ħ (then we'd have two constants for the price of one!)
-
Oi! I'm a real contractor now.
Congratulations, I guess, but I wasn't talking about you.
-
@HardwareGeek said in WTF Bites:
@Applied-Mediocrity Nah, he thinks (and posts) incessantly about "contacting" but never actually does.
Some contactors are destined to always remain open
-
@Applied-Mediocrity said in WTF Bites:
Some contactors are destined to always remain open
...it just never clicks
-
@HardwareGeek said in WTF Bites:
Oi! I'm a real contractor now.
I wasn't talking about you.
-
Or ħ (then we'd have two constants for the price of one!)
Unless you're using it as a variable name, as discussed .
-
I trust Intel engineers.
Ok ok, no need to shout your love from the rooftops.
-
@Applied-Mediocrity said in WTF Bites:
@Gąska Doesn't downsizing these components result in worse IPC, offsetting the power efficiency gains?
Yes, no, maybe, I don't know. Can you repeat the question?
MTFY
remember it works both ways; they were the ones who had the very brillant idea to introduce OoOE to x86 in the first place.
But they didn't bother to actually secure it against side channels, so for the last 3+ years we've had one hardware-based security vulnerability after another.
-
The metric makes sense when you are considering single threads.
Nope. Still not 300 percent of anything.
Huh? The metric seems perfectly cromulent to me. What's the big deal here? Do you have a newsletter where you rant about negative numbers, too?
-
@Applied-Mediocrity said in WTF Bites:
The total [computation] capacity of any system cannot exceed 100%.
You're making unwarranted assumptions about what you're measuring against.
When I look at the total CPU usage, I want to know roughly how much capacity I have remaining for other tasks.
Now do storage bytes vs memory bytes and then .
-
I'm sure the @boomzilla's of the world will say this is just business rights, even though it's illegal under Georgia law.
-
@boomzilla said in WTF Bites:
@dangeRuss said in WTF Bites:
I'm sure
The surest way to know that something is false.
So you're saying the hotel should've baked the cake?
-
@dangeRuss , guys!
-
@boomzilla said in WTF Bites:
@Applied-Mediocrity said in WTF Bites:
The total [computation] capacity of any system cannot exceed 100%.
You're making unwarranted assumptions about what you're measuring against.
When I look at the total CPU usage, I want to know roughly how much capacity I have remaining for other tasks.
Now do storage bytes vs memory bytes and then .
Ask me again tomorrow. I'm at home now. And this vidya here won't play itself.
-
@dangeRuss , guys!
yea @boomzilla, no throwing shade in the main forum, even though I may have started it.
-
@dangeRuss said in WTF Bites:
@dangeRuss , guys!
yea @boomzilla, no throwing shade in the main forum, even though I may have started it.
Your grasp of forum rules is as good as your grasp of anything else.
-
@boomzilla Well, you'd surely know!
-
The metric makes sense when you are considering single threads.
Nope. Still not 300 percent of anything.
Let's say your program is running on 4 cores simultaneously. In total, how much CPU time did your program get in 1 second? The answer is 4 seconds - or 400%.
Nope. Doesn't work that way. It doesn't matter if you have 1 core or 1 Million. If one second has passed then you have used one second of CPU time.
You might want to brag that your CPU used "1 Million Percent CPU Time" by multiplying cores x time, because it sounds more impressive in your advertising or CPU/Penis Measuring Contest, but it is meaningless and irrelevant bullshit.
One second elapsed is one second.
-
Nope. Doesn't work that way. It doesn't matter if you have 1 core or 1 Million. If one second has passed then you have used one second of CPU time.
So "man-hours" is also worthless. If five thousand people take one year to build something, it took one year to build, not "1000 man-years". I guess that's why they're not called man-hours any more.
1 second ≠ 1 CPU-second
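If anyone wants to see the distinction on their own machine, here's a throwaway sketch (the worker count and loop size are arbitrary; on a multi-core box the reported CPU time comes out well above the wall-clock time):

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    # Pointless arithmetic, just to keep a core busy.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    start = time.monotonic()
    with ProcessPoolExecutor(max_workers=4) as pool:
        list(pool.map(burn, [20_000_000] * 4))
    wall = time.monotonic() - start

    # os.times() folds in the CPU time of reaped child processes (the workers).
    # The children_* fields only fill in on Unix; on Windows they stay zero.
    t = os.times()
    cpu = t.user + t.system + t.children_user + t.children_system

    print(f"wall clock: {wall:.1f} s")
    print(f"CPU time:   {cpu:.1f} s  (~{100 * cpu / wall:.0f}% in top's units)")
```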
-
@Polygeekery said in WTF Bites:
I am considering upgrading my Windows laptop before the end of the year. I go looking at the Dell XPS series machines. I can choose between 8GB of RAM (which is too little for my needs) with a 1920x1200 display that purports to have ~14 hour battery life, or 16GB of RAM that only comes with a 4K display and ~8 hours of battery life.
Why can't I get 16GB of RAM with the HD+ display and nearly 50% more battery life? Why is 4K even a thing on such a small screen? I will trade resolution for battery life every single time.
More related fuckery, but even worse:
A client has a couple of people who are basically never in the office. They work at the state house every single day, and long hours. One of them is the same guy I mentioned in the garage today. I don't even know why they have offices.
As a result, they require long battery life. So we went looking for the best bang for the buck on thin and light laptops with long battery life. Me being me, I have a MacBook Air I use for that, but that will not suit their needs. But I also have a USB-C 65W battery bank I carry with me for the frequent times I forget to charge my stuff. So we arrived at a Dell Latitude 3520, which reviews show as having ~13 hours of battery life while simulating web browsing. Should be good to go.
But that laptop still uses the shitty 4.5mm Dell laptop charger. Why the fuck isn't this thing using USB-C charging? Well, some research shows that it actually can charge from USB-C. So why the fuck does it have a shitty Dell charging port?
While discussing all of this I had mentioned the battery bank I keep for emergencies, like constantly forgetting to charge my shit. So they decided to get one for each of them to go with the laptop. It will do USB-C charging, so that should work, right? Right?
Fucking wrong. No fucking way in hell can we figure out how to get this hunk of shit to charge from a battery bank. It will do USB-C charging. The battery banks are USB-C PD 65W units. So this shit should work, right?
That would make entirely too much sense. It doesn't fucking work at all. If you plug a USB-C laptop charger into it, the laptop charges. If you plug a USB-C battery bank into the laptop.....the fucking laptop charges the battery bank.
What kind of batshit fucking retarded bullshit is this? There is an option in the UEFI/BIOS/Whatever to disable USB-C charging while the machine is off. But as yet we cannot find any configuration option that will allow the laptop to charge from the USB-C power bank. It will only drain the battery more, and faster.
If anyone has any ideas I am all ears. Well, I am mostly penis, but I will listen to your ideas.
-
@Polygeekery Seems to me like your battery bank fucked up the negotiation - it makes sense for the laptop to be able to switch into power supply mode (instead of being a power sink) so that it can power something a bit more power hungry on its own.
And since it does charge just fine with a normal power supply, my guess would be the power bank not getting the memo. I guess it talks to the laptop, gets a "Yes, I can provide power" response and then says: "Fine, I'll take all the power you can give me!"
-
Seems to me like your battery bank fucked up the negotiation
Which would make sense, except it was tested with our battery banks which work fine with other laptops.
-
@Polygeekery Then it's probably the Dell going Herp-Derp. Which would also explain why it has the vestigial Dell charging port.
-
@Applied-Mediocrity said in WTF Bites:
And this vidya here won't play itself.
-
@Rhywden it's a Dell issue. If you plug in a wall plug charger it charges as you would expect. But I'd bet that anything that can be powered would be powered. But in this case "USB-C charging" actually means "USB-C charging, maybe, in some configurations".