WTF Bites


  • BINNED

    @TimeBandit said in WTF Bites:

    @loopback0 said in WTF Bites:

    top on Linux does the same.

    Stop using top and install htop 🤷‍♂️

    Um? :wat:

    htop.png


  • Discourse touched me in a no-no place

    My EC2 instance is restricted so that port 22 is only open to connections from my home IP.
    Occasionally that changes and I need to log into the EC2 admin page to fix it.

    I just tried to log in to the site and...

    4a7c424f-329a-43e8-83d6-85c3cb76f571-image.png

    Good job it's not important



  • @loopback0 said in WTF Bites:

    I need to log into the EC2 admin page to fix it.

    Nice admin page you have there; it'd be a shame if you couldn't log into it.



  • @loopback0 said in WTF Bites:

    My EC2 instance is restricted so that port 22 is only open to connections from my home IP.
    Occasionally that changes and I need to log into the EC2 admin page to fix it.

    I just tried to log in to the site and...

    4a7c424f-329a-43e8-83d6-85c3cb76f571-image.png

    Good job it's not important

    If you're in us-east-1, they're basically in an outage state.


  • Discourse touched me in a no-no place

    @Benjamin-Hall said in WTF Bites:

    If you're in us-east-1, they're basically in an outage state.

    It's not, but I always figured the login was generic anyway



  • WTF of my day: So, I'm wrestling with OpenCast. We want to provide a video platform and YouTube's ads get a bit annoying. There's also the data privacy issue and such.

    While someone provides a Moodle plugin for that thing, I wasn't able to figure out how to configure access rights. There are groups, roles and rights, and I'm not sure how any of those are related to each other. The docs don't help (someone on the mailing list told me, helpfully, that if I ever were to figure it out then I could improve the docs?). The video upload interface also isn't the most user-friendly.

    Also, there's no "limit this user to 500 MB worth of video" or "limit this user to 1 video total" option anywhere.

    But they have a REST API. So, being the brave soul that I am, I set out to create my own frontend. Figured out how to upload videos so they get processed (no, the endpoint for that is neither /addVideo nor is it /upload. No, it's /ingest/addMediaPackage - took me a while to find that).
    Then my server polls Opencast at 10-second intervals for a list of videos (I have to hit three (3!) endpoints for that!), filters the list, and presents it to the users.
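
    For illustration, a minimal sketch of that kind of polling loop - with the caveat that the "api/videos" endpoint and the VideoInfo shape below are hypothetical placeholders, not the real Opencast API (which, as noted, takes three separate calls to assemble the same list):

    using System;
    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    // Hypothetical shape of a video entry; the real data comes from three endpoints.
    record VideoInfo(string Id, string Title, string Creator);

    class Poller
    {
        static async Task Main()
        {
            var client = new HttpClient { BaseAddress = new Uri("https://opencast/") };
            client.DefaultRequestHeaders.Add("Authorization", "Basic FooBarBaz=");

            while (true)
            {
                // Fetch the current list of videos (placeholder endpoint).
                var videos = await client.GetFromJsonAsync<VideoInfo[]>("api/videos")
                             ?? Array.Empty<VideoInfo>();

                // Filter the list before presenting it to the users.
                foreach (var v in videos)
                    Console.WriteLine($"{v.Id}: {v.Title} ({v.Creator})");

                await Task.Delay(TimeSpan.FromSeconds(10));
            }
        }
    }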

    Now, the upload took a bit of fiddling and, using Postman, I finally found the magic incantation. Postman is very nice and also provides a "Translate To Code" option for every request you build with it - for C# it uses RestSharp.

    This works but, video files being rather large upon occasion, I'd like to present a progress report to my users - see, my users will upload the files to my frontend, add some details, and then the frontend uploads the file to the Opencast server. The progress report for the upload to the frontend is already working (that's JavaScript, though!) but RestSharp does not provide a progress report interface.

    Which made me translate the process into using HttpClient. No worries, it's not complicated ...

    var request = new HttpRequestMessage(HttpMethod.Post, "https://opencast/ingest/addMediaPackage");
    request.Headers.Add("Authorization", "Basic FooBarBaz=");
                   
    using var form = new MultipartFormDataContent();
    using var fileContent = new ByteArrayContent(await File.ReadAllBytesAsync(path));
    fileContent.Headers.ContentType = MediaTypeHeaderValue.Parse("multipart/form-data");
    form.Add(fileContent, "BODY", Path.GetFileName(path));
    form.Add(new StringContent(file.Creator), "creator");
    form.Add(new StringContent(file.Title), "title");
    form.Add(new StringContent("presentation/source"), "flavor");
    form.Add(new StringContent(file.Description), "description");
    request.Content = form;
    
    var client = clientFactory.CreateClient();
    var response = await client.SendAsync(request);
    

    Only, it's not working. Oh, it sends the stuff to the server, but I always get a 400 - Bad Request. And after banging my head on the desk and trawling some logs, I finally find the error message:

    Error: Need to set flavor before file content
    

    I wasn't aware that the order of parameters in HTTP requests played a role.
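
    Presumably, then, the fix is just to add the flavor (and the other metadata fields) to the form before the file part, since MultipartFormDataContent writes its parts out in the order they were added. A minimal sketch of the reordered version, reusing the same request, file and path variables from above:

    // Same form as before, but with the metadata parts added ahead of the file
    // content - which is what "Need to set flavor before file content" asks for.
    using var form = new MultipartFormDataContent();
    form.Add(new StringContent(file.Creator), "creator");
    form.Add(new StringContent(file.Title), "title");
    form.Add(new StringContent("presentation/source"), "flavor");
    form.Add(new StringContent(file.Description), "description");

    using var fileContent = new ByteArrayContent(await File.ReadAllBytesAsync(path));
    fileContent.Headers.ContentType = MediaTypeHeaderValue.Parse("multipart/form-data");
    form.Add(fileContent, "BODY", Path.GetFileName(path));
    request.Content = form;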

    edit: Just so you can see what I had to deal with:

    264b2135-3d61-428c-aec2-c1fd4af3fa2d-image.png

    The description is plain bonkers and the "type" parameter is never mentioned anywhere. I have no clue what it wants for that parameter. No, "dublincore/episode" isn't it.
    Oh, and an "event" is actually a video entry. And, yes, hitting /{event-id}/metadata gets you a sample entry ... for the metadata. It never mentions the type.
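
    And since the whole reason for dropping RestSharp was upload progress: one rough way to get it out of HttpClient is to wrap the file stream and count the bytes the client reads from it while sending. A minimal sketch, with a made-up ProgressStream class (the numbers are approximate, because the handler reads a little ahead of what has actually gone over the wire):

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    // Pass-through stream that reports how many bytes have been read from it.
    class ProgressStream : Stream
    {
        private readonly Stream inner;
        private readonly Action<long, long> onProgress;
        private long sent;

        public ProgressStream(Stream inner, Action<long, long> onProgress)
        {
            this.inner = inner;
            this.onProgress = onProgress;
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            var n = inner.Read(buffer, offset, count);
            sent += n;
            onProgress(sent, inner.Length);
            return n;
        }

        public override async Task<int> ReadAsync(byte[] buffer, int offset, int count, CancellationToken ct)
        {
            var n = await inner.ReadAsync(buffer, offset, count, ct);
            sent += n;
            onProgress(sent, inner.Length);
            return n;
        }

        // Pass-through boilerplate for the rest of the Stream contract.
        public override bool CanRead => inner.CanRead;
        public override bool CanSeek => inner.CanSeek;
        public override bool CanWrite => false;
        public override long Length => inner.Length;
        public override long Position { get => inner.Position; set => inner.Position = value; }
        public override void Flush() => inner.Flush();
        public override long Seek(long offset, SeekOrigin origin) => inner.Seek(offset, origin);
        public override void SetLength(long value) => throw new NotSupportedException();
        public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    }

    // Usage: swap the ByteArrayContent for a StreamContent over the wrapper.
    using var fs = File.OpenRead(path);
    using var progressStream = new ProgressStream(fs, (sentBytes, total) =>
        Console.WriteLine($"uploaded {sentBytes * 100 / total}%"));
    using var fileContent = new StreamContent(progressStream);
    form.Add(fileContent, "BODY", Path.GetFileName(path));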


  • Grade A Premium Asshole

    @Benjamin-Hall said in WTF Bites:

    And it's kinda hard to plug in a testing device or manipulate it when it's all the way over there in a remote server room. Native mobile app development really needs local access. Especially Xcode.

    Fair, but that is also largely an Apple issue. VDI for OSX was never a thing, because Apple would have sued the absolute hell out of anyone who ever attempted it. They have to ~~get their pound of flesh~~ only allow their stuff to run on hardware they sell.

    But to be fair, USB passthrough to a virtual desktop instance is pretty seamless. But I have never tried it for mobile app development so :mlp_shrug:

    @DogsB said in WTF Bites:

    I fucking wish. Worked for two companies using this technology. Whatever their setup was it wasn't an overpowered server somewhere. Both companies were cheapskates and provisioned the equivalent of a core2duo with spinning rust and 16 gigs of ram for their developers. Ram was plenty but the rest of it was awful.

    IOPS were always the name of the game in VDI. When I got into it, SSDs were not even a thing yet. If you weren't running on 15K SAS drives you would quickly run out of IOPS and regularly hit resource contention.

    @DogsB said in WTF Bites:

    They must have provisioned them somewhere in Australia too, the network latency was so bad.
    It's a great concept though. Solve the latency problem

    Yeah, well, you can't change the speed of light. We only went after local clients so that was never an issue. I got deep enough into it that I had started researching data centers in Colorado (middle of the country, equal and acceptable latency to both coasts).

    Then MS changed their EULA to basically make all of it unprofitable if you followed the EULA, and then Citrix bought and gutted the product that we were building on (Kaviza). It functioned substantially similarly to how Proxmox does now. You did not need to run a SAN, as it would replicate your golden images and such and provide ingress, all in a highly available manner, so long as you had enough machines to allow for failures. It was a pretty great product. Then Citrix bought it and killed it in order to prevent it from stealing sales from XenDesktop (which is a tremendously shit product).


  • Grade A Premium Asshole

    @cvi said in WTF Bites:

    Had similar experience. Even if the setup starts out overpowered on paper, it rarely stays that way. Before the machine(s) materialize, specs get slightly downgraded. Then there's a few more users. And N years down the line, the previously overpowered setup is now regularly outperformed by a smartphone (but upgrading it is a big investment).

    This is where Kaviza really did things right. It would scale horizontally almost perfectly and did load balancing extremely well. If down the line you added on more users you did not need to replace a server, you could just add another and scale out instead of up.

    Citrix shit all over the entire world when they killed Kaviza. Well, they shit all over the world on the regular. But that is one time that sticks in my craw.



  • @Polygeekery said in WTF Bites:

    Yeah, well, you can't change the speed of light.

    But it's OK. Some users aren't the brightest bulbs, either.


  • Notification Spam Recipient

    @Polygeekery said in WTF Bites:

    But no one would buy because they would rather field a shit ton of laptops that were used like portable desktops, because raisins. The companies that did buy are still running their systems and love them and I wish I never sold the fucking things because we still have to support these one-offs and I am virtually the only person who knows anything about them.

    I'm still laughing my ass off (internally) at some guy that has billed (so far) over 20 hours to install a bog-standard RDS server. No, not even the full VDI, the older-school remote desktop sessions on a shared server mode.

    I'm like "but that's two clicks and a short wizard, what the fuck could he possibly be fucking up?" but I'm not courageous enough to ask...


  • Considered Harmful

    @Watson said in WTF Bites:

    @Zecc said in WTF Bites:

    @Polygeekery said in WTF Bites:

    I can't get past how your avatar looks like Commie Goatse.

    Filed under: things I did not need to be unable to unsee.

    You mean that wasn't the intention?! :mlp_shock:

    Nooo, of course not! Pure coincidence. The gold ring on the left hand is just standard Soviet iconography.


  • Banned

    @El_Heffe said in WTF Bites:

    @PleegWat said in WTF Bites:

    The metric makes sense when you are considering single threads.

    Nope. Still not 300 percent of anything.

    Let's say your program is running on 4 cores simultaneously. In total, how much CPU time did your program get in 1 second? The answer is 4 seconds - or 400%. The same principle applies when you use the time command - it's not unusual for user time to be much larger than real time.
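
    A quick sketch of the same arithmetic, if you want to see it on your own machine (assumes at least 4 idle cores; it busy-spins four threads for about a second, then compares wall-clock time against process CPU time):

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Threading.Tasks;

    class CpuTimeDemo
    {
        static void Main()
        {
            var wall = Stopwatch.StartNew();

            // Four threads, each spinning for ~1 second of wall-clock time.
            Task.WaitAll(Enumerable.Range(0, 4).Select(_ => Task.Run(() =>
            {
                var sw = Stopwatch.StartNew();
                while (sw.ElapsedMilliseconds < 1000) { /* busy-wait */ }
            })).ToArray());

            wall.Stop();
            var cpu = Process.GetCurrentProcess().TotalProcessorTime;

            // Expect roughly: wall ≈ 1 s, CPU ≈ 4 s - i.e. the "400%".
            Console.WriteLine($"wall: {wall.Elapsed.TotalSeconds:F1} s, CPU: {cpu.TotalSeconds:F1} s");
        }
    }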


  • BINNED

    @Polygeekery said in WTF Bites:

    I use a laptop like they were designed to be used

    You just explained why they aren't ...



  • @Rhywden said in WTF Bites:

    it's /ingest/addMediaPackage -

    and when you want to delete the video some when later, it's /digest/excrementMediaPackage.



  • @Tsaukpaetra said in WTF Bites:

    what the fuck could he possibly be fucking up?

    He knows that the guy who will pay the bill does not understand that.


  • Considered Harmful

    @Gąska The total [computation] capacity of any system cannot exceed 100%. You can add together single core time all you like, but the more you do, the more wrong this number becomes.

    A multi-core CPU does not have total_cap = bogomips x cores.
    A multi-socket system does not have total_cap = bogomips x cores x sockets.

    • Cores are not equal to each other (SMT, big.LITTLE, boost, power saving, etc.) at any given time
    • Caches and memory are highly shared between execution units
    • The scheduler can switch threads around as it sees fit
    • Tasks may become I/O bound at any given point

    When I look at the total CPU usage, I want to know roughly how much capacity I have remaining for other tasks.
    If I have more intimate knowledge of how a given program employs multi-threading and how a given scheduler and hardware work, then and only then can I begin to estimate how a single specific workload adds up across all resources.

    Tell me you can't precisely measure the usage for any significant amount of time, and I'll believe you.
    Don't tell me that umpteenth cores somehow make umpteenth percent. That's exactly the :bikeshed: that makes the rest of us roll our eyes at Aye-Tee for conjuring up arcane twatflappery for the 🍶 of it.

    It's measuring the length of a snake in parrots.

    /rant



  • @Applied-Mediocrity said in WTF Bites:

    When I look at the total CPU usage, I want to know roughly how much capacity I have remaining for other tasks.

    It might make sense. It might even make more sense than what the system monitoring tools actually do. But all system monitoring tools everywhere measure CPU utilization in percent of a single core. You may call it dumb, but that's about all you can do about it.


  • Discourse touched me in a no-no place

    @Applied-Mediocrity said in WTF Bites:

    When I look at the total CPU usage, I want to know roughly how much capacity I have remaining for other tasks.

    That's one use-case. Another is accounting (i.e., tracking usage for billing purposes, though that's often done on allocations not usage). However, when I'm looking at system monitoring, I find it's usually more informative to look at which processes are using lots of memory and/or doing lots of I/O. CPU is typically not the bottleneck; they're so damn fast nowadays…


  • Considered Harmful

    @Bulb I is calling it dumb. There is no 400% of anything. You can measure 4 times 100% and that's as accurate as it's going to get, but you can't add things together like that in a heterogeneous system that shares resources.

    Let's suppose I have a somewhat cache-bound program, but I have this newfangled Alder Lake. 2 bulldozer cores and 2 celeron cores. Now, assume ideal gas. That is, all core speeds are equal and constant.
    2 threads of my program spend 60% of CPU time on the bulldozer cores and 2 threads spend 90% on the others, because they're still kind of shit.

    Is 300% an accurate reading? Is 300 / 4 = 75% more or less accurate?


  • Notification Spam Recipient

    @Polygeekery said in WTF Bites:

    Yeah, well, you can't change the speed of light.

    You're not allowed to beat sysops for making bone-headed decisions either. They started moving away from VDIs for developers but decided that bulk-discount Core i3 laptops were the way to go. The last I heard some of the devs were refusing to give up the VDIs. For a company that could lose a couple million in the couch cushions and not notice, they really cheaped out in some ways.

    It's been the same almost everywhere I've gone. Convincing a company to buy hardware so that I spend less time watching a build and more time typing appears next to impossible.



  • @Rhywden So, turns out that the endpoint I was using has the issue of not automatically assigning access rights. This endpoint simply has no provisions for that. And, while there is an endpoint to assign access rights, it's a) marked as "deprecated" and b) can't be used while the video is being processed.

    It doesn't matter anyway, because assigning access rights through that endpoint after the fact doesn't seem to do anything - the GUI shows the proper rights (e.g. "Anonymous Access") but you still have to log in to view the video.

    Reworked the whole thing to use another endpoint which does assign access rights.


  • Banned

    @Applied-Mediocrity said in WTF Bites:

    @Gąska The total [computation] capacity of any system cannot exceed 100%. You can add together single core time all you like, but the more you do, the more wrong this number becomes.

    CPU time is a technical term. It means what it means. A 4-thread task will take 4 seconds CPU time per second (assuming no sleeps/waits). This is just facts and there's no point in arguing. Now, is there a point in calculating the ratio of CPU time to real time? Yes, it's a fairly useful statistic. Is there a point to expressing that ratio in percents? I don't see why not*. Should that ratio be called CPU utilization? Now, this is finally something you can :wharrgarbl: about and would have a point. But the point isn't "the 400% number is pure bullshit", but rather "they're using the wrong term for it".

    * Personally I'd rather we as a civilization got rid of percents altogether and expressed everything in simple fractions, but oh well.


  • Considered Harmful

    @Gąska said in WTF Bites:

    * Personally I'd rather we as a civilization got rid of percents altogether and expressed everything in ~~simple fractions~~ football fields, but oh well.


  • Banned

    @LaoC I have 400% CPU utilization. That's as much as 4 CPUs!


  • Considered Harmful

    @Gąska said in WTF Bites:

    @LaoC I have 400% CPU utilization. That's as much as 4 CPUs!

    On a desktop i7 (45x42.5 mm) that's about 1.07E-6 football fields.


  • BINNED

    @LaoC said in WTF Bites:

    1.07E-6 football fields.

    soccer or US Brain Smash style?


  • Considered Harmful

    @Gąska Nobody disputes what CPU time is or isn't, you tłuk. Did I mention CPU time anywhere at all? What I am disputing is that any given reading of CPU time of one core is comparable to that of another core at any given time, and therefore can be added together to get a total.
    This "useful statistic" is misleading at best. Just like it adding together memory usage does not constitute correct overall memory usage. CPU time readings only make sense at the level they are measured - that is, for a single core, no more, no less.


  • Banned

    @Luhmann when they translated the Blood Bowl board game into Polish, they renamed it Troll Football. I think that game is very fitting here and in many other situations.


  • Banned

    @Applied-Mediocrity said in WTF Bites:

    What I am disputing is that any given reading of CPU time of one core is comparable to that of another core at any given time, and therefore can be added together to get a total.

    Oh so suddenly you're a huge Alder Lake fan? Because when all cores are identical, then they are interchangeable. And yes I am aware of per-core frequency changes. It doesn't matter all that much. Performance metrics never were an exact science.

    This "useful statistic" is misleading at best.

    Something can be both misleading and useful! Imagine that! Ever taken a look at COVID statistics?

    Just like adding together memory usage does not constitute correct overall memory usage.

    It does on Windows. Almost. Definitely much better than Linux, where the system allocates 5x the total memory available on a regular basis.


  • BINNED

    @Gąska and when you ask it "I can haz more memory, plz" it says "sure, here you go", then kills you when you touch it. 🎺


  • Banned

    @topspin said in WTF Bites:

    kills you

    Only if you're lucky! Usually it kills something else, at random.

    :stonks: STABILITY


  • BINNED

    @Gąska of a different user! 🏆


  • Considered Harmful

    @Gąska said in WTF Bites:

    Oh so suddenly you're a huge Alder Lake fan?

    Oh jeebus. Alder Lake was just an egregious example. Recent :airquotes:leaks:airquotes: suggest AMD actually had the idea first, but Zen+++ had a shorter time to market (Wccftech, I think it was, deleted the article... maybe it was complete twatpull after all :surprised-pikachu:). Anyway, ARM has had big.LITTLE for ages.

    Because when all cores are identical, then they are interchangeable.

    But that's the thing! The cores are not identical. They haven't been for a long time. Zen chiplets have 3x worse cache latency depending on which CCX they are on. 5900X distributes heat better, because two CCX -> more die area. This may or may not mean better boost clocks - for the binned part, at any rate. Goddamn Bulldozer had that shared FPU thing. Two hyperthreaded cores are not the same as two physical cores.

    And yes I am aware of per-core frequency changes. It doesn't matter all that much.

    Ah yes. Earth is 6000 years old. The evidence is wrong.

    Well, there it is then. 🐍 = 🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜🦜.


  • Banned

    How did we get from bashing journalists to creationism? I think it's time for me to sign out of this discussion.


  • Discourse touched me in a no-no place

    @Applied-Mediocrity said in WTF Bites:

    ARM has had big.LITTLE for ages.

    It makes a ton of sense for (larger) embedded systems, where you often have some workloads that need very fast responses (often doing hardware interrupt handling) and others that don't (background processing). Sharing those all on the same set of cores is one way to do it, but manually allocating work to cores is more common as there's not usually a need to support arbitrary programs.


  • Considered Harmful

    @dkf I believe you. I remain skeptical about Intel's recent attempts to bring it to desktop. It appears to make sense there, too. There's the active task such as a vidya, photo/video/3D editing or science program, or the active browser tab, with a couple of threads crunching. Then there's a zillion background services. Except it appears to me that big cores have been able to power down well enough, what with all the C-states. It's just that if all of them were big cores and were to power up to the highest state at once, it would exceed any reasonable power budget. Intelligent throttling means more chip complexity on top of that. Plus, maybe lawsuits over why it doesn't work at the advertised speeds. Now it's stated clearly that all cores are not equal. If it doesn't work as well as any system of 16 real cores, tough luck. Plus it saves die space. Engineers get a bonus. The rest of them get to worry about the scheduling complexity of arbitrary programs.


  • Considered Harmful

    @Gąska said in WTF Bites:

    How did we get from bashing journalists to creationism? I think it's time for me to sign out of this discussion.

    I don't know, did we? I was bashing you.

    But... as you wish. You wouldn't attend the topic at hand, anyway.


  • Banned

    @Applied-Mediocrity on a modern x86 CPU, about 80% of total power consumption goes to branch prediction, out-of-order execution and register renaming. Downsizing those components can bring huge power savings, way beyond regular power management features. At least in theory.


  • Considered Harmful

    @Gąska Doesn't downsizing these components result in worse IPC, offsetting the power efficiency gains? It still guzzles power like a motherfucker. Maybe laptops will show better results then.



  • @Gąska said in WTF Bites:

    * Personally I'd rather we as a civilization got rid of percents altogether and expressed everything in simple fractions, but oh well.

    Good idea. Since we're used to dealing with numbers in base 10, a fraction is simpler when it's in that base. But expressing stuff as x/10 still lacks a bit of flexibility, especially for small-ish fractions.

    So I suggest we get rid of percents and instead express everything as simple fractions of x/100. :appropriate_emoji:


  • Considered Harmful

    @Gąska said in WTF Bites:

    @topspin said in WTF Bites:

    kills you

    Only if you're lucky! Usually it kills something else, at random.

    :stonks: STABILITY

    I limit, ulimit, we all limit for ill-conceived puns!


  • Banned

    @Applied-Mediocrity said in WTF Bites:

    @Gąska Doesn't downsizing these components result in worse IPC, offsetting the power efficiency gains?

    Yes, no, maybe, depends on memory access patterns. While AMD64 assembly only defines 16 general-purpose registers, the actual CPUs have north of 200 physical registers backing those 16. And just having more registers increases the power required to use any particular register due to added circuit switches. And the only reason those 200 registers exist is branch prediction and out-of-order execution - and the more advanced the two, the more registers are needed to hold the what-if computations.

    Those mechanisms only increase IPC when there are memory stalls. Writing programs in a certain way can avoid having stalls almost entirely for some tasks. Of course an average program is very unlikely to be written that way because :kneeling_warthog:. On the other hand, an average program is light on computations and IO-bound so it spends most of the time waiting for signals, so IPC is kinda irrelevant here - running that very light code on very high-performance hardware is a waste anyway.
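
    (If you want to see what those memory stalls look like in practice, here's a minimal sketch - not a benchmark - comparing a prefetcher-friendly sequential scan with random-index accesses over the same array. Exact numbers vary wildly by machine, but the random version is typically several times slower despite doing the same number of additions.)

    using System;
    using System.Diagnostics;

    class StallDemo
    {
        static void Main()
        {
            const int N = 1 << 25;                      // ~32M ints, ~128 MB per array
            var data = new int[N];
            var idx = new int[N];
            var rng = new Random(42);
            for (int i = 0; i < N; i++) { data[i] = i; idx[i] = rng.Next(N); }

            long sum = 0;
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < N; i++) sum += data[i];      // sequential: prefetcher-friendly
            Console.WriteLine($"sequential: {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            for (int i = 0; i < N; i++) sum += data[idx[i]]; // random: mostly memory stalls
            Console.WriteLine($"random:     {sw.ElapsedMilliseconds} ms (sum={sum})");
        }
    }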

    In conclusion - ¯\_(ツ)_/¯. The best you can do is trust the engineers at Intel to know what they're doing. And before you question their competence - remember it works both ways; they were the ones who had the very brillant idea to introduce OoOE to x86 in the first place.


  • Banned

    @remi said in WTF Bites:

    @Gąska said in WTF Bites:

    * Personally I'd rather we as a civilization got rid of percents altogether and expressed everything in simple fractions, but oh well.

    Good idea. Since we're used to dealing with numbers in base 10, a fraction is simpler when it's in that base. But expressing stuff as x/10 still lacks a bit of flexibility, especially for small-ish fractions.

    So I suggest we get rid of percents and instead express everything as simple fractions of x/100. :appropriate_emoji:

    You jest but I would see it as an improvement. The problem is that most people don't know what percents are. They don't have this basic intuition that when they see a liquor is 40%, it means it's 2/5ths ethanol and 3/5ths water (plus trace amounts of flavor substances).


  • BINNED

    @Gąska
    and how is 40/100 going to improve that?



  • @Gąska said in WTF Bites:

    when they see a liquor is 40%, it means it's 2/5ths ethanol and 3/5ths water

    That's complicated, though, because 20 ml ethanol + 30 ml water != 50 ml ethanol/water mix. Of course, 20 g ethanol + 30 g water == 50 g ethanol/water mix, but it's not the same mix you get measuring by volume. So I don't blame people for being confused. Whoever decided alcohol content should be ABV should be kicked in the nuts.


  • Considered Harmful

    @Applied-Mediocrity said in WTF Bites:

    Because when all cores are identical, then they are interchangeable.

    But that's the thing! The cores are not identical. They haven't been for a long time. Zen chiplets have 3x worse cache latency depending on which CCX they are on. 5900X distributes heat better, because two CCX -> more die area. This may or may not mean better boost clocks - for the binned part, at any rate. Goddamn Bulldozer had that shared FPU thing. Two hyperthreaded cores are not the same as two physical cores.

    So what's your suggestion? That tools like top find out whether cores are different or HT is in use and replace the CPU usage display with some large, friendly letters that say "don't panic, this would only mislead you so we've helpfully removed it from your field of view"? Or that they only display separate graphs per (virtual) core along with the current clock frequency averaged over the last video frame and detailed description of which of these share what functional units so you can quickly glance at it for half a day and get a technically accurate picture?


  • Banned

    @Luhmann there's something about the human brain that makes "forty hundredths" far easier to comprehend than "forty percent". It's dumb and illogical but it is what it is.


  • Discourse touched me in a no-no place

    @Gąska said in WTF Bites:

    They don't have this basic intuition that when they see a liquor is 40%, it means it's 2/5ths ethanol and 3/5ths water (plus trace amounts of flavor substances).

    @HardwareGeek said in WTF Bites:

    That's complicated, though, because 20 ml ethanol + 30 ml water != 50 ml ethanol/water mix. Of course, 20 g ethanol + 30 g water == 50 g ethanol/water mix, but it's not the same mix you get measuring by volume. So I don't blame people for being confused. Whoever decided alcohol content should be ABV should be kicked in the nuts.

    Sure but as a measure of which one has you vomiting in a gutter the quickest, it works fine. The average person doesn't care about the details beyond that and they have no need to.


  • Grade A Premium Asshole

    @DogsB said in WTF Bites:

    @Polygeekery said in WTF Bites:

    Yeah, well, you can't change the speed of light.

    You're not allowed to beat sysops for making bone-headed decisions either. They started moving away from VDIs for developers but decided that bulk-discount Core i3 laptops were the way to go. The last I heard some of the devs were refusing to give up the VDIs. For a company that could lose a couple million in the couch cushions and not notice, they really cheaped out in some ways.

    It's been the same almost everywhere I've gone. Convincing a company to buy hardware so that I spend less time watching a build and more time typing appears next to impossible.

    I would say that in addition to lots of workplaces making silly decisions in not adopting VDI, and Citrix killing the best technology out there, the next nail in the coffin would have been shitty consultants underprovisioning hardware to implement it in order to maximize their profits on the deployment.

    The idea behind VDI was that you share resources among the desktops, that all users would be unlikely to be using significant amounts of resources at any particular time and that unutilized resources could be used by others to maximize performance.

    Your ideal VDI hardware is a good compromise between overall core count and performance per core. You need enough cores to support the baseline needs of your users, and performant enough cores to quickly clear the pipeline when users do more intensive tasks. When I was called in to consult on deployments that others had done and that were not working as desired, I saw either deployments with lots of low-performance cores or deployments with high-clock-speed CPUs but fewer cores.

    Now keep in mind that I was heavily involved in this in 2009-2012. Around that timeframe, in Xeons you could get more cores or a higher clock speed, but if you wanted both you were paying through the nose. So you would look for a good tradeoff in the price-performance area. Look for the point on the price-performance curve where price went through the roof, and right at the base of that curve get as many cores as you could at the highest clock speed. Then order your chassis with as many sockets as possible even if you were not going to use them. If you calculated your need at two 8-core Xeons, go ahead and get the four-socket chassis. If later on you discovered your estimates were off, you could easily add processors and not have to replace all the hardware. If necessary, things could be scaled up at a lower incremental cost versus throwing everything in the bin and starting over.

    Beyond that it is just a matter of RAM and then IOPS on the disks.

    If you did all of that, and installed a properly engineered system, there is no reason that even most developers would not be happy running on VDI. When it came time to compile there would be excess resources sitting there to handle that load, compile very quickly, and get you back to ~~copying and pasting from StackOverflow~~ typing out your artisanal handcrafted artistry in the form of code.

    Now, if only the most competent people were the ones to get the bid.

    In the years that followed, "everything is a web app" and "every device is a phone" became the trend. Lots of businesses have started using Slack, so even quad-core desktops with 8 GB of RAM choke while trying to run a stupid chat program. Leaving tabs of forum software open on your phone can drain the battery in less than an hour. The entire software world has become retarded and gets worse and worse every single year that we continue.



  • @Gąska said in WTF Bites:

    @Luhmann there's something about the human brain that makes "forty hundredths" far easier to comprehend than "forty percent". It's dumb and illogical but it is what it is.

    Might be time to just start translating “percent” - it literally means “per hundred”.

