Linux locks and a kinder, gentler Linus



  • @LaoC said in Linux locks and a kinder, gentler Linus:

    creating a new compiler process for every compilation unit which is a very fast operation on Linux but expensive on Windows

    That raises 2 questions:

    1. Why is spawning processes more expensive on Windows? :sideways_owl:
    2. If it's that expensive, then why does Visual Studio spawn processes left, right and center? Like they didn't want their shit to work fast. :wtf:


  • @boomzilla said in Linux locks and a kinder, gentler Linus:

    I'm just saying that for normal desktop usage (web browsing, office apps, listening to music, watching video, etc) my experience is reversed. There are often weird and inexplicable hiccups and pauses in doing stuff on Windows. And this is on a six core machine!

    Same here.

    Under load (especially heavy IO), Windows just seems to suck. Doing a full build will cause video/audio to stutter (regardless of whether that's with MSVC or GCC or whatever). Linux handles those situations much better.

    Not to mention that MSVC is slow as hell to begin with. Though, apparently, they've been improving perf recently.



  • @acrow said in Linux locks and a kinder, gentler Linus:

    @LaoC said in Linux locks and a kinder, gentler Linus:

    creating a new compiler process for every compilation unit which is a very fast operation on Linux but expensive on Windows

    That raises 2 questions:

    1. Why is spawning processes more expensive on Windows? :sideways_owl:
    2. If it's that expensive, then why does Visual Studio spawn processes left, right and center? Like they didn't want their shit to work fast. :wtf:

    Re 2. I don't know if it's still like this, but it used to be the case that a build from VS would launch one instance of CL.EXE for each different "blob" of command line options. If all source files could be compiled the same way (same command-line #defines, -I include directory directives, etc. etc. etc. with just the filenames of input and output changing), VS would launch one instance of CL, not one per file.



  • @acrow said in Linux locks and a kinder, gentler Linus:

    Like they didn't want their shit to work fast

    Well they don't want anything to get in the way of their wheely chair foam sword fights obviously.

    (image)



  • @LaoC said in Linux locks and a kinder, gentler Linus:

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    (we say that we can hear 20Hz-20kHz but it's just a convenient rule of thumb; in the real world the figure is closer to 50Hz-15kHz, and gets worse as you get older, I can't hear CRT whine anymore for instance 😢 ).

    I'll be there soon. It used to be a really annoying tone for me but only the faintest hint of it is left.

    Of course, most PC screens (ever since a very long time ago) had CRT whine significantly higher than 15 kHz anyway. The 15K thing is the horizontal refresh frequency of a 60-fields-per-second NTSC or 50-fields-per-second PAL/SECAM CRT. At 800x600 interlaced using 60 fields per second, that frequency becomes 18 kHz (60x300). At 800x600 progressive using 60 frames per second, it's 36 kHz, well outside human hearing.
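
    Spelling out the arithmetic behind those figures (horizontal frequency is just lines per field times field rate; line counts rounded):

    f_H = N_{\text{lines per field}} \times f_{\text{field}}
    \text{NTSC: } 262.5 \times 60 \approx 15.7\,\text{kHz};\quad \text{PAL: } 312.5 \times 50 \approx 15.6\,\text{kHz}
    800\times600\ \text{interlaced at 60 fields/s: } 300 \times 60 = 18\,\text{kHz}
    800\times600\ \text{progressive at 60 frames/s: } 600 \times 60 = 36\,\text{kHz}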

    And LCDs don't have CRT whine at all...



  • @Vixen said in Linux locks and a kinder, gentler Linus:

    @acrow said in Linux locks and a kinder, gentler Linus:

    Like they didn't want their shit to work fast

    Well they don't want anything to get in the way of their wheely chair foam sword fights obviously.

    (image)

    OI! THAT WAS A FUNNY JOKE!

    (image)


  • Resident Tankie ☭

    @cvi @boomzilla tbh there is one scenario where Linux is probably worse than Windows. I get it on an old laptop I have. And that's a low RAM scenario. When the OS starts thrashing the HDD you can sometimes just resign yourself to pushing that reset button, because while the HDD furiously clicks away, even X stops refreshing and you lose control of the UI. So you try switching to a console tty. A minute or two if you're lucky. You login. Another minute or two. ps aux or maybe sudo iotop, wait another minute or two, identify which process is pinning the PC at 99.9% IO, kill it, wait and then maybe you regain control of the computer. In my experience that doesn't happen on Windows. In these scenarios, Windows just chugs along extremely slowly but you do get to kill the offending process much faster.


  • Resident Tankie ☭

    @Steve_The_Cynic said in Linux locks and a kinder, gentler Linus:

    @LaoC said in Linux locks and a kinder, gentler Linus:

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    (we say that we can hear 20Hz-20kHz but it's just a convenient rule of thumb; in the real world the figure is closer to 50Hz-15kHz, and gets worse as you get older, I can't hear CRT whine anymore for instance 😢 ).

    I'll be there soon. It used to be a really annoying tone for me but only the faintest hint of it is left.

    Of course, most PC screens (ever since a very long time ago) had CRT whine significantly higher than 15 kHz anyway. The 15K thing is the horizontal refresh frequency of a 60-fields-per-second NTSC or 50-fields-per-second PAL/SECAM CRT. At 800x600 interlaced using 60 fields per second, that frequency becomes 18 kHz (60x300). At 800x600 progressive using 60 frames per second, it's 36 kHz, well outside human hearing.

    And LCDs don't have CRT whine at all...

    My uncle still has a CRT TV. He doesn't mind the whine because, of course, to him there is none. And he doesn't mind the low resolution either, both because CRT TVs are much better at low resolutions and because he doesn't care. At this year's family dinner, I stopped minding the whine because... there was none to me either.



  • @Gąska said in Linux locks and a kinder, gentler Linus:

    Up to 29. And that's not just theory. There are many games that I can run on my mid-low-range computer at 40-50FPS. V-sync would only allow for 30FPS. And a 59-60FPS variation becomes a 30-60FPS variation, and that looks even worse than constant 30FPS.

    Ah, you're counting worst-case instantaneous FPS. Okay.

    I did say marginally, didn't I? And yes, it probably won't matter - but hey, there are no downsides. It's free lunch! Why not take it?

    Up to you. My eyes have suffered irreparable damage from a bunch of shooters in the 00's which looked like ass if you turned off vsync, and my old game programming texts stressed the importance of vsync as you only wanted to update the video buffer when the raster was not active, or else you end up with half of one frame and half of the other. Of course, that was back in the days of CRTs and directly writing to 0xA0000-0xAFFFF, so things may have changed a bit.

    First-person shooters run at way more than 20-30FPS. Counter-Strike: GO, for example, uses 64 ticks per second for matchmaking, and most tournaments use 128. But that's the server tick rate - it's possible the client sends commands more often than that.

    It depends. There are a lot of tradeoffs, especially if the game has a lot of replicated data. More on that in a moment.

    Tickrate for games like first-person shooters can vary from 60 ticks per second for games like Quake or Counter-Strike: Global Offensive in competitive mode to 30 ticks per second for games like Battlefield 4 and Titanfall.

    And I believe the default for the Source engine was 20 for quite a while. 60Hz seems like a waste on a Counter-Strike game; the far twitchier Half-Life mods I played in college, with players flying through the air, might have benefited from it far more.
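
    For anyone unfamiliar with what a "tick rate" actually is: the server runs a fixed-step simulation loop, decoupled from anyone's render rate. A minimal sketch (C with POSIX clock_nanosleep, and a made-up 64 Hz rate to match the matchmaking figure; real engines are obviously far more involved):

    #include <stdio.h>
    #include <time.h>

    #define TICK_RATE 64                          /* simulation ticks per second */

    static void simulate_one_tick(long tick) {    /* stand-in for the real game-state update */
        if (tick % TICK_RATE == 0)
            printf("simulated %ld ticks\n", tick);
    }

    int main(void) {
        const long tick_ns = 1000000000L / TICK_RATE;   /* ~15.6 ms per tick at 64 Hz */
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (long tick = 0; tick < 5 * TICK_RATE; tick++) {
            simulate_one_tick(tick);
            /* Sleep until the next tick boundary; clients may render far more often than this. */
            next.tv_nsec += tick_ns;
            if (next.tv_nsec >= 1000000000L) { next.tv_sec += 1; next.tv_nsec -= 1000000000L; }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return 0;
    }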

    MMOs are a special kind of video game in that they can actually afford abysmally low performance. The gameplay is so slow that it doesn't matter whether you have a ping of 30 or 300. RuneScape used to have a full second of delay for every user action - and it didn't matter at all, because an MMO is not the kind of game where this matters. Whatever you say about MMOs most likely doesn't apply to any other genre except turn-based strategies (the only genre where lag tolerance is even higher).

    MMO doesn't necessarily mean "game where player clicks to select target and pushes nuke or heal button every two seconds" anymore. I've played MMOs with 50+ man aerial dogfights and the like. Even the "modern" FPS standard of 100-man maps can create a lot of performance considerations if there's a lot of replicated data.


  • Considered Harmful

    @acrow said in Linux locks and a kinder, gentler Linus:

    @LaoC said in Linux locks and a kinder, gentler Linus:

    creating a new compiler process for every compilation unit which is a very fast operation on Linux but expensive on Windows

    That raises 2 questions:

    1. Why is spawning processes more expensive on Windows? :sideways_owl:

    From what I read, the NT kernel isn't all that slow; it even supports some kind of fork() functionality. Win32, however, needs dozens of backwards-compatibility checks and supposedly even fucks around with the registry while creating a process.
    Unixes (most of them anyway; supposedly Sun was so fond of threads because they couldn't get fork() to perform well) are extremely optimized for the kind of workload you get from shell scripts, which involves many tiny transient processes.

    2. If it's that expensive, then why does Visual Studio spawn processes left, right and center? Like they didn't want their shit to work fast. :wtf:

    No fucking clue, sorry.


  • ♿ (Parody)

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    @cvi @boomzilla tbh there is one scenario where Linux is probably worse than Windows. I get it on an old laptop I have. And that's a low RAM scenario. When the OS starts thrashing the HDD you can sometimes just resign yourself to pushing that reset button, because while the HDD furiously clicks away, even X stops refreshing and you lose control of the UI. So you try switching to a console tty. A minute or two if you're lucky. You login. Another minute or two. ps aux or maybe sudo iotop, wait another minute or two, identify which process is pinning the PC at 99.9% IO, kill it, wait and then maybe you regain control of the computer. In my experience that doesn't happen on Windows. In these scenarios, Windows just chugs along extremely slowly but you do get to kill the offending process much faster.

    Could be. I haven't really been in a low RAM scenario (on the desktop) for at least 10 years, and that was on Windows, not Linux. My experience is where what I'm doing is way below the threshold for both RAM and CPU.


  • Banned

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    @Gąska still, I don't find it implausible that Linux is at least a bit faster in server and developer workstation loads.

    I agree it's not implausible. But any actual improvements coming from the system itself being better suited for a particular use case are going to be vastly overshadowed by the programs themselves adapting to the quirks of the target system. Like the spinlocks in the OP - they're used in games for Windows because spinlocks on Windows are fast. And any game using spinlocks will be slower on Linux not because Linux is worse for gaming, but because it has different quirks than Windows, and it just so happens that the spinlock quirk is one of the differences. Conversely, many programs that target Linux or other Unix-like systems fork very often, and it's a well-known fact that Windows sucks at forking. It doesn't make Windows worse for any particular use case - it's just that any program that implements those use cases using a lot of forks will have significantly lower performance on Windows. In most cases there's no real benefit to forking over an alternative design, but there's no real drawback either, so the developer picks whatever happens to work best on their target machine (usually their own PC), or just whatever they feel like that day (which is often influenced by what's commonly used for their target machine).
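
    For context, the kind of userspace spinlock being argued about in the OP looks roughly like this (a minimal C11 sketch, not any particular game's code):

    #include <stdatomic.h>

    typedef struct {
        atomic_flag locked;    /* initialize with ATOMIC_FLAG_INIT */
    } spinlock;

    static void spin_lock(spinlock *l) {
        /* Busy-wait until we win the flag. This is the quirk in question: it burns CPU
           instead of telling the scheduler we're blocked, and the Windows and Linux
           schedulers react to that very differently. */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;
    }

    static void spin_unlock(spinlock *l) {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }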


  • Resident Tankie ☭

    @Gąska I acknowledge that. In fact I did talk about a possible negligible advantage. I'm the kind of person that doesn't see much point in overclocking, let alone squeezing a 5% speed advantage.



  • @admiral_p said in Linux locks and a kinder, gentler Linus:

    @Gąska I acknowledge that. In fact I did talk about a possible negligible advantage. I'm the kind of person that doesn't see much point in overclocking, let alone squeezing a 5% speed advantage.

    I buy overclockable chips, not because I want to overclock out of the gate, but because once the chip starts underperforming for being old, I can generally extend its life at least a year by clocking it up. Once it's that old I don't care about longevity, and even stability isn't super critical as long as it's "stable enough".

    Then, when it either gets so unstable that it's no longer "stable enough" or dies entirely, I'll replace the chip and mainboard.

    That's the theory anyway. Generally it doesn't work out that way, as something comes up that makes me upgrade before that point for other reasons.



  • @levicki said in Linux locks and a kinder, gentler Linus:

    If you are dumb enough as a native English speaker to have to Google [DAW], you have already failed.

    It's not exactly a common everyday acronym for most people



  • @Groaner said in Linux locks and a kinder, gentler Linus:

    Any scheduler worth its salt is going to have to deal with the fact that for about the past 30 years, pretty much every game development textbook and pretty much every game/game engine has a loop that looks like this:

    while(!done)
    {
     CollectPlayerAndAIInputs();
     ApplyAforementionedInputsToGameObjects();
     UpdateWorldStateAndGameSubsystemsEverythingFromNetcodeToPhysicsToUI();
     RenderFrameAndWaitTheRemainderOf16MillisecondsUnlessUserDisabledVSyncWhyTheHellWouldYouDisableVSyncAnywayTheVisualTearingLooksLikeAss();
    }
    

    This is my copyrighted game API, please remove it from your post



  • @acrow said in Linux locks and a kinder, gentler Linus:

    That raises 2 questions:

    1. Why is spawning processes more expensive on Windows? :sideways_owl:
    2. If it's that expensive, then why does Visual Studio spawn processes left, right and center? Like they didn't want their shit to work fast. :wtf:

    1. Although the kernel supports it, Windows' "personality" doesn't support fork() without exec(). Every process has to be built and launched starting from empty. On Linux, creating a new process via fork() reuses pretty much all of the parent process (copy-on-write memory) and only takes the penalty of starting from scratch when you exec(), which GCC etc don't call. (A rough sketch of that cheap path follows below.)
    2. Same reason as modern browsers and now even Explorer itself: keeping things partitioned, partially so you can have a more compatible interface (use whatever C runtime you want! no COM support, no problem!) but mostly so that when a process explodes in flames the shrapnel doesn't take down what the user perceives as the app.
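
    Re 1., a minimal C sketch of that cheap path (fork() with copy-on-write and no exec(); the child just prints and exits, purely for illustration):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();        /* child gets a copy-on-write view of the parent: cheap */
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            /* The child already has everything the parent had mapped; nothing is rebuilt.
               Only an exec*() call here would pay the "start from an empty process" cost. */
            printf("child %d is running\n", (int)getpid());
            _exit(0);
        }
        waitpid(pid, NULL, 0);     /* Windows has no fork()/exec() split: CreateProcess
                                      always builds a fresh image, hence the higher cost. */
        return 0;
    }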


  • @Captain said in Linux locks and a kinder, gentler Linus:

    @xaade Take a step back: you are moving through space-time at the speed of light. A constant speed. If you are 'standing still' in space, you are racing into the future at speed 'c'.

    Now think about what happens if your speed in space-time is constant but you're moving in space. By the Pythagorean theorem, you have a hypotenuse of length c, and a "height" of length v. You will be "stealing" from your "temporal speed" to get "spatial speed".

    Yes. That part makes sense.

    I was trying to figure out the simultaneous-event thing - whether things happened at different moments just because light reached you at different moments. Turns out, it stacks: they both actually happen at different moments, and you observe them later too.
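
    For what it's worth, one way to make the "Pythagorean" picture above precise is just to rearrange the usual time-dilation formula:

    \left(c\,\frac{d\tau}{dt}\right)^{2} + v^{2} = c^{2}
    \quad\Longleftrightarrow\quad
    \frac{d\tau}{dt} = \sqrt{1 - v^{2}/c^{2}}

    where dτ/dt is the "speed through time" (how fast your own clock ticks per unit of coordinate time) and v is the speed through space.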



  • @Groaner said in Linux locks and a kinder, gentler Linus:

    In terms of decreasing input lag, by going from 60 to 120 FPS, you save eight milliseconds. When your reaction time is around 200.

    Eh

    The rest of your post (and general point) is OK, but higher FPS isn't about optimizing for human reaction time to a random stimulus. 30 fps is more than enough for that. Where higher framerates shine is things like tracking moving targets. The smoother that is, the easier it is for your brain to deal with it.
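
    For reference, the arithmetic behind that "eight milliseconds":

    1/60\ \text{s} \approx 16.7\ \text{ms},\qquad 1/120\ \text{s} \approx 8.3\ \text{ms},\qquad \Delta \approx 8.3\ \text{ms}\ (\text{vs. a } {\sim}200\ \text{ms reaction time})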


  • Resident Tankie ☭

    @xaade what I don't get is, if a particle is travelling at a speed close to c (disregarding photons because of their dual nature), say a neutrino for instance: is space-time extremely stretched for it? If so, how can we really tell its speed?



  • @levicki said in Linux locks and a kinder, gentler Linus:

    You can't play Wah without feedback, same for tempo-based delays.

    (image)


  • Discourse touched me in a no-no place

    @hungrier said in Linux locks and a kinder, gentler Linus:

    higher FPS isn't about optimizing for human reaction time to a random stimulus. 30 fps is more than enough for that.

    The higher the FPS, the lower the delay before the thing happens on the screen. You see it slightly faster, so you can react to it slightly faster.



  • @loopback0 Right, but I don't think the difference between 200 and 208 ms is as big of a deal as the improvement in smoothness


  • Discourse touched me in a no-no place

    @hungrier said in Linux locks and a kinder, gentler Linus:

    @loopback0 Right, but I don't think the difference between 200 and 208 ms is as big of a deal as the improvement in smoothness

    It would be for a competitive gamer.



  • @loopback0 It could be, and their reaction time could also be lower due to lots of training, caffeine, young age, etc. But for your average Linus Tech Tips co-host who is not a competitive gamer, the big improvement seems to be from reduced input lag and smoother tracking



  • @admiral_p said in Linux locks and a kinder, gentler Linus:

    @xaade what I don't get is, if a particle is travelling at a speed close to c (disregarding photons because of their dual nature), say a neutrino for instance: is space-time extremely stretched for it? If so, how can we really tell its speed?

    (Relative) speed is actually one of the things we can measure really really well. Because we can measure energy (when it collides).

    And yes, if you had an object at 0.999999999999...c, its "internal" clock and an external clock would run very differently. We actually can test this--muons (half-life of ~1 µs) are created in the upper atmosphere by cosmic ray events. They shouldn't be detectable at ground level--they don't live long enough to make it there. But they are. Because for them, the atmosphere is much shorter (length contraction); alternatively their "decay clock" runs slower (time dilation from our perspective).

    But we both agree on their (relative) speed.
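
    To put rough numbers on the muon example (using the ~1 µs figure above and a textbook-ish speed of 0.998c; both are illustrative, not measurements):

    \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} \approx 15.8 \quad\text{for } v = 0.998c
    \text{lifetime in our frame: } \approx 15.8 \times 1\,\mu\text{s} \approx 16\,\mu\text{s}
    \text{distance covered: } \approx 0.998 \times (3\times10^{8}\,\text{m/s}) \times 16\,\mu\text{s} \approx 4.8\,\text{km, versus } \approx 0.3\,\text{km without dilation}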



  • @admiral_p Actually, do go back to the photon. Time is "stopped" for the photon -- it literally moves spatially at c, so its "time velocity" must be zero.

    That might help clarify what a neutrino is doing. Time is going very very slowly for the neutrino. It feels a billionth of a second or whatever while it "actually" travels billions of light years in billions of years (from our perspective). From the neutrino's perspective, space is extremely compressed (it sees it "all" in its "normal" lifetime). From our perspective, the neutrino's time is stretched -- a particle that "should" last a microsecond before decaying (at rest) ended up traveling the universe.

    But a big part of relativity is the relativity of frames of reference. The "actually" up there is in quotes because there is no "actually" about it. Both frames are equally valid descriptions of the motion of a neutrino (as vague as my description was).


  • Notification Spam Recipient

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    I can't hear CRT whine anymore for instance

    I can't either, mostly because it's been years since I've been in the presence of one...


  • Resident Tankie ☭

    @Captain the way I understood it for light is that it makes little sense to talk about space when it comes to light because it is also a wave. And waves technically have no locality (if waves appear to travel, that is only because of interference, or something like that).


  • Resident Tankie ☭

    @Tsaukpaetra still, low resolutions look worse on LCD screens than on CRT TVs (not only because they interpolate, but also because you have actual pixels). I remember recording on VHS tapes in LP mode (half the resolution, double the time). It looked bad, but not as bad as you'd expect. For a large part of the '00s, I stuck to CRT monitors because older games looked like crap on LCDs.



  • @admiral_p EM waves definitely propagate. So yeah, we can "tell" that a beam of light hit point A and then hit point B a year later (in principle).

    Understanding what "propagation" looks like in the photon's reference frame is harder, because in that frame the wave isn't a physical process playing out. It's a static object.


  • Notification Spam Recipient

    @levicki said in Linux locks and a kinder, gentler Linus:

    That is also not what I do because I can raytrace using GPU. In other words if your audio and video ever stutter in Windows you are doing something wrong or your hardware is not up to the task.

    Yes yes, my Windows computer has been running for 40 days straight and I know I'm doing it wrong, obviously, because that's the only reason I can think of that 20 GB of phantom processes are eating RAM and causing literal draw issues at randomly regular intervals.

    It's rather fascinating to see which programs are using legacy draw calls and which ones are DWM accelerated...



  • @LaoC said in Linux locks and a kinder, gentler Linus:

    unless you're cooling the whole setup (including the microphone and thus the band) in liquid nitrogen.

    There are bands that would be considerably improved by that. ❄


  • Resident Tankie ☭

    @Captain heh, I never really understood that part of physics. Like, for example, I used to think that Heisenberg's uncertainty principle was due to the fact that you have to beam a photon against a particle to identify its position, and by merely doing so you are changing its energy status, thus pushing it onto a different orbit. (A high school physics book provided this as a reason.) I have since been told that the actual point is that it makes no sense to talk about position for something that is also a wave, as waves are basically static objects which exist across the whole of space at any given time. 🤷‍♂️


  • ♿ (Parody)

    @levicki said in Linux locks and a kinder, gentler Linus:

    No idea what kind of computers you use to run Windows, but on my PC the only time I have to wait for something I clicked to open is when I use the CPU for raytracing.

    That's probably more of a disk issue. No, I'm talking about after something is open and I'm using it. Opening an email, typing, etc.



  • @hungrier said in Linux locks and a kinder, gentler Linus:

    The rest of your post (and general point) is OK, but higher FPS isn't about optimizing for human reaction time to a random stimulus. 30 fps is more than enough for that. Where higher framerates shine is things like tracking moving targets. The smoother that is, the easier it is for your brain to deal with it.

    There's this study that kinda supports what you're saying. It's not the best study ever, but the take-away seems to be that their test subjects (all 25 of them) preferred higher frame rates (60Hz over 30Hz) over increased resolutions - specifically 720p at 60Hz over 1440p at 30Hz.



  • @admiral_p said in Linux locks and a kinder, gentler Linus:

    @cvi @boomzilla tbh there is one scenario where Linux is probably worse than Windows. I get it on an old laptop I have. And that's a low RAM scenario.

    Yeah, fair. I run without swap on Linux because of this - 99% of the time I have more than enough physical RAM and running out of it usually means that I fucked up something somewhere in development (so getting the process murdered by the OS is actually desirable).

    Firefox is the only other process that occasionally gets OOMed. I'm mostly okay with that. I'm considering putting it into its own cgroup so that it gets murdered much earlier (i.e., whenever it tries to commit more than 3-4 GB of RAM).
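
    If I do, the mechanics on a cgroup-v2 system would look roughly like this (a C sketch; the group name, the 4G limit, and the pid are made up, and it needs permission to write under /sys/fs/cgroup):

    #include <stdio.h>
    #include <sys/stat.h>

    /* Sketch: create a cgroup, cap its memory, then move a pid (e.g. Firefox's) into it.
       Paths follow the cgroup v2 unified hierarchy. */
    static int write_file(const char *path, const char *value) {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fprintf(f, "%s\n", value);
        return fclose(f);
    }

    int main(void) {
        const char *group = "/sys/fs/cgroup/firefox-limit";      /* hypothetical group name */
        if (mkdir(group, 0755) != 0) perror("mkdir");             /* may already exist */
        write_file("/sys/fs/cgroup/firefox-limit/memory.max", "4G");
        write_file("/sys/fs/cgroup/firefox-limit/cgroup.procs", "12345");   /* 12345 stands in for the real pid */
        return 0;
    }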



    @admiral_p: yeah, I mean the test-measurement explanation is fine for a high schooler. People that young want to imagine a mechanism. Us olds can get by with the abstract concept. :-)

    The real fact is that the (quantum) wave functions we're talking about are "actually" probability distributions, with means and variances and so on. And you can take a Fourier transform of the wave form. And there are facts about integration on L^2 that tell us how much information we can get out of that measurement (i.e., because the Fourier transform is an isometry on L^2). The Heisenberg uncertainty principle from physics is way more closely related to... say... Shannon's information entropy than to light waves explicitly.
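
    Concretely, the statement being gestured at (with the 2π-in-the-exponent Fourier convention and ‖f‖₂ = 1, so that |f|² is a probability density; σ denotes the standard deviation of the corresponding distribution):

    \sigma_x\,\sigma_p \ \ge\ \frac{\hbar}{2}
    \qquad\text{and}\qquad
    \sigma(f)\,\sigma(\hat f) \ \ge\ \frac{1}{4\pi}\quad\text{for } f \in L^{2},\ \|f\|_{2} = 1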



  • @Captain said in Linux locks and a kinder, gentler Linus:

    And there are facts about integration on L^2 that tell us how much information we can get out of that measurement (i.e., because the Fourier transform is an isometry on L^2).

    https://pics.me.me/thumb_mmhmm-yeah-know-some-ofthese-words-mmhmm-yeah-i-know-53324888.png


  • Considered Harmful

    @Gąska said in Linux locks and a kinder, gentler Linus:

    turn-based strategies (the only genre where lag tolerance is even higher).

    @error_bot games have an extremely high tolerance for lag.



  • @hungrier said in Linux locks and a kinder, gentler Linus:

    @levicki said in Linux locks and a kinder, gentler Linus:

    If you are dumb enough as a native English speaker to have to Google [DAW], you have already failed.

    It's not exactly a common everyday acronym for most people

    Audio engineers use computers; most computer people don't do audio engineering. I knew the acronym only because I was trying to find some basic MIDI software a while ago, but all I found were DAWs that were massive overkill for what I needed. The point being that I had to look up what DAW meant when I saw it a year or so ago.



  • @acrow said in Linux locks and a kinder, gentler Linus:

    And, yes, most people run their compilation multi-threaded. But in gcc's case, the threading is done by the IDE/make. The compiler executable itself is single-thread (AFAIK).

    It used to(1) be that gcc would (mostly for memory limitations, I suspect) divide the different phases of compilation (preprocess, parse, optimisation, codegen, assembly, in approximately that order) into separate processes linked by temp files, and the -pipe option would run those processes in "parallel", linked by pipes. (Of course, these days it would be really parallel, if it still does that.)

    (1) Twenty-five years ago, admittedly, but that's the sort of thing that gets baked hard into a system.



  • @error said in Linux locks and a kinder, gentler Linus:

    @error_bot games have an extremely high tolerance for lag.

    I hope you have an extremely high tolerance for @error_bot's lag, it's been an hour and we're still waiting 🍹


  • Notification Spam Recipient

    @levicki said in Linux locks and a kinder, gentler Linus:

    @Tsaukpaetra said in Linux locks and a kinder, gentler Linus:

    my Windows computer has been running for 40 days straight

    That's it, I am reporting you to Microsoft for cruelty towards operating systems.

    Oh, just for cruelty? 😥 That's what pushed you over the edge? Damn son, your priorities are more wack than mine!


  • Notification Spam Recipient

    @levicki said in Linux locks and a kinder, gentler Linus:

    having to look it up.

    @HardwareGeek said in Linux locks and a kinder, gentler Linus:

    The point being that I had to look up

    Like, literally failing to read and comprehend. LITERALLY.



  • @levicki said in Linux locks and a kinder, gentler Linus:

    I see you are still alive and well, unlike some others complaining here about having to look it up.

    The complaining is not that they had to look it up; it's that you insisted that everyone should have known it without needing to look it up.


  • Discourse touched me in a no-no place

    @Mason_Wheeler said in Linux locks and a kinder, gentler Linus:

    @Captain said in Linux locks and a kinder, gentler Linus:

    And there are facts about integration on L^2 that tell us how much information we can get out of that measurement (i.e., because the Fourier transform is an isometry on L^2).

    https://pics.me.me/thumb_mmhmm-yeah-know-some-ofthese-words-mmhmm-yeah-i-know-53324888.png

    It is OK to not really understand the details of what happens when you mix relativity and quantum mechanics. Nobody really does — after all, a "theory of everything" is one of the grand unsolved challenges of physics — and only very few even get close. The rabbit hole here seems to be very deep, and linking to information theory (as @Captain pointed out) isn't something I would have spotted early either.

    Fortunately, this isn't my field. 😉


  • ♿ (Parody)

    @levicki said in Linux locks and a kinder, gentler Linus:

    @boomzilla said in Linux locks and a kinder, gentler Linus:

    That's probably more of a disk issue. No, I'm talking about after something is open and I'm using it. Opening an email, typing, etc.

    I agree it may be a disk issue, but I don't understand.

    SSDs are cheap enough for that to not be an issue anymore. Even laptops now have M.2 (aka PCI-E x4) SSD slots which can push 3.2 GB/sec read and write (yes, even the older Samsung 960 Pro M.2 SSD can sustain those speeds for quite a while).

    Well, I agree with you, but also I'm not the person who sees delays when I open stuff, so :wtf:. I've allowed, of course, that the stuff I do see is possibly the intrusive anti-malware stuff I'm forced to run when I run Windows, and it's possible that if I were forced to run something like that on Linux I'd see similar nonsense.



  • @TimeBandit said in Linux locks and a kinder, gentler Linus:

    @error said in Linux locks and a kinder, gentler Linus:

    @error_bot games have an extremely high tolerance for lag.

    I hope you have an extremely high tolerance for @error_bot's lag, it's been an hour and we're still waiting 🍹

    Hope you didn't say something in a 🚎 garage thread that @error disagrees with, or he might just put you on the bot's blacklist and you'll be stuck waiting indefinitely!



  • @Mason_Wheeler said in Linux locks and a kinder, gentler Linus:

    Hope you didn't say something in a garage thread that @error disagrees with, or he might just put you on the bot's blacklist and you'll be stuck waiting indefinitely!

    I'm sure that if I'm on that blacklist, it's because of an @error :rimshot:

