Linux locks and a kinder, gentler Linus


  • Considered Harmful

    @levicki said in Linux locks and a kinder, gentler Linus:

    I swear all of you are a bunch of fucking hypocrites.

    YMBNH


  • ♿ (Parody)

    @levicki said in Linux locks and a kinder, gentler Linus:

    @TimeBandit said in Linux locks and a kinder, gentler Linus:

    So I guess you know every acronym in every field then :rolleyes:

    I know most acronyms in IT and electronics, and those I don't know I am not being an asshole about having to go and look them up.

    Yes, you're an asshole who uses obscure terms and thinks it's everyone else's problem. There are lots of different kinds of assholes.


  • ♿ (Parody)

    @levicki said in Linux locks and a kinder, gentler Linus:

    @TimeBandit said in Linux locks and a kinder, gentler Linus:

    So you are being an asshole

    Sure, and no one else ever has been an asshole on this forum. Yet they get upvoted for wrong facts while I get downvoted for chastising someone who should know English acronyms way better than me or at least shouldn't make a drama out of having to look them up or ask if it is ambiguous.

    YMBNH. We give everyone shit who posts obscure stuff like that and doesn't explain it.



  • @Tsaukpaetra said in Linux locks and a kinder, gentler Linus:

    Ah, but did it run Windows or Shitux?

    QNX - well suited for the purpose.


  • BINNED

    @boomzilla said in Linux locks and a kinder, gentler Linus:

    @levicki said in Linux locks and a kinder, gentler Linus:

    @TimeBandit said in Linux locks and a kinder, gentler Linus:

    So I guess you know every acronym in every field then :rolleyes:

    I know most acronyms in IT and electronics, and those I don't know I am not being an asshole about having to go and look them up.

    Yes, you're an asshole who uses obscure terms and thinks it's everyone else's problem. There are lots of different kinds of assholes.

    5C88CAC5-CC56-44A1-87E1-85BF7D720977.jpeg


  • Notification Spam Recipient

    @levicki said in Linux locks and a kinder, gentler Linus:

    default audio stack setup

    There is no such thing. Since you don't know that, you probably shouldn't be talking shit about it.


  • Considered Harmful

    @boomzilla said in Linux locks and a kinder, gentler Linus:

    @Mason_Wheeler said in Linux locks and a kinder, gentler Linus:

    @Tsaukpaetra said in Linux locks and a kinder, gentler Linus:

    36 hour
    three contiguous days

    ❓

    Depends on how fast you're traveling vs how fast the computer is traveling obviously. And you thought the Linux scheduler was weird!

    Just like the Cray machines were reputed to be able to run an infinite loop in five minutes, Windows is so Realtime™ it runs three days in 36 hours. Probably switches processes in half a Planck time, too.


  • Considered Harmful

    @levicki said in Linux locks and a kinder, gentler Linus:

    why am I an asshole then

    That's an excellent question.



  • @levicki said in Linux locks and a kinder, gentler Linus:

    With latency you can't ~~record live~~ route the stage monitor feedback through the PC because the delay is long and it confuses the musicians.

    FTFY

    Yes, real-time feedback matters to musicians. But the musicians don't need effects applied to their feedback,

    It would be really cool if people wouldn't talk with such authority about things they have no fucking clue about.

    But it wouldn't be very human.

    Linux not being capable of low-latency audio with default scheduler

    Real-time scheduler with tweakable parameters is readily available. So why use and/or complain about the default scheduler?
    I'm not complaining about the unsuitability of the default Windows scheduler for server use, am I?



  • @boomzilla said in Linux locks and a kinder, gentler Linus:

    @xaade said in Linux locks and a kinder, gentler Linus:

    @error said in Linux locks and a kinder, gentler Linus:

    xkcd revolutionary

    It took me a while to figure out why a rocket shooting a rocket shooting a rocket can't go over the speed of light.

    Then I learned about calculus, and learned that with floating point numbers there's always more room for a yet smaller increment. Then the final nail in the coffin was that, because of general relativity, each smaller increment could appear as a flat increase in speed to the previous rocket.

    Also the fiddlywibbly-wobbly nature of time. And space.

    FTFY



  • Any scheduler worth its salt is going to have to deal with the fact that for about the past 30 years, pretty much every game development textbook and pretty much every game/game engine has a loop that looks like this:

    while (!done)
    {
        CollectPlayerAndAIInputs();
        ApplyAforementionedInputsToGameObjects();
        UpdateWorldStateAndGameSubsystemsEverythingFromNetcodeToPhysicsToUI();
        RenderFrameAndWaitTheRemainderOf16MillisecondsUnlessUserDisabledVSyncWhyTheHellWouldYouDisableVSyncAnywayTheVisualTearingLooksLikeAss();
    }

    Threading has helped a bit, but not much, as each of the above steps is hard to parallelize, apart from parallelizing within each step.
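    Stripped of the joke names, that loop is just a frame-budgeted input/update/render cycle; a minimal sketch in Python (all step functions are placeholders, 60 Hz target assumed):

```python
import time

TARGET_FRAME_TIME = 1 / 60  # ~16.7 ms, the "remainder of 16 milliseconds" above

# Placeholder steps -- a real engine does actual work in each of these
def collect_inputs(): pass   # CollectPlayerAndAIInputs()
def update_world(): pass     # apply inputs, physics, netcode, UI...
def render_frame(): pass     # draw the frame

def run_game_loop(max_frames):
    """Run the classic input -> update -> render loop for max_frames frames."""
    frames = 0
    while frames < max_frames:
        start = time.monotonic()
        collect_inputs()
        update_world()
        render_frame()
        # Sleep off whatever is left of the frame budget (poor man's v-sync)
        elapsed = time.monotonic() - start
        if elapsed < TARGET_FRAME_TIME:
            time.sleep(TARGET_FRAME_TIME - elapsed)
        frames += 1
    return frames
```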

    As you may recall, a few days ago there was news of the Linux kernel scheduler causing issues for Google Stadia game developers.

    I suppose it's a nice break to deal with some new issues as compared to their usual issues of trying to figure out why OnLive and all the other attempts at cloud gaming failed.



  • @Vixen said in Linux locks and a kinder, gentler Linus:

    a massive RAID array composed of 28,000 floppy drives in a Mirror/Stripe configuration

    The sound would be interesting.



  • @Mason_Wheeler said in Linux locks and a kinder, gentler Linus:

    @acrow said in Linux locks and a kinder, gentler Linus:

    What do you mean?

    The perpetual predictions that "this year will be the year of the Linux desktop", followed by its market share not hitting the S-curve tipping point that year, and in fact barely budging at all.

    It doesn't help that many of them haven't seen Windows since 95 or 98 and still benchmark the "Linux Desktop" against that sort of UX.



  • @boomzilla said in Linux locks and a kinder, gentler Linus:

    :sigh: Is DAW anything like CP?

    Not generally, as while one can be a good way to irritate your neighbors, the other is a great way to get a visit from the Party Van.


  • Banned

    @Groaner said in Linux locks and a kinder, gentler Linus:

    WhyTheHellWouldYouDisableVSyncAnywayTheVisualTearingLooksLikeAss

    To have higher FPS instead of being brought down to the nearest divisor of refresh rate. Screen tearing is only a problem if you have higher FPS than your refresh rate, and you can eliminate it by using triple buffering, in which case disabling V-sync has no downsides while marginally decreasing input lag (you<=>server, not you<=>display).
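    The "nearest divisor" effect is easy to quantify: under double-buffered v-sync a frame that misses one refresh interval has to wait for the next, so the effective rate is the refresh rate divided by the number of intervals each frame spans. A quick sketch (60 Hz display assumed):

```python
import math

def vsync_fps(refresh_hz, render_fps):
    """Effective frame rate under double-buffered v-sync:
    each frame occupies a whole number of refresh intervals."""
    intervals = math.ceil(refresh_hz / render_fps)
    return refresh_hz / intervals

# A GPU that can render 59 FPS gets locked to 30 on a 60 Hz display:
print(vsync_fps(60, 59))   # → 30.0
print(vsync_fps(60, 40))   # → 30.0
print(vsync_fps(60, 120))  # → 60.0
```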



  • @boomzilla said in Linux locks and a kinder, gentler Linus:

    @levicki said in Linux locks and a kinder, gentler Linus:

    @boomzilla said in Linux locks and a kinder, gentler Linus:

    :sigh: Is DAW anything like CP? I have no idea what you're talking about, either, unfortunately, but that's just down to unfamiliar acronyms, not shoulder aliens.

    Digital Audio Workstation?

    What does that mean you're doing? Like, a recording studio?

    A bit more than that. You might be processing input from MIDI devices like keyboards and the like as well as recording audio, but you might also have virtual instruments that work with gigabytes of samples and their own signal processing that occurs during recording/playback.

    My virtual instrument collection is pretty small, and already it takes a big chunk of the 500GB SSD it's on.

    eb37efda-9fee-47d7-b86a-f22a2562c102-image.png

    If you think that's bad, Hollywood Orchestra Diamond requires 680GB by itself. 24-bit, 96k samples with multiple microphone positions and every conceivable articulation adds up quickly.



  • @Gąska said in Linux locks and a kinder, gentler Linus:

    The only thing you have to watch out for is that your code must not crash, but then again, that also is more of a guideline than actual rule.

    Oh man, I still need to roast a few certain Korean games on this point....



  • @Groaner said in Linux locks and a kinder, gentler Linus:

    @Mason_Wheeler said in Linux locks and a kinder, gentler Linus:

    @acrow said in Linux locks and a kinder, gentler Linus:

    What do you mean?

    The perpetual predictions that "this year will be the year of the Linux desktop", followed by its market share not hitting the S-curve tipping point that year, and in fact barely budging at all.

    It doesn't help that many of them haven't seen Windows since 95 or 98 and still benchmark the "Linux Desktop" against that sort of UX.

    Goes the other way too. People seem to assume that Linux Desktop is still shit.

    I'd claim that Ubuntu (with default desktop environment) has superior UX to Windows 10. I've used both daily for the last 2 years(?), since my work machine got the 10, so I feel extra qualified to make this claim. Debian comes pretty close, too. Even closer if you need to use it on a touch-screen. (Yes, I'm claiming that Debian is nicer to use on a touch-screen than Windows 10. It is.)

    And if you can bother to shop around for a nice desktop environment for your Linux, you can even get to Windows 7 levels of usability. But leaving the beaten path of the default setup for non-work reasons is :bdsm: on any OS, so YMMV.



  • @Gąska said in Linux locks and a kinder, gentler Linus:

    @Groaner said in Linux locks and a kinder, gentler Linus:

    WhyTheHellWouldYouDisableVSyncAnywayTheVisualTearingLooksLikeAss

    To have higher FPS instead of being brought down to the nearest divisor of refresh rate. Screen tearing is only a problem if you have higher FPS than your refresh rate, and you can eliminate it by using triple buffering, in which case disabling V-sync has no downsides while marginally decreasing input lag (you<=>server, not you<=>display).

    How many frames are you losing, though? If you're not even pulling 60, there's an argument for disabling it for a marginal increase in performance.

    In terms of decreasing input lag, by going from 60 to 120 FPS, you save eight milliseconds. When your reaction time is around 200. And your ping, 30-50 if you're lucky. And most games' netcode, last I checked, still only ticks around 20-30 FPS. Even getting steady 60FPS can be a challenge with MMOs that have 500 people in a small area, or on certain ancient Korean MMOs that are CPU-bound and yield single-digit framerates if there are more than 50 people in the area. It also doesn't help that at the other end of the spectrum, you have people screaming for 4K support.
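    The eight-millisecond figure is just the difference in frame intervals, which a one-liner confirms:

```python
def frame_time_ms(fps):
    """Duration of one frame at the given frame rate."""
    return 1000 / fps

saving = frame_time_ms(60) - frame_time_ms(120)
print(round(saving, 1))  # → 8.3 (ms saved going from 60 to 120 FPS)
```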



  • @Groaner I've never been a fan of that design really. Even less so when multi core CPUs became a thing. Not that I've come up with anything that's significantly better, but it feels wrong. Much like how every UI framework feels wrong.


  • Banned

    @Groaner said in Linux locks and a kinder, gentler Linus:

    @Gąska said in Linux locks and a kinder, gentler Linus:

    @Groaner said in Linux locks and a kinder, gentler Linus:

    WhyTheHellWouldYouDisableVSyncAnywayTheVisualTearingLooksLikeAss

    To have higher FPS instead of being brought down to the nearest divisor of refresh rate. Screen tearing is only a problem if you have higher FPS than your refresh rate, and you can eliminate it by using triple buffering, in which case disabling V-sync has no downsides while marginally decreasing input lag (you<=>server, not you<=>display).

    How many frames are you losing, though?

    Up to 29. And that's not just theory. There are many games that I can run on my mid-low-range computer in 40-50FPS. V-sync would only allow for 30FPS. And a 59-60FPS variation becomes 30-60FPS variation, and that looks even worse than constant 30FPS.

    If you're not even pulling 60, there's an argument for disabling it for a marginal increase in performance.

    I don't consider 30-60% marginal.

    In terms of decreasing input lag, by going from 60 to 120 FPS, you save eight milliseconds.

    I did say marginally, didn't I? And yes, it probably won't matter - but hey, there are no downsides. It's a free lunch! Why not take it?

    And most games' netcode, last I checked, still only ticks around 20-30 FPS.

    First person shooters run at way more than 20-30 FPS. Counter-Strike: GO, for example, uses 64 for matchmaking, and most tournaments use 128. But that's for the server; it's possible the client sends commands more often than that.

    MMOs are a special kind of video game in that they can actually afford abysmally low performance. The gameplay is so slow it doesn't matter whether you have a ping of 30 or 300. RuneScape used to have a full second of delay for every user action, and it didn't matter at all, because an MMO is not the kind of game where this matters. Whatever you say about MMOs most likely doesn't apply to any other genre except turn-based strategies (the only genre where lag tolerance is even higher).
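    Tick rate translates to update interval the same way frame rate does; the rates quoted above work out to:

```python
def tick_interval_ms(tick_rate_hz):
    """Time between server simulation updates at a given tick rate."""
    return 1000 / tick_rate_hz

# Typical netcode ticks, CS:GO matchmaking, and tournament settings:
for rate in (20, 30, 64, 128):
    print(f"{rate:>3} Hz -> {tick_interval_ms(rate):.1f} ms between updates")
```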


  • Notification Spam Recipient

    @Carnage said in Linux locks and a kinder, gentler Linus:

    @Vixen said in Linux locks and a kinder, gentler Linus:

    a massive RAID array composed of 28,000 floppy drives in a Mirror/Stripe configuration

    The sound would be interesting.

    If such a system were built, it might be comparable in performance to a typical hard drive.



  • @Groaner said in Linux locks and a kinder, gentler Linus:

    reaction time is around 200

    Reaction time matters much less than human timing accuracy, which clocks in at 10ms, and is what suffers from control feedback lag. Shooters would be impossible to play with 200ms lag (button-to-screen). You just couldn't line up a shot to a moving target. Hell, driving a car at speed gets hard with around 50ms of lag.

    Wish people didn't always trot out reaction time when discussing lag, like it had any sort of relevance.


  • Resident Tankie ☭

    These days you can assume DAWs to be comparable to Photoshop with respect to how common they are on desktop computers. All right, maybe an order of magnitude lower. Still, very common.

    That said, the use of a real-time scheduler is not strictly necessary on Linux to use audio. I've never needed it, despite the fact that many deem it essential. And you can well use the "default audio stack" for low-latency audio these days, since Ardour (the leading, if not the only, FOSS DAW available, for Linux especially) has shunned JACK (the low-latency audio server) and just uses ALSA now. I don't know how they manage, but they do. I suppose they redesigned ALSA relatively recently.

    Anyway, with JACK (just a sudo {apt, yum, dnf} install away on most distros), I could get 10ms round-trip latency (input -> processing -> output) on regular hardware fifteen years ago, which was better than what Windows managed.
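    For reference, JACK-style buffering latency falls straight out of the settings: each direction costs (period size × number of periods) ÷ sample rate. A rough sketch, assuming 48 kHz and a common period configuration (not a measurement, and ignoring converter/driver overhead):

```python
def roundtrip_latency_ms(period_frames, periods, sample_rate_hz):
    """Buffering latency for capture + playback combined."""
    one_way_s = period_frames * periods / sample_rate_hz
    return 2 * one_way_s * 1000

# 128-frame periods, 2 periods per direction, 48 kHz:
print(round(roundtrip_latency_ms(128, 2, 48000), 1))  # → 10.7 (ms)
```

That lands right around the 10 ms round-trip figure quoted above.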

    All of this is beside the point anyway because there is very little audio software on Linux anyway and if you use Linux to make music¹ you are a fool and a masochist and why haven't you killed yourself already and what the fuck are you doing here wasting time and valuable oxygen² for the rest of us.

    ¹ : unless you have tonnes of hardware processors and you basically use the PC as a fancy audio router/recorder, in which case it may make sense

    ² : yes yes don't be a :pendant: and still you are depriving us of oxygen here and now when we need it the most


  • BINNED

    @loopback0 said in Linux locks and a kinder, gentler Linus:

    element of drift.

    Definitely when in Tokyo


  • Considered Harmful

    @Groaner said in Linux locks and a kinder, gentler Linus:

    @boomzilla said in Linux locks and a kinder, gentler Linus:

    @levicki said in Linux locks and a kinder, gentler Linus:

    @boomzilla said in Linux locks and a kinder, gentler Linus:

    :sigh: Is DAW anything like CP? I have no idea what you're talking about, either, unfortunately, but that's just down to unfamiliar acronyms, not shoulder aliens.

    Digital Audio Workstation?

    What does that mean you're doing? Like, a recording studio?

    A bit more than that. You might be processing input from MIDI devices like keyboards and the like as well as recording audio, but you might also have virtual instruments that work with gigabytes of samples and their own signal processing that occurs during recording/playback.

    My virtual instrument collection is pretty small, and already it takes a big chunk of the 500GB SSD it's on.

    eb37efda-9fee-47d7-b86a-f22a2562c102-image.png

    If you think that's bad, Hollywood Orchestra Diamond requires 680GB by itself. 24-bit, 96k samples with multiple microphone positions and every conceivable articulation adds up quickly.

    I'm not an expert on DSP but I've read a few articles and understand a bit of Shannon/Nyquist, so I'll go out on a limb here and say that's a crock of shit.
    While doing your calculations in >>16bit is essential for good results due to rounding, pretending that 24/96 source material is substantially better is just wankery. Doing the actual sampling in 96 kHz makes sense if you want to do the low-pass filtering digitally, but for storing the material you may just as well downsample to 44.1, because you're doing that filtering instead of leaving in ultrasonic artifacts that will actually cause distortion on playback, right?
    If you have an input signal with 1 V of amplitude, a 16-bit ADC has to be precise to a doable¹ 15 µV. To get 24 usable bits it would have to be <60 nV, which is completely ridiculous unless you're cooling the whole setup (including the microphone and thus the band) in liquid nitrogen. Even 20 bits result in <1 µV precision, which I dare bet is below the noise floor even for most people who pride themselves on their audiophile setup, considering the unavoidable thermal noise from a 200 Ohm microphone is somewhere around 260 nV.
    And that calculation just assumes 1 V amplitude for simplicity, while most primary sources such as microphones and guitar pickups produce much less before the preamp.

    ¹ With nicely denoised power supplies and good shielding, that is; not what you get from a typical desktop PC, where CPU load fluctuations cause so much noise that you can hear the CPU load from the voltage regulator coils
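    The numbers above check out; a quick verification of the LSB sizes and of the Johnson-Nyquist noise of a 200 Ω source over a 20 kHz bandwidth (room temperature of ~300 K assumed):

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def lsb_volts(full_scale_v, bits):
    """Size of one quantization step for a given full-scale voltage."""
    return full_scale_v / 2**bits

def johnson_noise_vrms(resistance_ohm, bandwidth_hz, temp_k=300):
    """Thermal noise voltage of a resistor: sqrt(4*k*T*R*B)."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * resistance_ohm * bandwidth_hz)

print(f"16-bit LSB: {lsb_volts(1, 16)*1e6:.1f} uV")  # ~15.3 uV
print(f"20-bit LSB: {lsb_volts(1, 20)*1e6:.2f} uV")  # ~0.95 uV
print(f"24-bit LSB: {lsb_volts(1, 24)*1e9:.1f} nV")  # ~59.6 nV
print(f"200 ohm thermal noise: {johnson_noise_vrms(200, 20000)*1e9:.0f} nV")
```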



  • @levicki said in Linux locks and a kinder, gentler Linus:

    Screen tearing is not a problem if you have a variable refresh rate monitor. As for triple-buffering, I play FPS games and I notice (and I am bothered by) the lag of both v-sync and double-buffering, let alone triple-buffering. Any FPS is unplayable for me with any kind of buffering. The latest NVIDIA drivers even have an ultra low latency mode nowadays where games can have 0 pre-rendered frames to reduce the lag even further.

    Vulkan has the MAILBOX present mode. IIRC you always just give the driver the latest frame you've rendered, and it will end up using the most recent one (fully) when a flip occurs. No tearing and minimal latency (0 pre-rendered frames). I assume it's been around elsewhere for a while already.

    But I've already heard years ago of doing multiple input passes during a frame and to delay final rendering until the end of the frame slice as to give the most up-to-date image for presentation. The "standard" RenderAndThenWaitUntilPresent() is a bit of a lazy option in that regard.


  • Notification Spam Recipient

    @levicki said in Linux locks and a kinder, gentler Linus:

    Screen tearing is not a problem if you have a variable refresh rate monitor. As for triple-buffering, I play FPS games and I notice (and I am bothered by) the lag of both v-sync and double-buffering, let alone triple-buffering. Any FPS is unplayable for me with any kind of buffering. The latest NVIDIA drivers even have an ultra low latency mode nowadays where games can have 0 pre-rendered frames to reduce the lag even further.

    Fucking check your privilege!


  • Resident Tankie ☭

    @LaoC welcome to the world of audio, where data is so small that nobody punishes you for using double the space for no reason at all¹.

    Anyway, 24 bits make sense. The user is probably going to apply processing to the samples afterwards (e.g. compression, EQ, etc.). The headroom is paranoid but nice to have. The high sample rate is bollocks unless you are a sound designer and you want to pitch recorded stuff down an octave and make it still sound rich and not bandlimited to 10kHz.

    ¹ there is an argument for certain ADCs "sounding better" at sample rates other than 44.1/48kHz just because the implementation and design of the ADC makes them sound better. Some argue that having double the transition band (and therefore shallower filters, which are much easier to design) means having less phase distortion, but there is no conclusive evidence AFAIK that humans can hear phase, and anyway the distortion is mostly limited to the high treble range, which we can't really hear (we say that we can hear 20Hz-20kHz but it's just a convenient rule of thumb; in the real world the figure is closer to 50Hz-15kHz, and it gets worse as you get older; I can't hear CRT whine anymore, for instance 😢 ). In other words it's tedious nitpicking. Anyway, converters are all high-speed (like 2MHz) 1-bit delta-sigma devices these days and apply antialiasing digitally to the data stream.
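    The transition-band point in the footnote is easy to put numbers on: the anti-aliasing filter has to get from "pass 20 kHz" to "block everything" by the Nyquist frequency, so the room it has is simply Nyquist minus the passband edge. A quick sketch:

```python
def transition_band_hz(sample_rate_hz, passband_edge_hz=20_000):
    """Width between the audible passband edge and the Nyquist frequency."""
    return sample_rate_hz / 2 - passband_edge_hz

print(transition_band_hz(44_100))  # → 2050.0 Hz: needs a very steep filter
print(transition_band_hz(96_000))  # → 28000.0 Hz: a gentle slope suffices
```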


  • Banned

    @levicki said in Linux locks and a kinder, gentler Linus:

    @Gąska said in Linux locks and a kinder, gentler Linus:

    Screen tearing is only a problem if you have higher FPS than your refresh rate, and you can eliminate it by using triple buffering, in which case disabling V-sync has no downsides while marginally decreasing input lag (you<=>server, not you<=>display).

    Screen tearing is not a problem if you have a variable refresh rate monitor.

    It is if FPS goes beyond the variableness range of your variable refresh rate monitor. As far as screen tearing goes, a monitor that goes up to 200Hz is the same as a monitor that's always 200Hz.

    As for triple-buffering, I play FPS games and I notice (and I am bothered by) the lag of both v-sync and double-buffering, let alone triple-buffering.

    Your computer is shit. Get a better one. 🚎

    And if you think triple buffering has any more lag than double buffering, it's a clear sign you have no idea what you're talking about.

    Any FPS is unplayable for me with any kind of buffering.

    Sucks to be you. AFAIK all modern games make extensive use of vertex buffers.


  • ♿ (Parody)

    @levicki said in Linux locks and a kinder, gentler Linus:

    @boomzilla said in Linux locks and a kinder, gentler Linus:

    YMBNH. We give everyone shit who posts obscure stuff like that and doesn't explain it.

    But I did explain it when you asked, didn't I? So why am I an asshole then and not the people who still complained after that?

    You kept complaining and telling us that we were dumb or whatever for not being familiar with the same stuff as you.


  • ♿ (Parody)

    @levicki said in Linux locks and a kinder, gentler Linus:

    Because Linux (and I mean desktop distributions here) default scheduler sucks compared to Windows default scheduler for this niche application that I'm into?

    I'm just saying that for normal desktop usage (web browsing, office apps, listening to music, watching video, etc) my experience is reversed. There are often weird and inexplicable hiccups and pauses in doing stuff on Windows. And this is on a six core machine!

    OK, fine, maybe that's not the scheduler's fault. I really don't have the low level insight here, but the experience suggests that Windows can't handle simple desktop stuff as well as Linux.



  • @levicki said in Linux locks and a kinder, gentler Linus:

    @acrow said in Linux locks and a kinder, gentler Linus:

    But the musicians don't need effects applied to their feedback,

    You can't play Wah without feedback, same for tempo-based delays.

    Never heard of them. Then again, my experience on the subject is strictly limited to church events. And even those only because the mixing table team of my church is shorthanded. So I'll just have to take your word for it.

    @acrow said in Linux locks and a kinder, gentler Linus:

    Real-time scheduler with tweakable parameters is readily available.

    But it is not default.

    Why should it be? Few people need it, and it has a negative influence on throughput, which most Linux-users care more about.

    So why use and/or complain about the default scheduler?

    Because Linux (and I mean desktop distributions here) default scheduler sucks compared to Windows default scheduler?

    No, it certainly doesn't suck. Quite the opposite. I get much faster compilations on Linux, using the same version of GCC, when compiling large projects like, e.g. Qt 5. Similar results on converting large batches of video via ffmpeg. On the very same machine (dual-boot). So, seems to me that the Linux scheduler was chosen for a reason.

    I'm not complaining about the unsuitability of the default Windows scheduler for server use, am I?

    You are not because in Windows it is a matter of toggling this radio button:

    That button changes the priority level of foreground tasks, nothing else. It does not actually alter the scheduler's decision tree.

    BTW, the first Google link seems to indicate that your radio button works counter-intuitively:

    But you are making disingenuous comparisons while I was comparing desktop to desktop for the same purpose.

    Fine, the examples presented above are, this time, of advantages for desktop use-cases. Happy now?


  • Banned

    @acrow said in Linux locks and a kinder, gentler Linus:

    I get much faster compilations on Linux, using the same version of GCC, when compiling large projects like, e.g. Qt 5. Similar results on converting large batches of video via ffmpeg. On the very same machine (dual-boot). So, seems to me that ~~the Linux scheduler was chosen for a reason.~~ programs are fine-tuned to Linux system and nothing else, just like Windows programs are fine-tuned to Windows system and nothing else.

    FTFY



  • @Gąska said in Linux locks and a kinder, gentler Linus:

    @acrow said in Linux locks and a kinder, gentler Linus:

    I get much faster compilations on Linux, using the same version of GCC, when compiling large projects like, e.g. Qt 5. Similar results on converting large batches of video via ffmpeg. On the very same machine (dual-boot). So, seems to me that ~~the Linux scheduler was chosen for a reason.~~ programs are fine-tuned to Linux system and nothing else, just like Windows programs are fine-tuned to Windows system and nothing else.

    FTFY

    A bit of that, and don't forget that GCC-on-Windows is probably having to pass through Cygwin or some equivalent mess of (departs on a 742-page rant about Cygwin), while on Linux it is not.


  • Resident Tankie ☭

    @Gąska AFAIK Linux consistently beats Windows on server and developer workstation loads 🤷


  • Considered Harmful

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    @LaoC welcome to the world of audio, where data is so small that nobody punishes you for using double the space for no reason at all¹.

    Anyway, 24 bits make sense. The user is probably going to apply processing to the samples afterwards (e.g. compression, EQ, etc.). The headroom is paranoid but nice to have.

    If the lower ~6 bits contain garbage anyway, the same headroom could be had by simply, say, scaling it up to 24 bit and calculating with 32.

    The high sample rate is bollocks unless you are a sound designer and you want to pitch recorded stuff down an octave and make it still sound rich and not bandlimited to 10kHz.

    That use case doesn't make much sense either. Either you record way beyond the hearing range without low-pass filtering, in which case (a) pitching down those ultrasonic components will probably not sound "natural" and (b) they can cause intermodulation in filters when you're not downpitching (which is what the vast majority of users will do); or you sample at the high rate and filter anyway, in which case you might just as well use a lower sampling rate.

    ¹ there is an argument for certain ADCs "sounding better" at sample rates other than 44.1/48kHz just because the implementation and design of the ADC makes it better sounding. Some argue that having double the transition band (therefore, shallower filters, which are much easier to design) means having less phase distortion

    Yeah, that's pretty much what I meant by rather doing low-pass filtering digitally—a very steep filter is far simpler to do like that.

    (we say that we can hear 20Hz-20kHz but it's just a convenient rule of thumb; in the real world the figure is closer to 50Hz-15kHz, and gets worse as you get older, I can't hear CRT whine anymore for instance 😢 ).

    I'll be there soon. It used to be a really annoying tone for me but only the faintest hint of it is left.


  • Banned

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    @Gąska AFAIK Linux consistently beats Windows on server and developer workstation loads 🤷

    I bet $10 that whatever benchmarks you're basing your claim on (if any exist) used programs that have Linux as their primary target.


  • Resident Tankie ☭

    @Gąska I don't care enough to actually stand by it. I'm just relaying what I've read on various blogs and shit (Phoronix for example). The performance difference is bound to be negligible anyway. Even though, apparently, NTFS really is shit performance-wise compared to ext4. But again, I don't care that much to delve into it.


  • Banned

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    Phoronix for example

    40e9d076-55ff-4ec2-abcd-77289d31c2b2-image.png

    Yeah... let's just say I'm not entirely convinced they'd take every precaution necessary to be as fair to Windows as possible.


  • BINNED

    @Steve_The_Cynic said in Linux locks and a kinder, gentler Linus:

    @Gąska said in Linux locks and a kinder, gentler Linus:

    @acrow said in Linux locks and a kinder, gentler Linus:

    I get much faster compilations on Linux, using the same version of GCC, when compiling large projects like, e.g. Qt 5. Similar results on converting large batches of video via ffmpeg. On the very same machine (dual-boot). So, seems to me that ~~the Linux scheduler was chosen for a reason.~~ programs are fine-tuned to Linux system and nothing else, just like Windows programs are fine-tuned to Windows system and nothing else.

    FTFY

    A bit of that, and don't forget that GCC-on-Windows is probably having to pass through Cygwin or some equivalent mess of (departs on a 742-page rant about Cygwin), while on Linux it is not.

    Whatever POSIX layer it uses, it probably relies on Unix stuff like forking all over the place. And the IO is presumably suboptimal on Windows, too. (And shouldn’t IO be the much bigger bottleneck than all this uninformed speculation about schedulers, anyway?)
    A fair comparison would probably be the best you can get on Linux vs. the best you can get on Windows. Linux might still win, but for different reasons.



    @Gąska Fine-tuned how, exactly? The compiler itself is a single-threaded load that spends most of its time parsing and compiling data. The only things I can think of that affect compilation speed are memory handling and scheduler shenanigans. And I doubt malloc/free patterns get much tuning.

    If I had to guess (and we're already ranting on the subject, so hey), I'd say that the Windows scheduler's tendency to plop the compiler's process to the back of the queue after every full time-slice slows it down. You know, that same tendency that gets extolled for making Windows more responsive, and that allows those brain-dead homegrown userspace spinlocks to work in the first place.

    And, yes, most people run their compilation multi-threaded. But in gcc's case, the threading is done by the IDE/make. The compiler executable itself is single-thread (AFAIK).
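
    For reference, the "homegrown userspace spinlock" pattern being ranted about looks roughly like this. This is a deliberately naive Python sketch of the anti-pattern (class and method names are mine, not anyone's actual code), annotated with why it only appears to work when the scheduler keeps demoting the spinner:

    ```python
    import time

    class YieldingSpinlock:
        """Sketch of the criticized pattern: busy-wait on a flag and
        'yield' when contended, hoping the scheduler runs the owner.
        Illustrative only: the check-then-set in acquire() is NOT atomic,
        which is one reason such homegrown locks are brain-dead; real code
        would need an atomic exchange and, better, a proper OS mutex."""

        def __init__(self):
            self.flag = False

        def acquire(self):
            while self.flag:       # spin while someone else holds the lock
                time.sleep(0)      # beg the scheduler to run the lock owner
            self.flag = True       # race window: check and set can interleave

        def release(self):
            self.flag = False
    ```

    A scheduler that pushes a thread to the back of the queue after it burns its time-slice (as described above) papers over the spinning; a scheduler that keeps rescheduling the spinner makes the same code waste whole time-slices doing nothing.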


  • Banned

    @topspin said in Linux locks and a kinder, gentler Linus:

    A fair comparison would probably be the best you can get on Linux vs. the best you can get on Windows.

    Except it would only be fair if both used the same big-picture algorithms, and we know it's not true for C++ compilers.

    The only way to actually test it is to start with the same codebase then put hundreds of hours into optimizing it for each platform. And nobody cares enough about it to actually do it.


  • Banned

    @acrow said in Linux locks and a kinder, gentler Linus:

    @Gąska Fine-tuned how, exactly?

    @topspin gave a few examples in his post.


  • Considered Harmful

    @Gąska said in Linux locks and a kinder, gentler Linus:

    @acrow said in Linux locks and a kinder, gentler Linus:

    I get much faster compilations on Linux, using the same version of GCC, when compiling large projects like, e.g. Qt 5. Similar results on converting large batches of video via ffmpeg. On the very same machine (dual-boot). So, seems to me that the Linux scheduler was chosen for a reason.programs are fine-tuned to Linux system and nothing else, just like Windows programs are fine-tuned to Windows system and nothing else.

    FTFY

    I admit it's been many years since I last looked at any gcc guts, but I don't think they're optimizing anything for a specific scheduler or suchlike OS specifics. As @Steve_The_Cynic has said, Cygwin is a more likely candidate (should be possible to forgo that now with that Linux subsystem, no?)
    Also, it may have to do with the way make works, creating a new compiler process for every compilation unit, which is a very fast operation on Linux but expensive on Windows. If someone turned that into a bunch of compiler worker threads so it would perform better under Windows, that wouldn't hurt performance on Linux though, just isolation.
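
    The per-process cost is easy to see with a toy micro-benchmark (Python standing in for make spawning gcc; numbers are machine-dependent and the gap is wider still on Windows, so treat this as an order-of-magnitude illustration, not a real measurement):

    ```python
    import subprocess
    import sys
    import timeit

    def noop():
        """Stand-in for doing the work in-process (a compiler worker thread)."""
        pass

    n = 5

    # One fresh process per "compilation unit", the way make drives gcc.
    spawn_time = timeit.timeit(
        lambda: subprocess.run([sys.executable, "-c", "pass"], check=True),
        number=n)

    # The same number of plain in-process calls.
    call_time = timeit.timeit(noop, number=n)

    print(f"{n} process spawns: {spawn_time:.3f}s")
    print(f"{n} in-process calls: {call_time:.6f}s")
    ```

    Even on Linux, where fork/exec is cheap, the spawns are several orders of magnitude slower than in-process calls; the argument above is that Windows makes each spawn more expensive still.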



  • @Carnage said in Linux locks and a kinder, gentler Linus:

    @Vixen said in Linux locks and a kinder, gentler Linus:

    a massive RAID array composed of 28,000 floppy drives in a Mirror/Stripe configuration

    The sound would be interesting.

    i have half a mind to try to cobble (a smaller) one together.....
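
    Before cobbling: the back-of-the-envelope numbers for the full-size version (a mirror/stripe, i.e. RAID 10, halves raw capacity; 1.44 MB per 3.5" HD floppy is the marketing unit, strictly 1440 KiB):

    ```python
    # Hypothetical 28,000-floppy mirror/stripe array, purely for fun.
    drives = 28_000
    floppy_mb = 1.44            # nominal capacity per 3.5" HD floppy

    # RAID 10: half the drives mirror the other half, so usable
    # capacity is (drives / 2) stripes of one floppy each.
    usable_mb = drives // 2 * floppy_mb

    print(f"usable: {usable_mb:,.0f} MB (~{usable_mb / 1024:.1f} GB)")
    ```

    About 20 GB of the loudest storage ever built.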


  • Resident Tankie ☭

    @LaoC said in Linux locks and a kinder, gentler Linus:

    @admiral_p said in Linux locks and a kinder, gentler Linus:

    @LaoC welcome to the world of audio, where data is so small that nobody punishes you for using double the space for no reason at all¹.

    Anyway, 24 bits make sense. The user is probably going to apply processing to the samples afterwards. (eg. compression, EQ, etc). The headroom is paranoid but nice to have.

    If the lower ~6 bits contain garbage anyway, the same headroom could be had by, say, simply scaling it up to 24 bit and calculating with 32.

    Mostly true, but (and this is a big but, and maybe a big butt) most users of this kind of software is hopeless when it comes to digital data theory. So the 24 bits actually are a selling point. Anyway, believe me when I say that working with 24 bits is much better regardless of whether the lowest 4-6 bits are noise, simply because it's advantageous at the other end of the scale. You can (and are expected to) record tracks peaking at -18/-12dBFS on average (it depends on the audio interface). This means that you can safely approximate the input gain to something that looks good instead of having to painstakingly set preamps and use outboard gear to maximise SNR.

    The high sample rate is bollocks unless you are a sound designer and you want to pitch recorded stuff down an octave and make it still sound rich and not bandlimited to 10kHz.

    That use case doesn't make much sense either. Either you record way beyond the hearing range and without low-pass filtering, then a) pitching down those ultrasonic components will probably not sound "natural"

    It's not expected to. I'm talking about creative sound design, sound FX and weird noises or sounds, not realistic sound capturing.

    and b) it can cause intermodulation in filters when you're not downpitching, which is what the vast majority of users will do

    It shouldn't. Why should it cause intermodulation at the filter stage anyway?¹ Anyway, if you are actually outputting a signal which is high in 20kHz+ frequency components which can technically cause intermodulation distortion in the amplifier or the speaker, more realistically, I'd say that it's not the converter's job to avoid distortion when it is introduced by the equipment down the chain.

    or you sample at the high rate and filter anyway, then you might just as well use a lower sampling rate.

    Which is why I record at 44.1kHz. That's the Red Book standard (which resonates with me for more than one reason :half-trolling: ) and that's what users are presented with. Some people make the argument that recording at higher sample rates is future proofing, as we may move to a better standard, but I don't think the stuff I make merits future proofing and 44.1kHz isn't crippling my music for future listeners anyway. 😀

    ¹ there is an argument for certain ADCs "sounding better" at sample rates other than 44.1/48kHz just because the implementation and design of the ADC makes it better sounding. Some argue that having double the transition band (therefore, shallower filters, which are much easier to design) means having less phase distortion

    Yeah, that's pretty much what I meant by rather doing low-pass filtering digitally—a very steep filter is far simpler to do like that.

    The thing is, according to the info I've gathered, that low pass filtering is already usually done digitally. That's how converters can work at more than one sample rate easily. You have one shallow antialiasing filter before the (very fast delta-sigma) converter, which is good for the highest final sample rate, then you downsample to the desired sample rate by filtering again.

    ¹ I think I understood what you're trying to say. The thing is that even at 96kHz sampling rates there should be lowpass filtering at 20kHz to make it useful to humans, but it's going to be smoother because you "only" need to get (ideally, I have no idea how steep the filters are in practice) 120dB down in an octave instead of a tone or so. So you still get some high frequency content. If these 96kHz are achieved in such a way that filtering is still as steep as @ 44.1kHz, but moving f3 upwards, then you do get much more high frequency content.
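
    The bit-depth and filter arithmetic above checks out as a back-of-the-envelope (Python, purely illustrative; function names are mine):

    ```python
    import math

    def dynamic_range_db(bits):
        # Theoretical SNR of an ideal N-bit quantizer: 20*log10(2^N),
        # i.e. roughly 6.02 dB per bit.
        return 20 * math.log10(2 ** bits)

    print(f"16-bit: {dynamic_range_db(16):.1f} dB")   # ~96.3 dB
    print(f"24-bit: {dynamic_range_db(24):.1f} dB")   # ~144.5 dB

    # Tracking at -18 dBFS "spends" about 3 bits of headroom, which at
    # 24 bits still leaves far more effective resolution than 16 bits.
    headroom_bits = 18 / (20 * math.log10(2))
    print(f"-18 dBFS costs ~{headroom_bits:.1f} bits of headroom")

    # Anti-alias transition band: from the 20 kHz audible limit up to Nyquist.
    for fs in (44_100, 96_000):
        print(f"fs = {fs} Hz: transition band {fs / 2 - 20_000:.0f} Hz wide")
    ```

    The 96 kHz case gives the filter designer 28 kHz of transition band instead of about 2 kHz, which is the "shallower filters, easier to design" argument in the footnote.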



  • @Vixen said in Linux locks and a kinder, gentler Linus:

    @Carnage said in Linux locks and a kinder, gentler Linus:

    @Vixen said in Linux locks and a kinder, gentler Linus:

    a massive RAID array composed of 28,000 floppy drives in a Mirror/Stripe configuration

    The sound would be interesting.

    i have half a mind to try to cobble (a smaller) one together.....

    Do eeet!!


  • Resident Tankie ☭

    @Gąska still, I don't find it implausible that Linux is at least a bit faster in server and developer workstation loads. That's where the money (or desire to iterate) is when it comes to Linux, and Linux has the advantage of being both open source and relatively cavalier when it comes to backwards compatibility (as long as it doesn't break userspace). Windows is a general purpose OS (that's what it is sold as), not FOSS, and quite a bit more backwards compatible compared to Linux.



  • @admiral_p said in Linux locks and a kinder, gentler Linus:

    Mostly true, but (and this is a big but, and maybe a big butt) most users of this kind of software ~~is~~are hopeless when it comes to digital data theory.

    Fixed that for you, since it's not just DAWTFBBQ users that are carp at digital data theory. Any user of a digital anything is carp at digital data theory..... up to but not quite including the likes of Ronald Knuth, Alice Turring, Kelly Thompson, Betty Kernighan, and the authors of IEC 60908


Log in to reply