Out of Memory, kill the process or sacrifice your firstborn!



  • I thought you guys would appreciate "amusing error message of the day". This did make me giggle.

    (click for full version if this is hurting your eyes)

    Developers have a wonderful penchant for writing unfriendly error messages. This is pretty standard stuff, but it's amusing how harsh the wording is.

    @Oh dear said:

    Kill process

    @That can't be good said:

    Sacrifice child

    @The coup de grace said:

    Kernel panic



  • I'm more worried that the OS just starts killing processes when it's out of memory, instead of... just being out of memory and stopping the operation.



  • That's because of over-committing memory. Basically, the system pretends to have more memory than it actually has so that when a process asks for X bytes of memory it usually gets it.

    This works reasonably well since programmers are lazy and usually request much more memory than they actually need. However when programs actually start to use all that memory the system is screwed, because there is no way it can provide the necessary memory. It will then start killing processes to fix this problem.
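
A minimal C sketch of what that looks like from userspace, assuming the default Linux heuristic (vm.overcommit_memory = 0). None of the memory is ever written, so no physical pages are assigned and the kernel keeps saying yes, far past anything RAM plus swap could actually back:

```c
/* Rough illustration of over-commit (assumes Linux with the default
 * vm.overcommit_memory = 0 heuristic). None of the allocated memory is
 * ever written, so no physical pages are assigned and the kernel keeps
 * handing out valid pointers. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = (size_t)1 << 30;   /* 1 GiB per request */
    size_t promised = 0;

    for (int i = 0; i < 256; i++) {         /* ask for up to 256 GiB */
        void *p = malloc(chunk);
        if (p == NULL)
            break;                          /* the heuristic finally said no */
        promised += chunk;                  /* note: never written to */
    }

    printf("kernel promised %zu GiB it may not have\n", promised >> 30);
    return 0;
}
```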



  • @martijntje said:

    That's because of over-committing memory. Basically, the system pretends to have more memory than it actually has so that when a process asks for X bytes of memory it usually gets it.

    This works reasonably well since programmers are lazy and usually request much more memory than they actually need. However when programs actually start to use all that memory the system is screwed, because there is no way it can provide the necessary memory. It will then start killing processes to fix this problem.

    Sounds like a bank to me.. People have a certain amount of money in their account and they usually get that when they ask for it.. Unless of course everyone wants their money at the same time, since the banks haven't enough money to fulfill all their obligations.. Then they will start to kill themselves :-)


  • Discourse touched me in a no-no place

    @dhromed said:

    I'm more worried that the OS just starts killing processes when it's out of memory, instead of... just being out of memory and stopping the operation.

    As with most things arcane in Linux, it's configurable.



    My worry is reserved for the OP, who appears to know enough to be dangerous (i.e. if you're able to dig around on a system enough to find those messages, the messages themselves shouldn't have elicited the response they did).
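
For reference, the knob in question is presumably the vm.overcommit_memory sysctl (0 = heuristic, 1 = always over-commit, 2 = never over-commit), and the killer itself can also be tuned via vm.panic_on_oom and per-process oom_score_adj. A trivial sketch that just reports the current over-commit mode:

```c
/* Report the current Linux over-commit policy.
 * 0 = heuristic over-commit (default), 1 = always over-commit,
 * 2 = never over-commit (commit limit = swap + overcommit_ratio% of RAM). */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    int mode;

    if (f == NULL) {
        perror("/proc/sys/vm/overcommit_memory");
        return 1;
    }
    if (fscanf(f, "%d", &mode) == 1)
        printf("vm.overcommit_memory = %d\n", mode);
    fclose(f);
    return 0;
}
```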



  • @PJH said:

    @dhromed said:

    I'm more worried that the OS just starts killing processes when it's out of memory, instead of... just being out of memory and stopping the operation.

    As with most things arcane in Linux, it's configurable.



    My worry is reserved for the OP, who appears to know enough to be dangerous (i.e. if you're able to dig around on a system enough to find those messages, the messages themselves shouldn't have elicited the response they did).

    It's not my screenshot, I saw it online and found it amusing. That's why I'm more interested in the silly error messages than the connotations of what's happening.



  • @dhromed said:

    I'm more worried that the OS just starts killing processes when it's out of memory, instead of... just being out of memory and stopping the operation.

    Yeah that is quite worrying. We're out of memory so let's play process Russian roulette until we have enough!



  • @PJH said:

    As with most things arcane in Linux, it's configurable.

    Is this behaviour the default? Or has some nutbar actually configured it to do this?


  • Discourse touched me in a no-no place

    @keigezellig said:

    Sounds like a bank to me.. People have a certain amount of money in their account and they usually get that when they ask for it.. Unless of course everyone wants their money at the same time, since the banks haven't enough money to fulfill all their obligations.. Then they will start to kill themselves :-)
    No, then you just get your kernel to ask for a bailout.



  • @DoctaJonez said:

    @dhromed said:

    I'm more worried that the OS just starts killing processes when it's out of memory, instead of... just being out of memory and stopping the operation.

    Yeah that is quite worrying. We're out of memory so let's play process Russian roulette until we have enough!

    Except in this case, the bullet was never in the chamber for any of the processes, so nothing got killed. Then it was the kernel's turn to play, and it got the bullet.


  • Discourse touched me in a no-no place

    @DoctaJonez said:

    @PJH said:
    As with most things arcane in Linux, it's configurable.

    Is this behaviour the default? Or has some nutbar actually configured it to do this?

    Why don't you RTFA and find out from the first paragraph, instead of treating TDWTF as your own personal Eliza-based search-engine?


  • @PJH said:

    As with most things arcane in Linux, it's configurable.
    "The process to be killed in an out-of-memory situation is selected based on its badness score."

     

    The Linux Colonel kills your children when they are bad
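
On the badness score: a process can volunteer itself (or, with CAP_SYS_RESOURCE, largely exempt itself) by writing to /proc/self/oom_score_adj, which ranges from -1000 to +1000. A small sketch:

```c
/* Sketch: nudge this process's OOM badness score.
 * oom_score_adj ranges from -1000 (effectively never killed) to
 * +1000 (killed first); raising it is unprivileged, lowering it below
 * the current value typically needs CAP_SYS_RESOURCE. */
#include <stdio.h>

static int set_oom_score_adj(int adj)
{
    FILE *f = fopen("/proc/self/oom_score_adj", "w");
    if (f == NULL)
        return -1;
    fprintf(f, "%d\n", adj);
    return fclose(f);
}

int main(void)
{
    if (set_oom_score_adj(1000) != 0)   /* "please shoot me first" */
        perror("oom_score_adj");
    /* ... do memory-hungry work here ... */
    return 0;
}
```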




  •  Was the process named Abraham?



  • I've been speaking to the guy that posted the screenshot. Here's the full story for anyone who wants a bit of extra context...

    All my machines are set up with Hyper-V, a Windows 8 host (with cygwin installed) and an Ubuntu VM. That's how I get to enjoy both worlds of a great Windows UI and an awesome Ubuntu command line.

    I couldn't connect to my VM yesterday, so I checked the Hyper-V console to find this. This doesn't seem like something that should happen on a machine which rarely gets to 1% memory usage, and I have no idea how it could stay out of memory after killing all my processes.

    Also, this is the first Kernel panic I've seen since I've been using this setup, which is interesting because I've been using it for over a year and haven't seen the Windows 8 BSOD yet.



  • @El_Heffe said:

    The Linux Colonel kills your children when they are bad

    Nice :-)


  • ♿ (Parody)

    @PJH said:

    Why don't you RTFA and find out from the first paragraph, instead of treating TDWTF as your own personal Eliza-based search-engine?

    The Eliza thing usually gets more amusing (not to mention hostile) responses. Right or wrong, I like that.



  • @martijntje said:

    That's because of over-committing memory. Basically, the system pretends to have more memory than it actually has so that when a process asks for X bytes of memory it usually gets it.

    This works reasonably well since programmers are lazy and usually request much more memory than they actually need. However when programs actually start to use all that memory the system is screwed, because there is no way it can provide the necessary memory. It will then start killing processes to fix this problem.

    Right, but other OSes like, say, Windows will (shockingly) just never over-commit in the first place. These OSes are known as "good OSes".


  • Considered Harmful

    I thought the problem of allocating more memory than you actually have was solved more than half a century ago.



  • @joe.edwards said:

    I thought the problem of allocating more memory than you actually have was solved more than half a century ago.

    Swap space can also be full or not configured (Linux installers usually create swap space automatically, just like Windows).
    The difference between Windows and Linux is that Windows will never commit more virtual memory to a process than it can actually back, while Linux will.

    E.g. on Linux you can request 10TB of memory (on a machine with 8GB RAM and 1TB disk)... And you'll get it. At least you think so, because you have a valid pointer. Once you actually start using that memory, you and/or other process(es) will get killed.
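
A hedged sketch of that failure mode: every allocation "succeeds", and it's only the memset() that forces real pages to exist, until either malloc() fails politely or the OOM killer removes this process (or an innocent bystander). Obviously don't run this anywhere you care about:

```c
/* The "valid pointer to memory that isn't there" failure mode.
 * Each allocation succeeds; writing to the memory is what eventually
 * triggers either a polite malloc() failure or the OOM killer. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const size_t chunk = (size_t)1 << 30;    /* 1 GiB at a time */

    for (;;) {
        char *p = malloc(chunk);
        if (p == NULL) {                     /* the polite outcome */
            puts("malloc finally said no");
            return 0;
        }
        memset(p, 0xAB, chunk);              /* force the pages to really exist */
        puts("committed another GiB...");
    }
}
```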



  • @dtech said:

    The difference between Windows and Linux is that Windows will never commit more virtual memory to a process than it can actually back, while Linux will.
    This seems to me like a WTF on the part of Linux (and I'm not a hater; in fact, I've been a user and fan for mumble-mumble years). Is this true of other *NIX, too? What, pray tell, is the rationale behind it?

     



  • @HardwareGeek said:

    What, pray tell, is the rationale behind it?
    fork().



  • @ender said:

    @HardwareGeek said:
    What, pray tell, is the rationale behind it?
    fork().
    Um, I still don't get it. The only thing I see that seems somewhat relevant is "[t]he entire virtual address space of the parent is replicated in the child." I don't see a benefit from letting a process think it has more memory available than it could ever possibly use.



  • @HardwareGeek said:

    @ender said:

    @HardwareGeek said:
    What, pray tell, is the rationale behind it?
    fork().
    Um, I still don't get it. The only thing I see that seems somewhat relevant is "[t]he entire virtual address space of the parent is replicated in the child." I don't see a benefit from letting a process think it has more memory available than it could ever possibly use.

    Because fork() duplicates the address space as copy-on-write, the total amount of committed memory doesn't increase initially. But as you start modifying the COW pages, it causes an increase in committed memory, because instead of one shared page you'll get two non-shared pages. You cannot predict how much memory you'll need as a result of that. One approach is to reserve the maximum amount of memory you may ever need after fork. This may be time-consuming.
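
A rough sketch of that copy-on-write accounting; the sizes are arbitrary, and watching Committed_AS in /proc/meminfo while it runs shows the commit only growing once the child starts dirtying pages:

```c
/* Copy-on-write accounting after fork(): the child shares the parent's
 * pages until it writes to them; each page it dirties then needs its
 * own copy, which is when committed memory actually grows. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    const size_t len = (size_t)256 << 20;     /* 256 MiB */
    char *buf = malloc(len);
    if (buf == NULL)
        return 1;
    memset(buf, 1, len);                      /* parent really owns these pages */

    pid_t pid = fork();                       /* child "gets" all 256 MiB for free */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        memset(buf, 2, len / 2);              /* dirtying half forces ~128 MiB of copies */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}
```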



  • @dtech said:

    @joe.edwards said:
    I thought the problem of allocating more memory than you actually have was solved more than half a century ago.

    Swap space can also be full or not configured (Linux installers usually create swap space automatically, just like Windows).
    The difference between Windows and Linux is that Windows will never commit more virtual memory to a process than it can actually back, while Linux will.

    E.g. on Linux you can request 10TB of memory (on a machine with 8GB RAM and 1TB disk)... And you'll get it. At least you think so, because you have a valid pointer. Once you actually start using that memory, you and/or other process(es) will get killed.

     

    Thus, the difference is that Windows kills the process that is unlucky enough to ask for more memory just as soon as the system runs out of it, while Linux will explicitly choose one to kill (or sacrifice a descendant).

     



  • @alegr said:

    Because fork() duplicates the address space as copy-on-write, the total amount of committed memory doesn't increase initially. But as you start modifying the COW pages, it causes an increase in committed memory, because instead of one shared page you'll get two non-shared pages. You cannot predict how much memory you'll need as a result of that. One approach is to reserve the maximum amount of memory you may ever need after fork. This may be time-consuming.
    Ah! It's more efficient to allocate a big chunk of memory the child may never need than to allocate it piecemeal when it does need it. And if it ever needs more than is really available, well, too bad for some unlucky process(es). One can argue about whether this is the optimum solution (obviously, since other OSs made other choices), but at least I now understand the reason. Thank you.

     



  • @Mcoder said:

    Thus, the difference is that Windows kills the process that is unlucky enough to ask for more memory
    Which explains why Memoryhog Firefox keeps crashing while I'm trying to post here.

     



  • @HardwareGeek said:

    @Mcoder said:
    Thus, the difference is that Windows kills the process that is unlucky enough to ask for more memory
    Which explains why Memoryhog Firefox keeps crashing while I'm trying to post here.

    It doesn't kill the process, it fails the allocation.

    The process dies because 99% of apps don't check for and/or handle failed memory allocations.
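
For what it's worth, the check in question is a one-liner; deciding what to do when it fires is the hard part (see the following posts):

```c
/* The check most applications reportedly skip: malloc() signals commit
 * failure by returning NULL, and it's dereferencing that NULL, not the
 * failed allocation itself, that actually crashes the process. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t n = (size_t)2 << 30;            /* 2 GiB, say */
    char *p = malloc(n);

    if (p == NULL) {
        fprintf(stderr, "out of memory allocating %zu bytes\n", n);
        return 1;                          /* fail the operation, keep running */
    }

    memset(p, 0, n);                       /* safe to use the memory here */
    free(p);
    return 0;
}
```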



  • @blakeyrat said:

    The process dies because 99% of apps don't check for and/or handle failed memory allocations.
     

    In Windows, short of preallocating memory so you can handle an OOM situation, how do you respond to an OOM situation? You can't open a new window to inform the user, and you don't have any memory to translate data to normal disk formats. I don't think the C library or kernel will necessarily let you even open a file to dump data to without memory, and you have to make sure any libraries you use don't secretly allocate anything. If you're allocating a large chunk of memory, it's probably a good thing to handle a failure, but if you're making a routine allocation, the amount of work needed to handle it seems disproportionate to how effectively you can actually respond.

     



  • @David1 said:

    In Windows, short of preallocating memory so you can handle an OOM situation, how do you respond to an OOM situation?

    "Short of doing the only thing you can to handle an OOM situation, how do you respond to an OOM situation?"

    "Short of calling the fire department, how do you get help putting out a fire in your house?"

    "Short of using an engine lift, how do you replace the engine in your car?"

    "Short of joining the military, how do you get a military rank of Sergeant?"



  • @blakeyrat said:

    @David1 said:
    In Windows, short of preallocating memory so you can handle an OOM situation, how do you respond to an OOM situation?

    "Short of doing the only thing you can to handle an OOM situation, how do you respond to an OOM situation?"

    In which blakeyrant™ tells us the only way to handle running out of memory on Windows is to allocate all the memory your program could ever use at startup.



  • @Ben L. said:

    In which blakeyrant™ tells us the only way to handle running out of memory on Windows is to allocate all the memory your program could ever use at startup.

    I see Ben L. still can't fucking comprehend anything more complicated than pointing and grunting.

    No; you allocate enough to handle the OOM condition at startup. Say, 32k or so. And just hold on to it until/if you need it.

    I never said the program should allocate all the memory it could ever use because that's fucking idiotic and only an idiot would say that.
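
A sketch of that reserve-buffer approach; the 32k figure comes from the post above, and the xmalloc() wrapper is just an illustrative name, not any particular library's API:

```c
/* Reserve-buffer approach: set aside a small chunk at startup and
 * release it only when an allocation fails, so there is enough
 * headroom to log the error, save state, and exit cleanly. */
#include <stdio.h>
#include <stdlib.h>

static void *oom_reserve;

static void *xmalloc(size_t n)             /* illustrative wrapper name */
{
    void *p = malloc(n);
    if (p != NULL)
        return p;

    free(oom_reserve);                     /* rainy day fund: cash it in */
    oom_reserve = NULL;
    fprintf(stderr, "out of memory (%zu bytes); shutting down cleanly\n", n);
    /* ...flush buffers, save unsaved work, etc... */
    exit(1);
}

int main(void)
{
    oom_reserve = malloc(32 * 1024);       /* the 32k from the post above */
    char *data = xmalloc(1024);            /* routine allocations go through the wrapper */
    free(data);
    free(oom_reserve);
    return 0;
}
```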



  • @blakeyrat said:

    the program should allocate all the memory it could ever use



  • @Ben L. said:

    @blakeyrat said:
    the program should allocate all the memory it could ever use


  • Discourse touched me in a no-no place

    @blakeyrat said:

    I never said the program should allocate all the memory it could ever use because that's fucking idiotic and only an idiot would say that.
    Which is a weird thing to deny because lots of programs effectively do exactly that…



  • @dkf said:

    @blakeyrat said:
    I never said the program should allocate all the memory it could ever use because that's fucking idiotic and only an idiot would say that.
    Which is a weird thing to deny because lots of programs effectively do exactly that…

    should


  • Discourse touched me in a no-no place

    @blakeyrat said:

    should
    You're assuming that I would also pass a reading comprehension test I see…



  • @dkf said:

    @blakeyrat said:
    should
    You're assuming that I would also pass a reading comprehension test I see…
     

    Assumptions are the mother of all OOM situations.

    But on Linux a little moreso than on Windows.



  • @HardwareGeek said:

    @ender said:

    @HardwareGeek said:
    What, pray tell, is the rationale behind it?
    fork().
    Um, I still don't get it.

    Try this answer: fork() followed immediately by exec(), which probably represents 99% of the uses of fork().

    The real problem with over-commit is that you get (in practice) "any process can get killed in the middle of any operation and leave something in an inconsistent state" instead of the normal "any memory allocation can fail, as expected by good programmers".

    So the difference is mostly theoretical since good programmers don't exist.
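
For anyone who hasn't seen the idiom spelled out, fork()-then-exec() looks roughly like this; the fork() is where strict commit accounting bites, since without over-commit the child must be charged up front for a full copy of the parent's writable memory it is about to throw away:

```c
/* The classic fork()-then-exec() idiom: the child notionally duplicates
 * the parent's whole address space (copy-on-write, so mostly on paper),
 * then immediately discards it by exec'ing a different program. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid < 0) {
        perror("fork");                    /* with strict commit accounting, this is what fails */
        return 1;
    }
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* only reached if exec fails */
        _exit(127);
    }

    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```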



  • @Planar said:

    theoretical
     

    The real question is: what tints did you use to produce your oddly dithered icon? It's not the greys of the standard icon, is it?



  • @Planar said:

    The real problem with over-commit is that you get (in practice) "any process can get killed in the middle of any operation and leave something in an inconsistent state" instead of the normal "any memory allocation can fail, as expected by good programmers".
     

    It seems that the first would be better in some situations. A program that has a persistent memory leak on the order of GB per hour, whose response to a failed memory allocation is to wait and try again later, can kill or cripple any program that depends on forking or allocating memory in the latter situation; in the first, it will probably be the first thing targeted when the OOM killer runs. Or, if it's not even bothering to initialize the leaked memory, in the first case the leak may have no effect at all.

     



  • @Planar said:

    Try this answer: fork() followed immediately by exec(), which probably represents 99% of the uses of fork().

    So another question is, why not have a fork_and_exec() call?



  • @anonymous234 said:

    @Planar said:

    Try this answer: fork() followed immediately by exec(), which probably represents 99% of the uses of fork().

    So another question is, why not have a fork_and_exec() call?

    Because calling two functions is slightly less expensive than calling one function that calls the same two functions.



  • @Ben L. said:

    Because calling two functions is slightly less expensive than calling one function that calls the same two functions.

    Except it's not when the implementation of that function is different so that it doesn't have the same ridiculous overhead the other two functions have by necessity.



  • @Salamander said:

    @Ben L. said:
    Because calling two functions is slightly less expensive than calling one function that calls the same two functions.

    Except it's not when the implementation of that function is different so that it doesn't have the same ridiculous overhead the other two functions have by necessity.

    Except the two functions are just syscalls. So you can't combine them without modifying every OS that C runs on.


  • @Ben L. said:

    Except the two functions are just syscalls. So you can't combine them without modifying every OS that C runs on.

    So you're saying that Linux OSes never get updated, ever?



  • @Salamander said:

    @Ben L. said:
    Except the two functions are just syscalls. So you can't combine them without modifying every OS that C runs on.

    So you're saying that Linux OSes never get updated, ever?

    No. I am not saying that. Where did you get that from?


  • If you weren't, then why even bother mentioning that they are implemented as syscalls in the first place?
    If you're introducing a new function, something needs to be updated somewhere. That's pretty much required.



  • @Ben L. said:

    @Salamander said:
    @Ben L. said:
    Because calling two functions is slightly less expensive than calling one function that calls the same two functions.

    Except it's not when the implementation of that function is different so that it doesn't have the same ridiculous overhead the other two functions have by necessity.

    Except the two functions are just syscalls. So you can't combine them without modifying every OS that C runs on.
    You can add syscalls without touching the existing ones.


  • @anonymous234 said:

    @Ben L. said:
    @Salamander said:
    @Ben L. said:
    Because calling two functions is slightly less expensive than calling one function that calls the same two functions.

    Except it's not when the implementation of that function is different so that it doesn't have the same ridiculous overhead the other two functions have by necessity.

    Except the two functions are just syscalls. So you can't combine them without modifying every OS that C runs on.
    You can add syscalls without touching the existing ones.
    Quite, and in any case fork() and exec() aren't "C" functions, they're POSIX functions. There's no guarantee that they're even syscalls, as such.

    The reason POSIX/SuS doesn't specify a fork_and_exec() function is that it just duplicates two existing functions and is therefore redundant.

    If you don't like the fact that fork() clones the entire memory space, then good news: vfork() already exists.



  • The reason POSIX/SuS doesn't specify a fork_and_exec() function is that it just duplicates two existing functions and is therefore redundant. If you don't like the fact that fork() clones the entire memory space, then good news: vfork() already exists.
    POSIX does specify a fork_and_exec() function: it's called posix_spawn(), and nobody knows about it because fork()-then-exec() has been in UNIX since the before times, so that's what everyone uses.
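
For completeness, a sketch of the posix_spawn() version of the earlier fork()/exec() example; posix_spawnp() is the PATH-searching variant:

```c
/* posix_spawn(): the standardised "fork_and_exec()". It never needs to
 * duplicate the parent's address space at all. */
#include <spawn.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };

    int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
    if (err != 0) {                        /* returns an errno value, doesn't set errno */
        fprintf(stderr, "posix_spawnp: %s\n", strerror(err));
        return 1;
    }

    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}
```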
