The law of large numbers





  • ...is "learn how to use the right types for integers, you morons"?



  • @asuffield said:

    ...is "learn how to use the right types for integers, you morons"?

    No, this is 'Look how great our (Mac?) OS is, it is generating negative errors!!! We are working against errors all the time, we are great, aren't we!!!1!one!1!111!!eleven$oops!!!!'



  • @asuffield said:

    ...is "learn how to use the right types for integers, you morons"?

    If that were in fact the case, then he's had a metric crapload of page faults, which probably means his box has been thrashing like hell. Negative 1 billion can only mean signed 32-bit, which, if it overflowed, means he's already had over 3 billion page faults (the arithmetic is sketched at the end of this post). I doubt any programmer could predict a system being that thoroughly messed up (especially with 2 GB of RAM, though according to Wikipedia, page faults can also result from in-memory page moves), and I wouldn't personally use a 64-bit counter (though I would have the common sense to make it unsigned) unless it was the platform's preferred type, i.e. on a 64-bit system.

    Far more likely, the counter got mucked up. Still silly that it's signed. Then again, maybe a negative page fault is when the OS moves a page away from where the program wants it (e.g. swaps it out) right upon access (but then that'd immediately lead to a positive page fault).
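
    For what it's worth, here's a quick sketch of the wraparound arithmetic (the negative 1 billion is the figure from the screenshot; everything else is illustrative):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t displayed = -1000000000;        /* what the counter shows */
        uint32_t actual = (uint32_t)displayed;  /* 2^32 - 10^9 = 3,294,967,296 */
        printf("displayed %" PRId32 ", actual count %" PRIu32 "\n",
               displayed, actual);
        return 0;
    }

    On any common platform that prints an actual count of 3,294,967,296 - a bit over 3 billion increments before the display went negative.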



  • @aquanight said:

    @asuffield said:
    ...is "learn how to use the right types for integers, you morons"?

    If that were in fact the case, then he's had a metric crapload of page faults, which probably means his box has been thrashing like hell. Negative 1 billion can only mean signed 32-bit, which, if it overflowed, means he's already had over 3 billion page faults.

    It's not so bad as that.

    A page fault is the event that occurs every time the hardware MMU hands control back to the kernel to do some work. Retrieving data that has been swapped out is one cause of this, but there are many more. The most common cause is allocation: when most modern kernels allocate memory on behalf of a process, they don't actually assign any physical memory pages to the process right away. Instead, they repeatedly map the zero page (one page of memory filled entirely with zeros) into the process's virtual address space and tag those virtual pages as read-only. When the process first tries to write to one of those pages, the MMU raises a page fault and calls back to the kernel - and the kernel then allocates a physical page to the process and maps it into that space.

    The second most common cause of page faults is loading object code. Binaries on Unix platforms aren't read into memory when a program starts; they're just mapped into the virtual address space, and those pages are marked as unreadable and unwriteable. When something tries to read a page, the MMU raises a page fault, and the operating system loads that page from disk before resuming the process.

    If the system has been up for a while, it is easy for these two things alone to account for more than 3 billion events. We can tell that it's been up a while and doing a lot of work by the copy-on-write count of 0.1 billion (which also generates a page fault for each event). It would be quite reasonable for there to be 30 times more zero-allocations than COW-allocations. (A minimal sketch of watching a process's fault counter climb is at the end of this post.)

     

    Then again, maybe a negative page fault is when the OS moves a page away from where the program wants it (e.g. swaps it out) right upon access (but then that'd immediately lead to a positive page fault).

    No, the count of page faults is purely the count of calls from the MMU to the kernel. It is not related to what the OS does to handle the page fault.
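
    For anyone who wants to watch this happen, here's a minimal sketch of both causes. It assumes Linux (getrusage(), an anonymous mmap() and /proc/self/exe); the 64 MB size and the choice of file are purely illustrative. Reserving memory barely moves the counter; touching the pages does:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <unistd.h>

    /* Total page faults this process has taken so far. */
    static long faults(void)
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_minflt + ru.ru_majflt;
    }

    int main(void)
    {
        const size_t len = 64 * 1024 * 1024;   /* 64 MB of anonymous memory */
        long start = faults();

        /* Cause 1: allocation. Reserving address space assigns no physical pages. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        printf("after reserving 64 MB:           +%ld faults\n", faults() - start);

        /* First write to each page traps to the kernel, which only then
           hands the process a real physical page. */
        memset(p, 1, len);
        printf("after writing every page:        +%ld faults\n", faults() - start);
        munmap(p, len);

        /* Cause 2: object code. Map a file (this program's own binary) and
           read it; each page is brought in on first access. */
        int fd = open("/proc/self/exe", O_RDONLY);
        if (fd >= 0) {
            off_t size = lseek(fd, 0, SEEK_END);
            char *code = mmap(NULL, (size_t)size, PROT_READ, MAP_PRIVATE, fd, 0);
            if (code != MAP_FAILED) {
                volatile char sink = 0;
                for (off_t i = 0; i < size; i += 4096)   /* touch one byte per page */
                    sink ^= code[i];
                (void)sink;
                printf("after reading the mapped binary: +%ld faults\n",
                       faults() - start);
                munmap(code, (size_t)size);
            }
            close(fd);
        }
        return 0;
    }

    The memset alone should add roughly one fault per 4 KB page - around 16,000 for the 64 MB block - which is how an ordinary desktop quietly racks up billions of these over a long uptime.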



  • unsigned int i = 4294967295;   /* the largest value a 32-bit unsigned int can hold */

    printf("%i\n", i);             /* %i reads the same bits as signed, so this prints -1 */

     

    Outputs "-1". Doesn't have to be a type mistake, can be a simple display mistake. Next to that, it's a statistics counter. I guess I would have used an int myself, because it's just a silly counter that doesn't do anything. SillyCount++;

