Hyperactive sleep



  •  While investigating where all the time was going in some dog-slow code I have the misfortune of debugging, I encountered this gem. Who can offer a more CPU-intensive way of pausing a process?
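    The snippet itself didn't survive this copy of the thread, but the classic shape of a CPU-intensive "pause" is a loop that spins on the clock instead of yielding. A minimal Python sketch of the general pattern (not the original code):

    ```python
    import time

    def busy_sleep(seconds: float) -> None:
        # Spin on the clock until the deadline passes. The thread never
        # yields to the scheduler, so this pegs a CPU core for the
        # entire duration of the "sleep".
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            pass  # burn cycles doing nothing useful

    busy_sleep(0.1)  # "sleeps" 100 ms at 100% CPU
    ```

    The polite alternative, `time.sleep()`, hands the wait to the operating system so the core can do something else.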

     

     

     



  • At least they seem to know that it's bad, based on the deprecated annotation. Now, is that being called by the library, or by something in your code base?



  • @cjl said:

     Who can offer a more CPU intensive way of pausing a proccess?

     

     

     

    Let's see.. you could spawn N new threads, each with realtime priority, which would hog the CPU so that the original process pauses :)



  • @jpa said:

    Let's see.. you could spawn N new threads, each with realtime priority, which would hog the CPU so that the original process pauses :)

    Why stop at N threads? Use a fork bomb.

    Each process would allocate a 1 gig array, FFT whatever random data is in it, then discard it.



  •  Back in my classic ASP (which has no sleep()) and Jet/Access days, I had a sleep function that pinged localhost. I used it to wait and retry once more when it couldn't open the database, despite opening it with adModeShareDenyNone. It seemed better than grinding away at the CPU.
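    The trick works because ping sends one echo request per second, so pinging N+1 times waits roughly N seconds. A rough Python sketch of the idea (the `-n`/`-c` count flags are the standard ping options; the `time.sleep` fallback is my addition for when ping can't run):

    ```python
    import subprocess
    import sys
    import time

    def ping_sleep(seconds: int) -> None:
        # ping fires one echo request per second, so N+1 requests to
        # localhost take roughly N seconds end to end.
        flag = "-n" if sys.platform == "win32" else "-c"  # count flag differs
        try:
            result = subprocess.run(
                ["ping", flag, str(seconds + 1), "127.0.0.1"],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
            if result.returncode != 0:
                time.sleep(seconds)  # ping refused to run; wait honestly
        except FileNotFoundError:
            time.sleep(seconds)  # no ping binary at all
    ```

    Unlike a busy-wait, the process spends most of the wait blocked on the child process, so the CPU stays free.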



  • As I have posted previously on CodeProject:

    
    Trying to translate a WinForms app into Silverlight, which doesn't have blocking socket operations:
     
    private void ReceiveTelnetDataRepeatedly()
    {
        // Busy-polls with no pause: CheckForNewBytes allocated a fresh
        // 8K buffer on every pass, so this loop chewed through memory
        // as fast as the CPU could spin.
        while (connection != null && connection.isconnected)
        {
            connection.CheckForNewBytes();
            ProcessData(connection.GetIncomingStrings());
        }
    }
     
    This was executing in a thread, and the "check for new bytes" was allocating an 8K array every time... within 30 seconds, I'd allocated 1.5GB of RAM and caused an OutOfMemoryException...  
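    The mistake generalizes beyond Silverlight: a polling loop that never yields spins as fast as the CPU allows. A small Python sketch of the polite version, with hypothetical stand-ins for the connection calls, showing the pause the original loop was missing:

    ```python
    import time

    def poll_connection(is_connected, check_for_new_bytes, process,
                        interval=0.01):
        # is_connected / check_for_new_bytes / process are hypothetical
        # stand-ins for the Silverlight connection calls in the post.
        while is_connected():
            process(check_for_new_bytes())
            time.sleep(interval)  # the pause the original loop lacked
    ```

    Even a 10 ms pause turns millions of iterations (and allocations) per second into about a hundred.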
    
    


  • @arotenbe said:

    @jpa said:
    Let's see.. you could spawn N new threads, each with realtime priority, which would hog the CPU so that the original process pauses :)

    Why stop at N threads? Use a fork bomb.

    Each process would allocate a 1 gig array, FFT whatever random data is in it, then discard it.

    Bonus if you write it to disc and then delete it right away. Multiple times.



  •  Before PHP 5, you couldn't use usleep() on Windows. (TRWTF, BTW)



  •  Back in the single-process days of MS-DOS, this was a bog-standard way to implement a pause. There was no delay() function; if I recall correctly there was a delay shell command, but nothing you could call from code. You couldn't "hand the CPU over to the operating system to do something else useful", because the OS couldn't do anything else anyway. Interrupts handled what little multitasking MS-DOS supported.

    You didn't happen to stumble on some of my old code, did you?

     



  • @arotenbe said:

    @jpa said:
    Let's see.. you could spawn N new threads, each with realtime priority, which would hog the CPU so that the original process pauses :)

    Why stop at N threads? Use a fork bomb.

    Each process would allocate a 1 gig array, FFT whatever random data is in it, then discard it.

    Bah!  The sloppy programmers these days!  You don't need a fork bomb, and it's very difficult to get one of those to dynamically allocate the right amount of memory for the task to take the appropriate amount of time, because they always go infinite (until, of course, they either kill the system or run into the limits the system set to protect itself from fork bombs).  On a related note, boy was my coworker disappointed when he tried to show me my Unix system was vulnerable to fork bombs.  Yeah, it was, out of the box.  I bet there was even something Windows admins could do like that, even 15 years ago when that happened.  It's just about who's more likely to *do* it...

    Let's be reasonable.  Keep in mind, these days, GPUs and hardware random number generators are all the rage (well, at least in my neck of the woods).  So make sure we use them.

    Make one thread to fill the gig array with random data, one thread to FFT the random data, one thread to hand the random data to the GPU to FFT, one thread to compare the outputs from the CPU FFT and the GPU FFT, and one thread to run this entire <p> again, including this point.

    500 bonus points for the implementation that leaves one set of threads, minus the last one, running after the timing loop is supposed to end, and 7953.2 bonus points for implementing a 'no wait mutual exclusion mechanism' to prevent the thread writing the random data from scrawling all over the FFT input while the other two threads are doing nothing which looks decent but actually does nothing, and a completely different 'no wait mutual exclusion mechanism' to prevent the two threads doing FFTs from mucking up the data for the thread comparing the results, which also doesn't work.  Oh, and the implementation should handle those systems which don't actually have a gig of video memory.



  • @tgape said:

    7953.2 bonus points for implementing a 'no wait mutual exclusion mechanism' to prevent the thread writing the random data from scrawling all over the FFT input while the other two threads are doing nothing which looks decent but actually does nothing, and a completely different 'no wait mutual exclusion mechanism' to prevent the two threads doing FFTs from mucking up the data for the thread comparing the results, which also doesn't work.

    I'm going to concede defeat before the plausibility of this starts depressing me.



  • @tgape said:

    Make one thread to fill the gig array with random data, one thread to FFT the random data, one thread to hand the random data to the GPU to FFT, one thread to compare the outputs from the CPU FFT and the GPU FFT, and one thread to run this entire <p> again, including this point.

    No, you should use an SFT - that will make it even more inefficient!

    *disclaimer: I have no idea if there even is such a thing as a "slow Fourier transform" :P



  • @ekolis said:

    *disclaimer: I have no idea if there even is such a thing as a "slow Fourier transform" :P
     

    There is: the Discrete Fourier Transform is the basis for the optimised Fast version.
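    And the "slow" version really is slow: the naive DFT evaluates the defining sum directly in O(N²) operations, while the FFT factors it down to O(N log N). A small pure-Python sketch of the direct sum:

    ```python
    import cmath

    def dft(samples):
        # Direct evaluation of X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N).
        # Two nested loops over N points: O(N^2), hence "slow".
        n_points = len(samples)
        return [
            sum(
                x * cmath.exp(-2j * cmath.pi * k * n / n_points)
                for n, x in enumerate(samples)
            )
            for k in range(n_points)
        ]

    # An impulse transforms to a flat spectrum:
    # dft([1, 0, 0, 0]) ≈ [1, 1, 1, 1]
    ```

    The FFT computes the same values, just by reusing shared sub-sums instead of recomputing them.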



  •  I heard there's this new experimental method for an FT that's even faster.

     

     We should probably not use that one.

