Speaking of unstable databases....



  • @superjer said:

    I'm not even sure what to say to you guys... but multithreading can vastly improve "perceived performance" in certain situations. Right now I'm working on an app that completes 3.99 times faster with 4 threads than with just one. We're talking 4 hours down to 1 hour.

    This indicates a bug or design flaw in the application, if all those threads are running on the same processor. You could have done it even faster with a single thread. The fact that you didn't do this does not mean anything.

    @LoztInSpace said:

    Multithreading works well when you have blocking IO combined with CPU work.  Like, for example, a database, an operating system, a web server to name a few.  Any application that exhibits the IO+CPU+IO will typically benefit from MT.

    This one falls under the heading of "popular but wrong beliefs". Any application of this form will benefit vastly more from being written by somebody who understands how to do it right in a single thread. This is because the number of threads required in order to unjam the system when using brain-damaged IO will always vastly exceed the number of available processors by several orders of magnitude.

    Early versions of Java are largely responsible for propagating this belief, because their OS interface library was too broken to support doing it right. This was fixed years ago.

     

    There is precisely one situation in which threading is fundamentally the right solution, and that is to utilise multiple processors in an SMP system. Under any other circumstances there is a faster and better way, even if you don't know what it is.
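    For the curious, here is a minimal sketch of the single-threaded, non-blocking style being alluded to, using Java's NIO selector API (the "fixed years ago" interface mentioned above). Class and method names are mine, and an in-process pipe stands in for real sockets:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;

public class SingleThreadedIo {
    // One thread, one selector: block until some channel is ready, then
    // service it. In a real server the registered channels would be many
    // sockets; no thread-per-connection is needed.
    static String pump() throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        // "Peer" writes some data; the event loop below picks it up.
        pipe.sink().write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));

        StringBuilder received = new StringBuilder();
        ByteBuffer buf = ByteBuffer.allocate(64);
        while (received.length() < 5) {
            selector.select();                 // wait for readiness, not for a thread
            for (SelectionKey key : selector.selectedKeys()) {
                buf.clear();
                pipe.source().read(buf);       // non-blocking read of whatever is there
                buf.flip();
                received.append(StandardCharsets.UTF_8.decode(buf));
            }
            selector.selectedKeys().clear();
        }
        selector.close();
        return received.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pump());            // prints "hello"
    }
}
```

    The point of the sketch is only the shape: readiness notification replaces one-blocked-thread-per-IO-source.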



  • @superjer said:

    Automatic parallelization is very limited in scope. In a ray-tracer, for example, which is highly parallelizable, the compiler is not going to make it parallel for you. You have to do it yourself with threads. It's the only way to do it.

    Actually, researchers have been producing compilers that can do just this for the past couple of decades - it's quite possible (and ray tracers are a popular example for them to demonstrate it with). It's just that people don't use those compilers.

    There is no particularly good reason for this state of affairs, although everybody involved can come up with a plausible-sounding explanation of why it is not their fault.



  • @Outlaw Programmer said:

    Am I missing something here?  What does this have to do with multithreading?  If he needed locks to make it threadsafe then he could have used Java's built-in locking mechanism. Seemed to me like he felt the database itself was going to be used by more than 1 program (maybe more than 1 instance of this very program) at a given time, which is why he needed file locks to keep the separate processes from stepping on each other's toes.

     

     That's what I thought. Why would you not use synchronized blocks in Java if you are using a single-threaded application? The only possible reason for using a lock file is a multi-threaded or multi-process application. Hence my misconception about why he should have gone for multithreading... Oh well... Guess we'll all implement single-threaded lock-file applications like MasterPlanSoftware.

    Edit: Why does he even need locks? If he is completely single-threaded there is no reason for locks/synchronization. Locks are for multiple processes; synchronization is for multiple threads... So go explain that one away, my friends.
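    The thread-vs-process distinction being argued here can be made concrete in Java: synchronized guards state shared between threads in one process, while an OS file lock coordinates separate processes. A minimal sketch, with class and helper names of my own invention:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockKinds {
    private static final Object mutex = new Object();
    private static int counter = 0;

    // In-process coordination: synchronized guards state shared between THREADS.
    static void bumpCounter() {
        synchronized (mutex) { counter++; }
    }

    // Cross-process coordination: an OS file lock guards a resource shared
    // between separate PROCESSES (e.g. two instances of the same program).
    static boolean withFileLock(Path dbFile, Runnable critical) throws IOException {
        try (FileChannel ch = FileChannel.open(dbFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = ch.tryLock();      // null if another process holds it
            if (lock == null) return false;
            try { critical.run(); } finally { lock.release(); }
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".lock");
        boolean got = withFileLock(tmp, LockKinds::bumpCounter);
        System.out.println(got + " " + counter);   // prints "true 1"
        Files.deleteIfExists(tmp);
    }
}
```

    A single-threaded, single-process program needs neither; the lock file only makes sense once a second process enters the picture.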



  • OK, I wrote an app called multicore_test to prove my point. And guess what... it works. Surprise.

    The app sums the first 40,000,000 terms of: 1/sqrt(1) - 1/sqrt(2) + 1/sqrt(3) - 1/sqrt(4) ...

    With threads it can run on multiple cores and complete in parallel, much faster. The numeric param is the number of threads. Look:

     

    $ time ./multicore_test 1
    Thread0:         1 to 40000000    Sum is 6.048196e-01
    Total sum: 6.048196e-01
    

    real 0m2.271s
    user 0m2.265s
    sys 0m0.001s

    $ time ./multicore_test 4
    Thread2: 20000001 to 30000000 Sum is 2.051631e-05
    Thread0: 1 to 10000000 Sum is 6.047405e-01
    Thread1: 10000001 to 20000000 Sum is 4.631048e-05
    Thread3: 30000001 to 40000000 Sum is 1.223015e-05
    Total sum: 6.048196e-01

    real 0m1.474s
    user 0m2.259s
    sys 0m0.006s  

     

    As you can see it is faster (1.474s < 2.271s) with four threads than one. This happens consistently. If you watch the processes in top it even shows CPU use at 199% to indicate that it's using both of my cores. I'm on Fedora 8, btw.

    assuffield: even if you could make the algorithm faster (admittedly the best strategy in general) you can still always double or quadruple the speed by utilizing your bored, idling cores. In some apps this is crucial, like games and bulk number-crunching. And boinc!!

    Here is my code:

    (uses SDL)
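    The attached source isn't reproduced here. As a stand-in, here is a hypothetical re-sketch of the same computation in Java rather than the original C/SDL (class and method names are mine); it splits the terms into contiguous chunks, one per thread, matching the ranges in the output above:

```java
public class MulticoreTest {
    // Sum terms lo..hi (1-based) of 1/sqrt(1) - 1/sqrt(2) + 1/sqrt(3) - ...
    static double partialSum(long lo, long hi) {
        double s = 0.0;
        for (long k = lo; k <= hi; k++) {
            double term = 1.0 / Math.sqrt((double) k);
            s += (k % 2 == 1) ? term : -term;   // sign depends on k, not position
        }
        return s;
    }

    // Split [1, n] into `threads` contiguous chunks, one worker thread each.
    static double parallelSum(long n, int threads) throws InterruptedException {
        double[] partial = new double[threads];
        Thread[] workers = new Thread[threads];
        long chunk = n / threads;
        for (int i = 0; i < threads; i++) {
            final int id = i;
            final long lo = i * chunk + 1;
            final long hi = (i == threads - 1) ? n : (i + 1) * chunk;
            workers[i] = new Thread(() -> partial[id] = partialSum(lo, hi));
            workers[i].start();
        }
        for (Thread t : workers) t.join();      // join gives a happens-before on partial[]
        double total = 0.0;
        for (double p : partial) total += p;
        return total;
    }

    public static void main(String[] args) throws Exception {
        // ~6.048196e-01 per the output quoted in the thread
        System.out.printf("Total sum: %e%n", parallelSum(40_000_000L, 4));
    }
}
```

    Reducing per-thread partial sums only after join() avoids any locking in the hot loop, which is why the speedup can approach the core count.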



  • Much more impressive on the Intel Core 2 Extreme (4 x 3.0 GHz):

    ubuntu@ubuntu:~$ time ./multicore_test 1
    Thread0:         1 to 40000000    Sum is 6.048196e-01
    Total sum: 6.048196e-01
    
    real    0m1.403s
    user    0m1.404s
    sys     0m0.000s
    
    ubuntu@ubuntu:~$ time ./multicore_test 4
    Thread0:         1 to 10000000    Sum is 6.047405e-01
    Thread2:  20000001 to 30000000    Sum is 2.051631e-05
    Thread3:  30000001 to 40000000    Sum is 1.223015e-05
    Thread1:  10000001 to 20000000    Sum is 4.631048e-05
    Total sum: 6.048196e-01
    
    real    0m0.406s
    user    0m1.396s
    sys     0m0.004s

    1.403s down to just 0.406s!

    (Ubuntu 7.10)



  • $ time ./multicore_test 1
    real	0m1.344s
    
    $ time ./multicore_test 4
    real	0m1.351s
    
    $ time ./multicore_test 80
    real	0m1.490s
    

    Exactly as predicted. Threading makes performance slightly worse except in one specific scenario, as I described.



  • Well it appears you only have 1 CPU core. Correct?



  • I'm not an expert on multithreading, but I think the actual question is: Can you translate a non-parallel algorithm into a parallel algorithm, so that the parallel version runs faster on a dual-core with 1 GHz each than the non-parallel version runs on a single-core with 2 GHz? And in fact that's possible for some algorithms, isn't it?



  • Ok, I know this will sound like trolling, but please, for the love of God, listen... TRWTF is that some people here seem to have forgotten what multi-threading is about.

    Multithreading is necessary even on single-core CPUs... The theory behind multiple threads is "lightweight processes", so the question is: why multiprocess? Many on this thread believe it is because more cores = more parallel processing. Yes, that helps, but the reason for multithreading is more basic than that.

    If you have multiple independent threads then they can all act at the same time, and thus more cores = faster processing. HOWEVER, many programs simply won't get that benefit. The goal of multithreading is that if one thread is waiting due to I/O of any sort, another thread is running and doing all it needs to do while the first thread is waiting.

    In other words, if I have a program that reads a file, counts to 20, then uses the result of both operations, it's more efficient to have one thread read the file and another count to 20, and then use both results once they are done. It will work faster because the counting will happen while the file is being read, versus having to wait for completion. A log reader that reads multiple log files is a great multithreading candidate: you assign every log file to a separate reader thread. This way you get as much I/O queued up as possible and have an independent thread take all that I/O and process whatever it has.

    Now having said that, a program made for parallel execution which does not need to constantly stop and wait for other threads to be done will work better if you give it more cores. In such a case an extra core can cut down execution time. But many times that is not the case for many programs.

    So yes, a 4-thread program on a 4-core CPU which does nothing but computation will run faster than using 1 core. You don't need to prove that to anyone.
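    The read-a-file-while-counting example above can be sketched in Java as follows (class and method names are mine; the count stands in for real CPU work):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class OverlapIoAndCpu {
    // One thread does the blocking I/O while the caller does CPU work;
    // then we join and combine both results.
    static int linesPlusCount(Path file, int countTo) throws Exception {
        AtomicInteger lines = new AtomicInteger();
        Thread reader = new Thread(() -> {
            try {
                lines.set(Files.readAllLines(file).size());  // blocking I/O
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        reader.start();
        int count = 0;
        for (int i = 1; i <= countTo; i++) count += 1;  // CPU work, overlapped with the read
        reader.join();                                  // wait for the I/O to finish
        return lines.get() + count;
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("log", ".txt");
        Files.write(tmp, List.of("a", "b", "c"));
        System.out.println(linesPlusCount(tmp, 20));  // 3 lines + 20 = prints 23
        Files.deleteIfExists(tmp);
    }
}
```

    Note that this buys elapsed time even on a single core, since the counting proceeds while the read is blocked.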



  • @dlikhten said:

    Ok, I know this will sound like trolling, but please, for the love of God, listen... TRWTF is that some people here seem to have forgotten what multi-threading is about.

    Multithreading is necessary even on single-core CPUs... The theory behind multiple threads is "lightweight processes", so the question is: why multiprocess? Many on this thread believe it is because more cores = more parallel processing. Yes, that helps, but the reason for multithreading is more basic than that.

    If you have multiple independent threads then they can all act at the same time, and thus more cores = faster processing. HOWEVER, many programs simply won't get that benefit. The goal of multithreading is that if one thread is waiting due to I/O of any sort, another thread is running and doing all it needs to do while the first thread is waiting.

    In other words, if I have a program that reads a file, counts to 20, then uses the result of both operations, it's more efficient to have one thread read the file and another count to 20, and then use both results once they are done. It will work faster because the counting will happen while the file is being read, versus having to wait for completion. A log reader that reads multiple log files is a great multithreading candidate: you assign every log file to a separate reader thread. This way you get as much I/O queued up as possible and have an independent thread take all that I/O and process whatever it has.

    Now having said that, a program made for parallel execution which does not need to constantly stop and wait for other threads to be done will work better if you give it more cores. In such a case an extra core can cut down execution time. But many times that is not the case for many programs.

    So yes, a 4-thread program on a 4-core CPU which does nothing but computation will run faster than using 1 core. You don't need to prove that to anyone.

     

    I can't believe I am going to say this... but thank you. You have gone back to my original point and I agree.



  • @dlikhten said:

    @Outlaw Programmer said:

    Am I missing something here?  What does this have to do with multithreading?  If he needed locks to make it threadsafe then he could have used Java's built-in locking mechanism. Seemed to me like he felt the database itself was going to be used by more than 1 program (maybe more than 1 instance of this very program) at a given time, which is why he needed file locks to keep the separate processes from stepping on each other's toes.

     

     That's what I thought. Why would you not use synchronized blocks in Java if you are using a single-threaded application? The only possible reason for using a lock file is a multi-threaded or multi-process application. Hence my misconception about why he should have gone for multithreading... Oh well... Guess we'll all implement single-threaded lock-file applications like MasterPlanSoftware.

    Edit: Why does he even need locks? If he is completely single-threaded there is no reason for locks/synchronization. Locks are for multiple processes; synchronization is for multiple threads... So go explain that one away, my friends.

     

    And just for reference... I wasn't arguing about how to multithread or anything else. The OP pointed out that in HIS CASE there was no demand to multithread. Therefore a lock was unnecessary. Somehow everyone else got rolled up in this idea of multithreading. I only argued when it was brought up that if he was to use more threads, he would gain performance. That simple statement is not true. It is more complex than that.



  • @MasterPlanSoftware said:

    @dlikhten said:
    ...

    So yes, a 4-thread program on a 4-core CPU which does nothing but computation will run faster than using 1 core. You don't need to prove that to anyone.

    I can't believe I am going to say this... but thank you. You have gone back to my original point and I agree.

     

    I've been arguing that multithreaded apps can use multiple cores to complete computations faster, because they're in parallel, of course.

    You, MasterPlan, have repeated over and over that you can't do parallel computing with multithreading, when in fact it is commonly done on all major, modern OSes. 

    Are you trying to steal my argument now?

    @MasterPlanSoftware said:

    Multithreading cannot make a CPU execute two instructions at the same time.

    @MasterPlanSoftware said:

    Throwing an extra thread in your program does not make anything run parallel or concurrent.



  • @superjer said:

    Are you trying to steal my argument now?

    @MasterPlanSoftware said:

    Multithreading cannot make *a CPU* execute two instructions at the same time.
     

     

     Emphasis mine. Reading comprehension, my friend.



  •  Care to add emphasis here?

    @MasterPlanSoftware said:

    @superjer said:
    ...

    Multithreading when the threads run in parallel on multiple cores is parallel computing.

    ...

    Sorry. It is NOT parallel computing.



  • @superjer said:

     Care to add emphasis here?

    @MasterPlanSoftware said:

    @superjer said:
    ...

    Multithreading when the threads run in parallel on multiple cores is parallel computing.

    ...

    Sorry. It is NOT parallel computing.

    (Original emphasis included.) 

     

    Nope. Throwing threads at a thread scheduler is not parallel computing. Just because there is a chance two threads might get a CPU time slice at the same time does not make this parallel computing.



  • There is not just a chance. Given multiple cores, threads will run in parallel practically every time. My program confirms it. It works every time I run it on two Linuxes, Mac OS X, XP & Vista. It is supposed to work that way.

    Your only mistake is underestimating the design of modern operating systems.



  • @superjer said:

    There is not just a chance. Given multiple cores, threads will run in parallel practically every time. My program confirms it. It works every time I run it on two Linuxes, Mac OS X, XP & Vista. It is supposed to work that way.

    Your only mistake is underestimating the design of modern operating systems.

     

    The thread scheduler in Windows will choose a processor to run a new thread on depending on its current load. If one of your processors is a lot more idle than the other, both threads will run on the same processor.



  • I have tested that extensively (mostly on Linux though) and I have found that even if one core is 95% busy, the OS will still use that 5% for my second thread. Every time. Try it yourself.

    Then when the 1st thread completes (20x faster I suppose) the OS moves my second thread to the now-free core. The result? 10% of the job was completed in parallel. And that's the best it could do!

    Modern operating systems do an excellent job of using all available resources. For God's sake just try it and see.



  •  @superjer said:

    I have tested that extensively (mostly on Linux though) and I have found that even if one core is 95% busy, the OS will still use that 5% for my second thread. Every time. Try it yourself.

    Then when the 1st thread completes (20x faster I suppose) the OS moves my second thread to the now-free core. The result? 10% of the job was completed in parallel. And that's the best it could do!

    Modern operating systems do an excellent job of using all available resources. For God's sake just try it and see.

    That is a form of load balancing. If it were truly executing the two threads in parallel, and the threads were doing the same amount of work, they would complete at the same time. Every time.

     



  • If they aren't running in parallel, then how do you propose the computation completes approximately four times faster with four threads than it does with one thread?

    Keep in mind that the one thread uses "100%" of the CPU and the four use "399%" of the CPU according to top.



  • @superjer said:

    If they aren't running in parallel, then how do you propose the computation completes approximately four times faster with four threads than it does with one thread?

    Keep in mind that the one thread uses "100%" of the CPU and the four use "399%" of the CPU according to top.

     

    I cannot argue about your specific code or results. I am simply stating how these things work. If you achieve different results, it sounds like YOU should investigate. I am sure we would all love to hear the concrete results. Most of your assertions so far have added up to "Every thread you add increases performance." Unfortunately you are just flat wrong. I have no other answer for you.



  • "Every thread you add increases performance." is a ridiculous statement and I did not make it.

    As I have been saying all along, every thread you can run in parallel on multiple cores increases performance. And OSes allow you to do so easily with multithreading.

    My program is NOT a special case. It is the norm. Talk to the people who make Photoshop filters, games like Crysis and Supreme Commander, ray-tracers and BSP compilers.



  • Although this was a big argument over a moot point, I agree with superjer. In the case of file I/O and similar processing, another thread usually helps. I also agree with Master Plan that multithreading isn't the answer to everything and shouldn't be overused. And the scheduler is beyond our control; the threads just open us up to the possibility of parallel processing. Even with single-core processors, we are given the perception of simultaneously running threads, even though Windows swaps between them very fast. In the context of the beginning of this thread, you never know when Windows is going to preempt a thread, so the file locking is a good thing. Although he could have locked without doing so many deletes, and should have used a critical section or something.



  • @pitchingchris said:

    Although this was a big argument over a moot point, I agree with superjer. In the case of file I/O and similar processing, another thread usually helps. I also agree with Master Plan that multithreading isn't the answer to everything and shouldn't be overused. And the scheduler is beyond our control; the threads just open us up to the possibility of parallel processing. Even with single-core processors, we are given the perception of simultaneously running threads, even though Windows swaps between them very fast. In the context of the beginning of this thread, you never know when Windows is going to preempt a thread, so the file locking is a good thing. Although he could have locked without doing so many deletes, and should have used a critical section or something.

    That's just crazy talk.  How can someone actually see both sides of an issue and remain sane?  ;)



  • @MasterPlanSoftware said:

    That is a form of load balancing. If it were truly executing the two threads in parallel, and the threads were doing the same amount of work, they would complete at the same time. Every time.
     

    Why did you have to wait so long just to prove a meaningless academic point? And who wants true parallel processing anyway? I'd rather have a load balancer figure out the optimal usage of CPUs than try to do it myself in code.



  • I agree with the vast majority of what MasterPlanSoftware has to say about threads. They are not good in every situation, they do add overhead, their most important use is for dealing with blocking I/O, and threads by themselves do not add any speed to anything. All completely true.

    MasterPlanSoftware just has an overly pessimistic view of how operating systems handle threads on multi-core systems. And I think it is important for people who read a site like this to know that parallel computing with threads is not only possible, but very common.



  • @Obfuscator said:

    I'd rather have a load balancer figure out the optimal usage of CPUs than try to do it myself in code.
     

    Me too.

    @Obfuscator said:

    @MasterPlanSoftware said:

    That is a form of load balancing. If it were truly executing the two threads in parallel, and the threads were doing the same amount of work, they would complete at the same time. Every time.
     

    Why did you have to wait so long just to prove a meaningless academic point? And who wants true parallel processing anyway? I'd rather have a load balancer figure out the optimal usage of CPUs than try to do it myself in code.


    I agree it is also meaningless. But when people state something incorrectly as fact, it is generally good to set the record straight. It is meaningless in that you don't need to be concerned about it. But you cannot go around telling everyone their performance will increase for every thread they run, and that they will all run in parallel. There is a lot more to consider than that.

    In this case, I met someone who decided to argue against the very nature of the subject. Oh well.

    You could have joined in to make the point clearer earlier if you felt I was not getting it done quickly enough... I would have GLADLY stepped away.



  • @superjer said:

    I agree with the vast majority of what MasterPlanSoftware has to say about threads. They are not good in every situation, they do add overhead, their most important use is for dealing with blocking I/O, and threads by themselves do not add any speed to anything. All completely true.

    MasterPlanSoftware just has an overly pessimistic view of how operating systems handle threads on multi-core systems. And I think it is important for people who read a site like this to know that parallel computing with threads is not only possible, but very common.

     

    HAHAHA. Yep. As long as you redefine parallel.



  • @MasterPlanSoftware said:

    HAHAHA. Yep. As long as you redefine parallel.

    Here's my definition:

    parallel computing n 

    The method of running two or more parts of a computation at the same time on multiple processors or cores. For example, if a task requires three parts A, B and C to be computed, doing all three simultaneously (in parallel) is faster than doing A, then B, then C.

    I feel like I agree with Wikipedia:

    @http://en.wikipedia.org/wiki/Parallel_computing said:

    Parallel computing is a form of computing in which many instructions are carried out simultaneously. Parallel computing operates on the principle that large problems can almost always be divided into smaller ones, which may be carried out concurrently ("in parallel"). Parallel computing exists in several different forms: bit-level parallelism, instruction level parallelism, data parallelism, and task parallelism. ... Parallel computing has recently become the dominant paradigm in computer architecture, mainly in the form of multicore processors.

    What definition would you prefer?



  • @superjer said:

    @MasterPlanSoftware said:

    HAHAHA. Yep. As long as you redefine parallel.

    Here's my definition:

    parallel computing n 

    The method of running two or more parts of a computation at the same time on multiple processors or cores. For example, if a task requires three parts A, B and C to be computed, doing all three simultaneously (in parallel) is faster than doing A, then B, then C.

    I feel like I agree with Wikipedia:

    @http://en.wikipedia.org/wiki/Parallel_computing said:

    Parallel computing is a form of computing in which many instructions are carried out simultaneously. Parallel computing operates on the principle that large problems can almost always be divided into smaller ones, which may be carried out concurrently ("in parallel"). Parallel computing exists in several different forms: bit-level parallelism, instruction level parallelism, data parallelism, and task parallelism. ... Parallel computing has recently become the dominant paradigm in computer architecture, mainly in the form of multicore processors.

    What definition would you prefer?

     

    I am done arguing with you until you understand thread scheduling and how threads work.



  • @MasterPlanSoftware said:

    I am done arguing with you until you understand thread scheduling and how threads work.

    Why don't you just share your definition of parallel? Is it a secret? Would it be damaging to you if I knew what it was?

    Do you agree that a computation is parallel if and only if 2+ parts of it run literally simultaneously, or not?

    This has nothing necessarily to do with threads. Although I do understand thread scheduling just fine.

     

    (If you truly do not respond I will assume the root of this entire disagreement was simply your artificially-narrow definition of parallel.)



  • @dlikhten said:

    The goal of multithreading is that if one thread is waiting due to I/O of any sort, another thread is running and doing all it needs to do while the first thread is waiting.

    NO. THIS IS WRONG. WHERE DID YOU GET THIS IDEA? STOP IT.

    Threading is not, never has been, and never will be any kind of solution to IO problems. Threading for IO will make your application slower than doing it right.

     

    There is one and only one reason for threads, and that is SMP.



  • @superjer said:

    Here's my definition:

    parallel computing n 

    The method of running two or more parts of a computation at the same time on multiple processors or cores. For example, if a task requires three parts A, B and C to be computed, doing all three simultaneously (in parallel) is faster than doing A, then B, then C.

     

    Of course, you're forgetting that your trivial little inverse-root example has each thread essentially being independent of each other, except for the final summation.

    If you have multiple processors (discrete CPU chips, multi-core, Pentium 4 hyperthreading, whatever), then yes, any app which can fire up multiple threads WHICH ARE INDEPENDENT OF EACH OTHER will generally run faster. That much is obvious. We'll ignore I/O overhead, memory bus contention, etc. We'll just assume that the physical resources needed for the thread to run are 100% available.

    But if the threads are even somewhat inter-dependent, then multithreading will probably end up being worse. What if you were computing something like a Fibonacci sequence? You can't split that into multiple threads, because each term depends on the sum of the calculations that came before it. Every thread 'n' would block until thread 'n-1' completed. You'd end up with no speed increase at all, and most likely a speed DECREASE because of the overhead of spawning all the threads in the first place.

    In other words, multi-threading/parallel computing is fine for running a SETI screensaver, SHA1 hash buster, a ray tracer, or an mp3 encoder, because those tasks can all be split up into smaller portions which are more or less completely independent of each other.

    But let's see you try and write a multi-threaded encryption engine. Especially one using a stream cipher. At most you could split it into 2 or 3 threads: bit-stream generator, reader, and writer. Spawn as many extra threads as you want; you can't use them to jump ahead and have each thread encrypt a different section of the data, because stream ciphers are UTTERLY and COMPLETELY linear and non-parallelizable.

    So you can throw out your definition above, because it's only one minor special case of a vastly larger body of problems. If it's still not clear to you, then try assuming A = jump into mud puddle, B = washing clothes, C = taking a shower. Try and parallelize that without ending up dirty in the end. At best you'll end up with A + B|C.
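    MarcB's stream-cipher point can be sketched with a toy keystream (an LCG, emphatically not a real cipher): byte n of the keystream depends on the state left behind by byte n-1, so no thread can jump ahead:

```java
public class StreamCipherSketch {
    // Toy keystream: each byte is derived from the state produced by the
    // PREVIOUS byte, so position n cannot be computed until n-1 is done.
    // That dependency chain is what makes the scheme non-parallelizable.
    static byte[] xorStream(byte[] data, long key) {
        long state = key;
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            state = state * 6364136223846793005L + 1442695040888963407L; // LCG step
            out[i] = (byte) (data[i] ^ (byte) (state >>> 56));
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] msg = "attack at dawn".getBytes();
        byte[] ct = xorStream(msg, 42L);
        byte[] pt = xorStream(ct, 42L);      // XOR with the same keystream decrypts
        System.out.println(new String(pt));  // prints "attack at dawn"
    }
}
```

    Contrast this with the inverse-square-root sum, where every term depends only on its index and can be computed in any order.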



  • MarcB, you are mostly correct but I think you are a victim of not reading my previous posts... 

    @MarcB said:

    Of course, you're forgetting that your trivial little inverse-root example has each thread essentially being independent of each other, except for the final summation.

    I'm not forgetting anything. I picked that computation precisely because it is an excellent candidate for parallel processing: the threads don't have to communicate except at the beginning and end.

    And I know that parallel computing can only be used for certain kinds of problems. I've stated that several times in previous posts.

    @MarcB said:

    So you can throw out your definition above, because it's only one minor special case of a vastly larger body of problems.

    My definition does not claim that parallization is possible for all problems! It just says if you do things in parallel (which obviously requires that it be logically possible for the problem at hand) then it will be faster. 

     



  • @superjer said:

    My definition does not claim that parallelization is possible for all problems!
     

    Actually, yes it does claim exactly that. It flat out says "doing all three (simultaneously) ... IS FASTER". It doesn't say "in some cases", or "in most cases", and it definitely doesn't say "... requires that it be logically possible ...".



  • @MarcB said:

    @superjer said:

    My definition does not claim that parallelization is possible for all problems!
     

    Actually, yes it does claim exactly that. It flat out says "doing all three (simultaneously) ... IS FASTER". It doesn't say "in some cases", or "in most cases", and it definitely doesn't say "... requires that it be logically possible ...".

     

    I am sure you have the best intentions here, and I know he sounds coherent... but this is one of those losing battles with someone who has convinced themselves that reality is wrong.

    Do yourself a favor, just take the high road. I wish I had figured that out sooner.



  • MasterPlanSoftware: You continually tell me I'm wrong, but you fail to provide any examples, or your preferred definition of parallel, or links to any reputable sources backing up your point of view that threads cannot -- or should not -- be used for parallel computing on multi-core systems with common OSes. You fail.

    MarcB: If I tell you that putting on running shoes will help you run faster, do I really have to mention that you must have feet and legs for it to work?

    I certainly hope not.

    My definition simply stated that parallel computing is "the method of running two or more parts of a computation at the same time on multiple processors or cores." And then I provided an example. Examples are not all-encompassing claims; they are specific by definition.



  • (On second thought...)

    If you don't like examples in definitions then fine, let's throw it out. I've already provided a more to-the-point, strict IFF definition (which MasterPlanSoftware refuses to acknowledge because he already knows he was wrong):

    @superjer said:

    Do you agree that a computation is parallel if and only if 2+ parts of it run literally simultaneously, or not?



  • I can't believe this entire thread is all about semantics. From my reading of the thread, everyone actually agrees on the key points - you've got this concept of running different bits of a task on different processors or cores, and multithreading usually allows you to take advantage of that to reduce the total elapsed time, as long as you have multiple processors/cores. But MPS is refusing to allow superjer to use the word "parallel" to describe this, having some other (and unstated) definition of "parallel". In a sense, maybe you're both right - there probably is a technical definition of parallel that implies the different bits of the task are proceeding in lockstep, starting and finishing simultaneously, but equally, 95% of people will not only know what superjer means by "parallel", but will use it exactly the same way - making it perfectly valid in my book. What a waste of a thread! :)



  • Doesn't Java use software threads anyway?



  • @XIU said:

    Doesn't java use software threads anyway?

    As opposed to...  ???  Linen?  Extra-absorbent cotton?



  • @GalacticCowboy said:

    @XIU said:

    Doesn't java use software threads anyway?

    As opposed to...  ???  Linen?  Extra-absorbent cotton?

     

    No, no, no, no, NO! It's not true! Let me give you the history lesson:

    Back in the 12th century, there was a man who decided that all threading was to be done on wooden tables using nothing but his own hair... Long story short, Java uses brick threads. If you want to check, you need to open up the Java C code and look for the "createThread" function. You will notice that that function actually materializes a brick somewhere in Indonesia to be used by your process. Don't believe me? See for yourself.



  • As opposed to using the actual threading functions of the operating system. But from what I've heard on this board, those are merely horrors of the past.



  • @dlikhten said:

    Long story short, Java uses brick threads. If you want to check, you need to open up the Java C code and look for the "createThread" function. You will notice that that function actually materializes a brick somewhere in Indonesia
     

    heh lolz 



  •  @dhromed said:

    @dlikhten said:

    Long story short, Java uses brick threads. If you want to check, you need to open up the Java C code and look for the "createThread" function. You will notice that that function actually materializes a brick somewhere in Indonesia
     

    heh lolz 

    I didn't know about this either, but one day I hit a Java bug and suddenly a brick materialized behind me, which caused me to stub my toe. Instantly realizing what had happened, I killed my process, which caused no less than three cats to come into my apartment and drag the brick away. I immediately dove into the source code, which was littered with security measures, causing me to break my left pinky. Later I found out that I was actually trippin' on acid, but that's beside the point.

     

    Where were we? Oh yeah, threads come from geckos!

     



  • Just to add more fuel to the fire...

    Performance issues aside, sometimes multithreading can actually decrease the complexity of your code.  Say, for example, you want some task to be executed every 5 minutes.  It's probably conceptually simpler to spawn a new thread (or use something like java.util.Timer, which uses a separate thread behind the scenes) than to shoehorn this into your main work thread.  No real performance gain or loss here.  The idea is that it's easier to understand that 2 separate things are going on at the same time.  Of course, now you'll have to make sure your objects are thread-safe, but... 
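    A minimal, runnable version of that Timer idea (the class name and the counter are invented for illustration, and the demo uses a short period so it finishes quickly; the scenario above would pass 5 * 60 * 1000 ms instead):

    ```java
    import java.util.Timer;
    import java.util.TimerTask;
    import java.util.concurrent.atomic.AtomicInteger;

    public class PeriodicTaskDemo {
        static final AtomicInteger runs = new AtomicInteger();

        public static void main(String[] args) throws InterruptedException {
            // Timer runs all of its tasks on one background thread.
            Timer timer = new Timer("periodic-worker", /* isDaemon */ true);

            TimerTask task = new TimerTask() {
                @Override public void run() {
                    // Whatever this touches must be thread-safe: it executes
                    // on the timer's thread, not the main thread.
                    runs.incrementAndGet();
                }
            };

            // Initial delay 0 ms, then repeat every 50 ms (demo period only;
            // the post's example would use 5 * 60 * 1000).
            timer.scheduleAtFixedRate(task, 0, 50);

            Thread.sleep(200);   // let a few runs fire, then shut down
            timer.cancel();
            System.out.println("task ran " + runs.get() + " times");
        }
    }
    ```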



  • @PSWorx said:

    As opposed to using the actual threading functions of the operating system. But from what I've heard on this board, those are merely horrors of the past.

    The language doesn't care; Sun Java can do it either way nowadays, and other implementations vary. 



  • @Outlaw Programmer said:

    It's probably conceptually simpler to spawn a new thread (or use something like java.util.Timer that uses a separate thread behind the scenes) than try to shoehorn this into your main work thread.  No real performance gain or loss, here.  The idea is that it's easier to understand that 2 separate things are going on at the same time.  Of course, now you'll have to make sure your objects are thread-safe, but... 

    While in theory this should work, and you can contrive examples of it, I have never seen a real-world case where it was simpler to make the application thread-safe than it was to implement an event dispatch loop. 
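    For comparison, here is a minimal sketch of what such a single-threaded event dispatch loop might look like (class and method names are invented; a real dispatcher would block in select()/poll() with the next deadline as the timeout rather than just sleeping):

    ```java
    import java.util.PriorityQueue;

    // A tiny single-threaded dispatch loop with timed events. Everything
    // runs on one thread, so nothing here needs to be thread-safe.
    public class EventLoop {
        // An event scheduled to fire at an absolute time (epoch millis).
        record Event(long dueAt, Runnable action) {}

        private final PriorityQueue<Event> queue =
            new PriorityQueue<>((a, b) -> Long.compare(a.dueAt(), b.dueAt()));

        public void schedule(long delayMillis, Runnable action) {
            queue.add(new Event(System.currentTimeMillis() + delayMillis, action));
        }

        // Run until no events remain.
        public void run() throws InterruptedException {
            while (!queue.isEmpty()) {
                Event next = queue.peek();
                long wait = next.dueAt() - System.currentTimeMillis();
                if (wait > 0) {
                    Thread.sleep(wait);  // a real loop would block on IO here too
                    continue;
                }
                queue.poll().action().run();
            }
        }
    }
    ```

    With the sleep replaced by a poll()-style wait on the program's file descriptors, the same loop handles both timers and IO without a second thread.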



  • @asuffield said:

    While in theory this should work, and you can contrive examples of it, I have never seen a real-world case where it was simpler to make the application thread-safe than it was to implement an event dispatch loop.

    I have one real-world example. In my ray-tracer, I didn't want the interface to be blocked by the rendering process, which can take a long time. The render thread only writes to the image buffer and the interface only reads from it. They don't share any other objects or use any non-reentrant functions, so I didn't have to do anything to make them thread-safe. The render thread is actually a thread manager itself that spawns sub-threads to draw each section of the image (allowing parallelization where supported). The nature of ray tracing keeps the thread complexity low -- it's all just a bunch of totally thread-safe math and reading a lot of const data.

    Disclaimer: threads are going to make your program much more complicated 99% of the time. 100% of the time if you don't understand them very well.

    If anyone can find a reputable source that defines parallel computing as requiring any kind of "lock-step" instructions or "finishing at exactly the same time" I would like to see it.
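    The render-thread arrangement described above can be sketched roughly like this (the class name and the traceRay placeholder are invented; the point is that each worker writes to a disjoint strip of a shared buffer, so no locking is needed):

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class StripRenderer {
        static final int WIDTH = 64, HEIGHT = 64;

        // Stand-in for the actual per-pixel ray-tracing math: pure,
        // reentrant, reads only its arguments.
        static int traceRay(int x, int y) {
            return (x ^ y) & 0xFF;
        }

        static int[] render(int numThreads) throws Exception {
            int[] buffer = new int[WIDTH * HEIGHT];
            ExecutorService pool = Executors.newFixedThreadPool(numThreads);
            List<Future<?>> strips = new ArrayList<>();

            int rowsPerStrip = (HEIGHT + numThreads - 1) / numThreads;
            for (int t = 0; t < numThreads; t++) {
                final int y0 = t * rowsPerStrip;
                final int y1 = Math.min(HEIGHT, y0 + rowsPerStrip);
                strips.add(pool.submit(() -> {
                    // Disjoint rows: no two workers touch the same cell,
                    // so the shared buffer needs no synchronization.
                    for (int y = y0; y < y1; y++)
                        for (int x = 0; x < WIDTH; x++)
                            buffer[y * WIDTH + x] = traceRay(x, y);
                }));
            }
            for (Future<?> f : strips) f.get();  // wait for all strips
            pool.shutdown();
            return buffer;
        }
    }
    ```

    Because the workers only partition a buffer and read const inputs, the result is identical regardless of the thread count.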



  • @asuffield said:

    Threading is not, never has been, and never will be any kind of solution to IO problems. Threading for IO will make your application slower than doing it right. 

    For the benefit of those who are not doing it right, could you please share what the right way is?

    As far as I know, generic PC hardware doesn't provide any means of interfacing directly with an application's message queue.  And even if it did, the concepts of worker threads and callbacks were devised precisely for the purpose of eliminating the need for idle-event processing and spaghetti code from thousands of polling calls.  Raw performance isn't the only issue at hand here; when the performance difference becomes negligible (as it generally is for long-running worker threads) then good design becomes another very important factor.  Besides which, a design that uses worker threads will automatically get the benefits of SMP if you do happen to be on a multi-core machine.

    But seeing as how you're making such an issue of this, I'm sure that can't be what you had in mind.  So please, educate us. 

