Speaking of unstable databases....



  • Speaking of unstable databases... Last week, I was doing a code review of a modest, stand-alone system for someone in another group. At the top of a file called Database.java, I found the following comment:

    // The database group is too hard to deal with and too slow to respond, and I
    // have a deadline, so I'm going to roll my own textfile-db and just get on with it.
    

    Ok, to be fair, his system really only required a simple flat file with a couple of rows each containing a couple of fields, and our DB group IS a WTF in and of itself, and they take forever to make even the simplest changes, so I can understand the guy's frustration. However, upon perusing the file, I found stuff like this:

    /**
     * Lock the database
     */
    public static void lock() {
      File file = new File(Globals.DB_LOCK_FILE);
      file.deleteOnExit();
      try {
        while (!file.createNewFile()) {
          try { Thread.sleep(50); } catch (InterruptedException ie) {}
        }
      } catch (IOException ioe) {}
    }
    
    public static void unlock() {
      File file = new File(Globals.DB_LOCK_FILE);
      boolean ignoreResult = file.delete();
    }
    

    Mind you, the system processes commands serially, so parallel accesses can never ever occur. In spite of this, virtually every single "DB" call is wrapped in lock/unlock, sometimes amounting to hundreds of create/delete pairs per logical action.

    I ran it through JProbe and found that about 60% of the total processing time is used in accessing the file system in this manner. He claimed that he knew the application was single-threaded (he should, as he wrote it), but that he just wanted to do a good job on the database class in case someone ever wanted to re-use it.

    sigh



  • What's the plural for TRWTF?

    @snoofle said:

    parallel accesses can never ever occur

    Which guarantees that it will.

    @snoofle said:

    virtually every single "DB" call is wrapped in lock/unlock, sometimes amounting to hundreds of create/delete pairs per logical action

    Why not lock/unlock around an entire, oh, let's call it a "transaction"?
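    What lock/unlock around a whole transaction would look like, as a sketch (the `Db` class and its operation counter here are hypothetical stand-ins, not the actual code under review): one create/delete pair per logical action instead of one per row.

```java
// Sketch: acquire the lock once per logical transaction instead of once per
// row access. Db is a hypothetical stand-in; the counter just tallies how
// many lock-file operations would hit the file system.
import java.util.concurrent.atomic.AtomicInteger;

public class TransactionDemo {
    static class Db {
        static final AtomicInteger lockOps = new AtomicInteger();
        static void lock()   { lockOps.incrementAndGet(); }  // would create the lock file
        static void unlock() { lockOps.incrementAndGet(); }  // would delete the lock file
        static void write(String row) { /* would append the row to the flat file */ }
    }

    public static int writeRows(int rows) {
        Db.lockOps.set(0);
        Db.lock();                        // one create...
        try {
            for (int i = 0; i < rows; i++) {
                Db.write("row-" + i);     // ...covers every row in the transaction
            }
        } finally {
            Db.unlock();                  // ...and one delete, instead of `rows` pairs
        }
        return Db.lockOps.get();
    }

    public static void main(String[] args) {
        System.out.println(writeRows(100)); // 2 lock-file operations, not 200
    }
}
```

    With 100 row writes per transaction, that's 2 file-system operations instead of 200.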



  • @GalacticCowboy said:

    @snoofle said:

    virtually every single "DB" call is wrapped in lock/unlock, sometimes amounting to hundreds of create/delete pairs per logical action

    Why not lock/unlock around an entire, oh, let's call it a "transaction"?

    That would have made sense...

    @GalacticCowboy said:

    @snoofle said:
    parallel accesses can never ever occur

    Which guarantees that it will.

    Since you haven't seen the rest of the system, that's a perfectly reasonable assumption. However, the core code looked something like this:

    // MainServer.java
    //pseudo code
    while (true) {
      try {
          get client connection request
          serially read request from socket (no threads to handle the load)
          serially process the request (again, no threads)
          respond to client
          close socket
      } catch (...) {
        ...
      }
    }
    
    In fact, going into more detail, it would have made more sense to... I don't know, encapsulate each request in a command with some local flag variables, perhaps even keeping the state in-memory for the duration of the sequential serialized activity, and completely eliminate the need for any kind of DB altogether.
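    A sketch of that suggestion (all names here are hypothetical): each request becomes a command object, and its working state lives in an in-memory map for the duration of the strictly serial request, so no "database" is needed at all.

```java
// Sketch of the suggested alternative: each request is encapsulated in a
// command whose local flag variables live in memory for the duration of the
// (strictly serial) request. All names are hypothetical.
import java.util.HashMap;
import java.util.Map;

public class CommandDemo {
    interface Command {
        String execute(Map<String, String> state); // per-request, in-memory state
    }

    static class SetCommand implements Command {
        final String key, value;
        SetCommand(String key, String value) { this.key = key; this.value = value; }
        public String execute(Map<String, String> state) {
            state.put(key, value);   // local state, no file I/O at all
            return "OK";
        }
    }

    // The serial main loop: one request at a time, state discarded afterwards.
    public static String handle(Command cmd) {
        Map<String, String> state = new HashMap<>();
        return cmd.execute(state);
    }

    public static void main(String[] args) {
        System.out.println(handle(new SetCommand("user", "snoofle")));
    }
}
```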


  • Most databases do the following: they have a lock table. Since tables don't get written to the hard drive until the system deems it necessary, most of the locks may very well live in memory, sometimes pushed to the hard drive via inserts. He could have just used some mutexes.

    I mean, Eclipse makes a .lock file when you start the workbench, but it's done once and only during startup. Actually it just ensures that certain permissions are available... because in Windows (if running on Windows) you might not be able to delete a file if, say, a virus scanner has it open :( There might be a case where he can't delete his lock.

    To defend him, he may have hacked this up quickly and, due to time constraints, not had time to multi-thread it all. Unfortunately I had to impose these sorts of restrictions on myself because I didn't have the spare time to write multithreading and test it out; sure I wanted to, but when the manager demands that the project is finished at a certain point, sometimes you just have to give in. Plus maybe that solves the issue fine for him; if performance becomes an issue he can fix it up then. 

     

    Edit:

    However, single-threading all client requests is pretty indefensible.



  • @dlikhten said:

    Most databases do the following: they have a lock table. Since tables don't get written to the hard drive until the system deems it necessary, most of the locks may very well live in memory, sometimes pushed to the hard drive via inserts. He could have just used some mutexes.
     

    Thanks for the education into how databases work. I am sure no one else here knew this. I know I am more enlightened!

    @dlikhten said:

     

    However, single-threading all client requests is pretty indefensible.

    snoofle specifically said this is only a single-threaded application. From the very minimal examples he has provided I don't see why you would want it to be multithreaded. Unless you are 'one of those people' who equate multithreading with speed...

    @dlikhten said:

    To defend him, he may have hacked this up quickly and, due to time constraints, not had time to multi-thread it all. Unfortunately I had to impose these sorts of restrictions on myself because I didn't have the spare time to write multithreading and test it out; sure I wanted to, but when the manager demands that the project is finished at a certain point, sometimes you just have to give in. Plus maybe that solves the issue fine for him; if performance becomes an issue he can fix it up then. 

    I don't think you read the OP. But you did step right up and post. Yay!

     

    Actually, from reading your post, I am starting to think you are equating multithreading with speed... Wow.



  • In the interest of stating the facts, I looked in the system that stores the output of the application I reviewed. It logged 44 transactions last month. That's about 2 per day. Going back 2 more months, I find 91 transactions, and slightly more than 2 per day. I'm thinking you don't really need multithreading for that, so I don't fault him for saving time and using single threading for such a low-volume application.



  •  @snoofle said:

    In the interest of stating the facts, I looked in the system that stores the output of the application I reviewed. It logged 44 transactions last month. That's about 2 per day. Going back 2 more months, I find 91 transactions, and slightly more than 2 per day. I'm thinking you don't really need multithreading for that, so I don't fault him for saving time and using single threading for such a low-volume application.

    And even then, volume/performance should not be the deciding factor in multithreading. Multithreading is not going to make anything faster or use fewer resources. Given one large task done in a single thread, or ten tasks done in individual threads, they should complete about the same time. In fact the multiple threads would likely have more overhead and complete later. Not to mention the inherent ease of complete and total WTFery when written by a low-quality programmer.

    Multithreading would be useful for an application where the client connections could come at the same time, and if the single thread didn't respond, the client would never try again. This would be a basic criterion of the design, not a performance enhancement.
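    A sketch of that design criterion (hypothetical names; an executor stands in for the real socket-accept loop): the accepting thread hands each request off to a worker and is immediately free to accept the next client, so a slow request never blocks a new connection.

```java
// Sketch (hypothetical names): the "responsiveness" version of the server
// loop. The accept thread only hands the connection off to a worker pool,
// so a slow request never blocks the next client from being accepted.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ResponsiveServer {
    // Daemon worker threads so the JVM can exit without an explicit shutdown.
    static final ExecutorService workers = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Stand-in for "accept a client connection": submit and return at once,
    // leaving the accept loop free to take the next connection.
    public static Future<String> accept(String request) {
        return workers.submit(() -> "handled:" + request);
    }

    // Convenience: wait for one request's reply (unchecked for brevity).
    public static String acceptAndWait(String request) {
        try {
            return accept(request).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Future<String> a = accept("req-1"); // accept loop is already free again here
        Future<String> b = accept("req-2"); // second client is not blocked by the first
        System.out.println(acceptAndWait("req-3"));
    }
}
```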



  • @MassaPanSofwah said:

    Given one large task done in a single thread, or ten tasks done in individual threads, they should complete about the same time.

    Oh so very wrong if you have multiple processors or cores. (And who doesn't nowadays?)



  • @superjer said:

    @MassaPanSofwah said:

    Given one large task done in a single thread, or ten tasks done in individual threads, they should complete about the same time.

    Oh so very wrong if you have multiple processors or cores. (And who doesn't nowadays?)

     

    Multi-core/multi-CPU considerations should not change your design and whether or not your application uses multiple threads.

    Yes, multiple cores will give a multithreaded application better performance, but that doesn't mean you should throw multithreading into everything you do. 



  • @GalacticCowboy said:

    What's the plural for TRWTF?

    TRWTFSA



  • Am I missing something here?  What does this have to do with multithreading?  If he needed locks to make it thread-safe then he could have used Java's built-in locking mechanism.  Seemed to me like he felt the database itself was going to be used by more than one program (maybe more than one instance of this very program) at a given time, which is why he needed file locks to keep the separate processes from stepping on each other's toes. 



  • @superjerliketoplaywithquotetagssomaybeishouldtooweeeethisisfun said:

    @MassaPanSofwah said:

    Given one large task done in a single thread, or ten tasks done in individual threads, they should complete about the same time.

    Oh so very wrong if you have multiple processors or cores. (And who doesn't nowadays?)

     

    In fact, just because your application has multiple threads doesn't guarantee that the multiple cores/CPUs will even be used.

     



  • @superjer said:

    @MassaPanSofwah said:

    Given one large task done in a single thread, or ten tasks done in individual threads, they should complete about the same time.

    Oh so very wrong if you have multiple processors or cores. (And who doesn't nowadays?)

    Because most servers only ever run one application at a time, right?

    Simply having multiple cores is not a reason for *your* application to use them all.  And if you have a bunch of individual threads all sitting around doing not much of anything, the scheduler might just as well run them all on the same core anyway. 

    Multithreading ensures that an application can consistently respond to requests.  Databases are multithreaded because you might get 50 or 100 people trying to run queries at once, and there's no reason to make those people sit around waiting if nobody else's transaction is going to affect them (i.e. a lock).  It doesn't make sense to sit around waiting several milliseconds for a slow hard drive to spit out some data while you could be interpreting other queries and creating more execution plans.

    There are certain situations when multithreading can improve the *perceived* performance of an entire *system*.  One example is if you have to query a remote database or web service and wait several seconds for the results.  If you have local processing that you can do between these calls, that's a great time to do it.  Or if you're in the middle of serving a web page, you can have a separate thread notify you when that long-running query is done so you can temporarily yield control to other requests.  But if you want to get technical, this isn't actually improving *performance* in the sense of using fewer CPU cycles, fewer I/O operations, or less memory.  It's a question of throughput in a distributed system.

    In fact, multi-threading almost always degrades performance, by definition.  If your multi-threaded app is doing the exact same work as a single-threaded app but incurring the additional overhead of context switches and paging, then it's hurting the raw performance.  Most of the time, threading is an exercise in trading raw performance for something else (e.g. responsiveness or total throughput).

    And incidentally, this post wasn't really about multi-threaded vs. single-threaded, it was about creating a serialized thread-safe implementation of something that the system constraints specify can never possibly be hit by two threads at once.  You don't need thread-safe classes in a single-threaded application.



  • I'm not even sure what to say to you guys... but multithreading can vastly improve "perceived performance" in certain situations. Right now I'm working on an app that completes 3.99 times faster with 4 threads than with just one. We're talking 4 hours down to 1 hour. So reading that "they should complete about the same time" just set off my "Like, Not Even!" alarm.



  • I am happy you had a good experience with multithreading, but don't advocate its usage for performance. That is simply not what multithreading is for.

    If you aren't careful, you will end up breeding more dlikhtens who run around making posts on forums like: "Teh appz are slowz! They should use multithreading!!! noobz! LOLz!"



  • Multithreading works well when you have blocking IO combined with CPU work.  Like, for example, a database, an operating system, a web server to name a few.  Any application that exhibits the IO+CPU+IO will typically benefit from MT.
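    A sketch of that IO+CPU shape (names are made up, and the blocking "IO" is simulated with a sleep): the CPU-bound work proceeds while another thread waits on the blocking call, so wall time approaches max(IO, CPU) rather than IO + CPU.

```java
// Sketch of the IO+CPU pattern (hypothetical names): while one thread blocks
// on "I/O" (simulated here with a sleep), the main thread keeps the CPU busy,
// so total wall time is roughly max(io, cpu) instead of io + cpu.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IoCpuOverlap {
    // The CPU-bound part: 1 + 2 + ... + n.
    public static long sumTo(long n) {
        long s = 0;
        for (long i = 1; i <= n; i++) s += i;
        return s;
    }

    public static long overlapped() {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        try {
            Future<String> io = pool.submit(() -> {   // blocking-I/O stand-in
                Thread.sleep(100);
                return "row data";
            });
            long cpu = sumTo(10_000_000L);            // runs while the "I/O" waits
            io.get();                                 // "I/O" result collected here
            return cpu;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(overlapped()); // 50000005000000
    }
}
```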



  • @LoztInSpace said:

    Multithreading works well when you have blocking IO combined with CPU work.  Like, for example, a database, an operating system, a web server to name a few.  Any application that exhibits the IO+CPU+IO will typically benefit from MT.

     

    Very true. This is one of the only things it is good for.



  • @SlavePlanSoftware said:

    I am happy you had a good experience with multithreading, but don't advocate its usage for performance. That is simply not what multithreading is for.

    blah blah dlikhten blah blah


    Firstly, I do not appreciate you associating me with dlikhten.

    Secondly, avoiding multithreading and thereby not taking advantage of available (and mostly unused) resources for a processing-intense task is just silly. I'm sure my users would love to wait an additional 3 hours because "multithreading shan't be used for performance."

    You are correct in the sense that performance is not the primary advantage of multithreading in general. But it is a huge advantage where appropriate.



  • @MasterPlanSoftware said:

    I am happy you had a good experience with multithreading, but don't advocate its usage for performance. That is simply not what multithreading is for.

    If you aren't careful, you will end up breeding more dlikhtens who run around making posts on forums like: "Teh appz are slowz! They should use multithreading!!! noobz! LOLz!"

    I don't know if I agree 100% with this.  Multithreading complicates things tremendously.  One of the only times you'd want to go this route is when performance is an issue.  For example, when making a sandwich, it's easier to understand:

    1) Get Bread
    2) Get Turkey
    3) Put Turkey on bread
    4) Eat

    Than it is to understand:

    1a) Get Bread --- 1b) Get Turkey
    2) When you have both, put them together
    3) Eat

    Most of the time you'll want to keep things simple, but if you really need that damn sandwich, it can pay off to break up the task into smaller operations that can be done in parallel.
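    The 1a/1b recipe above maps naturally onto futures. A sketch (hypothetical names): fetch the two ingredients concurrently, then combine when both are ready.

```java
// Sketch of the parallel sandwich (hypothetical names): steps 1a and 1b run
// concurrently; step 2 fires once both ingredients are ready.
import java.util.concurrent.CompletableFuture;

public class Sandwich {
    public static String make() {
        CompletableFuture<String> bread  = CompletableFuture.supplyAsync(() -> "bread");   // 1a
        CompletableFuture<String> turkey = CompletableFuture.supplyAsync(() -> "turkey");  // 1b
        // Step 2: when you have both, put them together.
        return bread.thenCombine(turkey, (b, t) -> t + " on " + b).join();
    }

    public static void main(String[] args) {
        System.out.println(make()); // turkey on bread
    }
}
```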



  • @Outlaw Programmer said:

    @MasterPlanSoftware said:

    I am happy you had a good experience with multithreading, but don't advocate its usage for performance. That is simply not what multithreading is for.

    If you aren't careful, you will end up breeding more dlikhtens who run around making posts on forums like: "Teh appz are slowz! They should use multithreading!!! noobz! LOLz!"

    I don't know if I agree 100% with this.  Multithreading complicates things tremendously.  One of the only times you'd want to go this route is when performance is an issue.  For example, when making a sandwich, it's easier to understand:

    1) Get Bread
    2) Get Turkey
    3) Put Turkey on bread
    4) Eat

    Than it is to understand:

    1a) Get Bread --- 1b) Get Turkey
    2) When you have both, put them together
    3) Eat

    Most of the time you'll want to keep things simple, but if you really need that damn sandwich, it can pay off to break up the task into smaller operations that can be done in parallel.

     

    Threads will only help in a situation where you have to wait on something. Perhaps you want to make a tuna sandwich. Instead of waiting for the bread to toast, and then getting the tuna out of the can and mixed, you can do both at the same time. That would be two threads. That will help the overall time of the process.

    In your example, if the bread was on one side of the kitchen, the turkey on the other, you could not get both at the same time. Just like a CPU is not going to be able to process your two instructions at the same time. Multithreading cannot make a CPU execute two instructions at the same time. It simply avoids waiting when one thread is waiting for a blocking operation to complete.

    It is not done for performance. It is done for good design. If you have a blocking operation, and something else could be responding or processing, multithreading is the way to go.

    If you need to crunch a large amount of data, say sorting two arrays, and you break each sort into its own thread, there is not going to be a [consistent] performance gain. Those two sorts will require the same amount of CPU labor to work. You wouldn't save anything, and the context switching at times can have a negative impact on performance.



  • Sorry, thought we were talking about picking a multithreaded design when you know you have 2+ cores/CPUs, in which case you could perform 2 operations at the same time.

    I thought it was pretty obvious that breaking apart tasks when you only have 1 CPU couldn't possibly give you any performance gain.  My mistake! 



  • @superjer said:

    Firstly, I do not appreciate you associating me with dlikhten.
     

    I don't care. You make the same kind of ignorant comments he does, and you do the same immature type of crap. (Like constantly changing the quoted user's name. Do you think that accomplishes something?) If you wanted to be taken seriously you wouldn't do that kind of stuff. Therefore, the best I can tell, superjer == dlikhten == spectateswamp.

    @superjer said:

    Secondly, avoiding multithreading and thereby not taking advantage of available (and mostly unused) resources for a processing-intense task is just silly. I'm sure my users would love to wait an additional 3 hours because "multithreading shan't be used for performance."

    You are correct in the sense that performance is not the primary advantage of multithreading in general. But it is a huge advantage where appropriate.

    If I thought for a minute that you had any idea of what you were talking about I might argue with this. But I don't think so. So I won't.



  • @Outlaw Programmer said:

    Sorry, thought we were talking about picking a multithreaded design when you know you have 2+ cores/CPUs, in which case you could perform 2 operations at the same time.

    I thought it was pretty obvious that breaking apart tasks when you only have 1 CPU couldn't possibly give you any performance gain.  My mistake! 

     

    Just for reference as well, most OSes' threading schemes (that I know of) won't necessarily use the other CPU/core just because you start another thread. In fact, the OS can also delegate part of the same thread off to the other core/CPU anyway. The concepts of parallel computation (and direct programmatic control over it) are a ways away in most environments AFAIK.



  • @MasterPlanSoftware said:

    Just like a CPU is not going to be able to process your two instructions at the same time. Multithreading cannot make a CPU execute two instructions at the same time.

    Yes it can. My processor can execute FOUR instructions at one time. You need to go learn about multi-core processors.

    @MasterPlanSoftware said:

    ...sorting...

    You simply do not know what you are talking about. You need to go learn about multithreaded sorting algorithms. For example: http://en.wikipedia.org/wiki/Quick_sort#Parallelizations

    And sorting isn't even close to the best example of how much faster you can number-crunch with MT.
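    For the curious, here is a minimal two-thread version of the idea (a sketch, not the parallel quicksort from the link): each half of the array is sorted on its own thread, then the sorted halves are merged. On two free cores the halves really do sort concurrently.

```java
// Sketch: two-thread merge sort. Each half sorts on its own thread (which a
// multi-core scheduler is free to run in parallel), then the halves merge.
import java.util.Arrays;

public class TwoThreadSort {
    public static int[] sort(int[] a) {
        int mid = a.length / 2;
        int[] lo = Arrays.copyOfRange(a, 0, mid);
        int[] hi = Arrays.copyOfRange(a, mid, a.length);
        Thread t1 = new Thread(() -> Arrays.sort(lo));   // half 1
        Thread t2 = new Thread(() -> Arrays.sort(hi));   // half 2, concurrently
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        // Merge the two sorted halves.
        int[] out = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < lo.length && j < hi.length)
            out[k++] = (lo[i] <= hi[j]) ? lo[i++] : hi[j++];
        while (i < lo.length) out[k++] = lo[i++];
        while (j < hi.length) out[k++] = hi[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{5, 3, 8, 1, 9, 2}))); // [1, 2, 3, 5, 8, 9]
    }
}
```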



  • @superjer said:

    @MasterPlanSoftware said:

    Just like a CPU is not going to be able to process your two instructions at the same time. Multithreading cannot make a CPU execute two instructions at the same time.

    Yes it can. My processor can execute FOUR instructions at one time. You need to go learn about multi-core processors.

    @MasterPlanSoftware said:

    ...sorting...

    You simply do not know what you are talking about. You need to go learn about multithreaded sorting algorithms. For example: http://en.wikipedia.org/wiki/Quick_sort#Parallelizations

    And sorting isn't even close to the best example of how much faster you can number-crunch with MT.

     

    Actually this is what I was referring to: http://en.wikipedia.org/wiki/Parallel_computing

    Yes, you can optimize sorting algorithms. However, unfortunately for your argument you DON'T have direct control over which CPU executes which instruction.

    @superjer said:

    Yes it can. My processor can execute FOUR instructions at one time. You need to go learn about multi-core processors.

    All processors can execute multiple instructions at one time. They are called multiple pipelines. Unfortunately this is not what I was referring to. When you actually understand the subject at hand, feel free to come back with an intelligent argument. I know all this computer lingo can be confusing.

     



  • @MasterPlanSoftware said:

    If I thought for a minute that you had any idea of what you were talking about I might argue with this. But I don't think so. So I won't.

    Backing down then? It's OK to be wrong sometimes. Get over it. If you really think you are right, argue your point logically.

    @MasterPlanSoftware said:

    Just for reference as well, most OSes' threading schemes (that I know of) won't necessarily use the other CPU/core just because you start another thread.

    You are correct that there is *no guarantee* that the OS will put each thread on a new core. But have you tested it? My MT app runs on Windows and Linux and it uses all the cores every time I run it. I don't need it to be guaranteed. I need it to work most of the time and when available. It does. You can't argue with reality, Master.

     



  • Your argument is shifting. Observe: 

    @MasterPlanSoftware said:

    Those two sorts [MT and single-threaded] will require the same amount of CPU labor to work. You wouldn't save anything, ...

    @MasterPlanSoftware said:

    Yes, you can optimize sorting algorithms [with MT].

    See? Now you are agreeing with me, but you won't admit I'm right. People like you are terribly annoying to argue with.

    @MasterPlanSoftware said:

    However, unfortunately for your argument you DON'T have direct control over which CPU executes which instruction.

    I DON'T NEED IT.

    Unless the OS is very busy handling another intense task, it will put my threads on separate cores. Your new, weaker argument is not relevant to my point. Seriously, implement an MT sort or something and run it on a modern OS and see for yourself. We can do it. We have the technology.



  • @superjer said:

    Your argument is shifting. Observe: 

    @MasterPlanSoftware said:

    Those two sorts [MT and single-threaded] will require the same amount of CPU labor to work. You wouldn't save anything, ...

    @MasterPlanSoftware said:

    Yes, you can optimize sorting algorithms [with MT].

    See? Now you are agreeing with me, but you won't admit I'm right. People like you are terribly annoying to argue with.

    @MasterPlanSoftware said:

    However, unfortunately for your argument you DON'T have direct control over which CPU executes which instruction.

    I DON'T NEED IT.

    Unless the OS is very busy handling another intense task, it will put my threads on separate cores. Your new, weaker argument is not relevant to my point. Seriously, implement an MT sort or something and run it on a modern OS and see for yourself. We can do it. We have the technology.

     

    I am sorry that your opinions don't match reality. Your argument basically consists of "parallel computing is faster". We can all agree. However, unfortunately for you, multithreading is not parallel computing.

    I don't really feel like arguing anymore with someone who is lacking a basic understanding of fundamentals. You go ahead though. Maybe you will find a way to argue that won't boil down to you not understanding thread scheduling.

    You may feel you have found a performance improvement. You may have even found one (I am sure we will see this code you refer to on our front page someday too). But you are failing to understand the concepts, and why threading is used.



  •  Then don't argue with him! My, these threads get annoying quickly. (Then don't read them?? Touché.)

     Protecting a file-based database with a lock file is a very good idea, and it requires disk activity to do. And not only if you are considering multi-threading this task: you may someday think of a great use for that file somewhere else (although, if you did that, you'd probably be better off dealing with the DB guys). 

    He should have been more efficient with his locking. Bonus points for using constructs similar to your DB libraries', so the switch can be made. (Maybe he should have pulled in something like SQLite?)



  • @robbak said:

     Then don't argue with him! My, these threads get annoying quickly. (Then don't read them?? Touché.)
     

    Took the words from my fingertips...



  • @MasterPlanSoftware said:

    Your argument basically consists of "parallel computing is faster". We can all agree. However, unfortunately for you, multithreading is not parallel computing.

    Multithreading when the threads run in parallel on multiple cores is parallel computing.

    @MasterPlanSoftware said:

    You may feel you have found a performance improvement. You may have even found one...

    I have not "found" a magical performance improvement. It is no mystery that you can use multiple cores to do parallel computing on one CPU. It's pretty common knowledge. It's even a selling point of many applications.

    You accuse me of not understanding the "fundamentals" but you won't name them. You are the one who doesn't believe you can do parallel computing on a multi-core CPU. You'd better alert Intel. They are banking on multithreading and many, many cores. They even have a prototype with 80 cores. Check out Intel's ray-tracing demos.



  • @superjer said:

    @MasterPlanSoftware said:

    Your argument basically consists of "parallel computing is faster". We can all agree. However, unfortunately for you, multithreading is not parallel computing.

    Multithreading when the threads run in parallel on multiple cores is parallel computing.

    @MasterPlanSoftware said:

    You may feel you have found a performance improvement. You may have even found one...

    I have not "found" a magical performance improvement. It is no mystery that you can use multiple cores to do parallel computing on one CPU. It's pretty common knowledge. It's even a selling point of many applications.

    You accuse me of not understanding the "fundamentals" but you won't name them. You are the one who doesn't believe you can do parallel computing on a multi-core CPU. You'd better alert Intel. They are banking on multithreading and many, many cores. They even have a prototype with 80 cores. Check out Intel's ray-tracing demos.

     

    Sorry. It is NOT parallel computing. Multiple CPUs/cores currently provide no better advantage than drastically increasing the clock speed would.

    Hint: Try actually reading the article on parallel computing. 

    ...maybe even reading how thread scheduling works as well.



  • @MasterPlanSoftware said:

    Multiple CPUs/cores currently provide no better advantage than drastically increasing the clock speed would.

    Theoretically, yes. In practice it is not feasible to run a CPU at 12GHz. A quad-core 3GHz processor is feasible. That is the advantage.

    Tell me, if multiple cores are not for parallel computing... what in the FUCK are they for?

    From the first paragraph on http://en.wikipedia.org/wiki/Parallel_computing

    @Wikipedia said:

    Parallel computing has recently become the dominant paradigm in computer architecture, mainly in the form of multicore processors.

    I suppose that's a lie? 



  • From my chat window:

    <superjer> "Tell me, if multiple cores are not for parallel computing... what in the FUCK are they for?"
    <Justin> REDUNDANCY! 

     



  • @superjer said:

    @MasterPlanSoftware said:

    Multiple CPUs/cores currently provide no better advantage than drastically increasing the clock speed would.

    Theoretically, yes. In practice it is not feasible to run a CPU at 12GHz. A quad-core 3GHz processor is feasible. That is the advantage.

    Tell me, if multiple cores are not for parallel computing... what in the FUCK are they for?

    From the first paragraph on http://en.wikipedia.org/wiki/Parallel_computing

    @Wikipedia said:

    Parallel computing has recently become the dominant paradigm in computer architecture, mainly in the form of multicore processors.

    I suppose that's a lie? 

     

    Nope. It has nothing to do with hardware. It has to do with software, and with the concept that multithreading is NOT parallel computing. No matter how much you argue, this fact will not change.

    If you want to actually read this somewhere, read the whole article, like where it talks about Automatic Parallelization. Or you can go directly to the article: http://en.wikipedia.org/wiki/Automatic_parallelization



  • @MasterPlanSoftware said:

    ... multithreading is NOT parallel computing.

    Correct. Until you run the threads on different cores at the same time. Then it is. That's what my app and MANY MANY others do. On purpose!

    Automatic parallelization is very limited in scope. In a ray-tracer, for example, which is highly parallelizable, the compiler is not going to make it parallel for you. You have to do it yourself with threads. It's the only way to do it.
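    A sketch of the hand-rolled partitioning a ray tracer needs (hypothetical names; `shade` stands in for tracing one ray): each thread fills its own interleaved band of rows, so no two threads ever touch the same pixel and no locking is needed.

```java
// Sketch of hand-rolled work partitioning (hypothetical names): each thread
// renders its own interleaved band of rows. No compiler does this for you.
public class RowPartition {
    // Stand-in for tracing one ray; deterministic so results are checkable.
    static double shade(int x, int y) { return (x * 31 + y) % 7; }

    public static double[][] render(int w, int h, int nThreads) {
        double[][] image = new double[h][w];
        Thread[] threads = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                for (int y = id; y < h; y += nThreads)   // interleaved bands of rows
                    for (int x = 0; x < w; x++)
                        image[y][x] = shade(x, y);       // rows never overlap: no locking
            });
            threads[t].start();
        }
        try {
            for (Thread th : threads) th.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return image;
    }

    public static void main(String[] args) {
        // Two-thread render matches a serial render, pixel for pixel.
        System.out.println(render(4, 4, 2)[0][1]);
    }
}
```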



  • @superjer said:

    @MasterPlanSoftware said:

    ... multithreading is NOT parallel computing.

    Correct. Until you run the threads on different cores at the same time. Then it is. That's what my app and MANY MANY others do. On purpose!

    Automatic parallelization is very limited in scope. In a ray-tracer, for example, which is highly parallelizable, the compiler is not going to make it parallel for you. You have to do it yourself with threads. It's the only way to do it.

     

    Naturally, you are right. Everyone else is wrong.

     

    I define the sky as the grass, and I proclaim "The sky is green!"



  • Let's see if I remember my C skills from back in the day.

    #include <stdio.h>

    #define GRASS "Green"
    #define Sky GRASS

    int main(void) { printf(Sky); return 0; }

    Ok, now back to your bickering. 



  • @Jonathan Holland said:

    Let's see if I remember my C skills from back in the day.

    #include <stdio.h>

    #define GRASS "Green"
    #define Sky GRASS

    int main(void) { printf(Sky); return 0; }

    Ok, now back to your bickering. 

     

    Yeah I thought of writing something like that, but then I figured he was going to argue we could multithread and increase the performance.



  • @MasterPlanSoftware said:

    Naturally, you are right. Everyone else is wrong.

    No, only you are wrong. I agree with everyone else. Examples:

    <font face="Arial">

    Consequently, the multiple tasks of a multithreaded
    application also can progress in parallel within the environment of
    multithreaded operating systems.</font><font face="Arial">
    </font>

    -- http://archive.evaluationengineering.com/archive/articles/0298pcni.htm

     

    Multi-core architectures have a single processor package that contains two or more processor "execution cores," or computational engines, and deliver—with appropriate software—fully parallel execution of multiple software threads

    -- http://www.intel.com/intelpress/sum_mcp.htm 

     

    On a ... multi-core system ... threading can be achieved via multiprocessing, wherein different threads ... can run literally simultaneously on different ... cores

    --  http://en.wikipedia.org/wiki/Thread_%28computer_science%29



  • Whoops,

    http://www.intel.com/intelpress/sum_mcp.htm

    had a space on the end.



  • @superjer said:

    @MasterPlanSoftware said:

    Naturally, you are right. Everyone else is wrong.

    No, only you are wrong. I agree with everyone else. Examples:

    Consequently, the multiple tasks of a multithreaded application also can progress in parallel within the environment of multithreaded operating systems.

    -- http://archive.evaluationengineering.com/archive/articles/0298pcni.htm

     

    Multi-core architectures have a single processor package that contains two or more processor "execution cores," or computational engines, and deliver—with appropriate software—fully parallel execution of multiple software threads

    -- http://www.intel.com/intelpress/sum_mcp.htm 

     

    On a ... multi-core system ... threading can be achieved via multiprocessing, wherein different threads ... can run literally simultaneously on different ... cores

    --  http://en.wikipedia.org/wiki/Thread_%28computer_science%29

     

    As I have said before... Yes, two cores can execute in parallel (who the hell is even arguing that?). But with a multithreaded application you are not ensuring parallel execution. I don't know what you don't get here. It is not parallel computing. 

    There is no way to execute two threads and have them execute instruction for instruction on the two different cores. It is simply not going to happen. (not in any major OS I have used anyway, I am sure there are plenty of specialized OSes that could do this)

    Proving that you CAN execute two instructions at the same time is different from proving that a multithreaded app will benefit computationally intensive applications the way it benefits I/O-intensive applications.

    Also note, even in YOUR quotes:  @superjer said:

    and deliver—with appropriate software—fully
    parallel execution

    As I have said... you are proving the hardware can do it. No one is arguing against that.



  •  "with appropriate software" just refers to multithreaded applications.

    These "highly specialized OSes" you theorize about do exist. Here are the ones I use: Fedora 8 Linux, Windows XP & Windows Vista. All of them will run your threads in parallel on multicore processors under normal system load.

    Don't believe me? TRY IT:

    @http://endlos.sourceforge.net/ said:

    Multithreaded: uses any amount of threads for faster calculation on computers with multiple cores/CPUs.

    If you have 2+ cores you will see that with 2 threads it runs 2x as fast as with just one. This is well known, common knowledge. Apparently somehow you just missed it. Sorry it turned into such a big deal. No hard feelings. :)



  • @superjer said:

     "with appropriate software" just refers to multithreaded applications.

    These "highly specialized OSes" you theorize about do exist. Here are the ones I use: Fedora 8 Linux, Windows XP & Windows Vista. All of them will run your threads in parallel on multicore processors under normal system load.

    Don't believe me? TRY IT:

    http://endlos.sourceforge.net/

    @http://endlos.sourceforge.net/ said:

    Multithreaded: uses any amount of threads for faster calculation on computers with multiple cores/CPUs.

    If you have 2+ cores you will see that with 2 threads it runs 2x as fast as with just one. This is well known, common knowledge. Apparently somehow you just missed it. Sorry it turned into such a big deal. No hard feelings. :)

     

    Alright, go ahead and post this magic code that will run one thread on one CPU core and the other thread on the other core every time.

    I would love to see it. Then you can show me a unicorn.

    Unfortunately, on this plane of existence, thread schedulers do not work like this.



  • After further research...

    It turns out there are many more apps that run in parallel on multi-core CPUs than I would have assumed:

    - MP3 encoders (ex. LAME MT)
    - Image filters (ex. Adobe Photoshop)
    - Serious ray-tracing apps
    - Fractal generation apps
    - PC games (ex. Crysis, Quake 4, Supreme Commander)
    - Console games, too
    - Serious compilers, and level-editing compile tools for games!

     You probably use several such apps on your computer without knowing it. And if you have 2+ cores you've got parallel multithreading happening under your nose while you deny its existence!



  • @Master said:

    Alright, go ahead and post this magic code that will run one thread on one CPU core and the other thread on the other core every time.

    I already posted the link to endlos. It's open source so go look.

    I have posted links to several sources that confirm what I'm arguing. Go read about Photoshop's multithreaded image filters for example.

    I have my own code that does the same thing for several work-related apps and one for-fun app: http://www.superjer.com/pixelmachine

    (Disclaimer: messy hackish 2-day-old for-fun-only code) 

    I haven't released the multithreaded version of pixelmachine yet because I'm converting the project to SDL and it's not done. But I will post it anyway when I get home.

    @Master said:

    ...other core every time.

    It's not guaranteed to happen every time, as we already agreed earlier, but it has worked every time I've tried it on several different operating systems. I'd say it happens at least 99% of the time, probably 100% of the time if there are no competing high-load processes running. The fact that the OS does not guarantee it will run your thread on a new core should be no more worrying than the fact that the OS cannot guarantee that the CPU is 100% or even 10% available for your app. It's not a serious design issue.



  • @superjer said:

    It's not guaranteed to happen every time, as we already agreed earlier
     

    Then it is not parallel computing, and is not guaranteed to occur every time. 

    Like I said, post code that proves it runs in parallel all the time, or the topic is over. No matter what you do, and no matter what you say, thread schedulers don't do what you claim.

    And no, the endlos program doesn't claim this anywhere. It claims you can keep spawning threads. That does NOT increase performance. Faulty, just like your original argument.

    If an application is running slow, simply adding another thread is not necessarily the answer.

    You do get extra points for throwing down any possible relevant statement and claiming it is factual evidence against reality. It is funny.

     



  • First, endlos does claim what I said. So do most modern fractal generators. Have you TRIED it? As I already posted:

    @endlos.sourceforge.net said:
    Multithreaded: uses any amount of threads for faster calculation on computers with multiple cores/CPUs.

     

    @You said:

    Then it is not parallel computing, and is not guaranteed to occur every time.

    You are making up your own requirements for parallel computing. No one else uses such ridiculous requirements. You might as well require that it runs in parallel when the machine is OFF.

    Multithreaded code will run in parallel on multiple cores so long as:

    - the OS supports it, and

    - each core is not too busy for the OS to assign another thread to it.

    Under normal circumstances, on a modern OS, the code will run in parallel. Everyone seems to know it but you. It may not run in parallel if there is hot lava in the computer or if the OS is broken, or has bad drivers, or if the system load is already way too high. SO WHAT?

    Would you at least say that when and if, although it is not technically guaranteed, the OS does decide to run two threads from one app simultaneously on two cores... is that parallel?
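    Those two conditions can at least be probed from code. A hedged Java sketch (not code from the thread; note that `availableProcessors()` reports the logical cores the JVM may schedule onto, not their current load):

```java
// Sketch: check whether parallel thread execution is even possible here.
// availableProcessors() reports how many logical cores the OS will let
// the JVM schedule threads onto; whether a given thread actually lands
// on a separate core at a given instant is still the scheduler's call.
public class CoreCheck {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("logical cores: " + cores);
        // Parallel execution of two threads is possible only with 2+
        // cores (and only when those cores are not already saturated).
        System.out.println(cores >= 2 ? "parallel possible" : "single core");
    }
}
```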




  • You seem to be confusing vector processing (SIMD) with parallel computing.   

     Vector processing != parallel computing; you can do parallel computing without a vector processor. SIMD is merely one type of parallel computing.



  • I have explained my point enough already. This is getting old. I stand by my point. All that is happening in your examples is load balancing across cores. This is taken care of by the scheduler.

    Throwing an extra thread in your program does not make anything run in parallel or concurrently. Making threads run in parallel reliably requires a lot more work than what you are describing.
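    For what it's worth, the "more work" in question usually means: size a thread pool to the core count, partition the data into disjoint chunks so the threads don't contend, and join the partial results at the end. A minimal Java sketch of that pattern (illustrative only, not code from any post here):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: parallel sum with an ExecutorService. The pool is sized to the
// core count, the input is split into disjoint chunks (no shared writes),
// and the partial results are joined at the end.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = 1;

        int nThreads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        List<Future<Long>> parts = new ArrayList<>();
        int chunk = (data.length + nThreads - 1) / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int lo = t * chunk;
            final int hi = Math.min(lo + chunk, data.length);
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += data[i];
                return sum;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get();
        pool.shutdown();
        System.out.println(total); // 1000000
    }
}
```

    The result is the same whether the chunks run truly in parallel or are time-sliced on one core; the scheduler decides, the program only makes parallelism possible.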

