How much can MySQL handle?



  •  I'm working on a project that may issue a lot of queries. The project is a small intranet that will be hosted on a crappy IIS university server. I'm worried that one of the pages might take too long to load because of load on the database. The page can make up to about 50 SELECT queries per page load against an InnoDB table that has 7 columns: 5 integer and 2 varchar(8). It will be searching on an index. How many queries can this database take before it slows down (say, takes over 3 seconds to load)? The development server (a.k.a. my laptop) has a dual-core 1.5GHz CPU with 1GB of RAM running (ick) Windows XP; I'm sure the real server will be slightly better. How many queries do you think the development server could take before it bogged down?

     I also have one more question. Foreign keys. I believe InnoDB has support for them. I know that foreign keys are really helpful, but I don't know exactly how to handle them. This page is basically searching on what would be a foreign key (not unique, though). How do you go about implementing that? I know how to do it in Access, but Access doesn't count :P and that was years ago. How would I handle insertions for the foreign key?

     

    One last question. Well, two. Why does the editor this form uses sometimes insert a double space (as in &nbsp;), and why does it sometimes delete two characters on a single backspace (particularly when the second deletion is a space)?

     Thanks for any help!

    Malfist



  • Short of executing a whole heap of cross joins on each query, MySQL will likely handle more queries than you can expect to send it. I wouldn't even worry about it for what you're doing.



  •  Nope, I rarely use joins. I find it easier to write the database in such a way that you don't need them very often (but they are useful).



  •  Ignore my question about foreign keys. It's for a different topic.



  • @malfist said:

     Nope, I rarely use joins. I find it easier to write the database in such a way that you don't need them very often (but they are useful).

     

     

    This comment is a WTF in and of itself. WTF???

     

    The fact that each page view results in 50 queries to the database means something is wrong with your app. At first I wondered what the hell you could be doing that each page view required 50 DB hits, but after seeing your comment about joins I now understand that your DB schema sucks.



  • @malfist said:

    Nope, I rarely use joins. I find it easier to write the database in such a way that you don't need them very often (but they are useful).
    A database that doesn't need to be joined would be like ... just ... one big fucking table.  0th normal form.  Not saying you need to go all the way to fifth, but c'mon

    I bet you'd like teradata.  I hear it doesn't like the 3rd+ normal form either.



  • @belgariontheking said:

    @malfist said:

    Nope, I rarely use joins. I find it easier to write the database in such a way that you don't need them very often (but they are useful).
    A database that doesn't need to be joined would be like ... just ... one big fucking table.  0th normal form.  Not saying you need to go all the way to fifth, but c'mon

    I bet you'd like teradata.  I hear it doesn't like the 3rd+ normal form either.

    You might use subqueries on a 3rd+ normal form database to cut down on JOINs if you don't really need the related table data. Like:

    SELECT * FROM invoice WHERE customer_id IN (SELECT customer_id FROM customer WHERE state = 9)

    which should run faster than

    SELECT invoice.* FROM invoice INNER JOIN customer ON invoice.customer_id = customer.customer_id WHERE customer.state = 9

    However, there are things where you definitely will want to do joins, so DB design should allow for this. But some stuff can be solved by subqueries.
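    The subquery-vs-join comparison above can be sketched with a runnable example. This is a minimal in-memory demonstration using SQLite from Python; the invoice/customer table and column names follow the queries in the post, and the sample data is made up for illustration.

```python
import sqlite3

# In-memory database with the post's example schema (hypothetical data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, state INTEGER);
    CREATE TABLE invoice  (invoice_id  INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customer VALUES (1, 9), (2, 5), (3, 9);
    INSERT INTO invoice  VALUES (10, 1), (11, 2), (12, 3), (13, 1);
""")

# Subquery form: no join needed when we don't want any customer columns.
sub = conn.execute(
    "SELECT * FROM invoice WHERE customer_id IN "
    "(SELECT customer_id FROM customer WHERE state = 9)"
).fetchall()

# Join form: same result set, but pulls the customer table into the plan.
joined = conn.execute(
    "SELECT invoice.* FROM invoice "
    "INNER JOIN customer ON invoice.customer_id = customer.customer_id "
    "WHERE customer.state = 9"
).fetchall()

# Both return the invoices belonging to customers in state 9.
print(sorted(sub) == sorted(joined))
```

    Whether the subquery form actually runs faster depends on the optimizer and version; on old MySQL in particular, IN-subqueries were often poorly optimized, so it's worth checking the query plan rather than assuming.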

    For very much non-normal databases, there's that thing called MUMPS out there...



  •  

     I said it could execute up to 50, which is the maximum I ever see it getting to. What happens is there is an offer (an auction, if you will) for a time period someone is working and wants to get off.

    I query for all those that are current. For each one I display, I query the database again to see how many bids there are. I do a SELECT id FROM table WHERE request = auction LIMIT 1, where id is an indexed integer. Then I use the PHP function mysql_num_rows and it will ignore the LIMIT 1 and tell me how many there are.

    It is highly unlikely that there will be 49 auctions going on for an 18-man department.

     

    Edit: keep in mind, I work for a university and the version of MySQL we use is 3.23. I don't think it supports subqueries.



  •  Here's the database structure for those who say they don't like it without seeing it first.

    Pastebin



  • @malfist said:

    I do a SELECT id FROM table WHERE request = auction LIMIT 1, where id is an indexed integer. Then I use the PHP function mysql_num_rows and it will ignore the LIMIT 1 and tell me how many there are.

    Do "SELECT COUNT(*) AS cnt FROM table WHERE request = auction" instead.  That will do the count on the DB and return a single row with the field 'cnt' holding the count.  The way you are doing it, every single row is pulled back to the webserver and counted.
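    The difference between the two approaches can be sketched in a few lines. This uses SQLite from Python rather than MySQL/PHP, and a hypothetical bid table shaped like the one described (an indexed integer `request` column), but the principle is the same: counting client-side transfers every matching row, while COUNT(*) transfers one.

```python
import sqlite3

# Hypothetical bid table with an indexed integer "request" column,
# as described in the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bid (id INTEGER PRIMARY KEY, request INTEGER)")
conn.execute("CREATE INDEX idx_request ON bid (request)")
conn.executemany("INSERT INTO bid (request) VALUES (?)",
                 [(7,)] * 5 + [(8,)] * 2)

# num_rows-style: every matching row is pulled back to the client,
# then counted there (what counting via mysql_num_rows amounts to).
rows = conn.execute("SELECT id FROM bid WHERE request = 7").fetchall()
client_count = len(rows)

# COUNT(*)-style: the database counts and returns a single row.
(db_count,) = conn.execute(
    "SELECT COUNT(*) FROM bid WHERE request = 7").fetchone()

print(client_count, db_count)  # same number, very different data transfer
```

    Over a network connection to a real database server, the first form ships every row across the wire; the second ships one integer.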



  •  I thought it was the other way around. I thought COUNT took longer and required more CPU. I'll change it.

     

    Edit: I was checking MySQL's documentation to see if MySQL 3.23 supports InnoDB with foreign keys and I found this:

    InnoDB does not keep an internal count of rows in a table. (In practice, this would be somewhat complicated due to multi-versioning.) To process a SELECT COUNT(*) FROM t statement, InnoDB must scan an index of the table, which takes some time if the index is not entirely in the buffer pool. To get a fast count, you have to use a counter table you create yourself and let your application update it according to the inserts and deletes it does. If your table does not change often, using the MySQL query cache is a good solution. SHOW TABLE STATUS also can be used if an approximate row count is sufficient.

     Should I still use count instead of mysql_num_rows?

    I wish somebody would update the db to at least 5.1
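    The "counter table" workaround the quoted docs mention can be sketched briefly. This is a hypothetical example in Python with SQLite (table and column names invented): the application bumps a per-auction counter on every insert instead of running COUNT(*) at read time.

```python
import sqlite3

# Hypothetical bid table plus a counter table the application maintains,
# per the workaround described in the quoted MySQL documentation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bid (id INTEGER PRIMARY KEY, request INTEGER);
    CREATE TABLE bid_count (request INTEGER PRIMARY KEY, n INTEGER);
""")

def add_bid(request):
    """Insert a bid and keep the counter table in sync."""
    conn.execute("INSERT INTO bid (request) VALUES (?)", (request,))
    # Upsert: create the counter row on first bid, increment thereafter.
    conn.execute(
        "INSERT INTO bid_count (request, n) VALUES (?, 1) "
        "ON CONFLICT(request) DO UPDATE SET n = n + 1", (request,))

for _ in range(3):
    add_bid(7)

# Reading the count is now a single-row primary-key lookup.
(n,) = conn.execute(
    "SELECT n FROM bid_count WHERE request = 7").fetchone()
print(n)  # 3
```

    Deletes would need a matching decrement, and both statements belong in one transaction so the counter can't drift. For a table this small, though, a plain COUNT(*) is almost certainly fast enough.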



  • @malfist said:

     Should I still use count instead of mysql_num_rows?
    Yes. MySQL is much smarter about its data than PHP can be. Just make sure you have an index (primary key!) on the table, and even InnoDB will still be far faster than you need.



  • @malfist said:

    Should I still use count instead of mysql_num_rows?

    COUNT(*) on InnoDB is slower than COUNT(*) on MyISAM but both will still be faster than num_rows.  Think of it this way: the DB has to find all matching rows no matter what.  When doing a count, it can just keep a counter of the matching rows and send you back an int.  When doing num_rows it has to pull all the data for each row.

     

    @malfist said:

    I wish somebody would update the db to at least 5.1

    That would be nice, but it won't make the COUNT(*) situation any different.  It doesn't sound like your site will be high-traffic, though, so MySQL should be able to handle whatever you throw at it just fine so long as you index. 



  • @malfist said:

    Edit: keep in mind, I work for a university and the version of MySQL we use is 3.23. I don't think it supports sub queries.

    It doesn't. Every MySQL release < 5.0 is lacking in one thing or another, but MySQL releases before 4.0 have a lot of WTFs by themselves. Shame on them for pushing the non-transactional "RDBMS" and trying to get away with it. InnoDB (and Sleepycat BDB before that) filled the gap, but MySQL still favors MyISAM tables. In fact, the 3.23 documentation has a section on "Why transactions suck" and actively tells you NOT to use ROLLBACK/COMMIT in your apps!!! That is a WTF in itself...



  •  I learn a lot by coming here.

    Thanks!



  • @malfist said:

    I said it could execute up to 50. Which is the maximum I ever see it getting to.

    And in two years' time, they'll decide to extend it to a tonne of new users and you're looking at a rewrite. Easy mistake to make, to be fair.

     In this kind of situation, joins are your friend. You can get all the info you need in a single query regardless of how many auctions are actually going on, e.g.

     SELECT a.*, count(b.id) AS `num_bids` FROM auction a LEFT JOIN bid b ON b.auctionid = a.id WHERE NOW() BETWEEN a.start_date AND a.end_date GROUP BY a.id

    In the event that there are no bids, the "num_bids" column will be zero.
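    Here's a runnable sketch of that single-query approach, using SQLite from Python with hypothetical auction/bid tables (the date columns are omitted to keep the example small):

```python
import sqlite3

# Hypothetical auction and bid tables mirroring the query above;
# auction 1 has three bids, auction 2 has none.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE auction (id INTEGER PRIMARY KEY);
    CREATE TABLE bid (id INTEGER PRIMARY KEY, auctionid INTEGER);
    INSERT INTO auction VALUES (1), (2);
    INSERT INTO bid (auctionid) VALUES (1), (1), (1);
""")

# One query returns every auction with its bid count. COUNT(b.id)
# counts only non-NULL bid ids, so bidless auctions correctly show 0.
rows = sorted(conn.execute("""
    SELECT a.id, COUNT(b.id) AS num_bids
    FROM auction a
    LEFT JOIN bid b ON b.auctionid = a.id
    GROUP BY a.id
""").fetchall())

print(rows)  # [(1, 3), (2, 0)]
```

    This collapses the N+1 pattern (one query for the auctions, then one per auction for its bids) into a single round trip regardless of how many auctions are live.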



  •  @malfist said:

     Nope, I rarely use joins. I find it easier to write the database in such a way that you don't need them very often (but they are useful).

    Yikes... this indicates to me that you've already built a WTF.

    MySQL is very fast at SELECTing a lot of rows frequently when you use a MyISAM table. InnoDB is less efficient but supports transactions and foreign keys (and other features) that you might find useful.

    You'll find that most of the overhead has to do with query startup. If you're sending 50 queries per page, then it's going to be slow... no matter what. I'd focus on designing a schema that allows you to get the information more efficiently, using fewer queries, and returning fewer rows.


Log in to reply