Databases: delete at once, not at all, or in batch?



  • I'm writing a media encoding app which uses a database as temporary storage for queued jobs, their dependencies, etc. I need a db since some of the logic requires less-than-trivial queries; the app itself supports dependent jobs, shared source data and some other neat features.

     I really only need the database for the jobs that are active, and I'm now asking myself whether I should delete entries when the client reports that it is happy with the results. I've heard that DBs are usually quite bad at deleting, and that you should just flag entries for deletion. Is this just a myth, or is it relevant in this case? I obviously want the app to scale well, so I'm thinking that if I leave all the entries in, my queries will eventually slow down the app, which will require a drop database and restart, and that just seems silly to me.

    The alternative is to delete old entries. The question is, should I mark them for deletion and have a reaper task, delete them at once, or just ignore deletion and do a "drop database" once in a while? What are the implications of deleting entries continually as they finish? Will I screw up indexes, or lock tables more than necessary?

     I'm interested in a general answer (if there is one); I'm using MySQL and InnoDB at the moment, if anyone is interested.

    Thanks 



  • Well, certain databases handle things differently, but generally deletions and updates can cause performance problems by fragmenting indexes, pages, and datafiles. Back in the day we had to do reorgs every three months or so, or performance would degrade. The reorgs took hours to complete, causing downtime. In a general sense, reorgs reclaim unused space, rebuild/defrag indexes and align pages. You still need to do those things, at least in MSSQL, and it's probably the same for other DB engines as well. The only nice thing now is that you can separate the tasks out so that the downtime is minimized. Also, a lot of DB engines allow these operations to be done online without requiring downtime at all, although they tend to be slower and not as thorough.

    In any case, I wouldn't be too concerned with either deleting or keeping the data. The real question is "How long do I need the data?" - answer that, then implement an appropriate maintenance strategy. If you go with the deletion method, you'll need to compact datafiles, reorg indexes, etc., which may require downtime - I don't know the specific capabilities of MySQL and InnoDB. If you do delete, either batch or realtime deletion is my preferred method. I certainly would not recommend doing a drop and recreate as a standard business practice for a production system. I reserve that type of action for restores, schema moves, and hardware swaps.

    If you go with the logical delete (marking rows as deleted) and keeping all the data, you shouldn't have query problems unless you have 1) a really poor data model, 2) less than ideal hardware, 3) bad indexes, 4) a crapload of data (millions of records), and/or 5) crappy queries or a crappy query optimizer. The nice thing about having all the data is that you'll have a complete history of what happened, and you never know when you might need it.
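
    For example, a minimal logical-delete sketch could look something like this - I'm making up the table and column names here, since I don't know your schema:

        -- Hypothetical soft-delete setup; "jobs" and the column names are placeholders.
        ALTER TABLE jobs ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0;

        -- "Deleting" a finished job just flips the flag instead of removing the row.
        UPDATE jobs SET is_deleted = 1 WHERE job_id = 42;

        -- Queries for active work then have to exclude the flagged rows.
        SELECT job_id FROM jobs WHERE is_deleted = 0;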




  • Thank you for the thorough explanation, it's really helpful.

     My queries and the data model are both fine, but I'll have to work on the indexes, which I've always found to be quite a pain :D. I will only need to save the data for a couple of hours, maybe overnight as a service if a client has hardware problems. The number of jobs coming in will probably be ten per second or more, so the data rate is quite high, and because of that I thought it might be smarter not to leave all the data in the database. Downtime is not something I've planned for while writing this app, since I've never managed a big live database, but this system will be quite critical, so of course that is an issue.

    I think that I will first implement the realtime delete, so that I know that no logic really depends on the data being saved forever, and then swap the hard deletes for logical ones, to keep the data. I agree with you that keeping the data is good for analyzing failures, but I have this alarm going off in my head, telling me that it's bad design :D Since I'm writing to logs as well, they will hopefully help reconstruct any mishap. But as you say, you never know :D



  •  The general answer is, "it depends."  It depends on the use of your database: the size, the frequency of queries, etc.  Each delete strategy has its advantages and disadvantages, and the key is to pick the one that best fits your goals.  It's not a useful answer, granted, but it's the general answer.  If you want to discuss your case specifically, give us some more details.



  • What sort of details are relevant? There will be about as many deletes as writes, at around 10-20 deletes and writes per second. There will be at most maybe 1000 rows in each table at a time (if I delete). Anything else?



  •  By the way, are there any books or online resources on the subject? I'm not aiming to become a DBA, but it would be great to have some insight into this area, in order to avoid writing WTF database apps :)



  • @Obfuscator said:

    What sort of details are relevant? There will be about as many deletes as writes, at around 10-20 deletes and writes per second. There will be at most maybe 1000 rows in each table at a time (if I delete). Anything else?
     

    It also depends on what kind of data you're storing. If it's just some fixed-length fields, then the delete overhead is going to be utterly minimal - most DBs will just flag the data block as available, and it'll get overwritten by the next insert. As long as your tables are properly indexed, and the indexes can fit into available memory, then you most likely won't notice any kind of slowdown. MySQL is very good about selecting, speed-wise. You'll most likely find that inserts take up the bulk of consumed CPU time, especially if an insert touches a lot of indexes.

    Of course, if the indexes can't fit into memory, then you're going to hit a performance wall no matter what you do. Hitting the disk is the most expensive thing a database can do, and indexes are one thing you absolutely MUST keep in memory for good performance. 
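    If you want a rough idea of how big your indexes actually are, you can pull it out of information_schema (assuming you're on MySQL 5.0 or later); the schema name below is obviously a placeholder:

        -- Rough per-table index size in MB; 'your_db' is a placeholder schema name.
        SELECT table_name,
               ROUND(index_length / 1024 / 1024, 1) AS index_mb
          FROM information_schema.TABLES
         WHERE table_schema = 'your_db';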

    If data or index fragmentation is a concern, then you can do a scheduled dump/restore of the DB, or use a null-alter (sketched below) to internally defrag the table. Of course, the alter (and mysqldump + restore) will lock the table while the DB engine is doing its work, so your app will be dead in the water until the rebuild is done.
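
    The null-alter is just an ALTER TABLE that changes nothing, which forces the engine to rebuild the table and its indexes from scratch; the table name here is a placeholder:

        -- A "null" ALTER: no schema change, but the table and its indexes get
        -- rebuilt from scratch, which defragments them. 'jobs' is a placeholder.
        ALTER TABLE jobs ENGINE=InnoDB;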

    You mentioned it's a media encoding app - are you storing the media data in the DB as well? That will suck up a huge amount of db filespace, and definitely leave some huge holes in the data files when you delete.

    How many rows will there be if you don't delete?



  • OK, the key things here are to decide when a) an item is 'deleted' i.e. finished processing and mostly not relevant any more (though might be useful in an emergency), and when b) it can be actually deleted (for real) off the database.
    If (a) isn't worth bothering with, then just delete items as you process them - you'll presumably have some key value which you are indexing on, so that's the time to do it. If, however, you do want to do a 'mark for delete' and delete later, then you'll want some form of status flag (Active/Deleted) and some form of datetime column to indicate how old an item is. Then you can have an intermittent/constant/nightly batch to delete, e.g., anything where Status = Deleted and Datetime > 1 day, or whatever. Put an index covering the Status and Datetime fields, so that the delete hits the index; that way you shouldn't get performance issues.
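
    Something along these lines, roughly - the table and column names are purely illustrative:

        -- Illustrative only; adjust names/types to your actual schema.
        ALTER TABLE jobs
            ADD COLUMN status     ENUM('Active','Deleted') NOT NULL DEFAULT 'Active',
            ADD COLUMN updated_at DATETIME NOT NULL;

        -- Index covering the reaper's WHERE clause so the batch delete hits the index.
        CREATE INDEX idx_status_updated ON jobs (status, updated_at);

        -- Intermittent/nightly reaper: purge anything marked Deleted over a day ago.
        DELETE FROM jobs
         WHERE status = 'Deleted'
           AND updated_at < NOW() - INTERVAL 1 DAY;
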
    The only other issue is really that, with a high turnover of records, your indexes may get out of date. This depends on the DBMS you are using. I'm not very up on MySQL, but you might need to do occasional rebuilds of the indexes on the tables where you have big turnover. In Sybase, for example, you'd probably want to do 'update statistics' to keep the indexes efficient.
    I don't know why you talk about doing 'drop database' as a regular task; if you want to clear data from tables, just do 'delete from <table>', plus some form of index rebuild (as I say, DBMS-specific).
    Hope this helps. 



  •  MarcB: I'm only using fixed-length fields, and I have tried to add indexes in the most obvious places. The stuff I'm storing is jobs (basically just an id), requests (just fixed size), source data (ids again), and results (some fixed-length text for detailed info, but mostly just ints). I have six tables, all quite minimal. I think that the data is going to grow quickly; since this is just a job dispatcher, there might be up to 20-30 compression workers. Since this app deals with images as well as videos, and images are really quick, there will be a lot of writing. My estimate was 10-20 jobs per second, the goal being more, but I tried to keep it realistic; 20 per second adds up to about 2 million per day. I will try to stick to fixed-length fields, and thanks for the tip about defrag and overwrite strategies. I'm not storing the media data in the db; we're using NFS to share the space between compression workers and the dispatcher, with ext3 as the backing fs. Any tips on that are also appreciated :D

     vr602: I appreciate your help, but I don't really understand how scenario b) differs from scenario a); can you elaborate? In this case, I have a very clear a) limit, since clients report when they are done with the jobs by calling "releaseResources". If that doesn't happen, I have a timed task that looks for finished jobs and releases resources based on a timeout, very similar to the strategy that you described. The question is whether I should really delete when "releaseResources" happens. Right now, I'm leaning towards hard delete.



  • @Obfuscator said:

    20 per second adds up to about 2 million per day
     

    Either way you look at it, 20 per second isn't that high for a few fixed-length records. Let's say it's 10 bigints per record, and you're writing to 4 tables. That's 4 (tables) x 10 (bigints) x 8 (bytes each) x 20 (per second) = 6400 bytes per second. Let's throw in another 2000 bytes for indexes and the text data, and you're at 8400 bytes per second. A modern SATA drive can undoubtedly do, at bare minimum, a sustained 10,000,000 bytes per second. So in this wildly simplified scenario, your writes are sucking up around 0.084% of the drive's throughput.

    I think you'll run into more problems from lock contention from all the concurrent inserts/deletes, and that's not a disk-bound problem: it's table structure and database usage.



  •  I see your point. But aside from the disk throughput, there's the obvious problem of keeping the indexes small enough while having really big tables. Previously you suggested that the performance hit from deleting entries would be minimal. Are you suggesting that, considering the low throughput, it is better to just keep the data and maybe delete in batches? Lock contention could very well be a problem, but the question is whether a batch delete would improve overall performance (say, processed entities/second on average).



  • @Obfuscator said:

    Are you suggesting that, considering the low throughput, it is better to just keep the data and maybe delete in batches? Lock contention could very well be a problem, but the question is whether a batch delete would improve overall performance (say, processed entities/second on average).
     

    I can't tell you yea or nay without a lot of "depends on..." qualifiers. Just go ahead and try the system both ways. Set up some fake jobs that start slamming the db with your 20 inserts/deletes per second, see which goes faster, and see how it changes as the load scales.

    Let's say you end up going with a mark-and-delete-later system - even though those records are still present, you'll have to include extra logic in your queries to exclude them (or start querying against a view that handles the exclusion). Is deleting each individual record, or deleting in batches of X records every Y minutes, more efficient than doing a bulk delete once a day? Does that time exceed the extra overhead of having to ignore the 'dead' records? The only way to figure it out is to do some tests.
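
    For the view route, it'd be something like this (assuming some kind of is_deleted flag like the one sketched earlier in the thread; the names are placeholders):

        -- Hypothetical view so the app can query live jobs without repeating the filter.
        CREATE VIEW active_jobs AS
        SELECT *
          FROM jobs
         WHERE is_deleted = 0;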



  •  OK, cool. I also noticed that moving to hard delete required quite a lot of code changes, but at least now I know what problems will arise. I'll do some testing as soon as I'm finished with this version. Thanks for all the tips and pointers!

