@Cap'n Steve said:
That's something I've been wondering, actually. Is there any benefit to using one of the date types instead of just storing the timestamp as an INT? The only difference I could find in the MySQL documentation is that some types will automatically format it for you when it's returned.
With a TIMESTAMP column you get to use stuff like ON UPDATE CURRENT_TIMESTAMP, so whenever any data in the row is updated, the database automatically updates the timestamp column for you (e.g. a last_modified_date column).
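For example, something like this (the table and column names here are just made up for illustration):

CREATE TABLE articles (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    -- set automatically on INSERT and refreshed on every UPDATE
    last_modified_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

If I remember right, older MySQL versions only let one TIMESTAMP column per table have an automatic CURRENT_TIMESTAMP default/update, so check the docs for whatever version you're on.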
Another benefit is being able to do certain date operations on values directly in the database, thus saving a roundtrip to the client and improving performance. An example:
Say you have a master row in a one-to-many relationship containing a date you want to work from, say Feb 15th. You want to INNER JOIN from that record to another table and select the related records whose date falls within one month of it, where "one month" means a calendar month (i.e. up to March 15th), because that's what makes sense to your users. So you can do something like:
SELECT * FROM master_table INNER JOIN users_table USING (some_id)
WHERE users_table.some_timestamp BETWEEN master_table.master_timestamp
  AND master_table.master_timestamp + INTERVAL 1 MONTH
That will return all the records where users_table.some_timestamp falls within one calendar month of the master row's timestamp. You couldn't do that with plain unix timestamps stored as INTs without first selecting the master timestamp, working out in your application which unix timestamp is exactly one calendar month later (a month isn't a fixed number of seconds), and then plugging that value back into your query. And since master_table.master_timestamp can be different for every row the join matches, you'd end up doing that processing per row in your application. It's a very contrived example, but surely there are real-world uses for this sort of thing; I just can't think of one off the top of my head.
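A more everyday version of the same idea, assuming a hypothetical orders table with a TIMESTAMP column called placed_at, would be grabbing everything from the last calendar month entirely on the database side:

SELECT *
FROM orders                                     -- hypothetical table/column names
WHERE placed_at >= NOW() - INTERVAL 1 MONTH;    -- real MySQL date arithmetic

With the column stored as a plain INT you'd have to work out that cutoff in application code first, for exactly the reason above.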