Saving timestamps in the database, displaying local time



  • We're rolling out a new app that saves various events to a database. In some spots the app displays an event, and in some but not all of those spots it gets the local time display wrong.

    A group of developers was discussing whether the time should be stored to the database in UTC or local time. I chimed in that time should be saved in a timezone-independent format such as UNIX epoch time or Java's internal timestamp implementation (e.g., the number of milliseconds since 1 Jan 1970 GMT), and that the display timezone could be handled by the presentation layer. If we need to know the actual local time of an event, we could store a timezone representation alongside it and let the presentation layer use that.

    I stopped talking, there was a pause for about a second, then conversation resumed with "if we store the time as UTC...".

    So I'm at the awkward place of suspecting that they don't really understand what I'm saying, but being pretty sure that they don't give a fuck.
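
    Edit: to make the proposal concrete, here's a rough Java sketch of what I meant (the zone ID is just an example; a real app would use the viewer's zone):

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.format.DateTimeFormatter;

    public class EventTimeSketch {
        public static void main(String[] args) {
            // What gets stored: a timezone-independent count of millis since the epoch
            long storedMillis = Instant.now().toEpochMilli();

            // What gets displayed: the presentation layer attaches a zone at the last moment
            String display = Instant.ofEpochMilli(storedMillis)
                    .atZone(ZoneId.of("America/New_York")) // example zone, not hardcoded in real life
                    .format(DateTimeFormatter.RFC_1123_DATE_TIME);
            System.out.println(display);
        }
    }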



  • Neither of those are timezone-independent. They both use UTC.

    I would avoid timestamps and just store as a UTC DATETIME (or whatever the equivalent is in your environment). It really doesn't matter how you store it, though, because you should be storing it with TZ info and your presentation layer is going to convert it from one TZ to another.
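
    Roughly like this, assuming a DATETIME column that is UTC by convention (a sketch, not tied to any particular DB):

    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.time.ZoneOffset;
    import java.time.ZonedDateTime;

    public class UtcDatetimeSketch {
        public static void main(String[] args) {
            // What you'd write into the UTC DATETIME column (no zone stored; UTC by convention)
            LocalDateTime columnValue = LocalDateTime.now(ZoneOffset.UTC);

            // Presentation layer: re-attach the conventional zone, then convert for the user
            ZonedDateTime forUser = columnValue.atZone(ZoneOffset.UTC)
                    .withZoneSameInstant(ZoneId.of("America/Los_Angeles")); // example zone
            System.out.println(forUser);
        }
    }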

    Sorry, I think your co-workers might be right on this one (although it's entirely possible they really don't understand the issue at all and are just stumbling into the right solution). However, it sounds like you also don't understand it.



  • @rstinejr said:


    We're rolling out a new app that saves various events to a database. In some spots the app displays an event, and in some but not all of those spots it gets the local time display wrong.

    A group of developers was discussing whether the time should be stored to the database in UTC or local time. I chimed in that time should be saved in a timezone-independent format such as UNIX epoch time or Java's internal timestamp implementation (e.g., the number of milliseconds since 1 Jan 1970 GMT), and that the display timezone could be handled by the presentation layer. If we need to know the actual local time of an event, we could store a timezone representation alongside it and let the presentation layer use that.

    I stopped talking, there was a pause for about a second, then conversation resumed with "if we store the time as UTC...".

    So I'm at the awkward place of suspecting that they don't really understand what I'm saying, but being pretty sure that they don't give a fuck.

    UTC vs local time doesn't really matter that much (unless you have your app using multiple servers in different time zones). Either way you will still have to convert to the client's local time. What really matters is that you are consistent in which method you use.



  • @this_code_sucks said:

    (unless you have your app using multiple servers in different time zones).

    It doesn't even (at a technical level) matter then. The TZ should be stored with the DATETIME; implicit TZs suck. Where it would matter is the human aspect, for example, administration. If you have sysadmins logging into servers with different TZs, it might get confusing. Better to just use a single TZ then, either the HQ's local time or UTC.



  • @morbiuswilters said:

    Neither of those are timezone-independent. They both use UTC...

    They use UTC in the sense that GMT is specified as the start point. But a legit UNIX epoch time value is 1347982. What would its timezone be?

    (Added edit: my answer is "any timezone you like".)
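
    (To illustrate "any timezone you like" in Java: one stored value, three equally valid renderings.)

    import java.time.Instant;
    import java.time.ZoneId;

    public class AnyZoneYouLike {
        public static void main(String[] args) {
            Instant t = Instant.ofEpochSecond(1347982L); // the value from the post

            // One instant, three local renderings; none is more "true" than the others
            System.out.println(t.atZone(ZoneId.of("UTC")));              // 1970-01-16T14:26:22Z[UTC]
            System.out.println(t.atZone(ZoneId.of("America/New_York"))); // 1970-01-16T09:26:22-05:00[...]
            System.out.println(t.atZone(ZoneId.of("Asia/Tokyo")));       // 1970-01-16T23:26:22+09:00[...]
        }
    }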



  • @this_code_sucks said:

    @rstinejr said:


    We're rolling out a new app that saves various events to a database. In some spots the app displays an event, and in some but not all of those spots it gets the local time display wrong.

    A group of developers was discussing whether the time should be stored to the database in UTC or local time. I chimed in that time should be saved in a timezone-independent format such as UNIX epoch time or Java's internal timestamp implementation (e.g., the number of milliseconds since 1 Jan 1970 GMT), and that the display timezone could be handled by the presentation layer. If we need to know the actual local time of an event, we could store a timezone representation alongside it and let the presentation layer use that.

    I stopped talking, there was a pause for about a second, then conversation resumed with "if we store the time as UTC...".

    So I'm at the awkward place of suspecting that they don't really understand what I'm saying, but being pretty sure that they don't give a fuck.

    UTC vs local time doesn't really matter that much (unless you have your app using multiple servers in different time zones). Either way you will still have to convert to the client's local time. What really matters is that you are consistent in which method you use.

    If all the servers for this application are in the same timezone, both can work. Otherwise UTC is really better suited; you don't want some parts of the app logging in Eastern and others in Pacific time.



  • @rstinejr said:

    Java's internal timestamp implementation (e.g., the number of milliseconds since 1 Jan 1970 GMT)

    Maybe you have forgotten that GMT is UTC*. So suggesting Java's internal timestamp implementation is suggesting the use of UTC.

    *Except for pedantic dickweeds



  • @rstinejr said:

    @morbiuswilters said:
    Neither of those are timezone-independent. They both use UTC...

    They use UTC in the sense that GMT is specified as the start point. But a legit UNIX epoch time value is 1347982. What would its timezone be?

    (Added edit: my answer is "any timezone you like".)

    I already answered this: UTC. It's the number of seconds since a particular time in UTC. It can be converted to any TZ you wish, but the implicit TZ is UTC.



  • @El_Heffe said:

    Maybe you have forgotten that GMT is UTC

    Not at all.

    Neither Unix epoch time nor the Java millisecond variant (nor whatever database it was that used a nanosecond equivalent with zero set somewhere around the turn of the 20th century...) has a month, day, hour, or minute, although they can readily be converted into a normal time-of-day with these values.



  • This reminds me of a WTF with the suite of software we use and I develop against.

    <vendor>'s server software can store DateTimes in SQL Server. <vendor>'s Silverlight API, which communicates with the server suite via web services, always converts DateTimes from UTC to the user's time zone when reading. However, it does no conversion when saving, meaning that if the user edits a field and submits a DateTime, I had to convert it to UTC in my code before passing it on to <vendor>'s API or else it would be wrong the next time they looked at it.

    To make matters worse, <vendor>'s desktop suite does no time zone conversions of any kind, so the times you see there are generally in the time zone of whoever submitted the date. For our clients that use both <vendor>'s desktop suite and our custom web suite against the same instance of data, you never really know what time zone the DateTimes are in. And yes, we have clients in multiple time zones, so it matters. Basically, the dates are generally accurate but the times will not be, unless you exclusively use our web suite and never <vendor>'s desktop suite, or you only use the desktop suite and not our web suite AND you only care about one time zone.
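
    The workaround looked roughly like this (Java used here purely for illustration -- the real code was against <vendor>'s Silverlight API, and vendorSave is a made-up stand-in):

    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.time.ZoneOffset;

    public class SaveWorkaroundSketch {
        // Made-up stand-in for the vendor call that stores whatever DateTime it is given
        static void vendorSave(LocalDateTime value) { /* ... */ }

        // The API converts UTC -> user zone on reads but does nothing on writes,
        // so the edited wall-clock value has to be pushed back to UTC by hand first
        static void save(LocalDateTime userEdited, ZoneId userZone) {
            LocalDateTime asUtc = userEdited.atZone(userZone)
                    .withZoneSameInstant(ZoneOffset.UTC)
                    .toLocalDateTime();
            vendorSave(asUtc);
        }
    }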

    <vendor>'s suite appears to be built on the .NET Framework to some degree.



  • This reminds me of my Y2K project. Stratus is very nice about storing and handling dates in integer-ticks-since-epoch form, but management decided that anything stored on media needed to be in human-readable format, because someone reading a dump shouldn't have to convert what they find to something you'd see on a wall calendar.



  • @da Doctah said:

    This reminds me of my Y2K project. Stratus is very nice about storing and handling dates in integer-ticks-since-epoch form, but management decided that anything stored on media needed to be in human-readable format, because someone reading a dump shouldn't have to convert what they find to something you'd see on a wall calendar.

    I'm betting that management did not ask the people who would actually be doing the work whether or not their idea would be helpful.



  • @morbiuswilters said:

    (It's) UTC. It's the number of seconds since a particular time in UTC. It can be converted to any TZ you wish, but the implicit TZ is UTC.

    It's my fault, but the entire issue of whether or not UNIX Epoch time or Java milliseconds-since-the-Epoch is UTC is a red herring. 

    The important point is that for items with various possible and required display formats, it's a good idea, where possible, to use a canonical representation internally, and to defer committing to a format until the item is displayed.

    If you want to call a long int that holds the number of milliseconds since the Epoch "UTC", I'll learn that's just the way you talk and I'll deal with it, no harm to the app or to the design. :-)



  • @rstinejr said:

    @da Doctah said:

    This reminds me of my Y2K project. Stratus is very nice about storing and handling dates in integer-ticks-since-epoch form, but management decided that anything stored on media needed to be in human-readable format, because someone reading a dump shouldn't have to convert what they find to something you'd see on a wall calendar.

    I'm betting that management did not ask the people who would actually be doing the work whether or not their idea would be helpful.

    Not as such.

    They simply knuckled under to the business-majors-turned-coders who complained that they couldn't tell whether some ten-digit number was in the past or the future.



  • @rstinejr said:

    The important point is that for items with various possible and required display formats, it's a good idea, where possible, to use a canonical representation internally, and to defer committing to a format until the item is displayed.

    If you're using a date library, then you shouldn't need to worry about most of this shit. I think storing timestamps in a DB is just bad design in this day and age, unless you have a very good reason to do so. And if you're using a date library, you can store them however you like. Although I do agree it's best to use a consistent TZ. In order of preference it would be: DATETIME as UTC, DATETIME as localtime, timestamp.

    @rstinejr said:

    If you want to call a long int that holds the number of milliseconds since the Epoch "UTC", I'll learn that's just the way you talk and I'll deal with it, no harm to the app or to the design. :-)

    My point was really more along the lines of "every stored datetime has a TZ, whether implicit or explicit". I don't think ragging on your coworkers makes any sense if their proposal was to use a DATETIME as UTC or localtime; I think that's a better decision than using timestamps.



  • @rstinejr said:

    If you want to call a long int that holds the number of milliseconds since the Epoch "UTC", I'll learn that's just the way you talk and I'll deal with it

    It's not Epoch UTC, it's actually UTC's Epoch because Epoch is a reference point, not a date.



  • @Speakerphone Dude said:

    @rstinejr said:

    If you want to call a long int that holds the number of milliseconds since the Epoch "UTC", I'll learn that's just the way you talk and I'll deal with it

    It's not Epoch UTC, it's actually UTC's Epoch because Epoch is a reference point, not a date.

    Read that again. He's saying if you want to call something "UTC". I figured the quotes were a dead giveaway. As for the rest of your statement: WTF? It's not "UTC's epoch". An epoch is a particular date used as a reference point, and for computing, UTC is almost always used. So an epoch is a date and there's no "UTC's epoch", except, arguably, the birth of Christ.



  • @morbiuswilters said:

    @Speakerphone Dude said:
    @rstinejr said:

    If you want to call a long int that holds the number of milliseconds since the Epoch "UTC", I'll learn that's just the way you talk and I'll deal with it

    It's not Epoch UTC, it's actually UTC's Epoch because Epoch is a reference point, not a date.

    Read that again. He's saying if you want to call something "UTC". I figured the quotes were a dead giveaway. As for the rest of your statement: WTF? It's not "UTC's epoch". An epoch is a particular date used as a reference point, and for computing, UTC is almost always used. So an epoch is a date and there's no "UTC's epoch", except, arguably, the birth of Christ.

    In the context of this discussion, epoch refers to "unix epoch", and this means the number of seconds since Jan 1 1970 UTC (see the Universal Truth). The original poster was saying that epoch is independent from the timezone, which is correct, because it is not a date, it is an interval. And since UTC is a timescale, not a date either, the correct way to express this specific interval (if needed) is to say UTC's epoch and not the other way around.

    Personally I don't see what would be the value of storing a non-UTC-based interval, but this was a valid question and the original poster was correct to raise the issue as this must be addressed during the design phase. I am not a fan of timestamps because they leave too much to interpretation but again it's my own preference, nothing else.



  • @Speakerphone Dude said:

    The original poster was saying that epoch is independent from the timezone, which is correct, because it is not a date, it is an interval.

    The epoch is a goddam date. The timestamp is a goddamn interval. The timestamp is meaningless without an epoch and an epoch has a timezone, hence a unix timestamp still has a timezone associated with it (UTC).

    @Speakerphone Dude said:

    Personally I don't see what would be the value of storing a non-UTC-based interval, but this was a valid question and the original poster was correct to raise the issue as this must be addressed during the design phase.

    Nobody suggested storing a non-UTC-based interval. Where the fuck are you getting this shit? Why does it seem like every comment you make has nothing to do with what anyone was talking about?



  • @morbiuswilters said:

    The epoch is a goddam date. The timestamp is a goddamn interval.

    This is basically the opposite, and Wikipedia agrees with me:

    • Timestamp: a sequence of characters or encoded information identifying when a certain event occurred, usually giving date and time of day, sometimes accurate to a small fraction of a second.
    • epoch: the number of seconds elapsed since midnight Coordinated Universal Time (UTC), January 1, 1970, not counting leap seconds. [...] It is neither a linear representation of time nor a true representation of UTC

    @morbiuswilters said:

    Why does it seem like every comment you make has nothing to do with what anyone was talking about?

    Because your understanding of what those concepts are is wrong.



  • @this_code_sucks said:

    UTC vs local time doesn't really matter that much (unless you have your app using multiple servers in different time zones). Either way you will still have to convert to the client's local time. What really matters is that you are consistent in which method you use.

    You must work for my company. Nothing in our products handles internationalization and multiple time zones correctly.


  • @Speakerphone Dude said:

    This is basically the opposite, and Wikipedia agrees with me:

    • Timestamp: a sequence of characters or encoded information identifying when a certain event occurred, usually giving date and time of day, sometimes accurate to a small fraction of a second.
    • epoch: the number of seconds elapsed since midnight Coordinated Universal Time (UTC), January 1, 1970, not counting leap seconds. [...] It is neither a linear representation of time nor a true representation of UTC

    No it doesn't, you illiterate piece of shit.

    @The same article on Unix time you just fucking linked said:

    The Unix epoch is the time 00:00:00 UTC on 1 January 1970 (or 1970-01-01T00:00:00Z ISO 8601). ... The Unix time number is zero at the Unix epoch, and increases by exactly 86 400 per day since the epoch.

    Which proves you didn't read a goddamn thing you linked. And all you'd have to do is look up epoch to see that it means a reference date.

    As for timestamp: yes, a timestamp tells you when a certain event occurred. A Unix timestamp accomplishes this by representing the number of seconds since the Unix epoch.
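
    In java.time terms, for anyone following along (just illustrating the definitions above):

    import java.time.Instant;

    public class EpochBasics {
        public static void main(String[] args) {
            // Unix time 0 is the epoch itself: a fixed date, in UTC
            System.out.println(Instant.ofEpochSecond(0));      // 1970-01-01T00:00:00Z
            // A Unix timestamp is the interval since that date, measured in seconds
            System.out.println(Instant.ofEpochSecond(86_400)); // 1970-01-02T00:00:00Z
        }
    }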

    This isn't difficult stuff. This is the first fucking thing someone who learns about Unix timestamps learns. You literally don't have the first fucking idea what you are talking about. Shut your goddamn whore mouth when men are talking.



  • @morbiuswilters said:

    No it doesn't, you illiterate piece of shit. [...] You literally don't have the first fucking idea what you are talking about. Shut your goddamn whore mouth when men are talking.

    When you start cranking out insults all I hear is the sound of Charlie Brown's teacher. If you notice that sometimes people around you tend to stop paying attention it's because obscenity can be amusing coming from a 12yo but from a grown-up this noise becomes boring very quickly.



  • @Speakerphone Dude said:

    In the context of this discussion, epoch refers to "unix epoch", and this means the number of seconds since Jan 1 1970 UTC (see the Universal Truth).

    The way I read that Wikipedia link is that these days Unix time is the number of seconds since December 31, 1969 UTC minus the twenty four leap seconds not counted. It gives the example of 1998-12-31T23:59:60 and 1999-01-01T00:00:00 being represented by the same Unix time even though they are a second apart. It also says that before 1972, UTC seconds were longer than they are today.


  • @Speakerphone Dude said:

    @morbiuswilters said:
    No it doesn't, you illiterate piece of shit. [...] You literally don't have the first fucking idea what you are talking about. Shut your goddamn whore mouth when men are talking.

    When you start cranking out insults all I hear is the sound of Charlie Brown's teacher. If you notice that sometimes people around you tend to stop paying attention it's because obscenity can be amusing coming from a 12yo but from a grown-up this noise becomes boring very quickly.

    That was a very long-winded way to say "I was wrong. I beg your forgiveness."



  • @Jonathan said:

    The way I read that Wikipedia link is that these days Unix time is the number of seconds since December 31, 1969 UTC minus the twenty four leap seconds not counted.

    Well, midnight on January 1st 1970 UTC. But at least you read the article and understood that epoch is a date and the unix time (basically) the number of seconds since the epoch (minus the leap seconds, which I didn't mention in the interests of keeping things simple).



  • @morbiuswilters said:

    @Jonathan said:
    The way I read that Wikipedia link is that these days Unix time is the number of seconds since December 31, 1969 UTC minus the twenty four leap seconds not counted.

    Well, midnight on January 1st 1970 UTC. But at least you read the article and understood that epoch is a date and the unix time (basically) the number of seconds since the epoch (minus the leap seconds, which I didn't mention in the interests of keeping things simple).

    Sixty-two thousand four hundred repetitions make one truth. Keep going!



  • I hate to say it, but Oracle has two good solutions to this problem. The first is the "TIMESTAMP WITH LOCAL TIME ZONE" data type. You store a value with local time, but the db records the connection's timezone when it is stored. Any app that fetches the value will see it in their time zone automatically. The other is "TIMESTAMP WITH TIME ZONE", which requires a value to have an explicit time zone when inserted.

    To emulate these in databases that don't have similar data types, just do what the sane people here have suggested - always store UTC in the database. It's not a bad idea to store the time zone offset in another field in case you want to do analysis based on client time of day (e.g. What time of day does a typical user first log on?).
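
    A JDBC sketch of that emulation (the events table is hypothetical, and driver/connection TZ quirks vary, so treat this as the shape of the idea rather than gospel):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import java.time.ZonedDateTime;

    public class UtcPlusOffsetSketch {
        // Hypothetical schema: events(occurred_utc DATETIME, tz_offset_minutes INT)
        static void insertEvent(Connection conn, ZonedDateTime event) throws SQLException {
            String sql = "INSERT INTO events (occurred_utc, tz_offset_minutes) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                // Normalize the moment to UTC for the DATETIME column...
                ps.setTimestamp(1, Timestamp.from(event.toInstant()));
                // ...and keep the client's offset separately for time-of-day analysis
                ps.setInt(2, event.getOffset().getTotalSeconds() / 60);
                ps.executeUpdate();
            }
        }
    }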



  • @morbiuswilters said:

    @Jonathan said:
    The way I read that Wikipedia link is that these days Unix time is the number of seconds since December 31, 1969 UTC minus the twenty four leap seconds not counted.

    Well, midnight on January 1st 1970 UTC. But at least you read the article and understood that epoch is a date and the unix time (basically) the number of seconds since the epoch (minus the leap seconds, which I didn't mention in the interests of keeping things simple).

    Isn't the number of seconds since midnight on January 1, 1970 the same as the number of seconds since December 31, 1969?



  • @Jaime said:

    I hate to say it, but Oracle has two good solutions to this problem. The first is the "TIMESTAMP WITH LOCAL TIME ZONE" data type. You store a value with local time, but the db records the connection's timezone when it is stored. Any app that fetches the value will see it in their time zone automatically. The other is "TIMESTAMP WITH TIME ZONE", which requires a value to have an explicit time zone when inserted.

    To emulate these in databases that don't have similar data types, just do what the sane people here have suggested - always store UTC in the database. It's not a bad idea to store the time zone offset in another field in case you want to do analysis based on client time of day (e.g. What time of day does a typical user first log on?).

    Playing with timezones always backfires because of daylight saving time, especially since not all countries (or even states) use the same cutoff dates.



  • @Speakerphone Dude said:

    @Jaime said:

    I hate to say it, but Oracle has two good solutions to this problem. The first is the "TIMESTAMP WITH LOCAL TIME ZONE" data type. You store a value with local time, but the db records the connection's timezone when it is stored. Any app that fetches the value will see it in their time zone automatically. The other is "TIMESTAMP WITH TIME ZONE", which requires a value to have an explicit time zone when inserted.

    To emulate these in databases that don't have similar data types, just do what the sane people here have suggested - always store UTC in the database. It's not a bad idea to store the time zone offset in another field in case you want to do analysis based on client time of day (e.g. What time of day does a typical user first log on?).

    Playing with timezones always backfires because of daylight saving time, especially since not all countries (or even states) use the same cutoff dates.

    Proper time zones account for DST (for example, it's EDT in New York right now, not EST). Both suggestions above put the time offset and DST determination in the database's hands, so there is no risk of getting it wrong. You've been wrong a lot in this thread, maybe you should slow down the posting a bit.
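
    To make that concrete (java.time, with example dates):

    import java.time.LocalDateTime;
    import java.time.ZoneId;

    public class ZoneKnowsDst {
        public static void main(String[] args) {
            ZoneId ny = ZoneId.of("America/New_York");

            // The zone database decides EDT vs EST from the date; the app never guesses
            System.out.println(LocalDateTime.of(2012, 7, 1, 12, 0).atZone(ny));  // offset -04:00 (EDT)
            System.out.println(LocalDateTime.of(2012, 12, 1, 12, 0).atZone(ny)); // offset -05:00 (EST)
        }
    }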



  • @Jonathan said:

    Isn't the number of seconds since midnight on January 1, 1970 the same as the number of seconds since December 31, 1969?

    Yes and no. From the point of view of calculating a timestamp, the last discrete second of Dec 31, 1969 would be 23:59:59, so if you calculate from there you'd end up with an extra second.



  • (Two cents from Thailand) - In My Humble Opinion any string that has both a date and a time should include a time zone. Conventions are forgotten. 



  • @Jaime said:

    You store a value with local time, but the db records the connection's timezone when it is stored. Any app that fetches the value will see it in their time zone automatically.

    This is how MySQL's TIMESTAMP type works. I'd rather do the conversion in code, though.

    @Jaime said:

    The other is "TIMESTAMP WITH TIME ZONE", which requires a value to have an explicit time zone when inserted.

    Postgres has the same type, which is how I prefer to do things. I wish MySQL supported it, but MySQL is kind of crappy when it comes to dates (and, really, most types... no IP address type? srsly?)

    @Jaime said:

    To emulate these in databases that don't have similar data types, just do what the sane people here have suggested - always store UTC in the database. It's not a bad idea to store the time zone offset in another field in case you want to do analysis based on client time of day (e.g. What time of day does a typical user first log on?).

    Yeah, if your DB doesn't support proper date types, you should definitely use UTC. Storing a separate TZ column is pretty gnarly. You're right that DST still works, but the hassle doesn't seem worth it.



  • @morbiuswilters said:

    @Jonathan said:
    Isn't the number of seconds since midnight on January 1, 1970 the same as the number of seconds since December 31, 1969?

    Yes and no. From the point of view of calculating a timestamp, the last discrete second of Dec 31, 1969 would be 23:59:59, so if you calculate from there you'd end up with an extra second.

    The last second of December 31, 1969 did start at 23:59:59 but it ended at midnight.



  • @Jaime said:

    @Speakerphone Dude said:
    Playing with timezones always backfires because of daylight saving time, especially since not all countries (or even states) use the same cutoff dates.
    Proper time zones account for DST (for example, it's EDT in New York right now, not EST). Both suggestions above put the time offset and DST determination in the database's hands, so there is no risk of getting it wrong.

    To the contrary, there is a clear risk because the cutoff date has been changing frequently in many timezones over the years, and not on a fixed schedule; once in a while local authorities will decide that the "DST" will start on such or such date, or decide to align with their neighbours the following year, etc. These changes cannot be guessed in advance so your "proper" database solution won't account for that unless there is a regular maintenance process for this data in the organization. As an example, these changes occurred in 2011 in many Canadian provinces, or more famously in Australia in 2006 (the year of the Olympics).

    @Jaime said:

    You've been wrong a lot in this thread, maybe you should slow down the posting a bit.

    Maybe you could print this suggestion in two copies: one for your files and the other for the recycle bin.



  • @Jonathan said:

    @morbiuswilters said:

    @Jonathan said:
    Isn't the number of seconds since midnight on January 1, 1970 the same as the number of seconds since December 31, 1969?

    Yes and no. From the point of view of calculating a timestamp, the last discrete second of Dec 31, 1969 would be 23:59:59, so if you calculate from there you'd end up with an extra second.

    The last second of December 31, 1969 did start at 23:59:59 but it ended at midnight.

    And the last second of December 31, 1972 started at 23:59:60.



  • @Jonathan said:

    @morbiuswilters said:

    @Jonathan said:
    Isn't the number of seconds since midnight on January 1, 1970 the same as the number of seconds since December 31, 1969?

    Yes and no. From the point of view of calculating a timestamp, the last discrete second of Dec 31, 1969 would be 23:59:59, so if you calculate from there you'd end up with an extra second.

    The last second of December 31, 1969 did start at 23:59:59 but it ended at midnight.

    Yes, but you don't measure from the end of a second, you measure from the beginning.



  • @Speakerphone Dude said:

    To the contrary, there is a clear risk because the cutoff date has been changing frequently in many timezones over the years, and not on a fixed schedule; once in a while local authorities will decide that the "DST" will start on such or such date, or decide to align with their neighbours the following year, etc. These changes cannot be guessed in advance so your "proper" database solution won't account for that unless there is a regular maintenance process for this data in the organization. As an example, these changes occurred in 2011 in many Canadian provinces, or more famously in Australia in 2006 (the year of the Olympics).

    Wrong. Stop and think this through--you're not calculating timezone cutoffs, you're using the current timezone at the time the datetime is stored. EDT is always 4 hours behind UTC, EST is always 5 hours behind. November 4, 2012 01:59:59 EDT is followed by November 4, 2012 01:00:00 EST. If you store the TZ, there is no risk of ambiguity (although there may be a different ambiguity if the underlying representation doesn't support leap seconds, which unix timestamps do not.)
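
    The transition is easy to demonstrate (java.time sketch; note that plusSeconds works on the instant, not the wall clock):

    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class FallBack2012 {
        public static void main(String[] args) {
            ZoneId ny = ZoneId.of("America/New_York");

            // The last second before the 2012 fall-back transition in New York
            ZonedDateTime beforeShift = ZonedDateTime.of(2012, 11, 4, 1, 59, 59, 0, ny);
            System.out.println(beforeShift);                // 2012-11-04T01:59:59-04:00[America/New_York]
            // One second later, the wall clock repeats the one o'clock hour in EST
            System.out.println(beforeShift.plusSeconds(1)); // 2012-11-04T01:00:00-05:00[America/New_York]
        }
    }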

    @Speakerphone Dude said:

    @Jaime said:
    You've been wrong a lot in this thread, maybe you should slow down the posting a bit.

    Maybe you could print this suggestion in two copies: one for your files and the other for the recycle bin.

    Did you stop to think that maybe the reason people are giving you shit is because you are lazy, sloppy, arrogant and wrong? (Whereas I'm just lazy and arrogant.)



  • our client on west-coast of us of a is store all time in date column in oracle as california time.

    i learn about daylight save time and now wonder what is real time in database. also changing time in same geography is stupid.



  • @Speakerphone Dude said:

    ...once in a while local authorities will decide that the "DST" will start on such or such date, or decide to align with their neighbours the following year, etc.
    ... in Australia in 2006 (the year of the Olympics).

    Wow, that must have been some adjustment, to turn 2000 into 2006.
    I smell troll-poo.



  • @Speakerphone Dude said:

    @morbiuswilters said:
    The epoch is a goddam date. The timestamp is a goddamn interval.

    This is basically the opposite, and Wikipedia agrees with me:

    • Timestamp: a sequence of characters or encoded information identifying when a certain event occurred, usually giving date and time of day, sometimes accurate to a small fraction of a second.
    • epoch: the number of seconds elapsed since midnight Coordinated Universal Time (UTC), January 1, 1970, not counting leap seconds. [...] It is neither a linear representation of time nor a true representation of UTC

    Nope. Try this link instead: Epoch



  • Jesus I was hoping overnight this thread would die.

    Stop. Just stop. Everybody.


  • @blakeyrat said:

    Jesus I was hoping overnight this thread would die.

    In which timezone were you thinking?



  • CET. There was some confusion with DST so it's an extra two hours.


  • @boomzilla said:

    @blakeyrat said:
    Jesus I was hoping overnight this thread would die.
    In which timezone were you thinking?
    Probably around the time it changed from BC to AD. Or CE and BCE. Or whatever the religious nuts would have the rest of us use these days, this week.



  • @morbiuswilters said:

    @Jaime said:
    You store a value with local time, but the db records the connection's timezone when it is stored. Any app that fetches the value will see it in their time zone automatically.

    This is how MySQL's TIMESTAMP type works. I'd rather do the conversion in code, though.

    For dealing with a lot of timestamps, it's quicker to use Oracle's inbuilt methods to perform the conversion as part of the query. Their way of handling it is quite handy.

    @morbiuswilters said:

    but MySQL is kind of crappy when it comes to dates (and, really, most types.. no IP address type? srsly?)

    INET_ATON and INET_NTOA are your friends, although I'd prefer it if the type was native rather than having to inline-mangle it.
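
    For example (a JDBC sketch; the hosts table is hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class InetMangling {
        // Hypothetical schema: hosts(ip INT UNSIGNED) -- IPv4 as a 32-bit integer
        static void insertHost(Connection conn, String dottedQuad) throws SQLException {
            // INET_ATON does the string -> integer mangling inside the query
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO hosts (ip) VALUES (INET_ATON(?))")) {
                ps.setString(1, dottedQuad);
                ps.executeUpdate();
            }
        }

        static String firstHost(Connection conn) throws SQLException {
            // ...and INET_NTOA converts back to dotted-quad on the way out
            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT INET_NTOA(ip) FROM hosts LIMIT 1");
                 ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }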



  • @Cassidy said:

    For dealing with a lot of timestamps, it's quicker to use Oracle's inbuilt methods to perform the conversion as part of the query. Their way of handling it is quite handy.

    I'm not sure I see how this would make a significant performance difference. And I would consider timezone conversion to be a presentation issue, not something that should be embedded in the data layer (and would require me setting the DB connection TZ to the user's on each connection).

    @Cassidy said:

    INET_ATON and INET_NTOA are your friends, although I'd prefer it if the type was native rather than having to inline-mangle it.

    Doesn't work with IPv6, either. I consider it basically useless.


  • @morbiuswilters said:

    @Cassidy said:
    INET_ATON and INET_NTOA are your friends, although I'd prefer it if the type was native rather than having to inline-mangle it.

    Doesn't work with IPv6, either. I consider it basically useless.

    The bug was only raised 6 years ago - give them a chance...



    I’ve been using DateTimeOffset (in .NET) and storing the values in the database (MS-SQL) as DATETIMEOFFSETs. That way, the conversion happens automatically when you call the proper DateTimeOffset.ToString() overload.
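
    (For comparison, a rough Java analogue of the same idea -- my assumption, not what the poster used -- is OffsetDateTime, where the offset travels with the value:)

    import java.time.OffsetDateTime;
    import java.time.format.DateTimeFormatter;

    public class OffsetTravelsWithValue {
        public static void main(String[] args) {
            // The offset is part of the value, so display needs no separate TZ lookup
            OffsetDateTime odt = OffsetDateTime.parse("2012-11-04T01:30:00-05:00");
            System.out.println(odt.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME));
        }
    }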

