Javascript Date wtf



  • W(hy)TF does this code...

    var newDate = new Date();

    newDate.setFullYear(2008,1,14);

    ...create a date object with the date of February 14th, 2008?

    What the hell kind of sense does it make to have the day and the year be one-based and the month be zero-based?
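
    For anyone who wants to check, a quick console run (output as I'd expect it from any standard JS engine) makes the mismatch plain:

    var newDate = new Date();
    newDate.setFullYear(2008, 1, 14);    // arguments are (year, monthIndex, day)
    console.log(newDate.toDateString()); // "Thu Feb 14 2008" -- month index 1 is February
    console.log(newDate.getMonth());     // 1  -- months run 0-11
    console.log(newDate.getDate());      // 14 -- days of the month run 1-31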


  • @campkev said:

    What the hell kind of sense does it make to have the day and the year be one-based and the month be zero-based?

    Easy: the month is the only number you ever want to use as the index of an array, since months have names and years and days don't. Arrays have 0-based indices. Get over it.
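
    In other words, the 0-based month drops straight into a name lookup with no arithmetic -- a quick sketch (MONTH_NAMES is my own example array, not part of the API):

    var MONTH_NAMES = ["January", "February", "March", "April", "May", "June",
                       "July", "August", "September", "October", "November", "December"];
    var d = new Date(2008, 1, 14);          // February 14th, 2008
    console.log(MONTH_NAMES[d.getMonth()]); // "February" -- no -1 needed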



  • My God, inconsistency in ECMAScript? Say it isn't so! Thank God there isn't anything like this for Visual Basic!



  • @The Vicar said:

    @campkev said:
    What the hell kind of sense does it make to have the day and the year be one-based and the month be zero-based?

    Easy: the month is the only number you ever want to use as the index of an array, since months have names and years and days don't. Arrays have 0-based indices.

    I agree. Years are (theoretically) any integer greater than 0 (or less than 0; there is no year 0). Days are always an integer between 1 and 31 inclusive. Months didn't necessarily have an official numbering before data science--only an ordering. It makes perfect sense to arbitrarily decide that we will number months from 0 to 11 inclusive, with the first element of a "months" array simply at index 0, corresponding to January.

    Keep in mind, calendars are messy. Someone made an arbitrary decision. I can't back it up right now, but I'll bet that decision was copied exactly from some other language design or API. (STL? Java?)


     



  • @Brendan Kidwell said:

    @The Vicar said:
    @campkev said:
    What the hell kind of sense does it make to have the day and the year be one-based and the month be zero-based?


    Easy: the month is the only number you ever want to use as the index of an array, since months have names and years and days don't. Arrays have 0-based indices.

    I agree. Years are (theoretically) any integer greater than 0 (or less than 0; there is no year 0). Days are always an integer between 1 and 31 inclusive. Months didn't necessarily have an official numbering before data science--only an ordering. It makes perfect sense to arbitrarily decide that we will number months from 0 to 11 inclusive, with the first element of a "months" array simply at index 0, corresponding to January.

    Keep in mind, calendars are messy. Someone made an arbitrary decision. I can't back it up right now, but I'll bet that decision was copied exactly from some other language design or API. (STL? Java?)

    Months have had standardized numbers for thousands of years.  There are documents from ancient Rome where January was I, February II, March III, etc.  Very simply put, the months after August are named after the number they had in the pre-Julian calendar.  (September is named after "septem", Latin for "seven", because it was the seventh month till the advent of the Julian calendar.  October, November and December are, in the same manner, named after the numbers 8, 9 and 10.)  So, stating that "months didn't necessarily have an official numbering before data science" is not accurate.

    The person who made the arbitrary decision to number January as 0, etc., obviously suffers from spending WAY too much time communicating with computers and is extremely disconnected from all forms of human communication and thought.  Given the basic definition of the number 0, he obviously doesn't understand its purpose in mathematics either.  It's an ignorant decision, badly made.



  • @Brendan Kidwell said:

    I can't back it up right now, but I'll bet that decision was copied exactly from some other language design or API. (STL? Java?)

    It looks like it's been done that way at least since the C standard library was written (see here). It does seem counter-intuitive to me, though, that newDate.setFullYear(2008,1,14) sets the date to 2008-02-14, when it looks like it's setting it to 2008-01-14. Java does it the same way, but at least one language - Python - numbers them 1 to 12 instead (note the sentence directly after the first table).
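
    The C heritage runs deeper than the month, incidentally -- if I remember my struct tm correctly, tm_mon is 0-11, tm_mday is 1-31, and tm_year counts years since 1900, and JavaScript inherited all three, right down to the deprecated getYear():

    var d = new Date(2008, 1, 14);
    console.log(d.getMonth()); // 1   -- 0-based, like tm_mon
    console.log(d.getDate());  // 14  -- 1-based, like tm_mday
    console.log(d.getYear());  // 108 -- years since 1900, like tm_year (deprecated; use getFullYear())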
     



  • @Gsquared said:

    Months have had standardized numbers for thousands of years.  There are documents from ancient Rome where January was I, February II, March III, etc.  Very simply put, the months after August are named after the number they had in the pre-Julian calendar.  (September is named after "septem", Latin for "seven", because it was the seventh month till the advent of the Julian calendar.  October, November and December are, in the same manner, named after the numbers 8, 9 and 10.)  So, stating that "months didn't necessarily have an official numbering before data science" is not accurate.

    The person who made the arbitrary decision to number January as 0, etc., obviously suffers from spending WAY too much time communicating with computers and is extremely disconnected from all forms of human communication and thought.  Given the basic definition of the number 0, he obviously doesn't understand its purpose in mathematics either.  It's an ignorant decision, badly made.

    You said it all so I don't have to.  Thanks. :D

    The names of the latter months are all based directly off of their original numbered order, so to argue that the notion of assigning numbers to calendar months is a recent convention is pretty ignorant.



  • @Saladin said:

    The names of the latter months are all based directly off of their original numbered order, so to argue that the notion of assigning numbers to calendar months is a recent convention is pretty ignorant.

    I agree again. I stand corrected. Now, if you'll excuse me, I'll take my temporary stupidity elsewhere for a while.

    But anyway, calendars have always been so nutty that the behavior of C, Java, and JavaScript on this detail never particularly bothered me.
     



  • @Gsquared said:

    The person who made the arbitrary decision to number January as 0, etc., obviously suffers from spending WAY too much time communicating with computers and is extremely disconnected from all forms of human communication and thought.  Given the basic definition of the number 0, he obviously doesn't understand its purpose in mathematics either.  It's an ignorant decision, badly made.

    Should we be criticizing the guy who made january==0, or the guy who made the first element of an array be zero?  I don't know the history of such things, but it seems like the nomenclature behind arrays may have come from the fact that they were pointers in C.  Since a pointer is just an integer telling you where to look in memory, incrementing it by one will actually make you look at the next element in the array.  So in this regard it makes sense that a+0 is the first element.

    Then again, plenty of mathematicians will use y_0 to represent the first element of an array. Did they do that pre-computers?



  • Plenty of stuff in math is zero-based. For example: any kind of polynomial representation of a number involves the zeroth place, or constant term; limits of sequences and series are often calculated with zero as one of the boundaries; zero is the center of the number line; etc.

    In many calculations or representations of data where the position in a structure matters, things are significantly simpler if the first element (the one coincident with the origin) has an offset of 0. Any time you use a plain old array in a programming language (as opposed to some other collection or data structure like a graph, map, queue, etc.) you are indicating you wish the elements to be adjacent and contiguous, usually for the purpose of calculating some metric based on the index. A zero-based index is essential to avoid fencepost errors or mistranslated formulas.

    More specifically, when we are talking about series or arrays or whatever, the index (i) is a parameter. And we often represent the terms as such:

    a_0, a_1, a_2, ..., a_n

    We are usually interested in the underlying function f specific to the series that, given the index i (or the value of a_i), yields a_(i+1).

    As such, a_1 really means "the result of the first iteration of f()", starting with the initial value, a_0.

    Arrays in computer science could be defined as functions in the same way. The value of f(a, i) is deref(p_a + i * c_a), where p_a and c_a are constants (base address, size of element) that depend on the properties of the array object a.
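
    A sketch of that deref(p_a + i * c_a) idea in JavaScript itself, using a DataView over a raw buffer (all the names here are mine, for illustration):

    var buf = new ArrayBuffer(4 * 4);        // room for four 32-bit ints
    var view = new DataView(buf);
    var ELEM_SIZE = 4;                       // c_a: size of one element, in bytes

    function elementAt(i) {                  // f(a, i) = deref(p_a + i * c_a)
        return view.getInt32(i * ELEM_SIZE); // p_a is the buffer's base; offset 0 here
    }

    view.setInt32(0 * ELEM_SIZE, 10);
    view.setInt32(1 * ELEM_SIZE, 20);
    console.log(elementAt(0)); // 10 -- the first element really does live at offset 0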



  • I would come down squarely in the middle here.  The format of the function call itself is pretty WTF-y: why have three arguments when it is only the year you are changing?  Nevertheless, I don't see why it should be that difficult for anyone who has any business programming to recognize the potential for off-by-one errors and adjust for array indexes.  It's not that damned hard--really.



  • @rjnewton said:

    I would come down squarely in the middle here.  The format of the function call itself is pretty WTF-y: why have three arguments when it is only the year you are changing?  Nevertheless, I don't see why it should be that difficult for anyone who has any business programming to recognize the potential for off-by-one errors and adjust for array indexes.  It's not that damned hard--really.

    Well, I have never used 0 for January in Java...

    I use the constant Calendar.JANUARY instead! If its value ever changes in the future, my code will not be impacted.



  • @acne said:


    Well, I have never used 0 for January in Java...

    I use the constant Calendar.JANUARY instead! If its value ever changes in the future, my code will not be impacted.

    But as it's a static final int, if the value of JANUARY ever changed in the future you would have to recompile your code (static final fields are treated by the compiler as compile-time constants, and the value is inlined directly into the bytecode to prevent a pointless class reference :)
     



  • Simple, let it download the current value of JANUARY from the internet ;)

     


    Another thing: who the hell decided that starting dates from zero is a good idea? What's the point? It's not saving space, that's for sure; it's just retarded.

    (And please don't say "because they are used as array indexes", what's wrong with array[month-1]?)

     



  • This is a WTF that's caught me occasionally too. I can see the reason for it but it really is silly; new Date(y, m, d) really ought to create the date d/m/y. It wouldn't be hard to index a names array by [month-1].
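
    Something like this would do it -- a sketch (monthName and MONTH_NAMES are my own names, nothing built-in):

    var MONTH_NAMES = ["January", "February", "March", "April", "May", "June",
                       "July", "August", "September", "October", "November", "December"];

    function monthName(month) {        // month is 1-based, the way humans write dates
        return MONTH_NAMES[month - 1]; // shift to the 0-based array index internally
    }

    console.log(monthName(2)); // "February"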



  • @tchize said:

    But as it's a static final int, if the value of JANUARY ever changed in the future you would have to recompile your code (static final fields are treated by the compiler as compile-time constants, and the value is inlined directly into the bytecode to prevent a pointless class reference :)

     

    You're right :-/

    But you should always do some regression testing when a new version arrives, right? These tests usually include a full rebuild.

    By the way, at work I only deliver source code to the clients, and my personal projects are all compiled to binary code using gcj.



    If you are making a Calendar class or API, you should not make the month 0-indexed, because it is counter-intuitive and makes the programmer have to either A. look at the implementation to figure it out, or B. assume something about your implementation (that you are using an array).

     

    There's nothing wrong with the following method for a Calendar class:

    public Month getMonth(int month)
    {
        return months[month - 1];
    }

     

    In fact, every time I'm going to use a Calendar extensively in an application, I wrap Java's Calendar class in my own Calendar just so that months are 1-indexed.

    PS. It's hard to use the static constants when performing computations.
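
    For what it's worth, the same trick works on JavaScript's Date -- a minimal sketch (HumanDate is my own name, not a standard class):

    function HumanDate(year, month, day) {            // month is 1-based: 1 = January
        this.jsDate = new Date(year, month - 1, day); // do the shift once, at the boundary
    }

    HumanDate.prototype.getMonth = function () {
        return this.jsDate.getMonth() + 1;            // and shift back on the way out
    };

    var valentines = new HumanDate(2008, 2, 14);
    console.log(valentines.getMonth());            // 2 -- February, as a human would write it
    console.log(valentines.jsDate.toDateString()); // "Thu Feb 14 2008"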



  • @rjnewton said:

    I don't see why it should be that difficult for anyone who has any business programming to recognize the potential for off-by-one errors and adjust for array indexes.  It's not that damned hard--really.

    A friend of mine actually spent several days trying to figure out a bug caused by this one.  Until you think about the dates, it does seem pretty strange that "January" 30th all of a sudden becomes "February" 2nd.  Of course, it was really turning the nonexistent February 30th into March 2nd...the closest it could get.
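
    The rollover is easy to reproduce -- non-leap year shown; in a leap year Feb 30th would land on March 1st instead:

    var d = new Date(2007, 1, 30); // "January 30th"... except month 1 is really February
    console.log(d.toDateString()); // "Fri Mar 02 2007" -- Feb 30th overflows into March
    console.log(d.getMonth());     // 2 -- which a 1-based reading mistakes for February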
     



  • @CapitalT said:

    Simple, let it download the current value of JANUARY from the internet ;)

     

    But what if some jackhole decides to change the names of the months?

    /Hey, you could have said the same thing about daylight savings time a year ago.... 



  • Yup, this reminds me of a minor WTF, an API inconsistency in the Ruby standard library.

    You can get the current day-of-week:

    Date.today.wday

    0 is Sunday.

    You can build a date like this:

    Date.commercial(year, week_number, day_of_week)

    day_of_week must be 1 to 7, where 7 is Sunday. Guess what: it throws an exception when you try setting the day of week to 0.

    You rarely need to build a Date using another Date's day-of-week, but nevertheless, when it happens, the chance of a random crash is 14.3% (one day in seven) until you figure it out ;-)
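
    JavaScript's Date has the same 0-is-Sunday reading on the way out, for comparison:

    new Date(2008, 1, 14).getDay(); // 4 -- Thursday, where 0 would mean Sunday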



  • @Brendan Kidwell said:

    I stand corrected. Now, if you'll excuse me, I'll take my temporary stupidity elsewhere for a while.

    Wow.  Someone who admits a mistake and handles it maturely.

    I bow to you, sir.
     

