Arguments for using the boolean datatype...



  • Although it sounds strange... my company doesn't use the boolean
    datatype in the database. We use char(1), storing 'Y' and 'N', plus an
    internal function to return true or false to the application. I'd like
    to use boolean for boolean data, at least for new column definitions.
    However, my colleague thinks "if it isn't broken, don't fix it". Maybe
    he is correct... what do you think? Besides, are there any arguments
    (or benefits) for changing the datatype to boolean?
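
    For reference, a minimal sketch of the two styles (table and column names are made up):

        -- Current convention: CHAR(1) holding 'Y'/'N', decoded by an internal function
        CREATE TABLE account (
            id     INTEGER PRIMARY KEY,
            active CHAR(1) NOT NULL DEFAULT 'N'  -- 'Y' or 'N'
        );

        -- What I would prefer for new columns, where the DBMS supports it
        CREATE TABLE account_new (
            id     INTEGER PRIMARY KEY,
            active BOOLEAN NOT NULL DEFAULT FALSE
        );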



  • I hope by 'boolean' in the database you mean 'bit'? I don't think boolean is an SQL field type.

    Booleans (bits) are way better than chars for this purpose. At the very least, you don't have to convert a string value to a boolean on the code side.

    Drak



  • @carfield said:

    Besides, are there any arguments (or benefits) for changing the datatype to boolean?




    Ya, you can use the value directly in an "if" statement in most
    languages. Example: in MySQL, "boolean" (available since v4.1) is a
    synonym for "tinyint(1)", with the predictable convention that 0 means
    false and anything else is true. Simplicity!
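
    A quick sketch (MySQL syntax; the table name is made up):

        CREATE TABLE article (
            id      INT PRIMARY KEY,
            visible BOOLEAN NOT NULL DEFAULT 0  -- MySQL stores this as TINYINT(1)
        );

        -- The flag works directly as a condition; no string comparison needed:
        SELECT id FROM article WHERE visible;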



  • That would be all well and good, except for the fact that BOOLEAN datatypes do not exist in relational databases; a two-valued type contradicts the tri-value logic principle. But I don't think there's anything wrong with that convention. Consider a few facts ...

    • SMALLINT is the smallest (standard) integer you can use, and that's 2-4 bytes; a CHAR(1) is 1 byte
    • T/F vs. Y/N is a matter of personal preference
    • A proper column name (per ISO-11179) is "Active_Indicator" as opposed to "Is_Active"
    • Given proper column names, CHAR(1) makes more sense than a 1/0. Consider the predicate " WHERE ACTIVE_INDICATOR = 'Y' " versus " WHERE ACTIVE_INDICATOR = 1 " (see the sketch below)
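
    For example, a minimal sketch of that convention (the table is hypothetical), with a constraint to keep stray values out:

        CREATE TABLE Customer (
            Customer_ID      INTEGER PRIMARY KEY,
            Active_Indicator CHAR(1) NOT NULL DEFAULT 'N'
                CHECK (Active_Indicator IN ('Y', 'N'))
        );

        SELECT Customer_ID FROM Customer WHERE Active_Indicator = 'Y';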

    Now, for your middle-tier code, booleans do exist because NULLs do not. So you have the problem of dealing with NULLs. This is easily handled with a YN enum: Yes ('Y'), No ('N'), or Null ('?'). Maybe dealing with NULL indicators is not a requirement, in which case you can use plain booleans on your classes.

    Even so, your co-worker is probably right. There is nothing fundamentally wrong with the application; it's more cosmetic. Changing it now would open you up to a world of hurt, trust me. I've been there, done it, felt it, and seen quite a few other programmers feel it too.



  • Yes, in fact, just know that Oracle doesn't have a boolean datatype...



  • Hmm, I've always just used int for booleans (C background). But
    seriously, for databases that's reasonable practice: there is no
    universally supported boolean type in SQL, and most languages are happy
    to treat an int as a boolean.



  • @Alex Papadimoulis said:

    That would be all well and good, except for the fact that BOOLEAN
    datatypes do not exist in relational databases; a two-valued type
    contradicts the tri-value logic principle.

    ....

    Now, for your middle-tier code, booleans do exist because NULLs do not.


    Consider this:

    create table flags (name varchar(32) primary key, value boolean NOT NULL default 0);

    Where's the contradiction?



  • I don't have a lot of experience with the various databases out there,
    but I could have sworn I saw at least one with a 'bit' field type. Can
    any SQL junkies comment on this?



  • I've only really dealt with SQL Server 2000 and Access. *ducks*



    Anyway, SQL Server 2000 has a bit datatype, as does Access.



  • @carfield said:

    However, my colleague thinks "if it isn't broken, don't fix it". Maybe he is correct... what do you think? Besides, are there any arguments (or benefits) for changing the datatype to boolean?

    I agree with your colleague. Besides, with my minimal experience with MySQL, I don't think it supports boolean, unless you are using MS Access.



  • I use a BIT, which can be a 0 or a 1. The problem with a CHAR(1) is that sooner or later someone will assign a 'T' or an 'F', or maybe the char '1' or '0'. I've even seen bugs in .NET where middle-tier C# code compares 'Y' to 'y' (that someone accidentally put in the database) and returns false when it should have returned true. Sure, you can force the strings to upper or lower case or do a case-insensitive compare, but that adds overhead.

    If a programmer working for me insisted on using CHAR(1) instead of a BIT, he/she would be looking for a new job.
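
    In SQL Server terms, something like this (a sketch; the names are made up):

        CREATE TABLE Users (
            UserID   INT PRIMARY KEY,
            IsActive BIT NOT NULL DEFAULT 0  -- only 0 or 1 can ever be stored
        );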



  • @tedbilly said:

    I use a BIT, which can be a 0 or a 1. The problem with a CHAR(1) is that sooner or later someone will assign a 'T' or an 'F', or maybe the char '1' or '0'. I've even seen bugs in .NET where middle-tier C# code compares 'Y' to 'y' (that someone accidentally put in the database) and returns false when it should have returned true. Sure, you can force the strings to upper or lower case or do a case-insensitive compare, but that adds overhead.

    If a programmer working for me insisted on using CHAR(1) instead of a BIT, he/she would be looking for a new job.

     

    The BIT can also be NULL, which in .NET casts to False, which is mostly OK, as long as you work on the premise that 'if it ain't filled in, it ain't on'.
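
    For example (SQL Server syntax; assuming a hypothetical Users table with a nullable BIT column IsBanned):

        -- Fold NULL into 0 so the middle tier always sees a definite value
        SELECT UserID,
               COALESCE(IsBanned, 0) AS IsBanned
        FROM Users;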

    Drak



  • @Drak said:

    as long as you work on the premise that 'if it ain't filled in, it ain't on'.




    One could disallow NULLs for that column, so that no accidental NULLs get into the table.
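
    Something like this (SQL Server syntax; assuming a hypothetical table t with a BIT column flag):

        UPDATE t SET flag = 0 WHERE flag IS NULL;      -- settle any existing NULLs first
        ALTER TABLE t ALTER COLUMN flag BIT NOT NULL;  -- no accidental NULLs from now on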



  • @tedbilly said:

    If a programmer working for me insisted on using CHAR(1) instead of a
    BIT, he/she would be looking for a new job.




    That's a good reason to fire someone.



  • @TType85 said:

    Isn't it more efficient for the database to use the bits when you are
    doing a WHERE clause on a table with 80K+ records?




    I don't think so. Columns with only two values don't make good indexing
    columns, so regardless of their type they will likely be evaluated
    last, and then by sequentially inspecting each record. After all that
    slow I/O, it does not really matter whether you check a byte for
    non-zero or for a specific value.
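
    To illustrate (a sketch; table, column, and index names are invented):

        -- An index on the two-valued column alone rarely helps:
        -- roughly half the rows match either value.
        CREATE INDEX ix_orders_flag ON orders (shipped_flag);

        -- If the flag must be indexed at all, put it behind something selective:
        CREATE INDEX ix_orders_cust_flag ON orders (customer_id, shipped_flag);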



    cu



  • I recommend using the 3-value boolean. That's where you can have a true, a false, or a maybe.



    It's really handy for those hard decision-making routines.





  • The SQL Server BIT type is actually stored as a byte. However, SQL Server isn't completely stupid: if you have multiple BIT columns in a table, it packs them together within one byte of storage (obviously until you overflow the byte... :)). So if you have a single BIT column, you're using one byte of storage per row; if you have 7 BIT columns, you're still using one byte of storage per row.
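
    For example (SQL Server; the table is hypothetical):

        -- All eight BIT columns below share a single byte of storage per row.
        CREATE TABLE Permissions (
            UserID    INT PRIMARY KEY,
            CanRead   BIT NOT NULL DEFAULT 0,
            CanWrite  BIT NOT NULL DEFAULT 0,
            CanDelete BIT NOT NULL DEFAULT 0,
            CanShare  BIT NOT NULL DEFAULT 0,
            CanPrint  BIT NOT NULL DEFAULT 0,
            CanExport BIT NOT NULL DEFAULT 0,
            CanAdmin  BIT NOT NULL DEFAULT 0,
            CanAudit  BIT NOT NULL DEFAULT 0
        );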



  • Why I never use TRUE/FALSE boolean fields...

    First, you are always "converting" values...

    Second, some DBMSs use 0/1, others 0/-1... which messes up binary operations...

    Third, things are never what you think they are initially...

    Example:

    You have one field named "status" on an article.

    True: article is visible.

    False: article is hidden.

     

     

    From my experience...

    Assuming that you are not limited to a true/false statement, now imagine a BYTE field.

    You can accommodate new states...

    0: article is visible

    1: article is hidden

    2: article is banned

    9: article is deleted

    Instead of deleting immediately, you can schedule a delete operation for later, so that your database stays "space" optimized...

    Or, if you still want to keep control of the information (for instance, like eBay does), you always have it in case of a police request.
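
    A sketch of that pattern (MySQL-style syntax; names are invented):

        CREATE TABLE article (
            id     INT PRIMARY KEY,
            status TINYINT NOT NULL DEFAULT 0
            -- 0 = visible, 1 = hidden, 2 = banned, 9 = deleted
        );

        -- A scheduled cleanup job can purge "deleted" rows later:
        DELETE FROM article WHERE status = 9;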

     

    It's your choice...

     

    @carfield said:

    Although it sounds strange... my company doesn't use the boolean datatype in the database. We use char(1), storing 'Y' and 'N', plus an internal function to return true or false to the application. I'd like to use boolean for boolean data, at least for new column definitions. However, my colleague thinks "if it isn't broken, don't fix it". Maybe he is correct... what do you think? Besides, are there any arguments (or benefits) for changing the datatype to boolean?

