


  • Just found this while modifying the network-access code of an application:


    	long double	total_content_length = 0;
    

    total_content_length is a byte count.


  • What, you never heard of a nibble or a bit?



  • @TarquinWJ said:

    What, you never heard of a nibble or a bit?

    If you need precision to bit level, then you store the count in bits, not fractions of bytes.



  • @Carnildo said:

    Just found this while modifying the network-access code of an application:

    	long double	total_content_length = 0;
    


    total_content_length is a byte count.

    What if you have a message that's 10^4932 bytes long?



  • @Jaime said:

    @Carnildo said:

    Just found this while modifying the network-access code of an application:

    	long double	total_content_length = 0;


    total_content_length is a byte count.

    What if you have a message that's 10^4932 bytes long?

    Assuming a world where that isn't a joke, once you get to such large floating point values, the granularity of the representation is much larger than 1.0, so it isn't even possible to store the exact size precisely. The answer is an integer big-num, or just represent the value as an array of BCD digits, or even plain old ASCII-coded decimal, which isn't really that hard to work with.
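
    For the curious, the "plain old ASCII-coded decimal" idea above really isn't hard to work with. A minimal C++ sketch (the function name is illustrative) that adds 1 to a byte count stored as a decimal string:

        #include <string>

        // Add 1 to a non-negative byte count stored as ASCII decimal digits.
        // Sketch only: a full implementation would also want addition,
        // comparison, and so on.
        std::string increment_ascii_decimal(std::string count) {
            int i = static_cast<int>(count.size()) - 1;
            while (i >= 0 && count[i] == '9') {  // ripple the carry leftward
                count[i] = '0';
                --i;
            }
            if (i < 0)
                count.insert(count.begin(), '1');  // every digit carried: "999" -> "1000"
            else
                ++count[i];
            return count;
        }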



  • @smxlong said:

    Assuming a world where that isn't a joke, once you get to such large floating point values, the granularity of the representation is much larger than 1.0, so it isn't even possible to store the exact size precisely. The answer is an integer big-num, or just represent the value as an array of BCD digits, or even plain old ASCII-coded decimal, which isn't really that hard to work with.

    Good try. If all the switches in the world were streaming that message in parallel over gigabit Ethernet on every port, it would take more than 10 million years before you would lose a byte to rounding. The code, although stupid, won't suffer any floating-point rounding side effects under any conceivable circumstances. long double may hold up to 34 decimal digits of precision, depending on compiler and target. Under the most common conditions, long double is actually just double, but that makes the jokes less funny.
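
    Both claims are easy to quantify: a long double stores byte counts exactly up to 2^LDBL_MANT_DIG, after which the spacing between adjacent representable values exceeds 1. A quick C++ sketch, whose output depends on what the compiler uses for long double:

        #include <cfloat>
        #include <cmath>
        #include <cstdio>

        int main() {
            // Below 2^LDBL_MANT_DIG every integer byte count is exact; above
            // it, adjacent representable long doubles are more than 1 apart.
            std::printf("significand bits: %d\n", LDBL_MANT_DIG);
            std::printf("byte counts exact up to: %.0Lf\n",
                        std::ldexp(1.0L, LDBL_MANT_DIG));
            return 0;
        }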



  • Also, it would take a pile of hard drives 10^4836 times the size of the observable universe to store the message. The number is so incredibly large that it can't be anything but a joke.



  • @smxlong said:

    @TarquinWJ said:
    What, you never heard of a nibble or a bit?
    If you need precision to bit level, then you store the count in bits, not fractions of bytes.
    Boo, that's no fun. It's always more fun to use the worst representation you can. Varchar is best for numbers. Or blob. No worry about loss of precision there.



  • I would think that with numbers that big, it is appropriate to say that this is a "real big number". Therefore Long.MAX_VALUE is certainly good enough. Or maybe Double.POSITIVE_INFINITY.



  • Assuming long double is the 80-bit x87 format, it has a 64-bit significand; unlike float and double, the leading 1 is stored explicitly, so that's 64 bits of precision. That means it can represent every integer from 0 to 2^64 = 18,446,744,073,709,551,616 exactly (2^64 itself is a power of two, so it is exact too). That's enough to count the bytes of a message of up to 16 exabytes, which seems a lot. Wikipedia says "5 exabytes: Amount of data equivalent to all words ever spoken by humans, in text form." I think a 64-bit int would be plenty; signed, it can represent a message size of up to 8 exabytes (minus one byte).

    If it's a compiler where long double is 64 bits (like Microsoft's, where it's just double), then a 64-bit int gives more precision in the same space and is faster for the computer to work with.

    I can understand not wanting to use a 32-bit int, since values over 2 GB are plausible, but using a floating-point type for integer data is almost always stupid.
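
    The tradeoff is easy to see by round-tripping byte counts through long double. A C++ sketch; as noted above, the result depends on whether long double is the 80-bit format or plain double:

        #include <cinttypes>
        #include <cstdint>
        #include <cstdio>

        int main() {
            const std::uint64_t tests[] = {
                (std::uint64_t{1} << 53) + 1,  // first count a 53-bit double drops
                (std::uint64_t{1} << 62) + 1,  // far past double, still exact on x87
            };
            for (std::uint64_t n : tests) {
                long double as_ld = static_cast<long double>(n);
                std::uint64_t back = static_cast<std::uint64_t>(as_ld);
                std::printf("%" PRIu64 " -> %s\n", n,
                            back == n ? "exact" : "rounded");
            }
            return 0;
        }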



  • @TarquinWJ said:

    @smxlong said:
    @TarquinWJ said:
    What, you never heard of a nibble or a bit?
    If you need precision to bit level, then you store the count in bits, not fractions of bytes.
    Boo, that's no fun. It's always more fun to use the worst representation you can. Varchar is best for numbers. Or blob. No worry about loss of precision there.

    DateTime total_content_length = 0;
    bool[] content = new bool[8 * total_content_length];
    


  • @db2 said:

    @TarquinWJ said:
    It's always more fun to use the worst representation you can. Varchar is best for numbers. Or blob. No worry about loss of precision there.
    DateTime total_content_length = 0;
    bool[] content = new bool[8 * total_content_length];
    Not a bad effort, but you could try using something other than a bool, since bools are too efficient. Maybe try a character array, or even a custom object that stores the true/false value as a property using a character with value "1" or "0". Need to think that bit further outside the box.



  • @TarquinWJ said:

    @db2 said:
    DateTime total_content_length = 0;
    bool[] content = new bool[8 * total_content_length];
    Not a bad effort, but you could try using something other than a bool, since bools are too efficient. Maybe try a character array, or even a custom object that stores the true/false value as a property using a character with value "1" or "0". Need to think that bit further outside the box.

    A custom object? No no, it's all about code reuse now. You've already got hundreds of perfectly good objects to use. Try something like this:

    Dictionary<int, SqlConnection> content = new Dictionary<int, SqlConnection>();
    content.Add(0, new SqlConnection("Just put your data in the ConnectionString!"));
    
