JSONx is Sexy LIESx



  • So, a fun detail of the JSON.stringify function I have been compensating for is its inability to cope with "undefined".

    http://jsfiddle.net/3csr0jot/1/ [Edited to add a line break I noticed was missing.]

    Now, I know the existence of undefined and null is a tangle, but there is at least one very good reason for having both: differentiating between empty entries in sparse arrays. Where you find an undefined, there is no entry; where you find a null, the entry itself is empty.

    When keeping indexes as arrays this is very useful, because one typically wants indexes to stay synchronized across initializations. In other words, if I have indexes in multiple locations and I delete an item, I want indexes initialized after that deletion to behave the same as indexes that were already initialized when the item was deleted. My choices are to:

    1. Use them as proper, somewhat dumb, arrays, and splice the deleted elements out, then REINDEX EVERYTHING. (This is unacceptably cumbersome, at least for my current project.)

    2. Set the item to null, then handle the null everywhere it might occur. This is troublesome, but works okay. It might introduce memory leaks, though, so I now have to profile for those and make sure I haven't created any. Also, I lose the ability to differentiate between entries that were deleted (undefined) and those that are properly null. Further, and this is where the real pain comes in, this breaks all the fancy new functional looping on arrays like map and reduce, which now have to cope with null where there otherwise would not have been nulls (again, in the case of my current project). Also, some of these functions (as well as functions in jQuery) don't differentiate between null and undefined, so I can't reliably use them to populate lists with nulls. I have to assign or push them... ::sigh::

    3. Delete the index, leaving a blank (undefined), keeping all the indexes in sync no matter when they're initialized.

    However, as you can see from the JSFiddle above, option 3 gets magically reduced to option 2, because the JS in JSON is misleading. Why is it that the creator(s) of JSON saw fit to skip a possible value in JS?

    The real fun, BTW, begins between the client and server, where data is being converted to JSON at some point in the exchange and every undefined is being converted to null.
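
    The short version, as a sketch (reconstructed from memory, so it may not match the fiddle line for line):

    var arr = [1, 2, 3];
    delete arr[1];                                 // option 3: leave a hole at index 1
    console.log(1 in arr);                         // false: there is no entry at index 1
    console.log(JSON.stringify(arr));              // '[1,null,3]' -- the hole becomes null
    console.log(JSON.parse(JSON.stringify(arr)));  // [1, null, 3] -- option 3 has quietly become option 2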



  • I don't know what you're griping about here; JSON is perhaps the simplest format there is.

    It's based on JavaScript object notation; it isn't the same thing as JavaScript object notation, which IIRC doesn't even include a method of describing "undefined" in the way you want.

    ... in fact I'm going to turn this around and challenge you: how do you write a JavaScript object, using plain ol' JavaScript object notation, that includes an "undefined"? Your fiddle certainly doesn't have one; you use the delete command.


  • FoxDev

    @VaelynPhi said:

    So, a fun detail of the JSON.stringify function I have been compensating for is its inability to cope with "undefined".

    it's not an inability to cope with undefined; it's that undefined IS NOT A VALID JSON VALUE

    personally i see mapping unknown to null as preferable to mapping unknown to the empty string.

    I would of course prefer it to simply not serialize the value at all (as it does for objects IIRC) but that doesn't work with arrays (otherwise indexes would change) so a mapping has to be done and the most undefined-like value that is not undefined in JS is null.

    sorry man but if i was assigned this as a bug i'd close it

    CLOSED - WONTFIX - PER SPEC



  • So... you can't google "RADIUS", but you can google "JSON"?

    @blakeyrat said:

    ... in fact I'm going to turn this around and challenge you: how do you write a JavaScript object, using plain ol' JavaScript object notation, that includes an "undefined"? Your fiddle certainly doesn't have one; you use the delete command.

    http://jsfiddle.net/moo7L1c7/1/

    For an existing array or object, the delete keyword is how you make an element undefined. For an array, any spare commas create undefined elements. We even had a whole discussion about this in another thread talking about Internet Explorer's inability to handle these elements at the ends of arrays. Did you have an aneurysm between then and now?
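
    Concretely (a quick sketch):

    var a = [1, 2, 3];
    delete a[1];                 // a[1] is now undefined; there is no entry at index 1
    var b = [1, , 3];            // the extra comma leaves index 1 undefined too
    console.log(a[1], b[1]);     // undefined undefined
    console.log(1 in a, 1 in b); // false false -- neither array has an entry at index 1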



  • Ok well whatever, point is the JSON spec is well-defined and easy-to-understand and you're clearly wrong in your expectations.



  • @accalia said:

    it's not an inability to cope with undefined; it's that undefined IS NOT A VALID JSON VALUE

    I am aware of the JSON spec. My point is that undefined is a valid value in Javascript, it is a valid value for an object in Javascript, and it is a valid value for an array in Javascript. That "Javascript Object Notation" lacks the ability to handle it by specification should tell you something about whoever wrote the spec.

    @accalia said:

    personally i see mapping unknown to null as preferable to mapping unknown to the empty string.

    I agree. However, would it have been that difficult to map it to the keyword undefined, just like null?

    @accalia said:

    I would of course prefer it to simply not serialize the value at all (as it does for objects IIRC) but that doesn't work with arrays (otherwise indexes would change) so a mapping has to be done and the most undefined-like value that is not undefined in JS is null.

    Assuming that serialization is happening to data going offline, one can filter the arrays in the replacer function to shake out undefined values (or nulls, if desired). Annoyingly, the replacer function itself will not filter (you have to filter the array yourself and then return it for iteration by the serializer).
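
    Something along these lines is what I mean (a rough sketch, not production code):

    function dropUndefined(key, value) {
        // returning a filtered copy of the whole array is the only way to drop
        // elements; returning undefined per element just turns it into null
        if (Array.isArray(value)) {
            return value.filter(function (v) { return v !== undefined; });
        }
        return value;
    }
    console.log(JSON.stringify([1, , 3], dropUndefined)); // '[1,3]'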

    @accalia said:

    sorry man but if i was assigned this as a bug i'd close it

    CLOSED - WONTFIX - PER SPEC

    If you didn't have anyone to escalate this to for review of the spec, I'd expect that. If you wrote the spec, I'd be sending you very unkind bug reports.

    @blakeyrat said:

    Ok well whatever, point is the JSON spec is well-defined and easy-to-understand and you're clearly wrong in your expectations.

    I agree that it is well defined and easy to understand. It is, however, insufficient for representing a Javascript Object. Just imagine that it included "undefined" right alongside "null". Is this really that problematic of a change? Does it break existing behaviour somehow?

    I could go further and question its inability to cope with cyclic structures and functions, but that's more complicated and requires further specs like JSONPath. Adding undefined is a minute change.


  • FoxDev

    var a = {
        q: 42,
        z: undefined
    };
    var b = {
        q: 56
    };
    console.log(a.z);
    console.log(b.z);
    console.log(a.z === b.z);
    

    i'm failing to see your point. why would you serialize an undefined value when that undefined value is indistinguishable from a value defined to be undefined?

    (yes i know about object.hasOwnProperty. Yes i know it can be used to distinguish these cases. that's not its design intention and i think that using it in that manner is not a good idea, particularly since its behavior can change if the rules of inheritance change. (such changes were proposed for ES6, with a decent set of logic behind them too))
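
    for reference, a quick sketch of the distinction in question, continuing the snippet above (illustration only):

    console.log('z' in a);               // true: the property exists, its value is undefined
    console.log('z' in b);               // false: the property does not exist at all
    console.log(a.hasOwnProperty('z'));  // true
    console.log(b.hasOwnProperty('z'));  // false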



  • Cope?



  • @VaelynPhi said:

    I am aware of the JSON spec. My point is that undefined is a valid value in Javascript, it is a valid value for an object in Javascript, and it is a valid value for an array in Javascript. That "Javascript Object Notation" lacks the ability to handle it by specification should tell you something about whoever wrote the spec.

    A counterargument: in some ways, that's more an argument that the name is bad than that the spec is bad. JSON was made as a format for client<->server exchanges at a time when JavaScript on the server wouldn't have been a very realistic option; that means it was meant as a format to be used in a cross-language situation. And maybe JS undefined doesn't map into JSON very nicely, but if undefined were present in JSON then that wouldn't map onto non-JS users very nicely. So you're just moving the problem around.

    Edit: Actually I guess maybe this isn't as good of an argument as I initially put it. It's probably usually easier to avoid using undefined in situations where you are in other languages and don't want it than it is to compensate for not having it when you're in JS and do.



  • @accalia said:

    i'm failing to see your point. why would you serialize an undefined value when that undefined value is indistinguishable from a value defined to be undefined?

    In an object, I wouldn't, because a missing property already reads back as undefined. It's when it's in an array that not converting it to null is important.

    [1,,3] and [1,null,3] are treated differently in loops. In particular, most functional manipulations will implicitly skip the second member of the first array and increment the index, so you go from index 0 to index 2, whereas for the second array the second index won't get skipped, and the function handling the values must deal with the null explicitly.

    So, properly, if I have an array [1,2,3] and delete the second value, it should be serialized as [1,,3], not [1,null,3].
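
    A quick sketch of the difference:

    [1, , 3].forEach(function (v, i) { console.log(i, v); });
    // logs "0 1" then "2 3": index 1 is skipped entirely
    [1, null, 3].forEach(function (v, i) { console.log(i, v); });
    // logs "0 1", "1 null", "2 3": the callback has to handle the null itself
    console.log([1, , 3].map(function (v) { return v * 2; }));      // [2, <hole>, 6]
    console.log([1, null, 3].map(function (v) { return v * 2; }));  // [2, 0, 6] -- null coerced to 0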

    @blakeyrat said:

    Cope?

    I am, of course, compensating for this in my code. However, changing a bad spec is also sane. To quote one of my favourite scientists:

    An old adage says, "It is better to light a candle than to curse the darkness". I think we better do both.

    @EvanED said:

    A counterargument: in some ways, that's more an argument that the name is bad than that the spec is bad. JSON was made as a format for client<->server exchanges at a time when JavaScript on the server wouldn't have been a very realistic option; that means that it was meant as a format to be used in a cross-language situation. And maybe JS undefined doesn't map into JSON very nicely, but if undefined were present in JSON then that wouldn't map into non-JS users very nicely. So you're just moving the problem around.

    This may be the case; frustratingly, there is no documentation (at least that I can find) for this in the spec. Searching the ECMA doc for "undefined" returns nothing. My googling has failed me in that most of the results I find are about people getting undefined when they're parsing--nothing about undefined and why JSON lacks it.

    @EvanED said:

    Edit: Actually I guess maybe this isn't as good of an argument as I initially put it. It's probably usually easier to avoid using undefined in situations where you are in other languages and don't want it than it is to compensate for not having it when you're in JS and do.

    There may be equivalents in other languages, or mayhap they just have to do the null dance.


  • ♿ (Parody)

    @VaelynPhi said:

    There may be equivalents in other languages, or mayhap they just have to do the null dance.

    I think most have to treat sparse data structures specially. Native datatypes that have array-like indexing generally don't support this.



  • Assuming you have central functions that serialize and unserialize your data, you could always store undefined as {"undefined":true} or something. Better than a magic string, anyway....
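
    For instance, something like this (a rough sketch; the sentinel and the function names are just placeholders):

    var UNDEF = { "undefined": true };
    function serialize(data) {
        return JSON.stringify(data, function (key, value) {
            return value === undefined ? UNDEF : value;   // mark missing/undefined entries
        });
    }
    function unserialize(text) {
        return JSON.parse(text, function (key, value) {
            // returning undefined from a reviver removes the entry again,
            // which recreates the hole in an array
            return (value && value["undefined"] === true) ? undefined : value;
        });
    }
    console.log(serialize([1, , 3]));               // '[1,{"undefined":true},3]'
    console.log(unserialize(serialize([1, , 3])));  // [1, <hole>, 3]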



  • Whine all you like; it is not defined in the spec and therefore isn't a valid case. I suppose it must be impossible to write some code on your end to deal with this situation.

    Especially considering I've recently had Android 2.x bugs hit me with JSON.parse blowing up on an empty string.

    We used to have some twat that used almost the same reasoning with a jQuery plugin; the conversation went thus:

    Twat - "I would expect it to work with that markup" (not markup that the plugin would expect).
    My Manager - "I would expect it to only work as documented, nothing more."


  • Discourse touched me in a no-no place

    @lucas said:

    it is not defined

    So it's...

    ... undefined



  • Q: How many non-values does a JavaScript programmer need?

    A: three: null, undefined, and FileNotFound



  • @accalia said:

    I would of course prefer it to simply not serialize the value at all (as it does for objects IIRC) but that doesn't work with arrays (otherwise indexes would change)

    I can see absolutely no reason why passing

    [
        "zero",
        "one",
        ,
        "three"
    ]

    to JSON.stringify should make it emit

    [
        "zero",
        "one",
        null,
        "three"
    ]

    instead. I'm with @VaelynPhi - this looks like sloppy spec writing to me.

    Edit: JSON.parse doesn't accept the first form either.

    The problem is not that the JSON spec is inconsistent, just that it seems a shame for the tidy Javascript sparse array notation to be missing from it.



  • because you have an empty array element?



  • No, you don't. You have a missing array element. Null is a value and setting an array element to null is not the same thing as failing to set it at all.



  • I believe they (whoever did the JSON spec) had to compromise this because of other languages that don't define undefined. Many C-based PLs™ do have an undefined state, but no value, and the compiler usually complains about this.
    Anyway, I've never found myself in a situation where this was a problem.



  • Sure, JSON was designed as a simple data interchange format, and trying to turn it into ASN.1 would have been a horrible mistake.

    That said, it seems to me that allowing the use of adjacent commas to skip elements in array definitions is quite tidy syntax and would never make JSON less useful.

    Obviously it would be very poor design to rely on the distinction between empty and missing array elements when designing an API that might be used via languages that don't make that distinction. But many languages do make it, and it would be nice to be able to use JSON to persist or transfer sparse arrays when working with one of those, if that's what needs to be done.

    To address @VaelynPhi 's original use case (a requirement to persist sparse arrays, some of whose elements might have null values): it seems to me that the least ugly way to press JSON into service would be to reserve some specific JSON-representable data value as a placeholder for undefined array elements, and use replacer and reviver functions passed to JSON.stringify() and JSON.parse() to deal with that as required.

    The choice of a suitable placeholder value would depend on what else might ever need to be encoded in the same chunk of JSON. In extremis you might be tempted to use a GUID, though in most cases I expect an empty but non-null object {} or array [] would be a natural fit.



  • I appreciate that but I can't see where this is going to be a massive problem. The behaviour seems sensible to me.



  • @Eldelshell said:

    Anyway, I've never found my self in a situation where this was a problem.

    @lucas said:

    I can't see where this is going to be a massive problem. The behaviour seems sensible to me.

    Filed under: Works on my machine


  • Discourse touched me in a no-no place

    @flabdablet said:

    But many languages do make it, and it would be nice to be able to use JSON to persist or transfer sparse arrays when working with one of those, if that's what needs to be done.

    Ah yes, another small step on the road to Hell XML! Carry on…



  • @lucas said:

    Whine all you like; it is not defined in the spec and therefore isn't a valid case. I suppose it must be impossible to write some code on your end to deal with this situation.

    This reaction is part of what I think is wrong with programmers in general. No, it's not impossible to deal with in the code; in fact, I've already dealt with it. However, the code that deals with it is longer, more complicated, and also slower. It is a small addition, but will naturally grow linearly with the array size.

    I am definitely not "whining". If you honestly think I am, it could be because you have failed to understand the point of my OP: JSON does not support Javascript Objects fully, and the omission of the undefined keyword is a small omission with large consequences. To add insult to injury, the specs don't even mention the spec writer's reasons for omitting it. So, either they didn't think of it at all, or there is some other convoluted explanation that boils down to the spec being poorly written in this respect. The spec spells out why functions are omitted (basically, they're hard to serialize). It also omits native data structures and explains that too. However, a basic component of Javascript is completely skipped and no reasoning is given--not even a mention.

    If you truly think remarking on this omission is whining, then YATRWTF here.

    @lucas said:

    Especially considering I've recently had Android 2.x bugs hit me with JSON.parse blowing up on an empty string.

    What object is the empty string representing? JSON.parse doesn't hesitate to puke errors for anything else, and as far as I know it throws an error if passed an empty string, as it well should. An empty string is not valid JSON. It would be nice if it had a validate function, or returned an error object, or had a continuation-passing form à la node to prevent having to use try/catch blocks with it, but I suppose that's just more whining, huh?

    @lucas said:

    Twat - "I would expect it to work with that markup" (not markup that the plugin would expect).My Manager - "I would expect it to only work as documented, nothing more."

    Except that I have already programmed according to the behaviour in the spec. I'm not sitting here waiting for it to magically start working how I think it should. And I'm certainly not telling my clients that I can't complete a project because JSON's spec isn't how I think it should be.

    Was any of that reply supposed to be more than an ad hominem?

    @Planar said:

    Q: How many non-values does a JavaScript programmer need?

    A: three: null, undefined, and FileNotFound

    You forgot "WTFIE?", the need for which is thankfully slowly eroding.

    @flabdablet said:

    instead. I'm with @VaelynPhi - this looks like sloppy spec writing to me.

    Edit: JSON.parse doesn't accept the first form either.

    The problem is not that the JSON spec is inconsistent, just that it seems a shame for the tidy Javascript sparse array notation to be missing from it.

    Eureka!

    @Eldelshell said:

    I believe they (whoever did the JSON spec) had to compromise this because of other languages that don't define undefined. Many C-based PLs™ do have an undefined state, but no value, and the compiler usually complains about this. Anyway, I've never found myself in a situation where this was a problem.

    Perhaps there's a reason for the way this was handled, but, again, it's one of those things that isn't addressed anywhere I can find. I would expect the JSON parsers in different languages to handle those differences according to the languages, not for them all to default to a behaviour which might not even be supported in the language (ie, is there a strict "null" equivalent everywhere?).

    @flabdablet said:

    To address @VaelynPhi 's original use case (a requirement to persist sparse arrays, some of whose elements might have null values): it seems to me that the least ugly way to press JSON into service would be to reserve some specific JSON-representable data value as a placeholder for undefined array elements, and use replacer and reviver functions passed to JSON.stringify() and JSON.parse() to deal with that as required.

    This is true, but I'm just putting in nulls since I don't need to distinguish between empty and unspecified entries. The main woe is merely how it impacts the functional code.

    @lucas said:

    I appreciate that but I can't see where this is going to be a massive problem. The behaviour seems sensible to me.

    One of the reasons for expecting the undefined index behaviour is to match the behaviour of a database returning entries with ids: the sparseness in the database and the sparseness in the array are intentional, so that when entries are added or removed, the ids of the entries match both in clients initialized before those operations and in clients initialized after. If I have a client index like so:

    [1,2,3]

    And the client deletes the second key:

    [1,,3]

    The ids on the page are 0 and 2, but if the empty entry isn't kept, a new client connecting will get this array:

    [1,3]

    Which it will then assign the ids 0 and 1, so messages from one client to another about element 2 have no object to handle them and messages from the second client to the first about element 1 have no handler. Further, if elements get added, then every subsequent element in the second client is off by one, and further deletions make this worse. A solution is to assign them to a hash table instead of an array, but this introduces all kinds of boilerplate where just using an array and keeping the empty entries (and perhaps "defragging" when no clients are connected, or before saving--something a proper database will handle smoothly) is straightforward.
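
    To make the drift concrete (a toy sketch with made-up values):

    var index = [1, 2, 3];                 // the first client sees ids 0, 1, 2
    delete index[1];                       // the first client still addresses items by ids 0 and 2
    var wire = JSON.stringify(index);      // '[1,null,3]' -- the hole became null
    var second = JSON.parse(wire).filter(function (v) { return v !== null; });
    console.log(second);                   // [1, 3] -- the item the first client calls id 2 is id 1 here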



  • Where did you learn to base decisions on the position of an unordered collection? Your example is either obfuscated or really stupid.



  • Convert the array to a hash map before stringify and then convert it back afterwards.

    I don't see the big problem.

    I mean, let's assume you have an entry with an index of 1 and one with an index of 1,000,001. And nothing in between. Using your data model, this would be somewhat inefficient, I dare say. After all, your JSON would sport a million "undefined" entries.

    Might pose a bit of a problem for a sync.



  • @Eldelshell said:

    Where did you learn to base decisions on the position of an unordered collection? Your example is either obfuscated or really stupid.

    Where did you learn that an array is an unordered collection? My example was abstract, true, but pretty specific. Any other details about the implementation would just be fluff. The point is that the semantics of the data are sparse array semantics, and having undefined elements is necessary to preserve those semantics cleanly.

    @Rhywden said:

    Convert the array to a hash map before stringify and then convert it back afterwards.

    I don't see the big problem.

    Without modifying the middleware, which I may or may not be able to do, I cannot. Either way, if I were going to do this, I would just store it as a hash map in the first place (which native JS arrays pretty much are to begin with).

    Using a hash is tedious because it requires not only keeping track of keys but also writing lookup functions to loop through them. An array already has these built in. The only adjustment I've had to make is checking for nulls on the front end and keeping any indexes there aligned with those nulls. This is far less work than rolling all the code for dealing with a hash map.

    @Rhywden said:

    I mean, let's assume you have an entry with an index of 1 and one with an index of 1,000,001. And nothing in between. Using your data model, this would be somewhat inefficient, I dare say. After all, your JSON would sport a million "undefined" entries.

    Might pose a bit of a problem for a sync.

    This is true, but for that to happen in this specific case, users would have to create a million and one entries, then delete all but the first and last of them, all in one session. This is not something they can do automatically (without programming), so I hope their insurance covers carpal tunnel.

    The data store is filtered between clients, so all the nulls get dropped, and the array [1, ... a million minus one undefineds... , 1000001] would become [1,1000001].

    By the time there are enough users for anywhere near those numbers to be feasible in one session, the data store will have moved to a database, where the semantics of a sparse index are already well supported.



  • Erm, if the data store is filtered then I don't really see the point in your complaint...

    Also, my example was just one of the problems. Just imagine a JSON object where every other entry is undefined, and similar issues.

    Also, looking for keys is an issue? Seriously? Take a hash array like this:

    var my_array = [{key:1, data: "baz"},{key:2, data: "foo"}]
    

    And then simply define a prototype like this:

    if (!Array.prototype.getByKey) {
      Array.prototype.getByKey = function (key) {
        // a plain for loop so the match can actually be returned;
        // a return inside a forEach callback would just be swallowed
        for (var i = 0; i < this.length; i++) {
          if (this[i] && this[i].key == key) {
            return this[i];
          }
        }
      };
    }
    

    Done. Simply use my_array.getByKey(1) or something. I mean, even if your original proposal worked you'd still have to deal with the missing entries. Write prototype replacement functions once for the methods you need and you're done.



  • Arrays, like everything else in JavaScript, are hashes.



  • I think

    @VaelynPhi said:

    the omission of the undefined keyword

    is fine; if JSON allowed its arrays to be sparse and used the same notation as Javascript does, it wouldn't need an extra keyword.



  • @VaelynPhi said:

    I'm just putting in nulls since I don't need to distinguish between empty and unspecified entries.

    In that case, you could tidy things up a bit by just accepting the use of null as the JSON placeholder value for undefined JS array elements, and passing JSON.parse() a reviver function that returns undefined whenever it sees a null.
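
    Something like this, say (a sketch; note that a top-level null in the JSON would vanish too):

    var revived = JSON.parse('[1,null,3]', function (key, value) {
        return value === null ? undefined : value;   // returning undefined deletes the entry, leaving a hole
    });
    console.log(revived.length);   // 3
    console.log(1 in revived);     // false -- it's a sparse array again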



  • @Eldelshell said:

    Where did you learn to base decisions on the position of an unordered collection? Your example is either obfuscated or really stupid.

    An array is an ordered collection. That's kind of the point of it.



  • Yeah, I meant unsorted, not unordered. Anyway, the point about relying on an element's index in an array to base decisions on still stands.



  • For extra fun, try this: JSON.stringify(Infinity). The answer is the same for JSON.stringify(-Infinity), of course.



  • @Eldelshell said:

    the point about relying on an element's index in an array to base decisions on still stands.

    It's perfectly reasonable to design something that relies on specific values for array indexes in Javascript or other languages that support sparse arrays.

    In a sparse array, the relationship between the specific value of an array index and the data held in the indexed element can be made as unproblematically reliable as that between the value of a database table's numeric primary key and the rest of the data in the row with that key.

    Yes, this is treating a sparse array as if it were mostly a hashmap. But in Javascript, as in many other languages that support sparse arrays, arrays implement hashmap semantics as well as the indexing, ordering and all-at-once processing you typically get with non-sparse arrays, and using them can result in cleaner and more concise code than generalizing to non-numeric keys and using straight-up hashmaps would do.
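
    For example (the ids here are made up, purely to show the shape):

    var rowsById = [];                     // sparse array keyed by numeric primary key
    rowsById[17] = { name: "widget" };
    rowsById[42] = { name: "gadget" };
    console.log(rowsById[42].name);        // direct lookup by id, hashmap-style
    console.log(17 in rowsById);           // true
    console.log(18 in rowsById);           // false -- presence is distinguishable from absence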



  • @Rhywden said:

    Take a hash array like this:

    var my_array = [{key:1, data: "baz"},{key:2, data: "foo"}]
    if (!Array.prototype.getByKey) {
      Array.prototype.getByKey = function (key) {
        for (var i = 0; i < this.length; i++) {
          if (this[i] && this[i].key == key) {
            return this[i];
          }
        }
      };
    }

    Inner-platforming Javascript in order to avoid its perfectly usable O(1) array lookup operation in favour of a verbose O(n) EAV-style replacement? Very enterprisey.


  • @flabdablet said:

    Inner-platforming Javascript in order to avoid its perfectly usable O(1) array lookup operation in favour of a verbose O(n) EAV-style replacement? Very enterprisey.

    Are you sure that it's O(1)? After all, as already stated, Javascript arrays behave a lot like hashes.

    Take this, for example:

    var largeObject = {};
    var smallObject = {};
    
    var x, i;
    
    for (i = 0; i < 10000000; i++) {
      largeObject['a' + i] = i;
    }
    
    for (i = 0; i < 100; i++) {
      smallObject['b' + i] = i;
    }
    
    console.time('10k Accesses from largeObject');
    for (i = 0; i < 10000; i++) x = largeObject['a' + (i % 10000000)];
    console.timeEnd('10k Accesses from largeObject');
    
    console.time('10k Accesses from smallObject');
    for (i = 0; i < 10000; i++) x = smallObject['b' + (i % 100)];
    console.timeEnd('10k Accesses from smallObject');
    

    That yielded the following in Chrome on an i5 3570

    10k Accesses from largeObject: 3.079ms
    10k Accesses from smallObject: 1.143ms
    

    Not to mention that I'm not sure where you get this "perfectly usable" from if his scenario doesn't yield the "perfectly usable" arrays... But, hey, carry on with your "my statements are not supported by reality but I'll scoff at your notions anyway"-sentiment.



  • @Rhywden said:

    Are you sure that it's O(1)?

    Considering that you had to make your large object 100,000 times bigger than your small object in order to make access to it 3 times slower, Javascript hashmap access is clearly a lot closer to O(1) than to O(n).

    @Rhywden said:

    Not to mention that I'm not sure where you get this "perfectly usable" from if his scenario doesn't yield the "perfectly usable" arrays...

    The OP is about deficiencies in JSON arrays, not JS arrays.



  • A near 200% increase does not look like O(1) to me.

    @flabdablet said:

    The OP is about deficiencies in JSON arrays, not JS arrays.

    Yes, your point being? If you can't use the "perfectly usable" JS arrays due to deficiencies in your data source, then I don't quite see why you criticise me for posting an alternative?

    Seriously, that's like lambasting someone for not using a hammer when you only have screws.



  • @Rhywden said:

    If you can't use the "perfectly usable" JS arrays due to deficiencies in your data source

    But you can, and @VaelynPhi already is. And Javascript's JSON.stringify() and JSON.parse() do take replacer and reviver function parameters whose entire point is to let you define your own mappings between JS objects and JSON's available data structures when the latter are not natively sufficient.

    Also, to quibble about whether native JS element retrieval is truly O(1) as opposed to O(log n) or something in between is completely beside the point.

    Your proposed alternative, as well as requiring a very ugly EAV-style data layout complete with inner-platform key: and data: keys, implements an O(n) retrieval operation which will in general perform much worse than native. Go ahead and benchmark 10,000 calls to getByKey() for a 10,000,000 element array if you truly can't see that by inspection.



  • You're the one who brought big-O-notation into the discussion in the first place. So you don't get to whine about how I'm using your argument and running with it. Don't want to talk about something? Then don't mention it in the first place.

    Secondly, if the stringify-method does indeed accept such functions, then I don't understand the OP's complaint at all. I mean, we're all familiar with the concept that casting from one data type to another doesn't always happen with 100% accuracy and without any data loss.

    And I never said that my alternative was the best.


  • Discourse touched me in a no-no place

    @Rhywden said:

    And I never said that my alternative was the best.

    We are all very glad of that, but if I see that code in production, I will delete it with prejudice.



  • @Rhywden said:

    So you don't get to whine about how I'm using your argument and running with it.

    Pointing out the overwhelming difference between your O(n) proposal and whatever near-enough-to-O(1) algorithm Javascript uses natively doesn't count as "whining".

    @Rhywden said:

    I don't understand the OP's complaint at all.

    Given the ham-fisted nature of your proposed non-solution, that doesn't surprise me.



  • var foo = function(){};
    console.log(JSON.stringify(foo)); // undefined
    


  • Yeah, doesn't help the OP; any attempt to make JSON.stringify() put something JSON can't represent into a JSON array element makes it put a null there instead.
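
    For instance (sketch):

    console.log(JSON.stringify([undefined, function () {}]));
    // '[null,null]' -- every unrepresentable array element becomes null
    console.log(JSON.stringify({ a: undefined, b: function () {} }));
    // '{}' -- as object properties they are simply dropped instead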



  • A sparse array is something that probably shouldn't exist in your data interchange format. It's pretty much an artifact of the flexibility of dynamic languages and would be difficult for strongly-typed languages to handle. I think it's a good idea that the JSON spec decided to leave out the ability to let the scourge of undefined leak out of the layer it is being used in.

    I'm all for dynamic languages in web browsers. But creating data interchanges that can only be properly consumed by languages that understand sparse arrays is a time bomb waiting to explode. The accepted way to serialize the type of data that JavaScript can handle as a sparse array is a collection of name-value pairs.



  • @Jaime said:

    The accepted way to serialize the type of data that JavaScript can handle as a sparse array is a collection of name-value pairs

    So, like this?



  • That's the JSON syntax for an object with three properties. Sure, it works if both ends are dynamic; but pretty much every strongly typed language on earth is going to have a problem deserializing that. My preference would be something like:

    {
        "things": [
            {"key": "0", "value": "zero"},
            {"key": "1", "value": "one"},
            {"key": "2", "value": "two"}
        ]
    }
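
    Going between that shape and a JS sparse array is only a few lines either way (rough sketch; the helper names are invented):

    function toPairs(sparse) {
        var pairs = [];
        sparse.forEach(function (value, key) {      // forEach skips the holes
            pairs.push({ key: String(key), value: value });
        });
        return { "things": pairs };
    }
    function fromPairs(wrapper) {
        var sparse = [];
        wrapper.things.forEach(function (pair) {
            sparse[Number(pair.key)] = pair.value;
        });
        return sparse;
    }
    console.log(JSON.stringify(toPairs(["zero", "one", , "three"])));
    // '{"things":[{"key":"0","value":"zero"},{"key":"1","value":"one"},{"key":"3","value":"three"}]}'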
    


  • And here I thought I'd just check in and reply to a couple comments...

    Do I get a flamewar badge? Maybe just a small one?

    I'm just taking a short break from work, or I'd reply to everyone now, so... I'll be back when I have more time. I think @flabdablet has hit some of the main points I'd make, though. Something I might have omitted earlier, or at least that seems to have dropped from the conversation, is that the back end that's sending this array is on node.js, so it's using Javascript arrays too.

    I will toss out that having an array whose entries have internal keys was a much slower lookup for large arrays both in node and on FF, though Chrome had a much smaller increase in time for lookups. A hash ("object", javascriptwise) was slightly slower in my tests, but not enough to reject; however, arrays in javascript being objects with a few bells and whistles, I opted to just use the array and adapt to it. Since I'd remove elements from a hash the same as an array (delete), it doesn't make much difference to the code, but the arrays have array functions, where with a hash I'd have to use call/apply and hope some detail of their implementation didn't break on my hashes.


  • Discourse touched me in a no-no place

    @Rhywden said:

    A near 200% increase does not look like O(1) to me.

    It's not, strictly, O(1), but it's a hell of a lot closer to O(1) than to O(n), or you'd have been seeing a several-orders-of-magnitude increase in time.

