Best posts made by turingmachine
-
RE: The Wibble
Are you sure it's the AI and not some stupid markdown thingy? I can reproduce the effect in this very forum software:
- Ten
- Nine
- Eight
- Seven
- Six
- Five
- Four
- Three
- Two
- One
-
RE: FedEx Tracking XML Schema
@Gąska said in FedEx Tracking XML Schema:
I can think of at least two: child elements and namespaces. No matter how much the edgy teenagers in charge of modern software development would want you to think otherwise, XML and JSON are not isomorphic.
You are right, JSON and XML are not isomorphic - but not because it is hard to map XML to JSON. Rather, as soon as you fix one mapping from either of the two to the other, you restrict the allowed documents of the other to an arbitrary subset. For example, `[null]` will almost never map to some XML document (unless you define some very strange mapping - but please tell me why `[null]` should map to `<foo/>` other than because you arbitrarily defined it like that). But you can get to a subset of JSON which matches XML without being too verbose. For example, we could define that a JSON object has exactly one field, which gives the tag name and contains an array of the element's children. Attributes are then represented as a special case by using the first element of that array as an object of key/value pairs for the attributes. Thus, your example becomes:
```json
{
  "root": [
    {},
    {
      "element": [
        { "stuff": "goes here" },
        {
          "childrenElements": [
            {},
            { "childElement": [ { "id": "1" } ] },
            { "childElement": [ { "id": "2" } ] }
          ]
        }
      ]
    }
  ]
}
```
Now this is still much more verbose than your XML, but at least we are not repeating `"@tag"` all the time. We can do even better: if we define attributes to start with an `@`, we can get rid of these strange empty objects:

```json
{
  "root": [
    {
      "element": [
        {
          "childElements": [
            { "childElement": [], "@id": "1" },
            { "childElement": [], "@id": "2" }
          ]
        }
      ],
      "@stuff": "goes here"
    }
  ]
}
```

If you remove the whitespace from this and from the XML, the JSON version is even a few characters shorter. It is still not exactly what a normal person would write, but it looks at least a little more natural. The main problem I see with it, however, is that it cripples your JSON just so you can use another technology, namely XML Schema. You can't use numbers, booleans or nulls now without thinking about how they will be mapped back to XML for validation. Just take a string representation? But what if a field is not allowed to be null and contains the string `"null"`? Or what if you don't want such a fixed structure for your document? Dealing with XML namespaces and child elements is quite easy compared to that. So yeah, it seems like using XML validation for JSON is using the wrong tool for the job - and if you do it anyway, make sure the next developer who has to deal with that mess doesn't know where you live.
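To make the attribute-prefixed convention concrete, here is a minimal Python sketch (the function name and structure are my own, not from the post) that turns such a JSON object back into an XML string:

```python
import json

def to_xml(obj):
    """Convert the @-prefixed JSON convention into an XML string.

    Each object has exactly one non-@ key (the tag name) whose value is the
    list of child objects; keys starting with "@" become XML attributes.
    """
    tag = next(k for k in obj if not k.startswith("@"))
    attrs = "".join(f' {k[1:]}="{v}"' for k, v in obj.items() if k.startswith("@"))
    children = "".join(to_xml(child) for child in obj[tag])
    # Use the self-closing form for childless elements, like <foo/>
    return f"<{tag}{attrs}>{children}</{tag}>" if children else f"<{tag}{attrs}/>"

doc = json.loads('''
{ "root": [ { "element": [ { "childElements": [
  { "childElement": [], "@id": "1" },
  { "childElement": [], "@id": "2" } ] } ], "@stuff": "goes here" } ] }
''')
print(to_xml(doc))
# <root><element stuff="goes here"><childElements><childElement id="1"/><childElement id="2"/></childElements></element></root>
```

Going the other direction (XML to this JSON form) is just the inverse walk, which is exactly why only this restricted subset of JSON round-trips.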
-
RE: CodeSOD collection
@Bulb said in CodeSOD collection:
Yes, provided pagination isn't getting in the way. I've seen a couple systems that load the first page of data only, then re-sort or filter client side. Cue puzzled user looks.
You mean like AWS when listing ECS services? Well, I guess Amazon is just a small startup who can't afford to do it right...
-
RE: jsdoc 2.0
@zad0m I read a few paragraphs and... this feels so wrong, I am sorry. Let's take this:
when developers place a .dot to access data on an object, and don't see expected properties or methods, they recognise that the type is wrong and return to fix it.
Well... if you have write-only code, that might be true. As soon as you start changing things, I very much want the compiler to tell me about each and every place in the code I need to change, too. A language with a strong type system allows you to fearlessly refactor stuff, because you will get a type error at compile time instead of a runtime error and can fix it immediately. And it takes a lot less time for a compiler to check all my code than it takes me to check the part of the code I think I need to look at.
-
RE: Fast CSV processor
How about xsv? https://github.com/BurntSushi/xsv It is written in Rust, so it seems quite fast.
-
RE: How to find "common" types of exceptions in C#?
@Bulb
Too bad the annotations in Go are case-sensitive until they aren't: https://go.dev/play/p/_uQErbkUc6m
But yeah, they are the correct way to go. They just need to be implemented as exact matches instead of the case-insensitive matching Go does. (And yes, we already had a bug because of that: we had two properties with names differing only in casing, and Go selected the wrong struct field to serialize to. Just comment out `MyData2` and observe it failing to parse the JSON.)
-
RE: WTF Bites
@Tsaukpaetra said in WTF Bites:
Isn't that bad for random access though?
Yes. It's more secure for streams, but can't do random access at all. That's why use case matters a lot when choosing the encryption method suite.
You can actually decrypt a random block quite easily: run that block through the block cipher's decryption and XOR the result with the ciphertext of the previous block to get the plaintext: CBC Decryption
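A toy sketch of that random access (using byte-wise XOR with a fixed key as a stand-in for a real block cipher - purely illustrative, not secure): decrypting block i only touches ciphertext blocks i-1 and i.

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy "block cipher": XOR with a fixed key. NOT secure - it only stands in
# for AES here so the CBC chaining structure stays visible.
def enc_block(key, block): return xor(key, block)
def dec_block(key, block): return xor(key, block)  # XOR is its own inverse

def cbc_encrypt(key, iv, blocks):
    out, prev = [], iv
    for p in blocks:
        c = enc_block(key, xor(p, prev))  # chain plaintext with previous ciphertext
        out.append(c)
        prev = c
    return out

def cbc_decrypt_block(key, iv, ciphertext, i):
    """Random access: block i needs only ciphertext blocks i-1 and i."""
    prev = iv if i == 0 else ciphertext[i - 1]
    return xor(dec_block(key, ciphertext[i]), prev)

key, iv = b"K" * 8, b"\x00" * 8
plain = [b"block001", b"block002", b"block003"]
ct = cbc_encrypt(key, iv, plain)
print(cbc_decrypt_block(key, iv, ct, 2))  # b'block003', without decrypting blocks 0 and 1
```

CBC *encryption* is still inherently sequential, which is the half of the trade-off the quoted comment was right about.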
-
RE: Error'd Bites
@Bulb said in Error'd Bites:
@Zerosquare It won't prevent genuine discrimination, but if the first impression is left to the interview, it will be fairer than if it's made primarily from a photo.