...did Kubla Khan a stately pleasure-dome decree...


  • Discourse touched me in a no-no place

    @ScholRLEA Even there, they're quite a lot more complicated than you make out because of the whole content negotiation thing and the fact that a server isn't obligated to map them to files either. The initial implementation did that, but since when was the initial implementation ever required to be the last word on a topic?



  • @dkf OK, I fucked up there. Point granted.


  • Discourse touched me in a no-no place

    @ScholRLEA 'Tis OK. The point is that where we are now is quite a lot closer to the ideas that inspired Xanadu than you seem to be making out. Some of the stuff isn't there, and some things have definitely worked out differently to how anyone originally expected, but that's still cool.


  • I survived the hour long Uno hand

    @dkf said in ...did Kubla Khan a stately pleasure-dome decree...:

    and the fact that a server isn't obligated to map them to files either

    It amazes and delights me to realize we often have URIs that point to a virtual endpoint whose content is constructed on the fly and served up on demand.
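    That idea fits in a few lines of standard-library Python. A minimal sketch (the `/now` path is made up for illustration): a URI whose response is constructed per request and never read from any file.

```python
# A URI backed by no file at all: the content is generated on demand.
# The "/now" path is purely illustrative.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/now":
            # Construct the resource on the fly, freshly for every request.
            body = datetime.now(timezone.utc).isoformat().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To serve: HTTPServer(("localhost", 8000), DynamicHandler).serve_forever()
```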


  • Discourse touched me in a no-no place

    @Yamikuronue said in ...did Kubla Khan a stately pleasure-dome decree...:

    It amazes and delights me to realize we often have URIs that point to a virtual endpoint whose content is constructed on the fly and served up on demand.

    It's all a shared hallucination. But then so much else of society is too. 😄



  • It occurs to me that I can clear up a lot of the confusion here by simply pointing out that Xanadu isn't (primarily) a client-server system, but a peer-to-peer system. Most of the talk of 'Xanadu servers' is a reflection of the technical limits of the systems at the time they started out - it was originally envisaged as a network of mainframes serving thousands of terminals and small computers, but the network itself, between the different systems, was P2P. The plan was that as small computers got powerful enough to run the Xanadu system themselves, they would join the P2P network directly as well. It was meant to be a single large virtual document system that would replace the local file systems entirely.


  • Discourse touched me in a no-no place

    @ScholRLEA said in ...did Kubla Khan a stately pleasure-dome decree...:

    The plan was that as small computers got powerful enough to run the Xanadu system themselves, they would join the P2P network directly as well.

    The web was very much envisaged to work that way at the start too. It's not done that way because of other effects, such as what happens when a resource becomes very popular, or what happens when someone tries to hack one of the participant systems. A successful Xanadu would have had the same challenges to overcome; similar technical solutions would likely have evolved.



  • @dkf You're right about the security and related issues, though they actually did at least try to address them (unsuccessfully).

    As for the web being meant as a peer-to-peer system... yes and no. It was always peer-to-peer in the sense that any system could run an HTTP server and (if it had a stable network presence with a static address) serve a site publicly, but it always had the separation of 'server' and 'client', with only certain kinds of requests going to the server and certain kinds of responses going to the client. While it was always possible to use HTTP for more than just serving a web page (since HTTP/1.1 it has supported full CRUD semantics through the PUT, GET, POST, and DELETE methods, even if it took developers ages to notice that), it still had that separation of server and client, at least for a given connection.

    Still, the key thing here is that it was meant to replace the file system completely. No files. No server. No client. No running all user actions through a browser. Just a document system that applications could use to retrieve and store data, running transparently with no significant difference between local and remote copies except the retrieval time (well, and any possible connection issues) from the application's POV, and the same transparency between local and remote (or partially remote) applications from the user's POV.

    Could it have worked, given hardware that was up to running it? Not sure. Would it make a difference? Not sure there, either. Is it worth studying to see what could still be done with the ideas? I think so, but I can understand if you disagree.
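    The CRUD mapping mentioned above is small enough to sketch. Here is an assumed in-memory resource store (paths and semantics are illustrative, not any particular API) driven purely by HTTP methods; POST is omitted for brevity.

```python
# Sketch of CRUD over HTTP methods: PUT creates/replaces, GET reads,
# DELETE removes. The store and paths are made up for illustration.
from http.server import BaseHTTPRequestHandler

STORE = {}  # path -> bytes

class CrudHandler(BaseHTTPRequestHandler):
    def _reply(self, code, body=b""):
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):     # Read
        if self.path in STORE:
            self._reply(200, STORE[self.path])
        else:
            self._reply(404)

    def do_PUT(self):     # Create or replace at a client-chosen URI
        length = int(self.headers.get("Content-Length", 0))
        STORE[self.path] = self.rfile.read(length)
        self._reply(204)

    def do_DELETE(self):  # Delete
        self._reply(204 if STORE.pop(self.path, None) is not None else 404)
```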


  • Discourse touched me in a no-no place

    @ScholRLEA said in ...did Kubla Khan a stately pleasure-dome decree...:

    it still had that separation of server and client, at least for a given connection.

    That's not something to fixate upon. Any time you use TCP you get that arrangement; it is very difficult to make networking function any other way (especially in a timely fashion and with any degree of reliability). While you can do things in different ways, it's really much more complicated.



  • I suspect that the so-called 'no copying' rule confuses a lot of people, too. It would be better to describe it as the rule 'transclusion, not duplication'. That is to say, the system can make all the copies it needs, for caching and backups and so on, but that should be transparent to the user - a cut and paste operation would actually create a transcluded link to the original, automatically, but to the users it would be no different than if it were actually two copies of the same text in different documents (unless the application specifically showed that connection to them for some reason). Conversely, the virtual address of a datum may refer to a different physical location depending on whether the system has reason to move it, if the system has been rebuilt or copied to a different drive, or even if it is on a different system entirely. The virtual address remains the same regardless of location.
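    That rule can be sketched with a toy data model (an assumed illustration, not the real Xanadu design): an append-only content store, with documents holding links into it, so that "pasting" adds a transclusion link instead of a second copy of the bytes.

```python
# Toy model of "transclusion, not duplication" (assumed, illustrative):
# content lives once in an append-only store; documents are lists of links.
CONTENT = []  # append-only store; a virtual address is just an index here

def publish(text):
    CONTENT.append(text)
    return len(CONTENT) - 1          # stable virtual address

def transclude(address, start, end):
    return (address, start, end)     # a link, not a copy

def render(document):
    # A document is a sequence of links; flattening happens on demand.
    return "".join(CONTENT[a][s:e] for a, s, e in document)

addr = publish("In Xanadu did Kubla Khan a stately pleasure-dome decree")
doc_a = [transclude(addr, 0, 9)]   # shows "In Xanadu"
doc_b = [transclude(addr, 0, 9)]   # "pasted" elsewhere: a link, not a copy
# Both documents display the same text, yet only one copy exists in the store.
```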


  • Discourse touched me in a no-no place

    @ScholRLEA said in ...did Kubla Khan a stately pleasure-dome decree...:

    I suspect that the so-called 'no copying' rule confuses a lot of people, too.

    You're wanting it to mean “this identifier, A, that identifies this thing, X, will always identify the same thing”. Which is a good principle, despite the massive complexity that surrounds the concept “same”. In particular, some people want it to mean “exactly the thing I am looking at right now” and some people want it to mean “this modifiable thing which I am sharing”, and there's probably many other variations. Most people are not good at appreciating the consequences of this vital distinction, but without understanding it, you really can't comprehend what is being identified (and hence the meaning of the identifier and what it identifies).

    Not a trivial problem at all.
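    The two readings of "same thing" can be sketched concretely (names and layout are made up for illustration): a frozen version identifier pins one immutable thing, while a head identifier is a mutable indirection tracking whatever is current.

```python
# Sketch of the two identifier meanings: frozen version vs. mutable head.
VERSIONS = {}   # version_id -> content (immutable once written)
HEADS = {}      # head_id -> version_id (the only mutable piece)

def commit(head_id, content):
    version_id = f"{head_id}@{len(VERSIONS)}"
    VERSIONS[version_id] = content
    HEADS[head_id] = version_id
    return version_id

def resolve(identifier):
    # A head identifier dereferences through HEADS; a version id is direct.
    return VERSIONS[HEADS.get(identifier, identifier)]

v1 = commit("doc", "first draft")
commit("doc", "second draft")
# resolve(v1) is "exactly the thing I was looking at";
# resolve("doc") is "this modifiable thing which I am sharing".
```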


  • area_pol

    @ScholRLEA said in ...did Kubla Khan a stately pleasure-dome decree...:

    NFS file system is meant for serving static content

    Yes, exactly this. Maybe we understand "static" differently.
    I mean "static" as "retrieved from storage, not created dynamically in response to the request". Exactly like a file system.
    From your previous posts, I got the impression that Xanadu was supposed to be a distributed append-only database.

    As opposed to the current web, where many responses are created dynamically by a program every time you request them. Many services rely on this architecture (shops, chats, games).
    I don't think Xanadu had any concept of dynamically generated responses, or did it? You probably know better.



  • @Adynathos Ah, I see where the confusion lies, then, yeah. Hmmn...

    First off, most of those things would not be part of what a Xanadu document system would itself do, no. Why not? Because that would be application-level, not service-level. They aren't part of HTTP either, though many HTTP servers can support them. Many of the other things that currently require additional support, such as database-access web services, would simply be... well, Xanadu links themselves.

    However, Xanadu links can point to other Xanadu links, and there are types of links which are indeed dynamically generated, or which can automatically update to direct to the most recent version of a document (the equivalent of a VCS repository's HEAD) rather than to a specific datum. The existing data is still there, as are the links to it, but unlike other types of links, the indirect links can be changed.

    Xanadu documents are dynamic, period; they aren't the data themselves, they are a series of links to data that the application can then rearrange as needed, which can include using indirect links or generating new links themselves. This content can be cached by the Xanadu document system, and even pre-flattened if the application requests it that way, but the heart of Xanadu is the use of meta-data to manipulate, present, save, and update the data without actually changing it in an irreversible manner.
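    The indirect-link idea above can be sketched in a few lines (an assumed toy model, not the real Xanadu link types): a document mixing a permanent link to fixed data with an indirect link that always resolves to the latest version, like a VCS HEAD.

```python
# Toy model of direct vs. indirect links (assumed, illustrative).
DATA = {"v1": "draft", "v2": "final"}    # existing data never changes
LATEST = {"report": "v1"}                # the only mutable indirection

def render(document):
    # Indirect ("latest", name) links resolve at render time;
    # direct ("fixed", version) links never change meaning.
    out = []
    for kind, ref in document:
        out.append(DATA[LATEST[ref]] if kind == "latest" else DATA[ref])
    return " ".join(out)

doc = [("fixed", "v1"), ("latest", "report")]
first = render(doc)
LATEST["report"] = "v2"                  # publish a new version
second = render(doc)                     # only the indirect link moves
```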


  • area_pol

    @ScholRLEA OK, so you could express many applications in terms of this distributed document store.
    For example, an email application would be creating a document which is accessible only to the receiver and sender.
    That would indeed be elegant, to have various applications working on the same underlying architecture :)

    Since a lot of development has been done in the area of distributed/cloud databases, we may even now have the technical ability to create such a thing.


  • Discourse touched me in a no-no place

    @Adynathos said in ...did Kubla Khan a stately pleasure-dome decree...:

    Since a lot of development has been done in the area of distributed/cloud databases, we may even now have the technical ability to create such a thing.

    The big question is where the seams between things would turn up. Things like the imperfectness of networks and the general scumminess of some people have real impacts, and it's not clear how such a system would behave in the wild.



