Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break



  • This past Tuesday, I had what I still am hoping was a genuine brainwave rather than yet another entry for the Bad Idea Thread. I posted it in five separate long-ish posts on the OSDev.org forum, to very, very little response.

    The idea is actually something I have been mulling over for quite a while now, but this was the first time that I sat down to explain it. The idea - or cluster of ideas, really - relates to a somewhat off-the-wall approach toward version control and the staging of a program's development and testing, and to how things like type predicates for variables and function parameters could be applied in a stepwise, scriptable fashion. The ideas rely on a somewhat unusual (though not entirely new) method of representing program code, one that was common in the past but which I mean to use in a rather different fashion than any previous one I know of.

    It needed that many posts because in order to explain the ideas themselves, I first needed to explain, as best I could, the way Project Xanadu (yeah, I'm still going on about that, deal with it) was supposed to handle storing data, managing copies of data, and most of all associating different data with each other, in a way that would make sense to a group of low-level systems programmers, without talking down to them.

    Just how well I succeeded at this isn't clear, given the underwhelming number of replies to date (one, which focused on a few really minor points).

    I also sent it to some selected members (past and present) of Project Xanadu. The only one to reply was Ted Nelson himself, and his answer boiled down to, "I'm not sure I am following all of this, but it looks interesting; unfortunately I am really busy right now KTXBYE".

    So, after a recent 'discussion' in the Random Thought of the Day thread where I mentioned all of this in passing, I decided that now, you lucky bastards get to read it, too, starting in the next post in this thread. Aren't you excited?



  • OK, so Solar's rant about the importance of const-correctness made me realize that, once again, I was keeping things in my head that I really needed to get on to (virtual) paper, and ask for opinions on.

    This one is a bit out in left field, to say the least. Strap yourselves in, this is going to be a rough ride, as it intersects language design, compiler design, and some rather unconventional ideas about document systems and hypertext (though ironically enough, it is rooted in the foundational concept of hypertext, as I will explain).

    As should be clear to anyone reading the earlier posts of this thread, I am taking a lot of cues for my language concepts from Scheme, and to a lesser extent Common Lisp and Clojure, though I am adding influences from non-Lisp languages as well - a long list of them, in fact, which includes Forth, Smalltalk, Haskell, APL, Erlang, Python, Ada, and Self (a Smalltalk derivative that eschews classes in favor of prototypes, and has several interesting ideas about both member slots and implementation techniques).

    It's a pretty wildly divergent group, to be sure, and trying to draw from such differing perspectives is, to say the least, confusing.

    But the main inspirations are in the Lisp family, even if the language I am moving toward right now probably won't actually be one - that has yet to be decided, hence the Apophasi experiment.

    Right now, that experiment is on hold pending two preparatory projects, consisting of a comprehensive walkthrough of the interpreter and compiler examples given in the book Lisp In Small Pieces (which I strongly recommend to anyone interested in interpreters and compilers, even if you have little interest in Lisp - it covers several variations on the basic design of a Scheme translator, as well as several language variants), followed by a preliminary exploration of how I intend to write my own code translation engine (note this term). The results of those will influence where I go with Apophasi, and that in turn may change the final design plans for Thelema (so many names, too much bikeshedding... sigh).

    Anyway, let me get to the first thing I need to get written down, which is that in the longer term, I don't intend to use conventional source files and source management techniques. I am going to use them for now, because the tooling I would need to support my intended approach will have to wait on the first few iterations of my language designs so I can then implement it (there are a lot of bootstrapping problems in all this), but I am already trying to find ways to jury-rig something suitable in the short term.

    Please bear with me, I am getting to my point but it is going to take some explaining. Hell, the original point I meant to make will have to wait for a follow up post, because this one is already pretty long. I will post this one now to make sure it doesn't vanish into the ether, then post my explanation of the source code structure ideas, then I will finally get to my ideas about typing, type checking, procedure dispatch, and procedural arguments and returns.



  • OK, let's go over how Xanadu is intended to work, and how I intend to apply the ideas, if not always the methods, thereof.

    As I already said, the point of Xanadu is to replace files. Much of what Nelson & Co. are doing can be done with existing file systems, and in fact several of the iterations of the project ran on top of existing file systems or RDBMS systems, but doing them with existing files involves a lot of ad-hoc work. Xanadu, at least in the file-system based forms, is essentially a library for doing just that, though part of how it does that is to create what amounts to a separate database system on top of the existing ones - which is why the real eventual goal was to implement it in a more stand-alone form, working on the media directly rather than through the file system.

    In the 1988 and 1993 designs of Xanadu (and presumably in OpenXanadu, though despite the name not all the code is exposed yet, AFAICT), there is supposed to be a Back End and a Front End, with the Front End/Back End (FEBE) protocol between them. The BE manages the storage and retrieval of data fragments, both locally and remotely; keeps track of what is where and which things are stored or mirrored locally; and maintains coherence across distributed storage and caching. It is a hairy piece of work, and while its operations are secondary to the goals of Xanadu, it is at the heart of the implementation of those goals.

    My understanding is that the FE handles the decisions about which fragments to request at a given time. Note that the FE is a front end to the applications, not some equivalent of a browser - it is primarily an API for the BE, though it does do some management of the data views as well.

    Presentation to the user is up to the applications themselves, and to the display manager, which is supposed to permit various ways of displaying connections between applications to the user. At this point, you can probably see part of what most people get confused by in all of this, as there is no single 'browser' anywhere in the system.

    Basically, when a new datum is created - whether it is a few sentences/words/individual characters in a 'text document', the value of a 'spreadsheet cell', a 'saved game record', an 'image' or 'picture', an 'audio recording', or whatever - the FE passes it to the BE, which encrypts it in some way and then writes it to some storage medium - possibly as part of a journal that contains data from several other applications and users.

    Along with the data, the FE passes the BE information about the datum and its source. If it is part of a larger 'document' - which is usually going to be the case - it includes information about the document, including a link to the address of any related data, and how they are related. For example, for a 'word processing' application, it might pass a link to the datum which was, at the time of editing, the immediate predecessor of the datum being stored, and note that the new datum was (again, when created) the succeeding item in the larger document.

    The BE catalogs each newly written datum according to the format it is in, its size, the user who created it, the local date and time of creation, the application it originated in, and the encryption type - all things that a conventional file system may or may not record - but also the publication status (and later, publication history), the current (and later, previous) owners/maintainers of the datum, the current location it is stored in, and whether to mirror it elsewhere (which is the default for most things).

    Up to this point, it looks normal. Here is where it changes course a bit.

    The BE generates a permanent address link for the datum, one which is independent of its current location in storage. This is a key point, because the storage location itself is ephemeral as far as the system is concerned - while the datum is meant to be treated as immutable, and the parent copy should never be overwritten, the actual physical image of the datum in the storage medium isn't the datum itself. This is also why, for networked systems, automatic mirroring is the default (and why the encryption - and the fact that the encryption methods can vary from datum to datum, or even copy to copy of the same datum - is important).
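    To make that concrete, here is a toy Python sketch of the separation I mean - permanent addresses on one side, ephemeral storage locations on the other. The names and structure here are my own invention for illustration, not anything from the actual Xanadu code:

```python
# Toy sketch (invented names, not Xanadu's actual data model): permanent
# addresses on one side, ephemeral storage locations on the other.
import uuid

class BackEnd:
    def __init__(self):
        self._data = {}       # permanent address -> immutable datum
        self._locations = {}  # permanent address -> current physical location(s)

    def write(self, datum, location):
        addr = uuid.uuid4().hex       # permanent, location-independent address
        self._data[addr] = datum      # the parent copy is never overwritten
        self._locations[addr] = [location]
        return addr

    def mirror(self, addr, new_location):
        # Relocation and mirroring touch only the ephemeral location table;
        # the address that every link holds stays valid.
        self._locations[addr].append(new_location)

    def read(self, addr):
        return self._data[addr]

be = BackEnd()
addr = be.write("a few sentences from a 'text document'", "local-disk-0")
be.mirror(addr, "remote-node-b")
```

    The point being that `mirror` (and any future relocation) only touches the location table; every link anyone holds to `addr` stays good forever.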

    A large part of this is to abstract away, from the perspective of the FE, the applications, and the user, the process of storing, transmitting, mirroring, and caching the data. As far as everything outside of the BE is concerned, the datum is (or should be) immutable and eternal, approximating a Platonic essence of the idea it encodes. The reality is obviously more complicated, but the system is meant to bend over backwards to maintain that illusion, across the entire 'docuverse' straddling the network.

    (So far, it hasn't quite managed this, and perhaps never will, but in terms of its goals, it goes further than any other system that I know of.)

    Now, you may have noted that I haven't talked about links, hyper or otherwise, yet. This is where things go even further out of the norm, because the Xanadu idea of a 'hyperlink' has nothing much at all to do with the hyperlinks of things like the WWW.

    In Xanadu, there are several types of links, most of which are not directly related to how the datum is presented to the user. The particular kinds of links under consideration right now might be called 'resolution links', which describe the physical location(s) of the data, and 'association links', which store how two or more data relate to each other (these aren't the terms used by the Project, but explaining their terms would take hundreds of pages, and I only know a fraction of the terminology myself). The former are ephemeral, relating to the specific physical storage, and are stored as the equivalent of a FAT or an i-node structure, while the latter are permanent, and have their own resolution links when they are themselves stored.

    Some of the types of association links are:

    • 'span links', which refer to a slice or section out of a datum, allowing just the relevant sections to be referenced in documents or transferred across a network, without having to serve the whole document - the 'whole document' is itself just a series of different kinds of association links.
    • 'permutation-of-order links', which are used to manipulate the structure of the document, creating a view - or collection of views - which can themselves be stored and manipulated. This relates to the immutability of data - rather than changing the data when updating, the FE permutes the order of the links that make up the 'document' or view, and passes that permutation to the BE to record. This, among other things, serves as both persistent undo/redo and as version control.
    • 'structuring links', which describe the layout of the view independent of the data and the ordering thereof. This acts as out-of-band markup, among other things - the markup is not part of the datum itself.
    • 'citation links', which represent a place where a user wanted to record a connection between two ideas. This link associates bi-directionally, and has its own separate publication and visibility which is partially dependent on that of the data - an application, and hence a user, can view a citation link iff they have permission to view both the citation and all of the data it refers to. There may also be 'meta-citations' which aggregate several related citations, but I don't know if that was something actually planned or just something discussed - since citations are themselves data, and all data are first-class citizens, such a meta-citation would just be a specific case of a view.
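    Here is a toy sketch of the span/permutation idea as I understand it (again, invented names, nothing like the Project's real structures): data are immutable, a 'document' is just an ordered list of span links, and every edit is a new permutation that leaves the old ones intact:

```python
# Toy sketch of span and permutation-of-order links (my reading of the
# idea; nothing like the Project's real structures). Data are immutable;
# a 'document' is an ordered list of span links; every edit is a new
# permutation, and the old permutations are all kept.
data = {
    "d1": "Hello, world. ",
    "d2": "Goodbye, world.",
}

versions = []  # each entry is one permutation of (datum_id, start, end) spans

def commit(spans):
    """Record a new permutation; old ones survive as undo/version history."""
    versions.append(list(spans))
    return len(versions) - 1

def render(version):
    """A 'document' is just the concatenation of the slices it links to."""
    return "".join(data[d][s:e] for d, s, e in versions[version])

v0 = commit([("d1", 0, 13), ("d2", 0, 15)])
# 'Editing' reorders and re-slices the links; the stored data are untouched.
v1 = commit([("d2", 0, 7), ("d1", 0, 5)])
```

    Nothing in `data` ever changes; `versions` is simultaneously the undo history and the version control record, which is the point I keep hammering on.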

    It is important to recall that the 'views' in question are views to the applications and the display manager. They can then organize the actual user display based on those views into the data as needed. The same data - or even the same views - may be shown as part of a 'text document' by one application, as a set of spreadsheet cells by another, or composed with some image in yet another. This is why markup is out-of-band, and why structuring links applied to a given set of data are stored for later use by the applications.

    There are still other links: for recording the history of the datum's ownership and publication status; for connecting a data format to one or more means of interpreting it or transposing it into another format; for indirection (to allow for updating of views - since most links available to the FE are immutable, these allow for the equivalent of a VCS repo's 'HEAD' branch, letting applications fetch whatever the latest version of a document is, and separating 'currently published' from 'previously published'); for tracking where copies of a given datum can be found, for the purposes of caching and Torrent-like network distribution; and so forth. Most of those are only for use internally by the BE, though.

    When the new datum is created as part of some new document, a new association link is created to connect it to that document, which is then passed back to the FE for use by the application. The FE then creates a permutation link for the document, incorporating the datum into the document link traces, which is then passed back to the BE for storage.

    Moving on...



  • OK, so I've covered all this stuff about how xanalogical storage is meant to work, so I can pop that off the stack and talk about the plans for my languages, or more particularly, for my compiler and toolchain.

    Basically, my plan is to have an editor that performs the lexical analysis on the fly, and saves the programs not as text, but as a link trace of meta-tokens. The lexical analyzer would still be available as a separate tool, which the editor would be calling as a library, so the compiler could potentially use source code from other editors, but that's going in a different direction than I have in mind.

    What is a meta-token, you ask? Well, in part it is a token in the lexical analysis sense - a lexeme which the syntax analyzer can operate on. The 'meta' part comes from the fact that the datum it references does not need to be a specific text string - it can, in principle at least, be any kind of data at all, provided that the syntax analyzer can interpret it in a meaningful way.

    Also, a meta-token may be associated with more than one value, allowing for alternate representations of the syntactic structure - provided that the syntax analyzer agrees that the different representations have the same meaning. So a meta-token is really a way of associating a representation with a syntactic meaning that was set by the syntax analyzer when the meta-token was created.
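    Here is a minimal sketch of what I mean by a meta-token - one permanent syntactic identity carrying any number of equivalent representations. Everything here is hypothetical illustration, not a design:

```python
# Toy sketch of a meta-token: one permanent syntactic identity, any
# number of equivalent representations. All names are invented.
import itertools

_ids = itertools.count()

class MetaToken:
    def __init__(self, kind, representation):
        self.token_id = next(_ids)    # identity the syntax analyzer links to
        self.kind = kind              # syntactic meaning, fixed at creation
        self.reprs = [representation] # alternate, equivalent representations

    def add_repr(self, representation):
        # In the real system the syntax analyzer would have to agree that
        # the new representation has the same meaning; here we just record it.
        self.reprs.append(representation)

# Renaming a variable globally means changing the representation on one
# token; everything linking to the token sees the new name automatically.
counter = MetaToken("identifier", "cnt")
counter.add_repr("counter")
```

    This is also where the editing distinction I mention below comes from: 'rename globally' touches one token's representations, while 'use a different variable here' means linking to a different token entirely.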

    I expect that you can see why I am talking of a 'syntax analyzer' rather than 'parser'.

    Now, this does complicate the editing a bit - there has to be a way to differentiate between 'change the name/representation of this particular variable globally' and 'change from this variable to a new one or a different one, just for this particular part of the program', among other things. But it also opens up a lot of possibilities that would be a lot less feasible with the conventional 'plain text' model of source code.

    For example, if the editor treats some parts of the program structure as 'markup' rather than 'syntax' - for example, indentation, newlines, delimiters for things like string literals or the beginning and ending of lexical blocks - then the same code could be edited in multiple 'programming languages' without needing an explicit translator - the program is stored as a syntax tree of meta-tokens anyway, so the representation of the program is separate from the 'Platonic essence' of the program the code describes. The source code itself is just a specific presentation of the program.
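    As a toy illustration of the 'same tree, different surface' idea - two renderers over one stored tree, with the delimiters and layout living in the renderer rather than in the program (a Python stand-in, obviously, not the real system):

```python
# Toy sketch: one stored tree, two surface presentations. The delimiters
# and layout live in the renderer ('markup'), not in the program ('syntax').
def render_sexpr(node):
    """Lisp-style presentation of the tree."""
    if isinstance(node, str):
        return node
    op, *args = node
    return "(" + " ".join([op] + [render_sexpr(a) for a in args]) + ")"

def render_infix(node):
    """Infix presentation of the same tree (binary ops only, for the sketch)."""
    if isinstance(node, str):
        return node
    op, left, right = node
    return f"({render_infix(left)} {op} {render_infix(right)})"

# One program, two views - the tree is the program, the text is a presentation.
tree = ("+", "x", ("*", "y", "2"))
```

    `render_sexpr(tree)` and `render_infix(tree)` show 'the same' program in two skins, which is all I mean by editing the code in multiple surface languages.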

    Mind you, it would still be in 'the same' language in the sense that the actual syntax would be the same, just shown in different ways, so it wouldn't quite be all things to all programmers, but it would make things a lot more flexible. And if two analyzers for different language syntaxes had some or all of their possible meta-tokens in common, it would drastically change certain code-level interop issues.

    It also solves some of the dichotomy between conventional programming languages and 'visual' ones, though the problem of the Deutsch Limit would still exist for any given visual presentation.

    On a side note, and as a preview of where I am going with this, the lexer could also add citation links to make it an annotated AST, adding to the ability to pass information about the program to the semantic analyzer and code generator. This can allow for additional analysis of things like, say, whether certain optimizations could be applied in the generated executable.

    Oh, and because the final executable is also stored xanalogically, there is no reason it has to produce a single executable image. It can create multiple whole executable images for different architectures; branched executables for variants of the same architecture and system hardware, which the loader could select from; even 'templates' whose gaps the loader could fill in at load time. The loader would only need to fetch those parts it needed, possibly along with additional information it could use to further tweak the executable by means of runtime code synthesis (Surprise! You knew I was going to bring the Synthesis kernel up somewhere in all of this, didn't you?). The executables would also be cached on systems other than the origin, and only permanently stored if the user or an application chooses to mirror them, so updating and backtracking isn't especially difficult (which is also a reason why everything transferred between systems is encrypted). And any node currently mirroring or caching something could serve the other nodes as the equivalent of a Torrent site for published programs, if the administrators choose to allow it, according to the limits they choose - but only to users who have rights to use them. I am sure that there would be a way around that, and it raises some hairy issues about licensing, regulation, and compliance, but those would have to be dealt with after the experimental stage of all this.

    Just one more post, I promise. I am finally ready to explain how all of this ties into types and dispatch.



  • OK, some of you can probably see where this is going, but some of you are probably completely lost (some may be both, in varying ways). Don't worry, this is the payoff for all of that.

    Aside from the whole 'abstract syntax tree of meta-tokens' form for programs, the xanalogical approach opens up another possible avenue for programming language design. Remember how I said that the use of permutation links, rather than changing the stored values, allowed for nearly-unlimited undo/redo, and (here's the key part) acted as a form of version control? And remember what I said about indirection links allowing for updatable views? Here's the important part: you can have multiple indirection links to different parts of the development history.

    This gives you things like branching, forking, and staging, practically for free, once you have xanalogical storage.

    OK, so getting to there is anything but free, but bear with me here.

    If the compiler is working from an indirection link to the stored AST, and the AST itself is mostly just a tree of links to the meta-tokens, then the compiler can keep a separate record of which warnings, constraints, and optimizations to apply when compiling the program, and link that to the indirection handle.
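    Sketching that in Python (with strings standing in for stored ASTs, and everything else invented purely for illustration):

```python
# Toy sketch of indirection links as branches: a mutable handle names an
# immutable stored version, and compiler policy is linked to the handle
# rather than to the code. Strings stand in for stored ASTs.
asts = {"v1": "ast-v1", "v2": "ast-v2"}   # immutable stored versions

branches = {
    # handle -> [current version, settings linked to that handle]
    "development": ["v2", {"warnings": "lenient", "optimize": False}],
    "release":     ["v1", {"warnings": "strict",  "optimize": True}],
}

def compile_branch(name):
    version, settings = branches[name]
    # A real compiler would walk asts[version]; here we just report the pairing.
    return (asts[version], settings["warnings"])

def advance(name, new_version):
    branches[name][0] = new_version   # only the indirection link mutates

result_before = compile_branch("release")
advance("release", "v2")              # 'release' now points at the newer AST
result_after = compile_branch("release")
```

    The stored ASTs never change; the only mutable thing in the whole system is the handle, which is exactly the role 'HEAD' plays in a conventional VCS.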

    Back to the compiler annotations. Did you notice that these can - once again - be anything that the compiler might have a use for? It can serve to link to code documentation, design documents, UML diagrams, whatever. And if it has some hooks that allow it to, say, apply a constraint based on the documentation - a reminder to update the documentation, say, or some kind of constraint based on a class declaration matching the structure defined in UML - then it could use that to change the errors, executable output, or other results.

    Or, just perhaps, it could be used to apply type constraints on code which doesn't explicitly declare types.

    Now, I would still want to be able to add explicit typing to the program source code, especially for things like polymorphic procedural dispatch (where you need it in order to have the program call the right procedure), but, if we can have it represented as a form of compiler constraint, well, there's no reason that the code editor can't hoist them out and save them as annotations, right?

    That would let you, say, write most of the code without worrying about typing when you are first working out things, then progressively add more stringent constraints as you stage from (for example) 'development-experimental', to 'development', to 'unit testing', to 'integration', and so forth up to 'release'.

    And the editor and compiler together could be configured to enforce that you can only edit the program code in either 'development' or 'development-experimental', while still permitting you to add type predicates later on. Oh, it couldn't stop you from creating a different permutation in some other application, but it could simply refuse to work with that alternate permutation.
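    Here's a toy sketch of that staging idea - predicates stored as annotations, keyed by the stage at which they start being enforced, so the same code sails through the early stages and faces progressively stricter checks later. The stage names are from above; the predicate machinery itself is invented for illustration:

```python
# Toy sketch of staged type predicates: constraints stored as annotations
# keyed by the stage at which they start being enforced. Stage names are
# from the post; the predicate machinery is invented for illustration.
STAGES = ["development-experimental", "development", "unit-testing", "release"]

constraints = {
    "development":  [lambda x: x is not None],
    "unit-testing": [lambda x: isinstance(x, int)],
    "release":      [lambda x: x >= 0],
}

def check(value, stage):
    """Apply every predicate attached at or before the given stage."""
    active = [p for s in STAGES[:STAGES.index(stage) + 1]
                for p in constraints.get(s, [])]
    return all(p(value) for p in active)
```

    So a value of -1 passes 'development' and 'unit-testing' but fails 'release', without the code itself ever having declared a type - the constraints live in the annotations attached to the stage.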

    So, now you know what I have in mind. Will it work? I have no idea; probably not, if I am really honest about it. But I should learn a lot about what does and doesn't work along the way, right?



  • NB: Crap, I missed this one, which is supposed to be the second one of the series.

    OK, so now for the digression about hypertext, and specifically about Project Xanadu and xanalogical storage.

    Project Xanadu, for those unfamiliar with it, was one of the first explorations - if not the first - of the concepts of hypertext and hypermedia, both terms coined by the originator of the project, Ted Nelson. The project informally began in 1960, taking an ever more solid shape throughout the 1960s and 1970s despite the skepticism, indifference, and even obstructionism of others, and after a half dozen or more iterations, is still ongoing today - you can see the latest project here. Ted claims that it is finally a working system, after being declared the longest-running software vaporware ever by his critics and serving as the butt of many industry jokes; but while there is now a light at the end of the tunnel, it has so far fallen far short of its intentions, due to forces often out of his control.

    It was a very different idea from the modern World Wide Web, even though one of Ted's books, Literary Machines, was a primary influence on Tim Berners-Lee and his design (though contrary to what Nelson has said, not the prime inspiration - Berners-Lee had been working on both SGML document formats and other forms of hypertext for at least three years before the book was published, and originally intended HTML and HTTP only as a means to share research papers among scientists, with something approximating citations).

    Actually, much of the confusion about it comes from the fact that it is a combination of many ideas, most of which would seem unrelated to most people. This is where both the strength and the weakness of Nelson's vision lie, in that people rarely see the connections he does - and connections are at the heart of his ideas.

    And yes, Nelson (like myself) has ADHD. Quite severely, in fact. He sees it as a strength rather than a problem, but in terms of getting support for the project, and seeing it through to the end, it has been crippling - which is unfortunate, because it is also what led him to it in the first place. Perhaps more than anything, Xanadu was Ted's attempt to find a prosthesis with which to grapple with the breadth of his interests and ideas, a breadth borne of his 'butterfly mind'.

    Anyway, all of this is prologue. The point is that while Project Xanadu encompasses a wide number of ideas, many of which have since spread out into the computer field in separate pieces and in distorted forms, one piece that hasn't caught on is the idea that Files Are Evil.

    OK, a bald statement like that, so typical of Ted, is going to take some explaining. I know, I know, I promise I will get to the point eventually, but digressions are a big part of all of this, and this probably won't be the last one here.

    What Ted means when he talks about 'the tyranny of the file' is that the conventional, hierarchical model of files as separate entities, which need to be kept track of both by the file system and the user, is a poor fit for how the human mind actually works with information, and in particular, that it obscures the relationships between ideas. This applies to both conventional file handling, and to file-oriented hypertext/hypermedia systems like the World Wide Web.

    It is here that Ted loses most people, because to most people, he is mixing up different levels of things - and Ted would even agree, but his view of what those levels are is quite different from the one most people are familiar with. Basically, where most people see separate documents, which might refer to each other through citations or hyperlinks but are fundamentally separate, he sees swarms of ideas which can be organized in endless ways and viewed through many lenses, of which 'documents' are just one possible view, and not an especially fundamental one at that.

    Now, this will seem somewhat familiar to those of you who have some experience with relational databases, and in fact Ted took a look at RDBMS ideas in the late 1990s, concluding that they were on the right track, but still blinkered by their assumptions about what data 'really is'.

    To his eyes, there is no 'really is'. He views information as a continuum - what he calls a docuverse - and his primary frustration is that everyone else is (by his estimation) trying to impose their own ideas of what the pieces of that continuum are, rather than letting them float free for anyone to view as they choose. He sees Xanadu as an attempt to approximate that free-floating continuum - he is trying to reduce the amount of inherent structure imposed by the storage system, in order to make variant structures easier to find.

    Getting these ideas across is really, really difficult, especially since (again, like myself) he often leaves the best parts in his own mind, making it look like he's jumping all over the place and skipping steps.

    He does that, too, but most of that impression comes from things he has so well-set in his own mind that he forgets that other people haven't heard them yet. This is a trap that is far too easy for a visionary to fall into, and while he is aware of the problem and does strive to avoid it, it is one which is hard for anyone to notice until it is brought to their attention - and sadly, few have had the patience to do so.

    Moving on to the next post, which discusses the back-end and front-end part of Xanadu, which I need to gloss a bit before explaining how this all ties into my language ideas.


  • BINNED

    Have you looked at Factor?



  • @antiquarian said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Have you looked at Factor?

    I had heard the name, but I hadn't looked at it. It looks like something I would do well to look into, thanks.


  • ♿ (Parody)

    My main reaction, upon reading this, is that it's going to end up as a system being mostly metadata. It seems to me that you're talking about making explicit a lot of things that are implicit in an application.

    For instance, you talked about a datum being a cell in a spreadsheet, and how data will have various links (span, permutation of order, etc.). So instead of the application just being programmed to understand what a cell is and how it relates to the rest of the spreadsheet (based on how the spreadsheet program is written, maybe with a class for the cell, sheet, etc.), all of that has to be stored explicitly and then interpreted by the application.

    Yikes. It makes my head spin. Really reminiscent of semantic web stuff, which is to say that it sounds cool, but if I think too hard about it I start to get vertigo.


  • Java Dev

    @boomzilla said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    My main reaction, upon reading this, is that it's going to end up as a system being mostly metadata.

    Same here. You're going to have to optimize out 99% of all that to get performant storage, both from a size and from a throughput/latency perspective.



  • @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Basically, when a new datum is created - whether it is a few sentences/words/individual characters in a 'text document', the value of a 'spreadsheet cell', a 'saved game record', an 'image' or 'picture', an 'audio recording', or whatever - the FE passes it to the BE, which encrypts it in some way and then writes it to some storage medium - possibly as part of a journal that contains data from several other applications and users.

    ... why?

    Ok I'm not going to pretend I read all of this, but inverted pyramid, man.

    Look, the first thing you need to explain to us is what are you trying to accomplish? Just that paragraph above makes me think, "so if you lose a block of data, you corrupt 100 documents instead of 2? Why would you do that?" Which isn't to say you don't have a reason to do that, but since you explained what you're doing before you explained why you're doing it, you've completely lost me as a reader.

    If you really want to communicate these ideas, here's what I'd do:

    1. Put the text you've written now into a Word document or something that has no character limitations or any other reason to split it up into bite-sized chunks

    2. Write an outline of your project, ensuring that it's organized in an approachable, logical, progression. (If I need to know A before B makes sense, make sure I know A before you tell me B.)

    3. Pitch the idea to us. Why should I drop what I'm doing and support your idea? How does it help me, or anybody else? How does it save my time and money?

    Anyway I'll try to chug through it, since I have nothing better to do this afternoon than watch a progress bar slowly cross the screen in about 4 hours.

    What you have here is a great essay (well, perhaps), or maybe even the seed of a book, but man you gotta work on the writing.



    @blakeyrat this is actually really good feedback. While I am not sure if you saw the 'lost' post that I had to add back in at the end, this critique still stands - I really need to think long and hard about how to express this better, especially regarding why it is meant to be like that.

    Thank you, this is exactly the sort of feedback I am looking for.



  • On a side note, TIL about edit decision lists, which Ted Nelson compared some of the types of links to in his reply to me this morning. They are used to manage multiple cuts of a film during editing.

    While I knew that one of the major influences on the ideas behind Xanadu was the essay "As We May Think", which Doug Engelbart also cited as an influence on his own work, I can definitely see that Nelson's original passion - filmmaking - had an even greater impact on Xanadu than I had realized. I think it really shines a light on what he is actually trying to accomplish.

    WRT what Blakey said, the fact that Nelson expected me to know about them - he just said 'EDLs' with no explanation, and I had to look it up - also explains a lot about why he has had so much difficulty explaining Xanadu to people over the past 55 years. That by itself is an object lesson about writing, especially since my 'techie translation' left him baffled despite his own experience in programming.


  • Impossible Mission - B

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    @blakeyrat, this is actually really good feedback. While I am not sure if you saw the 'lost' post that I had to add back in at the end, this critique still stands - I really need to think long and hard about how to express this better, especially regarding why it is meant to be like that.

    Even after reading that, I'm still wondering, what is the problem that Xanadu solves? And why hasn't anyone else noticed that it's a big problem over the course of the ~50 years that this guy's been working on it? Because @PleegWat's critique looks pretty valid. A system like this is going to have really bad performance vs. a traditional file system, and so you'd need a truly massive benefit to counterbalance that. And I'm really not seeing what that is.



  • @scholrlea You're welcome, and now more comments now that I've read the rest of this:

    Your idea sounds a lot like what the .NET CLR is doing now, except far far far more complicated and tied into all this weird stuff about how "files are evil", which I don't see as at all relevant.

    (Basically, I was thinking the whole time: so that's exactly what .NET does, but with enough meta-data attached so when you use a decompiler you get the original variable names back?)

    So as writing advice, I'd focus a lot on "how is this different from what current systems are doing?" Because if you just want ".NET but with more metadata", you can do that right now-- .NET's open source, the decompilers already exist, etc.


    On another note, you have the same (class of) problem here that the people who were bitching and whining about how Microsoft should open-up their document formats were ignoring:

    The on-disk format is the application.

    By that I mean, the data the application writes to disk is determined solely by the features of that program, and it's pointless and useless to have multiple applications reading and writing to the same data format unless those multiple applications have identical features.

    What does this mean per your proposal? When you initially design your set of universally-understood meta-tokens, you've forever nailed down the feature-set of the programming languages you support. The instant a second person is using your development environment, you're in a world of shit where you can't remove anything. Ever.

    Now you could add tokens to your system in theory, but the problem is Ted Blarpy wrote his XanaduBasic code back when Xanaduwhatever only had 500 tokens, and Bob loves coding his corporate application in XanaduBasic. But XanaduC# uses all of the new 575 tokens, so now you have a program that can be expressed in XanaduC# but can't be expressed in XanaduBasic.

    If you're going to lock-down your system like this (and you are locking it down, whether or not you realize it), you gotta be 100%, 200%, 500% sure you've gotten the right set of tokens to last forever. Are you? Can you be? The designers of JavaVM couldn't. The designers of .NET couldn't.

    How does your solution address the problem of needing forwards, backwards, and sideways compatibility with everything forever?


    There's also a lot of dubiousness of how performant any of this shit could ever be, but those objections are obvious and other people have already brought it up.



  • @scholrlea Well a bigger, more universal point, is that he says "files are bad because they don't match the way humans think" to which I instinctually knee-jerk, "prove it."

    In other words, prove storing data the way humans do is superior to storing data in files. The WHOLE THING seems to be based around this premise, and I see zero reasoning of why you'd assume the premise is true, much less any data proving it's true.



  • @blakeyrat I... need to give this a lot more thought, clearly. I think I can see why you aren't understanding what I am trying to convey, and it is pretty definitely my fault.

    Partly, it's that when I wrote it, I was thinking in terms of compiler design and file system design, and wrote it with the OSdev.org forum in mind - many of the people know how to write the internals of both. That's not a typical audience, not even a typical tech audience, and the fact that I didn't rephrase parts of it was a real lack of foresight.

    It is also partly that I fell into exactly the same trap I said Ted Nelson often does - I'm jumping from A to Q without drawing the lines to points B, C, D, etc.

    But mostly, I think I am not doing very well at writing about it, period.

    I will work on actually replying tomorrow, because I really need to think about what I want to say. If I forget, you have my permission to slap the back of my head.



  • @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Partly, it's that when I wrote it, I was thinking in terms of compiler design and file system design, and wrote it with the OSdev.org forum in mind - many of the people know how to write the internals of both.

    Fair enough; but what does that have to do with proving that making computers store data the way human brains store data is superior to making computers store data the way corporations and individuals have stored data (invented, BTW, by human brains) for centuries?

    Like, the explanation for that one's going to be the same regardless of how many OS internals you know, right? Since everything else you're bringing up here seems to be dependent on that idea, it'd behoove you to examine your assumptions here.



  • @blakeyrat Isn't it obvious that nature's process is superior to our primitive inventions?

    When humans make the biggest leaps, it comes from modeling nature's natural patterns.


  • Discourse touched me in a no-no place

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    But mostly, I think I am not doing very well at writing about it, period.

    One of the main ways to get better is to practice, but it has to be practicing “for real” so that one doesn't take too many short-cuts.


  • Discourse touched me in a no-no place

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    'citation links', which represent a place where a user wanted to record a connection between two ideas. This link associates bi-directionally, and has its own separate publication and visibility which is partially dependent on that of the data - an application, and hence a user, can view any citation link IFF they have permission to view both the citation and all of the data it refers to. There may also be 'meta-citations' which aggregate several related citations, but I don't know if that was something actually planned or just something discussed - since citations are themselves data, and all data are first-class citizens, such a meta-citation would just be a specific case of a view.

    I think you've got a few issues here. One is that anything on the meta level should also be representable on the system level, on the grounds that we have general symbol processing machines and damn well ought to be able to work reflexively. Another is that there are also one-way links, and the links between data should not be directly coupled to the visibility of each end-point datum; there are use cases for systems where it is the links between things that matter at all (see bibliometrics and scientometrics, and also much of what the security services do) and there are also cases where it is important to prevent some users from joining data up despite them being authorised to see the individual endpoints (a critical requirement for civil liberties protection). I think this means that access control decisions cannot be delegated to the endpoint datum, but I'm pretty sure that the implications go far deeper.
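
    That decoupling point can be sketched. All the names below are hypothetical; the idea is simply that each link carries its own ACL, checked in addition to (not derived from) the endpoint ACLs, so a user can be authorised for both documents yet still be prevented from joining them up:

    ```python
    # Each datum and each link carries its own, independent ACL.
    datum_acl = {"doc_a": {"alice", "bob", "carol"},
                 "doc_b": {"alice", "bob", "carol"}}
    link_acl = {"cite_1": {"alice"}}  # a citation link between doc_a and doc_b

    def can_view(acl: set, user: str) -> bool:
        return user in acl

    def can_view_link(user: str, link: str, src: str, dst: str) -> bool:
        # Decoupled model: seeing the link requires link permission
        # *in addition to* permission on both endpoints -- so carol
        # may read both documents without learning they are connected.
        return (can_view(link_acl[link], user)
                and can_view(datum_acl[src], user)
                and can_view(datum_acl[dst], user))
    ```

    Under the original IFF rule, link visibility would follow automatically from endpoint visibility; here the link's own ACL is the extra gate dkf is arguing for.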

    Personally, I'd also want to make the associations also less trusted: just because someone or something asserts that there is some datum or some meta-datum that describes a characteristic of a datum doesn't make it true. I've no idea how that mixes in with the other issues. ;)

    And you've still got to deal with the fact that digital machines really do work with sequences of bits. Everything else is merely an interpretation layered on top.



  • @robotarmy said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Isn't it obvious that nature's process is superior to our primitive inventions?

    Nope.

    Nature hasn't made anything that can outrun even a cheap Mazda, nor has it made anything that can out-fly a 737.

    @robotarmy said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    When humans make the biggest leaps, it comes from modeling nature's natural patterns.

    I think you're banking on the fact that the word "nature" is very vague. Since literally everything originated in "nature", well, sure I guess I can't argue on this point. Nature invented, for example, fission reactors. We've seen that in the fossil record. But it was man who carefully arranged those elements to make fission reactions useful for controlled energy release.


  • Discourse touched me in a no-no place

    @blakeyrat said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Nature hasn't made anything that can outrun even a cheap Mazda, nor has it made anything that can out-fly a 737.

    But they can at least sometimes outfly a cheap Mazda, especially the second time after being thrown off a cliff… ;)



  • @blakeyrat said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Partly, it's that when I wrote it, I was thinking in terms of compiler design and file system design, and wrote it with the OSdev.org forum in mind - many of the people know how to write the internals of both.

    Fair enough; but what does that have to do with proving that making computers store data the way human brains store data is superior to making computers store data the way corporations and individuals have stored data (invented, BTW, by human brains) for centuries?

    Like, the explanation for that one's going to be the same regardless of how many OS internals you know, right? Since everything else you're bringing up here seems to be dependent on that idea, it'd behoove you to examine your assumptions here.

    You have a point, and it's sort of a case where I assumed something without explaining it, again.

    And part of the problem is also that, as I said, there are a lot of different ideas all intertwined here.

    I am working on writing down why, to me, this makes sense. I am not entirely certain if this is going to be exactly how Ted would say it.

    The first part of the argument has to do with function versus utility, but I am not sure if I can quite express it effectively. It ties in directly to the Bush article, as well as to Nelson's ideas and even some of Doug Engelbart's and Alan Kay's.

    Basically, the idea is that regardless of the mechanism involved, the goal is an intellectual prosthesis that meshes well with human memory and pattern-making. Something that reflects human thought patterns, and can fit those of individuals.

    The particular aspect of this which Xanadu proposes to address is about how different people can make connections to things that others can't. The intent is to make it easy to see, record, and share these connections.

    The issue with file systems is that, to my mind (and AFAIK, to Nelson's), the separation between files acts as a barrier to making these connections - they become meta-data that has to be represented by yet another file or set of files. Further, because of the way different applications treat data differently - even if it is encoding the same data, such as a user name and street address which appear in a letter stored as a text document, a database of customer information, and a spreadsheet of customer deliveries for the week - this leads to the need for things like file conversion utilities, mechanisms such as OLE for combining different objects into views (or copying them into other objects automatically), and various kinds of ad-hoc 'interop' tools which the application developers, and all too often, the users, need to work with in order to make these connections.

    In addition, many of these tools involve duplication of the same data into a different file, rather than using the data as it is stored in the existing file. This is made necessary, in part, because conventional file systems can delete data, or overwrite it - in order for the interop to work correctly, the connections have to be maintained, often in a way that requires manual user intervention (or at least administrator intervention).

    Similarly, version control presents a lot of headaches when operating on files or groups of files. Indeed, Blakey, IIRC one of the problems you see in Git is that, given its focus on individual files rather than groups of files, it tends to lead to problems for things that have to apply to several files at once rather than one at a time; by the same token, in order to work with it at all, you need a copy of the entire repo locally if you are to commit locally (more on this later, maybe).

    Nelson's particular goals also involve a lot of things which are less about the technology than about using it for art and literature - a large part of his goal is to facilitate literary criticism, and to allow authors and filmmakers ways to produce branching or 'theme and variation' hyperfilms and hyperliterature with multiple interconnected perspectives and possibly different endings (mind you, he was originally talking about this before video games were really a thing, and certainly before any which had things like broad storylines, moral choice systems, and cinematic presentations, and I'm not sure if he really sees how a lot of what he wanted now exists, but in that form rather than in books or films).

    All of these things, and others which he and the others involved in the project envisioned, are possible with existing file systems, but they don't fit well with them - there's an impedance mismatch, as it were. Furthermore, the most direct and obvious ways to do these as individual features tend to conflict with each other - for example, imagine trying to have a system that handles the jobs of both OLE and a version control system, without the two of them stepping on each other's toes.

    But the worst part is, because they would be something added on top of the existing system, nothing that is outside of the walled garden of applications using that add-on set of features can use them, and trying to get those things which would be inside the application or framework to work with anything outside of it would almost invariably mean copying the piece outside into the walled garden - causing more unnecessary duplication.

    Finally, while there is unnecessary duplication of data, there is also no consistent way to ensure and manage necessary duplication of the information - either for backup purposes, or to share it with other computers.

    So, from a technical standpoint, what is wanted is a system that -

    • manages data in a regular and consistent fashion regardless of the source
    • can select a piece of data from out of a larger piece of data without the application having to explicitly parse, seek, or otherwise have to keep track of where in the larger piece of data the sub section is

    I will get to this again later, I need to go right now.



  • @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    I am not entirely certain if this is going to be exactly how Ted would say it.

    Ok before I read any more of this, you keep saying that Ted whatsisname is awful at communicating his ideas and hasn't really accomplished anything at all in, what, 20 years? So... not saying it the way he does maybe is a good idea.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Basically, the idea is that regardless of the mechanism involved, the goal is an intellectual prosthesis that meshes well with human memory and pattern-making. Something that reflects human thought patterns, and can fit those of individuals.

    Ok; but once we've built that, what do we do with it?

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    The particular aspect of this which Xanadu proposes to address is about how different people can make connections to things that others can't. The intent is to make it easy to see, record, and share these connections.

    This is actually the first thing I've seen here that has any real meat to it.

    But I can't help thinking of IMDB. Why? Because IMDB is full of people who make connections, like this. Is that even remotely like what you're envisioning?

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    The issue with file systems is that, to my mind (and AFAIK, to Nelson's), the separation between files acts as a barrier to making these connections - they become meta-data that has to be represented by yet another file or set of files.

    Well you could shove them into a relational database. And I'm not sure what the implication of "yet another set of files" is... why is that so awful exactly?

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Further, because of the way different applications treat data differently - even if it is encoding the same data, such as (for example) a user name and street address which appears in a letter stored as a text document, a database of customer information, and a spreadsheet of customer deliveries for the week -

    Any organization worth its shit would:

    • Have a single source of truth for this information
    • Store it in a centralized database of some sort that all applications they use pull the information back out of

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    this leads to the need for things like file conversion utilities,

    Honest question: how often do you do this for simple factual information, like a customer's address?

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Similarly, version control presents a lot of headaches when operating on files or groups of files -

    Yeah but not because the technology is inherently flawed in some way, but because VCS systems are always written by assholes who don't give even a slight shit about usability. I don't think I've ever complained about the technology behind Git (except perhaps its need to occasionally compress its database, which stops user interaction cold). I've complained about its usability and its lack of competitive features.

    If this new Xanadu system you're making also has shitty usability, guess what? It's not going to make anybody's life better.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Nelson's particular goals also involve a lot of things which are less about the technology than about using it for art and literature - a large part of his goal is to facilitate literary criticism, allow authors and filmmakers ways to produce branching or 'theme and variation' hyperfilms and hyperliterature with multiple interconnected perspectives and possibly different endings

    That's an interesting idea, and it might work in text, but I'm not sure how it could possibly jibe with the realities of making a film. (Does the actor playing the main character just change based on which contributor directed that few minutes of footage? Does the actor have to sign an agreement to be available for shooting for the rest of his natural life, just in case someone wants to make a new ending? Who pays for that?)

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    (mind you, he was originally talking about this before video games were really a thing, and certainly before any which had things like broad storylines, moral choice systems, and cinematic presentations,

    I highly doubt that is true. Defender of the Crown came out in 1986.

    Again: why are you talking about what this Ted Nelson guy thinks? Is he paying you? If he's bad at explaining these ideas, his explanations aren't going to help you convince anybody else of them.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    All of these things, and others which he and the others involved in the project envisioned, are possible with existing file systems, but they don't fit well with them - there's an impedance mismatch, as it were.

    Yeah; but, again, what about relational database systems?

    If you're a big corporation (or any corporation, really), you're not "a bunch of files", you're "a database". That's the important stuff.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Finally, while there is unnecessary duplication of data,

    I'm still not convinced that this is a thing that exists in any problematic quantity.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    I will get to this again later, I need to go right now.

    Cheers.



  • @blakeyrat said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    I am not entirely certain if this is going to be exactly how Ted would say it.

    Ok before I read any more of this, you keep saying that Ted whatsisname is awful at communicating his ideas and hasn't really accomplished anything at all in, what, 20 years?

    Rather longer. As in, 36 years longer. His main idea struck him in 1961, and he was actively working on the early stages of the process by 1966.

    So... not saying it the way he does maybe is a good idea.

    True. Mind you, it's more than just inability to explain or persuade people - though that's been hard enough, to be sure.

    Also, the situation has changed a lot since he started on this. Prior to the mid-1970s the main issue was that he was talking about things no one had done yet, and many doubted were possible. Today, many of the pieces of things he envisioned - GUIs, ubiquitous networking, small interactive computers, film-grade video generation and video editing without needing a massive investment in mainframe hardware and film equipment - are now commonplace to the point of banality. The problem is that while most of the pieces are there, they don't fit together in the way he - and I - think they could, and the reliance on certain things he considers backwards-looking and inadequate - conventional file systems first and foremost - is keeping them from fitting together smoothly.

    But yes, finding a better way to explain the ideas is a lot of what I am trying to do, which is why I posted it somewhere where I knew I would get criticism which is both thoughtful (sometimes) and unflinchingly brutal - it may sting, but unlike Jeff, I am willing to face critics, hecklers, and even trolls and see what I can learn from them.

    Or try to, anyway. Famous last words .



  • @scholrlea Have you ever considered that you might be in a cult of personality?

    Seriously, you're like 2 small steps from "Ted Nelson makes the sun rise!"



  • That's... actually a cogent point, and one which I, being the one with the unreasonable Belief System, wouldn't really be able to judge rationally whether it is true.

    If someone else had brought it up, someone who isn't fixated on being argumentative for its own sake, I would really have to take it seriously.

    Blakey, I have a certain respect for your technical knowledge and sharp skepticism about claims, but when you start taking aim at other people's motives rather than their assertions, I am a lot less inclined to listen.

    Still, it is something I need to consider. I do realize that Nelson has fucked up and fucked off - a lot - over those 56 years, and that he tends to get tunnel vision as new things come up - he frequently dismisses things that are kinda-sorta similar to what he talks about as 'missing the point', and while he's right about it sometimes, a lot of the time, he's the one who has dropped the thread.

    And yeah, he does a really, really shitty job of getting his ideas across - which is why I keep hedging my bets, because even though I have been interested in what he has to say since 1989 or so (when I first came across the Microsoft Press update to Computer Lib/Dream Machines), I am still not sure if I really understand what he and the people he thrashed this out with in the 1970s and 1980s really were saying, especially regarding the published code.

    And yeah, a lot of his ideas are total crap, in my opinion. I just think that this particular set of ideas is a good one, or at least has something worth seeing through to the end to find out if it turns out better.

    Yes, his stridency, with the rhetoric about the evils of file systems and other outrageous claims, is worthy of eye-rolling. I'm not saying 'file systems are evil' myself, even if he is; I am saying that I think that underneath that rhetoric, his point that there might be a better solution is worth more investigation.

    And that a lot of the reason it hasn't been properly investigated was in the way he and the people trusted to do it seriously fucked up the management of the work.

    And yeah, not all of his claims for priority or independent development can be substantiated. Several can, though; he first published the terms 'hypertext' and 'hypermedia' in 1965, and AFAICT no earlier published mention of the terms exists in the formal literature or anywhere else. He did write papers on separating displays into multiple windows around that time - the idea was in the air, though, with at least three others coming up with it around then, and priority probably goes to either Doug Engelbart or Ivan Sutherland. His "parallel text" demo of displaying cross-window links as lines connecting two pieces of text or other visual elements, which is one of his babies and still not really supported too well in most windowing systems, was first published in 1972, and appears in the 1974 edition of his book Computer Lib/Dream Machines.

    I dunno, though. As I said, I can't judge my own levels of ego commitment and self-delusion from the inside. I like to think I am just trying to get feedback, and that my goal is to see if I can use these ideas in other contexts, see what I can draw from them. I don't think I am trying to convert people, and in fact I am explicitly asking for folks to pick the ideas apart - but I can't explain my ideas without explaining the ones they are based on, so I ended up explaining those first.

    Which is why I don't trust it when Blakey says this, because he has a habit of mixing criticism of an idea, with criticism of an implementation, with criticism of the motives of the people promoting or discussing those ideas, and with criticism of the people themselves. While they aren't entirely independent topics, he seems to deliberately conflate them more than is reasonable, not to make a point, but to mock, and then if called out on it goes on the offensive by accusing the people of being biased and irrational - just as he did here.

    (Also, he either has a terrible memory, or he's being deliberately obtuse again, as I have discussed part of this at length here before.)

    And of course, the trap is that you either try to reply with ideas, which he will then dismiss as being phantasms of the deluded, or by counter-attacking his motives, either of which play to what he really wants: strife. It looks like I fell into both of those traps in the act of trying to evade them. Well played, sir.



  • @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Blakey, I have a certain respect for your technical knowledge and sharp skepticism about claims, but when you start taking aim at other people's motives rather than their assertions, I am a lot less inclined to listen.

    1. My only motive here is to understand the ideas you're trying to communicate.
    2. I would hope my level of skepticism would be considered normal by anybody with even a slight understanding of science. (You can't just drop in a claim like "computers work better when they store information like the human mind does" without some justification. If I go into a thread and say "BTW elephants are blue", I ought to at least have a photo, even a doctored one, to go with it, yes?)
    3. Part of the problem, I think, is that all you're talking about (AFAICT) is implementation-detail, and in general I don't give a shit about that. I care about the actual stuff the actual user sees on the screen, and so far the only hint of that has been when you talked about how a user could take an existing novel or film and "remix" it by adding their own ending.

    Feel free to ignore me entirely.

    But you seriously need to evaluate why you're dropping this guy's name 3 times every paragraph. Is his work required reading to understand yours? Then maybe explicitly say so. Are you just communicating his ideas instead of your own? Well, the OP didn't imply that, but I'm starting to think maybe that's the case.

    Why not leave all personalities out of it and explain the idea? Go read, for example, Stephen Hawking's A Brief History of Time, a brilliant book that just explains the ideas involved without spending paragraphs talking about:

    • The scientist who came up with that idea's personal history
    • Whether the scientist who came up with another idea predated video game narratives or not
    • What physiological illnesses the scientist who came up with idea number three might have had that affected his thought process
      etc.

    Basically: give us more idea, less biography. If someone as respected as Stephen Hawking can pull it off and not offend or confuse anybody, then surely you can.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Yes, his stridency, with the rhetoric about the evils of file systems and other outrageous claims, is worthy of eye-rolling. I'm not saying 'file systems are evil' myself, even if he is;

    Then why do I, a person who has not read any of Ted Nelson's work and in fact when I think of the name "Ted Nelson" I can only think of the bland scientist in The Incredible Melting Man who gets shot by the security guard-- that sentence went off the rails:

    Then why do I know about Ted Nelson's "file systems are evil" attitude if it's not part of your idea? Why was this distraction in the text? It is demonstrably distracting us right now, this moment, and it's a good illustration of what I was talking about a few paragraphs back.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Which is why I don't trust it when Blakey says this, because he has a habit of mixing criticism of an idea, with criticism of an implementation, with criticism of the motives of the people promoting or discussing those ideas, and with criticism of the people themselves. While they aren't entirely independent topics, he seems to deliberately conflate them more than is reasonable, not to make a point, but to mock, and then if called out on it goes on the offensive by accusing the people of being biased and irrational - just as he did here.

    I just want to point out that two paragraphs before this one is yet another biography of Ted Nelson.

    Again: if you don't like what I'm typing here, feel free to ignore it. But I'm calling it how I'm seeing it.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    And of course, the trap is that you either try to reply with ideas, which he will then dismiss as being phantasms of the deluded, or by counter-attacking his motives, either of which play to what he really wants: strife. It looks like I fell into both of those traps in the act of trying to evade them. Well played, sir.

    I have no idea what this means.



  • @blakeyrat said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Feel free to ignore me entirely.

    Yeah, maybe I should.

    @blakeyrat said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Is his work required reading to understand yours? Then maybe explicitly say so.

    I did. In about five places. I said right at the start that 'I am explaining this so that I can explain my own ideas.' I probably wasn't quite explicit enough though, I will grant you that.

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    And of course, the trap is that you either try to reply with ideas, which he will then dismiss as being phantasms of the deluded, or by counter-attacking his motives, either of which play to what he really wants: strife. It looks like I fell into both of those traps in the act of trying to evade them. Well played, sir.

    I have no idea what this means.

    Sure you don't.



  • I skimmed over the thread, and I would say there are too many historical references and too much fluff.

    What I'd like to know:

    • What will the final user see?
    • What will the dev see?
    • Are there any modern mockups / partial implementations? (In a spoiler you mentioned someone supposedly implementing something related for Autodesk; is any of that visible?)
    • Is there any study on how users / devs perform under said mocks / partial implementations?

    Your ideas can be valuable, but you are terrible at presenting them. Give us something solid to chew on.




  • @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Basically, when a new datum is created - whether it is a few sentences/words/individual characters in a 'text document', the value of a 'spreadsheet cell', a 'saved game record', an 'image' or 'picture', an 'audio recording', or whatever - the FE passes it to the BE, which encrypts it in some way and then writes it to some storage media - possibly as part of a journal that contains data from several other applications and users.

    Man, I hate to piss on your parade, but we've had this since forever. It's called a "filesystem". It does a bloody good job of abstracting any underlying datum away into the least common denominator, called "a bag of bytes", gives it an address, gives it metadata (and extended attributes, if you will), and you're good to go.

    If we need to, we can slap additional databases and version control systems on top of that (which may or may not be the same thing), but it's optional.

    https://www.joelonsoftware.com/2001/04/21/dont-let-architecture-astronauts-scare-you/ is mandatory to read.



  • @wft :facepalm: I knew some smart-ass would say that. Especially when the second post of the group vanished into the ether and reposting it at the end screwed up the order of things.

    I don't see this as trying to fix something that isn't broken, because a main point in this is that I think that filesystems are broken, and always have been.

    Nor do I see it as putting a new coat of paint on the same old crooked fence. I don't see this as being the same as a file system. It might serve many of the same purposes, but it does so in a different way, a way I believe could be better.

    Clearly, I have screwed up. Either I am simply wrong, or I am not explaining this well. I am open to the former being the case, but I am not yet convinced that I am.

    But this does convince me that the only compelling way to get people to look is to demonstrate a working system, which may be out of my ability. But hey, if I try and fail, or if I succeed in making it work and it isn't what I expect it to be, I've only wasted my own time and a few minutes of the time of people here and elsewhere whom I ask for advice and input on it. I don't know if you will resent that imposition, but for my own part, I have nothing but time in the foreseeable future.


  • Discourse touched me in a no-no place

    @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    I think that filesystems are broken, and always have been.

    You should focus on describing what is wrong there. Current filesystems are persistent stores of byte sequences, and impose very little extra on top of that in the way of constraints. They do have a simple hierarchic naming system and a really extremely simplistic metadata model, but if that's constraining to your ideas (given that they hardly impose any limits at all on what the names are otherwise; a big bag of stuff indexed by OID, UUID or GUID is trivial to make, and almost no serious application sticks to just using the filesystem's metadata) then you appear to be expecting the system to be other than a byte store. That is the ground reality of a basic datastore.
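
    As an aside, the "big bag of stuff indexed by OID, UUID or GUID" really is trivial to sketch on top of an ordinary filesystem. Here's a minimal illustration in Python; the class name, on-disk layout, and sidecar-metadata scheme are my own illustrative assumptions, not anything proposed in this thread:

```python
import json
import os
import tempfile
import uuid

class BlobStore:
    """A minimal 'bag of stuff indexed by UUID' on top of a plain
    filesystem, with per-blob metadata in a sidecar JSON file.
    A sketch only; the layout is an illustrative assumption."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, data: bytes, **metadata) -> str:
        oid = str(uuid.uuid4())  # the "name" is just an opaque ID
        with open(os.path.join(self.root, oid), "wb") as f:
            f.write(data)
        with open(os.path.join(self.root, oid + ".meta"), "w") as f:
            json.dump(metadata, f)
        return oid

    def get(self, oid: str) -> bytes:
        with open(os.path.join(self.root, oid), "rb") as f:
            return f.read()

store = BlobStore(tempfile.mkdtemp())
oid = store.put(b"hello", mime="text/plain", author="example")
assert store.get(oid) == b"hello"
```

    The point being: nothing about the byte-store model stops you from layering richer naming and metadata on top of it.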

    Filesystems with other, more complex methods of organisation have been tried in the past. I remember using systems where all files were highly record-oriented, with the record definitions being enforced by the OS itself. They sucked. They really sucked. They were a horrible straitjacket that prevented the computer from being used in all sorts of different ways, and that was because they only ever allowed for the interpretations of the data that the creator of the format of the file had thought of. What we have now is far more flexible, and is definitely a reaction to those bad old days.

    We can do much higher-level concepts of data storage and manipulation. You should check out what the major libraries (e.g., the Library of Congress) do for their digital archives: they have far richer sets of metadata, and that's in large part because they've got the incentive to get the quality of the metadata up to a much higher standard than usual at the time of creation of the data. They can then put searching and manipulation systems on top that leverage this stuff; I've done work in the past with building processing systems that were general semantic rules engines that used the metadata to decide what to do with the entities in the archive and which would transform them appropriately (while creating a lot more metadata). This was all done on top of standard filesystems and conventional databases (as well as an RDF DB, which gave us quite a bit more flexibility than usual fixed-schema SQL DBs) and all fronted by services that gave sensible interfaces to it all; the filesystem was absolutely not the constraining factor.

    However, you need to be aware that you cannot get everyone to agree on what metadata should be. The more people you really talk to about this, the more confused you will get because they use the same words to mean subtly different things. This is especially prevalent in groups that do not have a long-standing habit of exchanging systematically formatted information (if biology labs the world over switched to doing all their publications in Lojban, they'd save millions of man-years of work and end the careers of rather a lot of senior scientists; it'll never happen). Different applications care about different metadata, and so too do different people. Some people are malicious in their use of metadata (and this breaks all sorts of fundamental assumptions, as it requires that metadata be a contingent concept with a modal trust/belief model, either explicit or implicit). You can't have everyone and everything understanding every piece of data fully: it won't ever work. If that's what you're trying to do, your quest is truly quixotic; give up tilting at these particular windmills.

    Maybe I've misunderstood what you're after, but LISPing ALL THE THINGS and baldly declaring filesystems to be not fit for purpose without further justification really isn't likely to win friends. The justifications you've brought are either largely irrelevant to a tech audience — we're just a bunch of people who are not too bothered about the mental state of some old programmer — or very close to incomprehensible. What is wrong with the current persistent (string → byte sequence) mapping concept? Until you can actually answer that question, even if only partially, you will not persuade anyone.

    (OTOH, being able to answer it fully would probably be PhD level work. ;) I don't expect it to be easy to answer at all.)



  • @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Especially when the second post of the group vanished into the ether and reposting it at the end screwed up the order of things.

    I know you don't want my feedback, but I feel compelled to point out there is an edit button and you could just fix this problem.


  • 🚽 Regular

    @blakeyrat said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    @scholrlea Well a bigger, more universal point, is that he says "files are bad because they don't match the way humans think" to which I instinctually knee-jerk, "prove it."

    My own knee-jerk reaction is "what's a file (in your opinion [and what's the alternative? {and why can't I call that a file?}])?"


  • 🚽 Regular

    @robotarmy said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    When humans make the biggest leaps, it comes from modeling nature's natural patterns.

    It's not clear to me what humans were imitating when they invented space travel.


  • Java Dev

    Traditional (unix) filesystems contain four main types of nodes:

    • Maps of name to node ('directory')
    • Maps of name to list-of-names ('symlink')
    • Data nodes ('plain files')
    • Special nodes (named pipes, sockets, devices, etc) which can probably be disregarded for this discussion.

    I think your most viable approach may be to try to add to that. For example, windows contains libraries, which kind of act like unions of multiple directories.

    Which constructs would you add? Which real problems would they solve?

    Various systems in the past have tried various ways to identify the type of a file. Windows uses extensions for this. HTTP uses mime types. Modern linux uses extensions to some degree, and in other places uses mime type but doesn't really store it. Regardless of what you choose, it will not be possible to be exhaustive.
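
    The node taxonomy above maps directly onto what `os.lstat` reports, and extension-based typing is one stdlib call away. A minimal sketch in Python (the `classify` helper is my own illustration, not a proposal from the thread):

```python
import mimetypes
import os
import stat
import tempfile

def classify(path):
    """Sort a path into the four node categories listed above."""
    mode = os.lstat(path).st_mode  # lstat: do not follow symlinks
    if stat.S_ISDIR(mode):
        return "directory"   # map of name -> node
    if stat.S_ISLNK(mode):
        return "symlink"     # map of name -> list-of-names (a path)
    if stat.S_ISREG(mode):
        return "plain file"  # data node
    return "special"         # pipes, sockets, devices, ...

d = tempfile.mkdtemp()
print(classify(d))  # directory

# Extension-based typing, as Windows (and partly modern Linux) does it:
print(mimetypes.guess_type("report.html")[0])  # text/html
```

    And as the post says, `guess_type` illustrates the non-exhaustiveness problem: hand it an extension it doesn't know and you get `None` back.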



  • @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    or I am not explaining this well.

    For me personally, it's this. I glanced at it first when you posted it, and now tried to go over the text again. There really needs to be a "tl;dr" version of your ideas.

    The best advice I can give off the cuff is to work on your writing. Tighten up and structure your text. State up-front what you want to achieve, and why, in like one paragraph (what problem does it solve?). Skip the history lesson and rambling introduction in that part -- if your first paragraph doesn't catch my attention (it didn't), it's likely I'm going to skip the whole thing (I did initially; I'm now commenting because of others' posts).

    Perhaps look a bit into the structure of academic papers, that is: abstract, introduction, related work, your method (you don't have tangible results, so skip the rest). It's not perfect, but it does impose a bit of structure. And it makes it clear which part is your ideas and what comes from others and already exists. It'll also be much more of a pain to write, but your goal should be to make it easy to read. After all, you want this to be read and understood (I suppose) by others. Minimize their effort to do so.


  • :belt_onion:

    @cvi said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Skip the history lesson and rambling introduction in that part -- if your first paragraph doesn't catch my attention (it didn't), it's likely I'm going to skip the whole thing (I did initially, I'm now commenting because of others' posts).

    I don't think the history lesson necessarily needs to be skipped but it needs to be sectioned off, as you say. Better organization would make it easier to follow points without them being intermingled.



  • @heterodox said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    @cvi said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Skip the history lesson and rambling introduction in that part -- if your first paragraph doesn't catch my attention (it didn't), it's likely I'm going to skip the whole thing (I did initially, I'm now commenting because of others' posts).

    I don't think the history lesson necessarily needs to be skipped but it needs to be sectioned off, as you say. Better organization would make it easier to follow points without them being intermingled.

    That's a perfect use for spoiler tags. Collapsible sections for the parts that are really asides/not-always-relevant make it seem way less like a wall of text.



  • @pleegwat said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Various systems in the past have tried various ways to identify the type of a file. Windows uses extensions for this. HTTP uses mime types. Modern linux uses extensions to some degree, and in other places uses mime type but doesn't really store it. Regardless of what you choose, it will not be possible to be exhaustive.

    Sigh.

    Another person who badly needs to experience Mac Classic...



  • @benjamin-hall said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    That's a perfect use for spoiler tags. Collapsible sections for the parts that are really asides/not-always-relevant make it seem way less like a wall of text.

    I think that would be more annoying than useful. Unless the information in the spoiler tags is truly parenthetical, you'll end up having to read it anyway to get forward in the text. (If it's so parenthetical that you can skip it ... why is it there in the first place?)

    IMO, the problem is not the quantity of text. Streamlining the text with better structure and getting rid of a bunch of "bear with me" and similar fluff will make a far larger impact.



  • @cvi said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    @benjamin-hall said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    That's a perfect use for spoiler tags. Collapsible sections for the parts that are really asides/not-always-relevant make it seem way less like a wall of text.

    I think that would be more annoying than useful. Unless the information in the spoiler tags is truly parenthetical, you'll end up having to read it anyway to get forward in the text. (If it's so parenthetical that you can skip it ... why is it there in the first place?)

    IMO, the problem is not the quantity of text. Streamlining the text with better structure and getting rid of a bunch of "bear with me" and similar fluff will make a far larger impact.

    I've seen a forum that allows you to give titles to spoiler tags--that way you can use them as collapsible structuring elements. But yes, being less prolix would help, as would better organization. It's a fault I share--why use 2 words when 52 would be marginally more precise?


  • Java Dev

    @cvi @Benjamin-Hall I think, if the text you're writing is more than, say, twice as tall as your forum input box, you shouldn't be writing it in the forum input box. Same if writing it takes more than 5 minutes. Multiple consecutive posts count as one.


  • Impossible Mission - B

    @blakeyrat said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Part of the problem, I think, is that all you're talking about (AFAICT) is implementation-detail, and in general I don't give a shit about that. I care about the actual stuff the actual user sees on the screen, and so far the only hint of that has been when you talked about how a user could take an existing novel or film and "remix" it by adding their own ending.

    This. I've asked before in this thread and gotten no answer, so once again, what is the actual problem to which Xanadu is a solution? And why has it not been generally noticed and accepted, at any point in the last half-century plus, that this is a real problem in need of a solution?



  • @scholrlea said in Xanadu, Predicate Dispatch, and Schol-R-LEA's latest psychotic break:

    Clearly, I have screwed up. Either I am simply wrong, or I am not explaining this well. I am open to the former being the case, but I am not yet convinced that I am.

    An ability to explain something is the ultimate litmus test on whether you actually understand the topic yourself.


  • area_pol

    @scholrlea Thank you for sharing your ideas. Please let me know if I understand you correctly, here is how I see it:

    You present motivation and design for:

    • an append-only, decentralized graph database
    • a programming language including its data format, editor and source control

    The programming language stores its source, libraries and executables in the aforementioned graph database.

    The source code format is similar to an AST, but slightly higher level.
    In an AST we would have VariableByName("my_var"), which would have to be resolved against the current scope during compilation. Here this is resolved during editing, and there is a reference (graph edge) to the node representing the actual variable.

    The language is edited using a structure editor (good, I love the idea of structure editors).

    The language has optional type constraints and optimization hints.

    The version control relies on the fact that we already store source in a graph format, as references (edges) to the nodes which represent the classes, functions, statements, and parts of the AST.
    We only need to create a new node for the smallest part affected by our change. We do not copy the rest of the code nodes; instead we keep references to the old nodes which did not change.



  • @adynathos Sorry for the delay in responding, I was kind of caught up in something.

    Hmmn... Yeah, this is actually a pretty good description - not perfect, but a lot better than what I have said so far. I will need to get back to you about it later, but it captures a lot of it.

