Formalization
Any knowledge set (a brain, a book, a belief map) has degrees of formality. Here is my attempt at squishing several dimensions of formality into one dimension:
esoteric: a folder of documents, your brain
informal linear: a book, an interview, a speech
informal structured: Wikipedia, the examples in semi-formal
formal structured: computer code, equations, Gellish
Humans talk (produce natural language) by organizing esoterically organized brains into informal linear writing. The goal of structuring is to automate the conversion of informal linear content into informal structured content. The goal of daemons is to convert the esoteric content in our brains into informal belief maps, and perhaps eventually into formal ones. Humans will likely prefer to interact with the informal ones, but daemons talking to other computer systems may instead use a formal conversion.
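To make the spectrum a bit more concrete, here is a rough sketch (the claim, field names, and relation name are purely illustrative, not taken from any real schema or from Gellish itself) of one belief expressed at three of these levels:

```python
# One belief expressed at increasing levels of formality.
# Everything here is illustrative, not a real standard.

# Informal linear: a sentence as it might appear in a book or speech.
informal_linear = "Regular exercise probably reduces the risk of heart disease."

# Informal structured: the same claim broken into labeled parts,
# the way a wiki infobox or a belief-map node might hold it.
informal_structured = {
    "claim": "regular exercise reduces the risk of heart disease",
    "confidence": "probable",
    "support": ["cohort studies", "meta-analyses"],
}

# Formal structured: a machine-comparable triple over controlled terms,
# loosely in the spirit of a Gellish-style relation.
formal_structured = ("regular_exercise", "reduces_risk_of", "heart_disease")
```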
Normalization
There is a second axis: unnormalized versus normalized. Normalized documents are those which obey a schema, and thus can be compared to other documents which obey it as well. A schema defines terms, conventions, and grammar.
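As a sketch of what obeying a schema could look like in practice (the node fields, allowed relations, and validation rules below are my own invention for illustration, not an existing standard):

```python
# A minimal, made-up schema for a normalized belief-map node.
# Any two nodes that validate against it can be meaningfully compared.
from dataclasses import dataclass

ALLOWED_RELATIONS = {"supports", "opposes", "defines"}  # the schema's conventions

@dataclass
class BeliefNode:
    term: str          # must come from the schema's shared glossary
    statement: str     # the claim itself, written in the agreed grammar
    relation: str      # how the node attaches to its parent
    confidence: float  # convention: 0.0 (certainly false) to 1.0 (certainly true)

    def validate(self) -> None:
        if self.relation not in ALLOWED_RELATIONS:
            raise ValueError(f"unknown relation: {self.relation}")
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence must be between 0 and 1")
```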
Combined
Normalized informal linear: academic papers, legal contracts
Normalized informal structured: argument trees which use normalized word definitions
Normalized formal structured: a GOFAI brain, a semantic database, a belief map where the word definitions and document grammar are formalized.
A normalized formal structured belief map would be the most difficult document to create and maintain. There are benefits, though: two such maps (assuming they are made under the same formal rules) can be easily and unambiguously compared and diffed. Two people who created such maps for themselves could instantly get a list of any cruxes that exist between the two maps. A database of many of these maps would allow for N-squared pairwise comparisons, which would be important for wide-scale belief polling, sociology research, and idea taxonomy.
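As a toy illustration of that comparison (the map format and the notion of a crux below are deliberate oversimplifications, not a proposed schema), once two maps obey the same formal rules, finding cruxes reduces to a mechanical diff:

```python
# Two normalized belief maps, each mapping a shared, formally defined
# claim ID to a stance. A crux is any claim both maps address but disagree on.

def find_cruxes(map_a: dict[str, str], map_b: dict[str, str]) -> dict[str, tuple[str, str]]:
    shared = map_a.keys() & map_b.keys()
    return {c: (map_a[c], map_b[c]) for c in shared if map_a[c] != map_b[c]}

alice = {
    "claim:exercise_reduces_heart_disease_risk": "agree",
    "claim:ai_can_do_meta_rationality": "disagree",
}
bob = {
    "claim:exercise_reduces_heart_disease_risk": "agree",
    "claim:ai_can_do_meta_rationality": "agree",
}

print(find_cruxes(alice, bob))
# {'claim:ai_can_do_meta_rationality': ('disagree', 'agree')}
```

A database of N such maps would need on the order of N-squared of these pairwise diffs, which is cheap for machines but hopeless to do by hand over informal documents.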
Reasonableness versus Rationality
In his book-in-progress https://metarationality.com, David Chapman lays out the terms reasonableness and rationality.
Reasonableness is about being “not stupid, crazy, or meaningless,” and I believe that functionally this means being specific enough to be useful while not so specific that communication becomes impractical.
Rationality, as he puts it, “works mainly with general knowledge. Ideally, it aims for universal truths. Typically, knowledge of a specific object does not count as “rational” unless it applies to every other object in some class.” The point of his book is that you can’t even apply Rationality to physical objects, because even an exhaustive and impractical definition of such an object (fuzzy boundaries, quantum foam, etc.) defies any formal classification we know of.
An AI instantiated on a digital computer is math, and thus can perform rational transformations on the symbols it observes. We consider an AI useful when we judge its bespoke form of rationality to be reasonable.
My definition of “normalized formal structured” would then be a rational system within a given ontology, by his definitions. Such a system would necessarily be free-floating from our physical reality, as he holds physical objects cannot be rationally described, and designing & utilizing such a system would require what he describes as meta-rationality. And while advanced AI is capable of meta-rationality in theory, in practice a useful meta-rationality must be in dialog with the physical world. As John Vervaeke would say, it would need to constantly check for fittedness. Chapman’s other recent book, https://betterwithout.ai/, might be summarized as pointing out that AIs can’t do this, but they can still be used to create harm without it. The point of my work is that I’m holding out hope we as humans can do the meta-rationality and find reasonable ways to wield the limited rationality of AIs to accomplish our goals.