Category Archives: Computation

Classes of Automata

Here is one way to carve up abstract (mathematical) automata into different classes of complexity (from low to high).

There is a somewhat different and older classification, based on the Chomsky hierarchy (where ND stands for non-deterministic), pairing each class of automaton with the language or grammar it recognizes:

  • Finite automaton – Regular (Type 3)
  • ND pushdown automaton – Context-free (Type 2)
  • ND linear-bounded automaton – Context-sensitive (Type 1)
  • Turing machine – Recursively enumerable (Type 0)
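As a minimal sketch of the lowest rung of such hierarchies, here is a deterministic finite automaton recognizing a regular language (the language and the encoding are my own toy choices):

```python
# A two-state DFA accepting binary strings with an even number of 1s.
def accepts(s):
    state = "even"                       # start state, also the accepting state
    table = {("even", "1"): "odd",  ("odd", "1"): "even",
             ("even", "0"): "even", ("odd", "0"): "odd"}
    for ch in s:
        state = table[(state, ch)]       # one transition per input symbol
    return state == "even"

print(accepts("1011"))   # three 1s -> False
print(accepts("1001"))   # two 1s -> True
```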

Further Reading:



Conditional Branching is Not Necessary

Four simple instructions are sufficient to evaluate any Turing-computable function, using self-modifying programs: LOAD, STORE, GOTO, and INC(rement).
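To make the point concrete, here is a hedged sketch of a toy von Neumann machine with just those instructions (my own encoding, not Rojas's exact construction). Since program and data share one memory, a STORE can overwrite the operand of a later GOTO, and a conditional branch falls out with no branch instruction at all:

```python
# Each instruction is two memory cells: (opcode, operand).
LOAD, STORE, INC, GOTO, HALT = range(5)

def run(mem, max_steps=1000):
    pc, acc = 0, 0
    for _ in range(max_steps):
        op, arg = mem[pc], mem[pc + 1]
        pc += 2
        if op == LOAD:    acc = mem[arg]
        elif op == STORE: mem[arg] = acc       # may overwrite code cells!
        elif op == INC:   acc += 1
        elif op == GOTO:  pc = arg
        elif op == HALT:  return acc
    raise RuntimeError("step limit exceeded")

def branch_on(x):
    """Return 111 if x == 0 and 222 if x == 1 -- with no conditionals."""
    return run([
        GOTO, 4,      #  0: skip over the jump table
        18, 22,       #  2: table: addresses of the x==0 and x==1 branches
        LOAD, 28,     #  4: acc = x
        INC, 0,       #  6: acc = x + 1
        INC, 0,       #  8: acc = x + 2  (= address of table entry x)
        STORE, 13,    # 10: patch the operand of the LOAD at cell 12
        LOAD, 0,      # 12: (patched) acc = table[x] = branch address
        STORE, 17,    # 14: patch the operand of the GOTO at cell 16
        GOTO, 0,      # 16: (patched) jump to the selected branch
        LOAD, 26,     # 18: branch taken when x == 0
        HALT, 0,      # 20
        LOAD, 27,     # 22: branch taken when x == 1
        HALT, 0,      # 24
        111, 222, x,  # 26: two constants, then the input x
    ])

print(branch_on(0), branch_on(1))   # 111 222
```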

Further Reading:

Raul Rojas / Conditional Branching is Not Necessary for Universal Computation in von Neumann Computers, Journal of Universal Computer Science 2, 11 (1996), 756–768

William F. Gilreath, Phillip A. Laplante / Computer Architecture: A Minimalist Perspective



The Pi Calculus

My previous post on Wolfram’s physics mentioned the Pi calculus, but I liked this little diagram so much I decided to let it have its own mention. The rules aren’t really four in number, but oh well.

  • (νx)P: create a channel named x, then do P
  • x(y).P: receive y over channel x, then do P
  • x̄⟨y⟩.P: send y over channel x, then do P
  • P|Q: do P and Q at the same time
  • !P: do P over and over until stopped
  • 0: stop
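The first few constructs above can be loosely sketched with Python threads and a shared queue standing in for a channel (all names are my own; note one mismatch: a Queue buffers sends, whereas a true pi-calculus send x̄⟨y⟩ is a synchronous rendezvous with its matching receive):

```python
# Modeling roughly:  (νx)( x̄⟨"hello"⟩.0 | x(z).0 )
import queue
import threading

received = []

def new_channel():                # (νx): create a fresh channel x
    return queue.Queue()

def sender(x, y):                 # x̄⟨y⟩.0 : send y over x, then stop
    x.put(y)

def receiver(x):                  # x(z).0 : receive z over x, then stop
    z = x.get()
    received.append(z)

# P | Q : run sender and receiver concurrently over the same channel
x = new_channel()
procs = [threading.Thread(target=sender, args=(x, "hello")),
         threading.Thread(target=receiver, args=(x,))]
for p in procs:
    p.start()
for p in procs:
    p.join()
```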

Further Reading:




The Wolfram Physics Project

When I first started looking at Stephen Wolfram’s latest proposal to solve physics, I was somewhat disappointed. I was rather fond of his previous “New Kind of Science” based on the structural rigidity of cellular automata. However, I am now intrigued by his latest ideas, based on the looser but more flexible basis of networks.

And once you have pithy statements with space, time, energy, and matter (as momenta), you catch my attention:

  • Energy is flux of causal edges through spacelike hypersurfaces
  • Momentum is flux of causal edges through timelike hypersurfaces

I confess I haven’t read much about the project yet, but it seems to be based on rewriting rules, perhaps akin to the update rules of Wolfram’s previous framework, cellular automata. Of course, both cellular automata and rewriting systems can be computationally universal, or Turing complete.
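As an illustration of what such rewriting might look like (a toy of my own, not the project's actual rules or code), here is one rule, {{x, y}} → {{x, z}, {z, y}}, applied to every edge of a growing graph at once (one of several possible updating orders):

```python
import itertools

def rewrite_step(edges, counter):
    """Replace each edge (x, y) with a two-edge path through a fresh node z."""
    new_edges = []
    for (x, y) in edges:
        z = next(counter)            # create a brand-new node for this rewrite
        new_edges += [(x, z), (z, y)]
    return new_edges

counter = itertools.count(2)         # nodes 0 and 1 already exist
graph = [(0, 1)]
for _ in range(3):                   # three generations of rewriting
    graph = rewrite_step(graph, counter)
print(len(graph))                    # the edge count doubles each step: 8
```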

Another idea might be to try some sort of computational metaphysics between nodes, like the pi calculus (or some other process calculus). After all, you have to support quantum entanglement! However, if you can encode everything with simpler structures, then do it!

Further Reading:


Cellular automata:

Note this quote for future reference:

The primary classifications of cellular automata, as outlined by Wolfram, are numbered one to four. They are, in order, automata in which patterns generally stabilize into homogeneity, automata in which patterns evolve into mostly stable or oscillating structures, automata in which patterns evolve in a seemingly chaotic fashion, and automata in which patterns become extremely complex and may last for a long time, with stable local structures. This last class are thought to be computationally universal, or capable of simulating a Turing machine.
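That fourth class can be glimpsed with Rule 110, an elementary cellular automaton proven to be Turing complete (Cook, 2004); a minimal sketch:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary CA on a ring of cells."""
    n = len(cells)
    # Each cell's next state is the rule bit indexed by its 3-cell neighborhood.
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31 + [1]          # a single live cell on a ring of 32
history = [row]
for _ in range(16):
    row = step(row)
    history.append(row)

for r in history:             # crude picture of the evolution
    print("".join("#" if c else "." for c in r))
```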




Combogenesis and Alphakits

Let us calculate!

— Gottfried Leibniz

I admit it, I’m rather a Utopian.

Perhaps I’ve been thinking all this time that it should be possible to find a reduced set of words, symbols, or even concepts that could serve as a basic core of human expression and being, some kind of fundamental proto-language that might cut across all cultures and yet connect all individuals. Something to undo the “Tower of Babel” and be able to heal all misunderstandings, resolve all disagreements, and find everyone’s common ground. I see now I have fallen down the “perfect language” rabbit-hole.

Of course, along with our imperfect languages we also have to deal with our imperfect thoughts and our imperfect feelings. Not only do we want to hide what we’re really thinking and feeling from others, we want to hide it from ourselves. Is it because we don’t want others to know the weakness and darkness within us, or we don’t want to face those parts of our own identities? Perhaps that is the main problem with language, the ease with which we can lie to both ourselves and others, and our eagerness to accept these lies.

Psychology is supposed to help us understand ourselves better. But before that, there were Tarot decks, Ouija boards, and the I Ching, which were supposed to illuminate our thoughts and actions, and help us see, however dimly, a little more clearly into the past and future. I’m sure I’m not alone in thinking that such devices merely bring concepts to the forefront of the conscious mind and allow one to engage in creative and playful thinking. Maybe they tie into the “unconscious”, whatever that really means; and if not, then what is the source of their utility?

In the same vein, there are other instruments purported to aid in the effort to know thyself, such as Astrology and Myers-Briggs. Astrology has also been used for divination, and that is its sad, popular ubiquity: “your daily horoscope”. Myers-Briggs takes itself more seriously, and is popular in the business world for helping managers manage and resolve conflicts. In my foolishness, even though I didn’t believe that there was a perfect language lost in antiquity, perhaps I thought I could invent one anew like Ramon Llull or Gottfried Leibniz!

Does language reveal reality or does it mask it? Can it blend and synthesize different realities or can it shape and create the very reality we inhabit? I’ve been mulling over the idea of what the next combogenetic alphakit might be, after chemical-molecular, biological-genetic, and symbolic-linguistic. Could it be something hyper-linguistic or hyper-cognitive, to serve as a perfect language, melding syntax, semantics, pragmatics? Or could it be something completely different, a blending of mathematics and philosophy?

Or even a new type of computer science? Such studies are still in their infancy, so one hopes for future breakthroughs and grand theories of logical systems and (e)valuations. Could a machine that creates reality from mere thought be the perfect language we seek, one that performatively produces no ambiguity by changing the abstract into the concrete, the inner into the outer? The Krell machine in the movie Forbidden Planet was one such hypothetical device, and it showed the folly of a scheme that granted god-like powers to mere mortals.

Possibly better is a system that starts from grounding axioms that are so simple and fundamental that all must agree with their basis, utilizes logics that are so straightforward and rational that all must agree with their validity, demonstrates proofs that are so rigorous that all must agree with their worth, all enabled by overarching schemas that allow the truth of all things to vigorously and irrefutably shine. Even then, humankind might be too weak to suffer the onslaught against its fragile and flawed cogitations.

But O, what a wonderful world it might be.

Further Reading:

Umberto Eco / The Search for the Perfect Language



The Arcane Arts of Ramon Llull : the Dignities

Oh, Ramon Llull, where have you been all my life? I’m sure he’s been there all along, his death now over seven hundred years in the past, just like always. His legacy seems at first glance to be quite the essence of medieval religion and scholastic philosophy, yet significantly and obscurely different enough to be enticing to this one. And on further examination, much more.

My schema above has little to do with his grand elaborate figures, except for listing the sixteen attributes he called “dignities”. Llull’s diagrams are full of clock-like wheels within wheels, complicated tableaux, and combinatorial patterns. He wished to create a universal model to understand reality, and who wouldn’t want to discover the same? It is said that his methods are akin to an early computer science, and I’m just now starting to understand why.

The magister based the substance of his methods on his Christian faith, to which he turned with new fervor after a conversion experience in midlife. Living on Majorca, between the Christian and Muslim worlds, he felt his calling was to convert others, so traveling he went. The methods he developed to convince others of their errors in belief were quite remarkable, as was the volume of his writing.

Like Gottfried Wilhelm Leibniz, who lived four hundred years later and was influenced by him, Llull wished to automate reasoning. But instead of building mechanical devices, Llull built computers from paper and ink, rulers and drawing compasses, scissors and glue. And instead of numbers as the smallest tokens of his computer, he used abstractions (i.e. words) that he felt would be understood by everyone in exactly the same way.

For example, he enumerated these sixteen dignities or aspects of his Christian deity, although sometimes he used only the first nine. His constructions allowed one to pose questions and then obtain answers mechanistically that would convince all observers of the correctness of the result. Too bad that, according to legend, he was stoned while on his missionary work, though he lived to be eighty-two.
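Llull's Figure A paired the dignities combinatorially to generate propositions; that paper machinery can be sketched with the nine dignities of his later Art (the English glosses and the sentence template are my own):

```python
from itertools import combinations

dignities = ["Goodness", "Greatness", "Eternity", "Power", "Wisdom",
             "Will", "Virtue", "Truth", "Glory"]

# Every unordered pair of distinct dignities, as on Llull's half-matrix.
pairs = list(combinations(dignities, 2))
print(len(pairs))        # C(9, 2) = 36 two-way combinations

for a, b in pairs[:3]:   # a few of the generated propositions
    print(f"{a} is {b.lower()}; {b} is {a.lower()}.")
```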

Llull’s devices remind me of some of my pitiful charts and diagrams, and make me wonder if I may either adapt some of his techniques to my own use, or be inspired to develop others. I suspect I have locked myself into limitations by my approach, or are these constraints to my advantage? It might be hard to have spinning elements, but I can envision sliding elements like Napier’s Bones, origami-style folding and pleating, and even physical constructions like linkages and abacuses.

Now a martyr within the Franciscan Order, Llull has his feast day on June 30, which I’ve now missed. I hope to remember to repost or improve on this by next year.

Further Reading:

The memory wheel



A Rosetta Stone

Abstract of Physics, Topology, Logic and Computation: A Rosetta Stone by John Baez and Michael Stay:

In physics, Feynman diagrams are used to reason about quantum processes. In the 1980s, it became clear that underlying these diagrams is a powerful analogy between quantum physics and topology: namely, a linear operator behaves very much like a “cobordism”. Similar diagrams can be used to reason about logic, where they represent proofs, and computation, where they represent programs. With the rise of interest in quantum cryptography and quantum computation, it became clear that there is an extensive network of analogies between physics, topology, logic and computation. In this expository paper, we make some of these analogies precise using the concept of “closed symmetric monoidal category”. We assume no prior knowledge of category theory, proof theory or computer science.

  • Physics
  • Logic
  • Topology
  • Computation

Perhaps Category Theory is a “Fifth Essence”?

Further Reading:

John Baez, Michael Stay / Physics, Topology, Logic and Computation: A Rosetta Stone

[*9.168, *10.50]


Speak, Listen, Write, and Read

Here’s another simple fourfold and maybe sixfold.

Speaking, Listening, Writing, and Reading are commonly presented together in elementary education as interrelated language skills. One is speaking for a listener and one is writing for a reader. One is listening to a speaker and one is reading a writer.

In the computer age another pair needs to be mentioned, that of programming for computers and the execution or running of that code by the computer. In a way, this new pair doesn’t fit, since one is writing code for computers, not people. And the execution of the program is not performed by a person, but by a computer.

(That’s not entirely true. Programs are also written for other human programmers in mind so that they can debug or maintain or modify the code if the original programmer isn’t available. Structured programming is one method to simplify the logical organization of the program so that others can comprehend it more readily. Object oriented programming is another method to allow multiple programmers to work independently without conflict.)

  • Speak – Listen
  • Write – Read
  • Program – Execute

But perhaps there is a different way to understand these duals. A speaker understands that a listener is following their speech by their response. A writer understands that a reader is comprehending their writing by their response. A programmer understands that a computer is ‘understanding’ the code by its response or output when the program is run.

Also, one can consider speech and writing to be encodings of thoughts into physical representations, and listening and reading to be decoding of the representations back into thoughts. Running or executing a program is not really decoding, or is it? But it is something like processing the speech or writing, like a computer is processing the program.

One might say that listening and reading are like processing the speech and text as programs on the computer of our brains. They are normally thought to be processed as data, as in Natural Language Processing, but it is an interesting twist if one considers them as programs. (Actually, I just recalled that the 1992 science-fiction novel “Snow Crash” by Neal Stephenson used this notion.)

To program effectively the programmer must execute their code in their mind, at least piecemeal and partially, just as a speaker or a writer must listen as they speak and read over what they have written. They can’t understand the full effect of the program’s execution, especially once the program becomes larger than a few statements, just as the full effect of speech or writing that is being processed by another person cannot be completely understood.

Not considered are computers themselves writing programs for other computers to “read” or execute. As the science of artificial intelligence becomes mature, computers writing and reading among themselves may become a common thing. Wasn’t there a news article about that recently? They pulled the plug on that pretty fast.

As digital assistants become more ubiquitous, they are fully participating in our language games of speaking, listening, writing, and reading. Will these so-called virtual assistants program for us next, as in Automatic Programming? That day may already be here.

Further Reading:

N. Katherine Hayles / My Mother was a Computer: digital subjects and literary texts

Neal Stephenson / Snow Crash


[*9.50, *10.36, *10.46]


The Duality of Time and Information, V3

The states of a computing system bear information and change time, while its events bear time and change information.

— from The Duality of Time and Information by Vaughan Pratt

The most promising transformational logic seems to us to be Girard’s linear logic.

— from Rational Mechanics and Natural Mathematics by Vaughan Pratt

Here we have three duals:

  • Information – Time
  • States – Events
  • Bear – Change

Further Reading:

Vaughan Pratt / The Duality of Time and Information

Vaughan Pratt / Time and Information in Sequential and Concurrent Computation

Vaughan Pratt / Rational Mechanics and Natural Mathematics



A Digital Universe, V2

A digital universe – whether 5 kilobytes or the entire Internet – consists of two species of bits: differences in space, and differences in time. Digital computers translate between these two forms of information – structure and sequence – according to definite rules. Bits that are embodied as structure (varying in space, invariant across time) we perceive as memory, and bits that are embodied as sequence (varying in time, invariant across space) we perceive as code. Gates are the intersections where bits span both worlds at the moments of transition from one instant to the next.

— George Dyson, from Turing’s Cathedral

Further Reading:

George Dyson / Turing’s Cathedral: the origins of the digital universe


Embodied as Structure, Perceived as Memory

Invariant across Time: ¬ΔT
Varying in Space: ΔS

Embodied as Sequence, Perceived as Code

Varying in Time: ΔT
Invariant across Space: ¬ΔS

[*7.82, *7.83, *7.153, *10.14]