Category Archives: Computation

Information and Computation

I’ve long wished for a succinct definition for the notion of information. What is it? What makes something information and another thing disinformation? Why do people accept things as truth that others know as false? Is information something that informs the physical universe, or is it only a way that the human mind tries to structure itself? The nonhuman world seems to get along just fine without knowing anything, doesn’t it?

Here is an idea for a simple schema for information that utilizes notions from other posts.

  • Distinction = an action that tells or makes a Difference.
  • Difference = a fact (or part) that is told or is made by a Distinction.
  • Information = a structured collection of facts (or parts).
  • Computation = a functional collection of actions.

In relation to the Four Causes, Whitehead’s Criteria, and Structure-Function, we see:

  • Efficient: Consistency: Action: Distinction
  • Material: Applicability: Part: Difference
  • Formal: Coherence: Structure: Information
  • Final: Adequacy: Function: Computation

And so:

  • A Distinction is an action that tells or makes a Difference, consistently and efficiently.
  • A Difference is a fact or part that is told or is made by a Distinction, and applies materially.
  • Information is a structured collection of Differences (as facts or parts) that has a formal coherence.
  • A Computation is a functional collection of Distinctions (as actions) that is adequate for an end or purpose.

Further Reading:

https://grammar.collinsdictionary.com/us/english-usage/what-is-the-difference-between-difference-and-distinction

https://en.wikipedia.org/wiki/Information

https://www.informationphilosopher.com/

https://plato.stanford.edu/entries/facts/

https://plato.stanford.edu/entries/action/

https://plato.stanford.edu/entries/logic-information/

https://plato.stanford.edu/entries/computation-physicalsystems/

https://en.wikipedia.org/wiki/Material_implication_(rule_of_inference)

[*9.26, *9.106, *13.196, *13.200]

<>


Linear Process Algebra

One of computer scientist and Professor Emeritus Vaughan Pratt’s most recent conference papers is on “linear process algebra,” which ties together several of his long-standing interests: linear logic, Chu spaces, concurrent processes, events and states, etc.

The paper opens with a nice overview of computer science research concerned with concurrent processes. Computation itself divides into logical and algorithmic aspects, formal methods into logical and algebraic ones, and concurrent computation into operational and denotational models; the author then briefly surveys models of processes built from a variety of mathematical structures until he arrives at his theme of using Chu spaces.

As an example, he presents processes as Chu spaces over the set K, where K = {0, T, 1, X}, with names and meanings:

  • 0: Ready
  • T: Transition
  • 1: Done
  • X: Cancelled

and then details four binary operations as working in Chu spaces over K:

  • P ; Q: Sequence
  • P + Q: Choice
  • P || Q: Concurrence
  • P ⊗ Q: Orthocurrence
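One concrete way to picture this (a hand-made toy encoding of mine, not a construction from the paper): a process is given by its events together with the whole-process states it allows, each state assigning a value from K to every event. The sequence a;b of two atomic events, for instance, admits exactly the states in which b has not started until a is done:

```python
# A toy tabulation of a process as a Chu space over K (my own illustrative
# encoding, not Pratt's formal construction): rows are the allowed
# whole-process states, columns are events, entries are drawn from K.
K = {'0', 'T', '1', 'X'}        # Ready, Transition, Done, Cancelled

events = ['a', 'b']             # the sequence a;b of two atomic events
states = [('0', '0'),           # neither has started
          ('T', '0'),           # a in transition, b still ready
          ('1', '0'),           # a done, b still ready
          ('1', 'T'),           # b in transition
          ('1', '1')]           # both done

def is_chu_space(events, states, K):
    """Check that every state assigns exactly one K-value to each event."""
    return all(len(row) == len(events) and set(row) <= K for row in states)
```

States such as ('0', '1'), where b finishes before a has started, are simply absent from the table; which states are admitted is what distinguishes sequence from choice or concurrence.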

Further Reading:

Vaughan Pratt / Linear Process Algebra

bhub.pdf

lpa.pdf

bud.pdf

https://www.researchgate.net/publication/2663060_Chu_Spaces_A_Model_Of_Concurrency

https://www.researchgate.net/publication/222310260_Types_as_Processes_via_Chu_spaces

https://en.wikipedia.org/wiki/Vaughan_Pratt

https://dblp.org/pid/p/VRPratt.html

[*12.122]

<>

Classes of Automata

Here is one way to carve up abstract (mathematical) automata into different classes of complexity (from low to high).

There is a somewhat different and older classification based on the Chomsky Hierarchy (where ND stands for non-deterministic), which pairs each class of automaton with the language or grammar it recognizes:

  • Finite Automaton: Regular (Type-3)
  • ND Pushdown Automaton: Context-Free (Type-2)
  • Linear Bounded Automaton: Context-Sensitive (Type-1)
  • Turing Machine: Recursively Enumerable (Type-0)
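At the low end of the Chomsky Hierarchy sit the finite automata and the regular languages they recognize. A minimal sketch of a deterministic finite automaton, here one accepting binary strings that contain an even number of 1s:

```python
# A deterministic finite automaton, the machine class at the bottom of the
# hierarchy, recognizing a regular language: binary strings with an even
# number of 1s.
def dfa_accepts(s, delta, start, accepting):
    state = start
    for ch in s:
        state = delta[(state, ch)]   # total transition function
    return state in accepting

# States: 'E' = even number of 1s seen so far, 'O' = odd.
even_ones = {('E', '0'): 'E', ('E', '1'): 'O',
             ('O', '0'): 'O', ('O', '1'): 'E'}
```

For example, dfa_accepts('1101', even_ones, 'E', {'E'}) is False, since that string contains three 1s.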

Further Reading:

https://en.wikipedia.org/wiki/Automata_theory

https://en.wikipedia.org/wiki/Chomsky_hierarchy

[*11.138]

<>

Conditional Branching is Not Necessary

Four simple instructions are sufficient to evaluate any Turing-computable function using self-modifying programs: LOAD, STORE, GOTO, and INC (increment).
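A minimal sketch of the trick (my own toy machine, not Rojas’s exact encoding): since programs live in the same memory as data, a conditional branch can be simulated by STOREing a computed address into the operand cell of a GOTO. A HALT opcode is added here only to end the demo, and the flag is assumed pre-scaled to 0 or 2 so that it indexes a two-entry jump table directly:

```python
# A toy von Neumann machine with LOAD, STORE, GOTO, INC (plus HALT to stop
# the demo). Each instruction occupies two cells: opcode, operand. Because
# code and data share memory, STOREing into a GOTO's operand cell yields a
# data-dependent jump -- branching without a branch instruction.
def run(mem, pc=0, acc=0, limit=1000):
    for _ in range(limit):
        op, arg = mem[pc], mem[pc + 1]
        if   op == 'LOAD':  acc = mem[arg];  pc += 2
        elif op == 'STORE': mem[arg] = acc;  pc += 2
        elif op == 'INC':   acc += 1;        pc += 2
        elif op == 'GOTO':  pc = arg
        elif op == 'HALT':  return mem
    raise RuntimeError('step limit exceeded')

def branch_demo(flag):  # flag is assumed pre-scaled to 0 or 2
    mem = ['GOTO', 10,       #  0: skip over the jump table
           'GOTO', 20,       #  2: table entry taken when flag == 0
           'GOTO', 24,       #  4: table entry taken when flag == 2
           0, 0, 0, 0,       #  6: padding
           'LOAD', 30,       # 10: acc = flag
           'INC', 0,         # 12: acc = flag + 1
           'INC', 0,         # 14: acc = flag + 2 (= 2 or 4, a table address)
           'STORE', 19,      # 16: patch the operand of the GOTO below
           'GOTO', 0,        # 18: target rewritten at runtime -- the "branch"
           'LOAD', 31,       # 20: branch taken when flag == 0
           'GOTO', 26,       # 22: rejoin
           'LOAD', 32,       # 24: branch taken when flag == 2
           'STORE', 33,      # 26: write the result
           'HALT', 0,        # 28
           flag, 100, 200, None]  # 30: flag, two constants, result cell
    return run(mem)[33]
```

branch_demo(0) takes the first branch and branch_demo(2) the second, with no conditional instruction anywhere in the machine.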

Further Reading:

Raul Rojas / Conditional Branching is Not Necessary for Universal Computation in von Neumann Computers, Journal of Universal Computer Science 2, 11 (1996), 756–768

William F. Gilreath, Phillip A. Laplante / Computer Architecture: A Minimalist Perspective

[*11.65]

<>

The Pi Calculus

My previous post on Wolfram’s physics mentioned the Pi calculus, but I liked this little diagram so much I decided to let it have its own mention. The rules aren’t really four in number, but oh well.

  • (νx)P: create a channel named x, then do P
  • x(y).P: receive y over channel x, then do P
  • x̄⟨y⟩.P: send y over channel x, then do P
  • P|Q: do P and Q at the same time
  • !P: do P over and over until stopped
  • 0: stop
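The heart of the calculus is the communication rule: a send and a receive on the same channel meet, and the sent name is substituted for the bound one. A toy sketch of just that rule (terms as nested tuples; no alpha-conversion, replication, or channel restriction, so strictly illustrative):

```python
# Toy pi-calculus terms as nested tuples:
#   ('send', chan, msg, cont)  ~  chan<msg>.cont
#   ('recv', chan, var, cont)  ~  chan(var).cont
#   ('par', p, q)              ~  p | q
#   ('nil',)                   ~  0
def subst(proc, var, val):
    """Replace free occurrences of name `var` by `val`."""
    tag = proc[0]
    if tag == 'nil':
        return proc
    if tag == 'send':
        _, ch, msg, cont = proc
        return ('send', val if ch == var else ch,
                val if msg == var else msg, subst(cont, var, val))
    if tag == 'recv':
        _, ch, bound, cont = proc
        ch = val if ch == var else ch
        if bound == var:                  # var is shadowed under the binder
            return ('recv', ch, bound, cont)
        return ('recv', ch, bound, subst(cont, var, val))
    if tag == 'par':
        return ('par', subst(proc[1], var, val), subst(proc[2], var, val))

def step(proc):
    """Fire the communication rule on a top-level pair: x<y>.P | x(z).Q -> P | Q{y/z}."""
    if proc[0] == 'par':
        for a, b in ((proc[1], proc[2]), (proc[2], proc[1])):
            if a[0] == 'send' and b[0] == 'recv' and a[1] == b[1]:
                return ('par', a[3], subst(b[3], b[2], a[2]))
    return None

# x<v>.0 | x(z).z<w>.0  reduces in one step to  0 | v<w>.0
ex = ('par', ('send', 'x', 'v', ('nil',)),
             ('recv', 'x', 'z', ('send', 'z', 'w', ('nil',))))
```

Note how the received name v becomes the channel of the continuation’s send: channels are themselves the data, which is what gives the calculus its mobility.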

Further Reading:

https://en.wikipedia.org/wiki/%CE%A0-calculus

https://en.wikipedia.org/wiki/Process_calculus

https://en.wikipedia.org/wiki/Game_semantics

https://golem.ph.utexas.edu/category/2009/08/the_pi_calculus.html

[*12.34]

<>

The Wolfram Physics Project

When I first started looking at Stephen Wolfram’s latest proposal to solve physics, I was somewhat disappointed. I was rather fond of his previous “New Kind of Science” based on the structural rigidity of cellular automata. However, I am now intrigued by his latest ideas, based on the looser but more flexible basis of networks.

And once you have pithy statements relating space, time, energy, and matter (as momentum), you have my attention:

  • Energy is flux of causal edges through spacelike hypersurfaces
  • Momentum is flux of causal edges through timelike hypersurfaces

I confess I haven’t read much about the project yet, but it seems to use rewriting rules, perhaps similar in spirit to the update rules of Wolfram’s previous framework, cellular automata. Of course, both cellular automata and rewriting systems can be computationally universal, that is, Turing complete.
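String rewriting of this kind is easy to sketch: a semi-Thue system repeatedly replaces an occurrence of some rule’s left-hand side until no rule applies. With the single rule 1+1 → 11, a toy system computes unary addition:

```python
# A toy string-rewriting (semi-Thue) system: repeatedly replace the first
# occurrence of any rule's left-hand side until no rule applies.
def rewrite(s, rules, limit=1000):
    for _ in range(limit):
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)  # rewrite one occurrence
                break
        else:
            return s                        # no rule applies: a normal form
    return s

# Unary addition with a single rule: "1+1" -> "11".
add_rules = [('1+1', '11')]
```

For example, rewrite('11+111', add_rules) yields '11111', i.e. 2 + 3 = 5; the '+' signs are erased one rewrite at a time.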

Another idea might be to try some sort of computational metaphysics between nodes, like the Pi calculus (or some other process calculus). After all, you have to support quantum entanglement somehow! However, if you can encode everything with simpler structures, then do it!

Further Reading:

https://www.wolframphysics.org/

https://www.wired.com/story/stephen-wolfram-invites-you-to-solve-physics/

How We Got Here: The Backstory of the Wolfram Physics Project

https://en.wikipedia.org/wiki/Digital_physics

https://www.scientificamerican.com/article/physicists-criticize-stephen-wolframs-theory-of-everything/

https://turingchurch.net/computational-irreducibility-in-wolframs-digital-physics-and-free-will-e413e496eb0a


Cellular automata:

https://en.wikipedia.org/wiki/Cellular_automaton

Note this quote for future reference:

The primary classifications of cellular automata, as outlined by Wolfram, are numbered one to four. They are, in order, automata in which patterns generally stabilize into homogeneity, automata in which patterns evolve into mostly stable or oscillating structures, automata in which patterns evolve in a seemingly chaotic fashion, and automata in which patterns become extremely complex and may last for a long time, with stable local structures. This last class are thought to be computationally universal, or capable of simulating a Turing machine.
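The elementary cellular automata behind this classification are easy to simulate; rule 110 is the classic class-four case, proved computationally universal by Matthew Cook. A one-step update on a cyclic row (a minimal sketch):

```python
# One synchronous update of an elementary cellular automaton on a cyclic row.
# Each cell's new value is the bit of `rule` indexed by its three-cell
# neighborhood (left*4 + center*2 + right).
def eca_step(cells, rule=110):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]
```

A single live cell under rule 110 grows leftward: eca_step([0, 0, 0, 1, 0, 0, 0]) yields [0, 0, 1, 1, 0, 0, 0]. Iterating the step and stacking the rows produces the familiar triangular textures.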

Rewriting:

https://en.wikipedia.org/wiki/Rewriting

https://en.wikipedia.org/wiki/Semi-Thue_system

[*12.32]

<>

Combogenesis and Alphakits

Let us calculate!

— Gottfried Wilhelm Leibniz

I admit it, I’m rather a Utopian.

Perhaps I’ve been thinking all this time that it should be possible to find a reduced set of words, symbols, or even concepts that could serve as a basic core of human expression and being, some kind of fundamental proto-language that might cut across all cultures and yet connect all individuals. Something to undo the “Tower of Babel” and be able to heal all misunderstandings, resolve all disagreements, and find everyone’s common ground. I see now I have fallen down the “perfect language” rabbit-hole.

Of course, along with our imperfect languages we also have to deal with our imperfect thoughts and our imperfect feelings. Not only do we want to hide what we’re really thinking and feeling from others, we want to hide it from ourselves. Is it because we don’t want others to know the weakness and darkness within us, or we don’t want to face those parts of our own identities? Perhaps that is the main problem with language, the ease with which we can lie to both ourselves and others, and our eagerness to accept these lies.

Psychology is supposed to help us understand ourselves better. But before that, there were Tarot decks, Ouija boards, and the I Ching, which were supposed to illuminate our thoughts and actions, and help us see, however dimly, a little more clearly into the past and future. I’m sure I’m not alone in thinking that such devices merely bring concepts to the forefront of the conscious mind and allow one to engage in creative and playful thinking. Maybe they tie into the “unconscious”, whatever that really means; if not, then what is the source of their utility?

In the same vein, there are other instruments purported to aid in the effort to know thyself, such as Astrology and Myers-Briggs. Astrology has also been used for divination, hence its sad and popular ubiquity as “your daily horoscope”. Myers-Briggs is popular in the business world, where it helps managers manage and resolve conflicts, and it takes itself more seriously. In my foolishness, even though I didn’t believe that there was a perfect language lost in antiquity, perhaps I thought I could invent one anew like Ramon Llull or Gottfried Wilhelm Leibniz!

Does language reveal reality or does it mask it? Can it blend and synthesize different realities or can it shape and create the very reality we inhabit? I’ve been mulling over the idea of what the next combogenetic alphakit might be, after chemical-molecular, biological-genetic, and symbolic-linguistic. Could it be something hyper-linguistic or hyper-cognitive, to serve as a perfect language, melding syntax, semantics, pragmatics? Or could it be something completely different, a blending of mathematics and philosophy?

Or even a new type of computer science? Such studies are still in their infancy, so one hopes for future breakthroughs and grand theories of logical systems and (e)valuations. Could a machine that creates reality from mere thought be the perfect language we seek, one that performatively produces no ambiguity by changing the abstract into the concrete, the inner into the outer? The Krell machine in the movie Forbidden Planet was one such hypothetical device, and it showed the folly of a scheme that granted god-like powers to mere mortals.

Possibly better is a system that starts from grounding axioms that are so simple and fundamental that all must agree with their basis, utilizes logics that are so straightforward and rational that all must agree with their validity, demonstrates proofs that are so rigorous that all must agree with their worth, all enabled by overarching schemas that allow the truth of all things to vigorously and irrefutably shine. Even then, humankind might be too weak to suffer the onslaught against its fragile and flawed cogitations.

But O, what a wonderful world it might be.

Further Reading:

Umberto Eco / The Search for the Perfect Language

The Arcane Arts of Ramon Llull : the Dignities

Combogenesis: a constructive, emergent cosmos

The Myers-Briggs Type Indicator

The Twelve Houses of the Zodiac

The Tempest and Forbidden Planet

The 64 Hexagrams of the I Ching

https://en.wikipedia.org/wiki/Hedgehog%27s_dilemma

[*11.132]

<>

The Arcane Arts of Ramon Llull : the Dignities

Let us calculate!

— Gottfried Wilhelm Leibniz

Oh, Ramon Llull, where have you been all my life? I’m sure he’s been there all along, his death now over seven hundred years in the past, just like always. His legacy seems at first glance to be quite the essence of medieval religion and scholastic philosophy, yet it is significantly and obscurely different enough to be enticing to this one. And on further examination, much more.

My schema above has little to do with his grand elaborate figures, except for listing the sixteen attributes he called “dignities”. Llull’s diagrams are full of clock-like wheels within wheels, complicated tableaux, and combinatorial patterns. He wished to create a universal model to understand reality, and who wouldn’t want to discover the same? It is said that his methods are akin to an early computer science, and I’m just now starting to understand why.

The magister based the substance of his methods on his Christian faith, to which he devoted himself after a conversion experience in midlife. Living in Majorca, between the Christian and Muslim worlds, he felt his calling was to convert others, so traveling he went. The methods he developed to convince others of their errors in belief were quite remarkable, as was the volume of his writing.

Like Gottfried Wilhelm Leibniz, who lived four hundred years later and was influenced by him, Llull wished to automate reasoning. But instead of building mechanical devices, Llull built computers from paper and ink, rulers and drawing compasses, scissors and glue. And instead of numbers as the smallest tokens of his computer, he used abstractions (i.e. words) that he felt would be understood by everyone in exactly the same way.

For example, he enumerated these sixteen dignities or aspects of his Christian deity, although sometimes he used only the first nine. His constructions allowed one to pose questions and then obtain answers mechanistically that would convince all observers of the correctness of the result. Too bad he was, according to tradition, stoned while on his missionary work and died soon after, although he lived to be eighty-two.

Llull’s devices remind me of some of my pitiful charts and diagrams, and make me wonder if I may either adapt some of his techniques to my own use, or be inspired to develop others. I suspect I have locked myself into limitations by my approach, or are these constraints to my advantage? It might be hard to have spinning elements, but I can envision sliding elements like Napier’s Bones, origami-style folding and pleating, and even physical constructions like linkages and abacuses.

Now a martyr within the Franciscan Order, Llull has a feast day of June 30, which I’ve now missed. I hope to remember to repost or improve on this by next year.

Further Reading:

https://en.wikipedia.org/wiki/Ramon_Llull

https://plato.stanford.edu/entries/llull/

http://trepanatus.blogspot.com/2007/06/ramon-llull.html

https://history-computer.com/Dreamers/Llull.html

https://publicdomainreview.org/2016/11/10/let-us-calculate-leibniz-llull-and-computational-imagination/

http://www.ncregister.com/blog/astagnaro/bl.-raymond-lull-and-the-worlds-first-computer

https://dangerousminds.net/comments/the_13th_century_thinking_machine_of_ramon_llull

The memory wheel

https://www.google.com/search?q=ars+magna+llull&tbm=isch

[*11.118]

<>


A Rosetta Stone

Abstract of Physics, Topology, Logic and Computation: A Rosetta Stone by John Baez and Michael Stay:

In physics, Feynman diagrams are used to reason about quantum processes. In the 1980s, it became clear that underlying these diagrams is a powerful analogy between quantum physics and topology: namely, a linear operator behaves very much like a “cobordism”. Similar diagrams can be used to reason about logic, where they represent proofs, and computation, where they represent programs. With the rise of interest in quantum cryptography and quantum computation, it became clear that there is an extensive network of analogies between physics, topology, logic and computation. In this expository paper, we make some of these analogies precise using the concept of “closed symmetric monoidal category”. We assume no prior knowledge of category theory, proof theory or computer science.

  • Physics
  • Logic
  • Topology
  • Computation

Perhaps Category Theory is a “Fifth Essence”?

Further Reading:

rose3.pdf

https://arxiv.org/abs/0903.0340

[*9.168, *10.50]

<>

Speak, Listen, Write, and Read

Here’s another simple fourfold and maybe sixfold.

Speaking, Listening, Writing, and Reading are commonly presented together in elementary education as interrelated language skills. One is speaking for a listener and one is writing for a reader. One is listening to a speaker and one is reading a writer.

In the computer age another pair needs to be mentioned, that of programming for computers and the execution or running of that code by the computer. In a way, this new pair doesn’t fit, since one is writing code for computers, not people. And the execution of the program is not performed by a person, but by a computer.

(That’s not entirely true. Programs are also written with other human programmers in mind, so that they can debug, maintain, or modify the code if the original programmer isn’t available. Structured programming is one method to simplify the logical organization of a program so that others can comprehend it more readily. Object-oriented programming is another method, one that allows multiple programmers to work independently without conflict.)

  • Speak – Listen
  • Write – Read
  • Program – Execute

But perhaps there is a different way to understand these duals. A speaker understands that a listener is following their speech by their response. A writer understands that a reader is comprehending their writing by their response. A programmer understands that a computer is ‘understanding’ the code by its response or output when the program is run.

Also, one can consider speech and writing to be encodings of thoughts into physical representations, and listening and reading to be decoding of the representations back into thoughts. Running or executing a program is not really decoding, or is it? But it is something like processing the speech or writing, like a computer is processing the program.

One might say that listening and reading are like processing the speech and text as programs on the computer of our brains. They are normally thought to be processed as data, as in Natural Language Processing, but it is an interesting twist if one considers them as programs. (Actually, I just recalled that the 1992 science-fiction novel “Snow Crash” by Neal Stephenson used this notion.)

To program effectively, the programmer must execute their code in their mind, at least piecemeal and partially, just as a speaker or a writer must listen as they speak and read over what they have written. They can’t understand the full effect of the program’s execution, especially once the program grows beyond a few statements, just as the full effect of speech or writing being processed by another person cannot be completely understood.

Not considered here are computers themselves writing programs for other computers to “read” or execute. As the science of artificial intelligence matures, computers writing and reading among themselves may become a common thing. Wasn’t there a news article about that recently? They pulled the plug on that pretty fast.

As digital assistants become more ubiquitous, they are fully participating in our language games of speaking, listening, writing, and reading. Will these so-called virtual assistants program for us next, as in Automatic Programming? That day may already be here.

Further Reading:

https://www.englishclub.com/learn-english/language-skills.htm

https://en.wikipedia.org/wiki/Natural_language_processing

https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/

https://en.wikipedia.org/wiki/Automatic_programming

N. Katherine Hayles / My Mother was a Computer: digital subjects and literary texts

Neal Stephenson / Snow Crash

https://en.wikipedia.org/wiki/Virtual_assistant_(artificial_intelligence)

Images:

https://www.google.com/search?q=speak+listen+write+read&tbm=isch&tbo=u&source=univ&sa=X&ved=0ahUKEwi14LrrsarXAhUG6CYKHaugBH4QsAQIVQ

[*9.50, *10.36, *10.46]

<>