Comments on "The mechanics of embodiment: A
dialogue on embodiment and computational modeling."
By Pezzulo, G., Barsalou, L.W., Cangelosi, A., Fischer, M.H., McRae, K., and
Spivey, M. Frontiers in Psychology, 2(5), 1-21 (2011).
I read this extraordinary paper in May 2012. It reviews the state of the art
and current trends in Computational Psychology, particularly the fundamental
aspects of grounding, embodiment, and situatedness. I found it very close to my
own thinking, even though I am a Physicist/Mathematician and no expert in
Psychology. I am sure this paper will be influential and goal-setting for years
to come. It inspired much thinking, and I share the resulting comments in this
short article.
1. THE CAUSAL MODEL OF THE BRAIN
While working with the Mathematics of causal sets in the last few years, I came
to realize how important they are for understanding high brain function. Causal
sets are used in Physics to capture cause-effect relationships in causal
systems. The brain is a causal system, and causal theory applies to its function
just as well as it applies to the function of any other causal system.
In the context of causal theory, I will address grounding and embodiment first.
Consider any sensor, for example a TV camera where light coming from the world
illuminates a "retina" made of pixels, and is converted into electric signals.
Those signals are a causal set. Each pixel receives a beam of light and
generates an electrical signal. That is a cause-effect relationship; it is that
simple. There is no need to understand how the pixel works. The output cable
from the camera carries a giant, constantly changing causal set, consisting of
millions of unrelated, completely independent signals. This causal set is said
to be in an *unbound* or *unstructured* state, because there is no binding among
the various signals.
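The unbound state is easy to render concrete in code. A minimal sketch, assuming nothing beyond the text above (the names `Signal`, `frame`, and `relations` are my own illustration, not part of any formal theory):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    """One cause-effect pair: a pixel (cause) emitting a pulse (effect)."""
    pixel: int    # which sensor element produced the pulse
    value: float  # magnitude of the electrical signal

# An unbound causal set: a bare collection of signals with NO relations
# among them -- no ordering, no grouping, no meaning.
frame = {Signal(pixel=i, value=0.5) for i in range(6)}

# The only structure present is inside each Signal (its own cause->effect
# pair); the signals themselves "don't know each other".
relations = set()  # empty: nothing binds one signal to another

print(len(frame), len(relations))  # 6 independent signals, 0 relations
```

The snowfall image below maps directly onto this sketch: `frame` is the flakes, and the empty `relations` is the missing pattern.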
Consider now the retina of the human eye. It is just the same thing. The optical
nerve carries a giant, constantly changing causal set of action potentials, in
an unbound state, and delivers it to the brain.
Consider now any of the other senses. The stereocilia in the inner ear convert
sound pressure patterns into electrical signals, and the auditory nerve carries
them to the brain. A finger touching Braille dots sends an unbound causal set to
the brain via afferent sensory nerves. Even chemical signals are describable by
unbound causal sets.
The brain receives only unbound causal sets. Those causal sets carry no
information whatsoever regarding meaning. If you are a teacher and teach a
class, your voice is converted into an unbound causal set representation before
it even reaches any of your pupils' brains. The brain makes its own meaning
on the go, from the causal sets, or causets, it receives. Someone has compared
an unbound causal
set to a snowfall. The flakes are beautiful, and they fall, but they don't know
each other. And they don't know how to make patterns. This is a central concept
in this article, and the reader is advised to take it very seriously, because it
is not so easy to grasp. A surprise is coming before the end of this article.
Next, consider the output from the brain. The brain sends its output out via
efferent nerves. The output also consists of causal sets. But this time the
causal sets are in a *bound* state. They are organized and structured, and they
carry meaning. Causal sets are isomorphic to algorithms, or "behaviors." They
directly control the behavior of the muscles that make us write, or speak,
or move, and the centers that control the chemistry of the body, such as the
levels of hormones and other chemicals. If you are the teacher, they control
your tongue and your gestures; if you write, they control your hand.
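To make "causal sets are isomorphic to algorithms" concrete, here is a toy sketch of my own (not the author's formalism): a bound causal set is a set of cause-effect relations among actions, and any ordering that respects those relations is an executable behavior. Python's standard `graphlib` can produce such an ordering:

```python
from graphlib import TopologicalSorter

# A bound causal set for a simple motor behavior. Each entry maps an
# action to the set of actions that must causally precede it.
bound = {
    "orient_hand": set(),
    "reach": {"orient_hand"},
    "grasp": {"reach"},
    "lift": {"grasp"},
}

# Because the set is bound (structured), it can be read off directly as
# an algorithm: a sequence of motor commands respecting every relation.
schedule = list(TopologicalSorter(bound).static_order())
print(schedule)  # ['orient_hand', 'reach', 'grasp', 'lift']
```

An unbound set, by contrast, has no such readable order: with no relations among its elements, every sequence is as meaningless as any other.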
But then, what happens in-between? How do unbound causal sets become bound? What
process converts unbound causal sets into bound causal sets? I said above that
the brain is a causal system, and causal theory should apply to it just as it
applies to any other causal system, no matter how complex the
implementation. A causal system is mathematically described by a causal set. If
the brain receives unbound causal sets as input, sends out bound causal sets as
output, and is itself mathematically described as a causal set, then it is
natural to assume that the conversion process is based on a causal set as well,
and that the brain itself works as a causal set. This is the central hypothesis
of this effort. It is a *working hypothesis*, meaning that we are going to use
it for all our future work, unless it is disproved by experimental evidence. It
is also known as the *principle of feature binding*.
2. BINDING
Having made that hypothesis, we can now forget about the brain and start
considering the entire process, input, binding, and output, in terms of causal
sets alone. We have created a mathematical model of the brain, and are going to
build a theory on top of that model. Now, consider the most fascinating part,
the process of binding.
The process of binding is a form of inference. Inference is a part of logic
that allows one to derive new facts from known facts. Clearly, binding does
allow us to derive new facts such as structure, organization and meaning, none
of which exists in the unbound sets. So binding is a form of inference.
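As a purely generic illustration of "binding as inference" (a co-occurrence heuristic of my own, not the Emergent Inference discussed below), one can derive relations that exist in no single input:

```python
from collections import Counter
from itertools import combinations

# Unbound input: repeated "frames", each a bare set of signal ids with
# no relations among them.
frames = [
    {"s1", "s2", "s5"},
    {"s1", "s2"},
    {"s3", "s4"},
    {"s1", "s2", "s3"},
]

# Inference step: derive a new fact -- a binding between two signals --
# whenever they co-occur often enough (threshold chosen arbitrarily).
counts = Counter()
for frame in frames:
    for pair in combinations(sorted(frame), 2):
        counts[pair] += 1

bindings = {pair for pair, n in counts.items() if n >= 3}
print(bindings)  # {('s1', 's2')}: a relation present in no single frame
```

The derived fact, the ('s1', 's2') binding, is structure that was absent from every individual input, which is exactly what makes binding a form of inference.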
The physiologist and physicist Hermann von Helmholtz was the first to predict
this inference, ca. 1850, in his studies of vision. He called it "unconscious
inference," because we are not aware that we are doing it. The fact that this
inference is unconscious has important consequences: it means we cannot
describe it or simulate it in a computer program. But that's another story. And
von Helmholtz never formalized his prediction.
Many of the greatest minds of the 19th and 20th centuries tried to formalize
binding, among them Boole, Bertrand Russell, Church, Gödel, Hilbert, and
Turing. Wittgenstein argued the problem unsolvable in the context of
first-order Mathematical Logic. The famous and closely related
Entscheidungsproblem, posed by Hilbert in 1928, was also proved unsolvable, by
Church and Turing.
I believe I solved the binding problem a few years ago. We see binding at work
all the time in ourselves so we know there must exist a solution. If a solution
exists, someone would some day find it, one way or another. Well, that was me, I
think. I named the inference "Emergent Inference" (EI), because it pertains to
causal sets alone, irrespective of whether the brain uses it or not, or how it
uses it if it does, and because it explains the phenomena of emergence in
complex systems. EI binds a causal set; there is no argument about that. The
theory, as well as hundreds of computational experiments on causal sets
summarized in [1], confirm that. The interested reader can also see [2] and [3].
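To give a feel for what such a computation might look like, here is a toy reconstruction of my own, under stated assumptions (the actual functional and algorithm are those of [1], Section 3, not this sketch): take a causal set as (cause, effect) pairs, search the orderings that respect every pair, and keep one that minimizes the total separation between related elements. Related elements then end up adjacent, i.e., structure emerges.

```python
from itertools import permutations

# A small causal set given as (cause, effect) pairs.
pairs = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("e", "f")]
elements = sorted({x for p in pairs for x in p})

def admissible(order):
    """True if every cause precedes its effect in this ordering."""
    pos = {x: i for i, x in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in pairs)

def cost(order):
    """Total separation of related elements: the quantity to minimize."""
    pos = {x: i for i, x in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in pairs)

# Brute force is fine for six elements; the point is the principle.
best = min((o for o in permutations(elements) if admissible(o)), key=cost)
print(best, cost(best))  # ('a', 'b', 'c', 'd', 'e', 'f') 7
```

Note how the minimization pulls the related elements into two adjacent blocks, a-b-c-d and e-f: two groups that were nowhere stated in the input pairs.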
But what does this binding have to do with binding in the brain? Everything,
as confirmed by several computational experiments in which results from high
brain function were compared with results from EI.
If the brain can be described as a causal set, then it can not be described in
any other way, for example, as an artificial neural network, because artificial
neural networks do not have the properties of causal sets. Describing the brain
as an artificial neural network would be like describing a jet engine as a
coal-burning steam engine.
3. GROUNDING AND EMBODIMENT
Grounding and embodiment are now examined in the context of this causal theory
of the brain. Information that the brain needs to embody and to ground itself
comes from stereoscopic vision and hearing, touching and grabbing, the
perception via afferent nerves of the relative positions of body parts, and the
fact that sensory organs are mounted in fixed positions on those body parts. But
the brain does not "know" all that, in the sense that we use the word "know."
The brain does not know the exact geometric position where each finger or each
eye are located at every instant, or how each arm and each eye are moving.
The brain only receives unbound causal set signals from the muscles that move
the eye, or the fingers, or the head, or the limbs and hands, and also unbound
causal set signals from both eyes and both ears. If the hand touches something
that the eye is seeing, the brain receives causal information about that. That's
how infants learn to touch and grab. If the brain directs a hand to grab
something that the eye can see, the brain receives causal information about
that.
All that enormous volume of causal information arriving at the brain must be
considered as a whole. It all consists of exactly similar electrochemical
signals, the action potentials of the neurons, all of the signals completely
unrelated to each other. There is nothing in it telling the brain where each
pulse comes from, or what kind or category of information it pertains to, or how
the pulses could or could not be related to each other. There is just a pulse.
And another pulse. And another. It is just one single huge unbound causal set,
just one huge body of dispersed information, and the brain's task is to make
sense of it as a whole.
Isn't this the same problem that robots confront? Or the same problem that AI
confronts? Or the same problem that Software Engineering confronts, where human
analysts are unavoidably required to create the objects in object-oriented
designs? Each one of those is a different story, but they all converge on EI.
The brain does not run any algorithms or do any calculations. All it does is
binding, by way of EI. The unsuspecting reader may have been expecting me to
provide an explanation, that is, an algorithm, that would reconcile grounding
and embodiment information coming from various different sources, such as
vision, hearing, or touch, and would ultimately allow one to "calculate",
mathematically, the exact geometric positions of the sensory organs or the body
parts relative to each other or relative to some external fixed system of
coordinates. Then, for example in Robotics, one would be able to use other
algorithms to control physical interactions among robots or between robots and
humans. I have already answered that: it is not an algorithm, but a process
that generates the algorithms you need. Each robot, or human, receives
information via sensors or senses, and binds it by way of EI to directly obtain
the algorithms needed for controlling the body or acting upon the environment.
The brain *interprets* all the information without the need for any coordinate
transformations. That's why, if the tongue of a blind person is connected to
electrodes activated from a camera, the brain will learn to see through that
camera. All this can now be done on a computer that is running EI. At least one
such machine has been built by the author (my PC). It works, in small problems
for now, and it has been operational for some time. That's it. This is the
surprise I promised.
In the process of binding, the brain generates behaviors, which we also call
*algorithms*. This statement explains the origin of algorithms. The behaviors,
or algorithms, are used to control muscles in the body. Or, they can be copied
to a computer and become computer programs. But that's another story.
In my view, binding in the brain is unconscious. It is the process whereby
inter-neural connections grow shorter (by what process, I don't know). By
doing so they cause the binding to occur. The memory itself is conscious.
When synaptic contraction imposes structure on memory and neural cliques
appear, the inter-clique connections start contracting themselves, causing a
new level in the structural hierarchy to appear. And so on, all the way up to
high brain function. It's a start.
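This contraction picture resembles agglomerative clustering, which suggests a rough computational analogy (my own analogy, not a neural model): repeatedly contract the shortest remaining connection, let cliques form, and watch a hierarchy of levels appear.

```python
from itertools import combinations

# Toy "neurons" with 1-D positions; connection length = distance.
points = {"n1": 0.0, "n2": 0.5, "n3": 5.0, "n4": 5.5}
clusters = [{name} for name in points]

def length(c1, c2):
    """Shortest connection between two cliques (single linkage)."""
    return min(abs(points[a] - points[b]) for a in c1 for b in c2)

# Each pass contracts the shortest inter-clique connection, merging its
# endpoints; the sequence of merges is the structural hierarchy.
hierarchy = []
while len(clusters) > 1:
    c1, c2 = min(combinations(clusters, 2), key=lambda p: length(*p))
    clusters.remove(c1)
    clusters.remove(c2)
    clusters.append(c1 | c2)
    hierarchy.append(sorted(sorted(c) for c in clusters))

print(hierarchy)
```

First the short n1-n2 and n3-n4 connections contract into cliques, then the cliques themselves merge: a new level of structure appears at each pass.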
4. CONCLUSIONS
Two very different issues were discussed in this article. One is the
mathematical theory of causality, based on causal sets, and on emergent
inference, a new form of logical inference that is a mathematical property of
causal sets. The theory is currently solid. The foundation of the theory has
been proposed in Section 3 of [1], and hundreds of examples of its application
have been summarized in Section 4 of [1]. The theory is also self-consistent,
meaning that it does not depend on any hypothesis about the brain or another
physical system.
The second issue is the hypothesis that the brain uses emergent inference to
create meaning and bind incoming unbound causal sets into bound outgoing
behaviors. To test the hypothesis, a small number of computational experiments
have been performed, where the output from a machine running emergent inference
was compared with the output generated by a human analyst when both received the
same input. They are summarized in Section 4.5 of [1]. No disagreements with the
hypothesis were found. Of course, more experiments, on a larger scale, are
needed. But the hypothesis will always remain a hypothesis. No amount of
experimental work can ever prove a hypothesis.
I know, on my own and independently, that our knowledge is grounded in sensory
and motor experiences. I need no convincing about that. But "The mechanics of
embodiment" is enormously important because it represents the exact convergence
of two very different lines of thought to the same target. It provides me with
an extensive observational verification of my hypothesis, and at the same time
it provides you with a theoretical infrastructure from which detailed
experiments can be planned.
This is not, however, my first "convergence of very different lines of thought."
I'll tell you about this some other time.
The approach presented here satisfies all three requirements proposed in "The
mechanics of embodiment." Cognition is not studied as a module independent from
sensory and motor modules. The representation (a causal set) is grounded from
the start, and all its processing (EI) is fully grounded as well. It could be
said that the representation is multi-modal and remains as such during
processing. But the notion of modal/amodal is a little different here, because
the architecture of the system is a consequence of the process, not a
pre-requisite. Modalities have a hierarchical structure, and EI is strictly
hierarchical (see [1], Section 4). EI is the "principle of feature binding."
Abstraction and abstract thought are built atop sensorimotor experience, as a
higher level in the EI hierarchy, by reusing sensorimotor patterns (naturally,
as a property of EI).
Brain dynamics is an area where more evidence is needed. The main question here
would be whether the brain can support EI, and how. In [1], I have proposed a
simple model where neurons first connect to support memory, and then shrink or
tighten their connections as much as they can. If these features alone can be
demonstrated, that would be sufficient to argue in favor of the model and to
plan more
experiments. There is some encouraging evidence: the existence of neural
cliques, and the recent (2012) proposal of underlying simplicity in the brain.
More is needed. Perhaps brain-on-a-dish experiments can help.
The requirements are satisfied naturally, as a property intrinsic to causal
sets. It is not that I have somehow "invented" EI, or "engineered" or
"adjusted" its functional in such a way that it works. I only found EI: I
discovered it in the course of research; it was there all the time. All we
humans need to do is use it.
The thinking in this article shows how utterly inadequate the efforts are of
those who try to model grounding problems by way of geometric constructs and
centers of gravity. It also argues that grounding and embodiment are
inextricably integrated with the rest of the brain's processes. These concepts
acquire meaning and become "grounding" or "embodiment" only after the binding
process is complete, and as a result of the binding process. It is only our
heuristic thinking that came up with the names once the meaning was there.
Trying to model them with a traditional computational approach would make no
sense, and would never succeed. EI is uncomputable.
Cross-fertilization between disciplines such as Computational Psychology and
Robotics will not help. Neither one has applied EI yet. The correct course of
action is to introduce EI in both disciplines, and then let them mutually
cross-fertilize.
REFERENCES
[1] S. Pissanetzky. Emergence and self-organization in partially ordered sets.
Complexity (Wiley InterScience), Vol. 17, Issue 2, pp. 19-38 (2011). First
published online: 22 Oct 2011. DOI: 10.1002/cplx.20389. Note: the type of
partially ordered sets discussed in this publication are known as causal sets,
or causets.
[2] S. Pissanetzky. Structural Emergence in Partially Ordered Sets is the Key
to Intelligence. Lecture Notes in Artificial Intelligence, a subseries of
Lecture Notes in Computer Science, LNAI 6830, pp. 92-101. Springer-Verlag.
[3] S. Pissanetzky. A New Universal Model of Computation and its Contribution
to Learning, Intelligence, Parallelism, Ontologies, Refactoring, and the
Sharing of Resources. International Journal of Information and Mathematical
Sciences, Vol. 5, No. 2, pp. 143-173 (2009). Published online: August 22,
2009. Note: "International Journal of Information and Mathematical Sciences"
(IJIMS) is the new name of "International Journal of Computational
Intelligence" (IJCI).