How to Build an Intelligent Machine
My Causal Inference system consists of the following:
(1) Input from sensors. Sensors collect causal information about events.
If light impinges on a pixel in a camera, that is an event. It happens at
a given point in space - the location of the pixel in the camera - and at a
given time. The event is causal, and the input brings that causality with it.
(2) The causal set. A causal set can learn, meaning it grows with
time directly from the causal information arriving from the sensors (both the
events and the growing set are sketched in code after this list).
(3) The functional corresponding to that causal set. Very important: I
did not design, engineer, invent, or adjust the functional in any way. I
found it. It is a mathematical property of causal sets. I also found causal sets
themselves through direct observation of self-organization in computer programs.
I did not introduce them because they are better, simpler, faster, or anything
else (for the history of the discovery, please read the Introduction of my
Complexity paper). I wasn't even looking for them. It is like walking in a
wilderness and finding a treasure you didn't know was there.
(4) Something that can optimize the functional. Any physical dynamical
dissipative system that is causal can do it. Many do. (A stand-in for the
functional and the optimizer is sketched below, after the description of the
program.)
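To make components (1) and (2) concrete, here is a minimal sketch in Python.
Every name in it (Event, CausalSet, add_event) is hypothetical, and it assumes
one simple encoding of causality: each arriving event carries the indices of
its causal parents. Nothing above specifies the actual encoding; this is only
one way to picture it.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Event:
        """A sensor event: something happened at a place and a time."""
        location: tuple  # e.g. (x, y) coordinates of a pixel in the camera
        time: float      # when the event occurred

    class CausalSet:
        """A growing partially ordered set of events.

        The set "learns" only in the sense that it accumulates events,
        together with the causal relations that arrive with them.
        """
        def __init__(self):
            self.events = []        # all events observed so far
            self.relations = set()  # pairs (i, j): event i causes event j

        def add_event(self, event, causes=()):
            """Append a new event; `causes` are indices of its causal parents."""
            self.events.append(event)
            j = len(self.events) - 1
            for i in causes:
                # A cause cannot come after its effect.
                assert self.events[i].time <= event.time
                self.relations.add((i, j))
            return j

Two pixel events, the first causing the second, would then enter as:

    cs = CausalSet()
    a = cs.add_event(Event(location=(3, 7), time=0.0))
    b = cs.add_event(Event(location=(3, 8), time=0.1), causes=(a,))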
That's it. There is no computer, no program, no developer studying intelligence
for decades and writing programs to simulate it. I do, however, use a computer
to read the input from a file, optimize the functional, and print the results.
The program is always the same. It is never altered, adjusted, or engineered in
any way. It can receive an unbound or partially bound causal set as input and
produce a bound causal set as output.
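To picture what that fixed program does, here is a sketch that takes the events
and relations built above, optimizes a functional over orderings of the causal
set, and prints the result. Two loud caveats: the functional below, the total
separation between causally related events along an ordering, is a stand-in
chosen only for illustration, not the functional described above, which is
defined in the Complexity paper; and the naive local search stands in for
whatever physical dissipative system actually does the optimizing.

    import random

    def action(perm, relations):
        """Stand-in functional: total separation between causally
        related events along the ordering `perm`."""
        pos = {e: k for k, e in enumerate(perm)}
        return sum(abs(pos[i] - pos[j]) for (i, j) in relations)

    def optimize(n_events, relations, sweeps=10000):
        """Naive local search: swap adjacent events whenever the swap
        keeps every cause before its effects and lowers the functional."""
        if n_events < 2:
            return list(range(n_events))
        # The identity order is valid here: causes were added before effects.
        perm = list(range(n_events))
        for _ in range(sweeps):
            k = random.randrange(n_events - 1)
            a, b = perm[k], perm[k + 1]
            if (a, b) in relations:  # swap would put an effect before its cause
                continue
            trial = perm[:k] + [b, a] + perm[k + 2:]
            if action(trial, relations) < action(perm, relations):
                perm = trial
        return perm

    print(optimize(len(cs.events), cs.relations))

Minimizing a functional of this kind pulls causally related events next to one
another, so the output ordering clusters into groups; that clustering is one
way to picture, though not to define, what a "bound" causal set could look like.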
At this point, I expect the reader to begin to see a system that is unique,
fundamentally different from anything they have ever seen in AI/AGI. I also
expect the reader to see some similarities with the brain, and to have many,
many questions. Perhaps one of them is: since nothing else has worked so far,
why do I think my AGI will?
Solomonoff's paradox helps here.
"A heuristic programmer would try to discover how he himself would solve
(some problem) - then write a program to simulate himself. " This says, very
clearly, that the programmer finds a procedure to solve a problem using her own
intelligence, then writes a program by copying the procedure. But the program is
only the procedure invented by the programmer to solve that problem. The
intelligence the programmer used to find the procedure remains in her head. None
of it can be found in the program.
Is this not a paradox? It tells you that the harder you work on artificial
intelligence, the farther you stray from it. Just look at the history of AI, at
its history of failed promises. My system resolves the paradox: I took my brain,
and the program, entirely out of the loop.