You won the Lottery, but your check is shredded

The difference between data and knowledge, or entropy = uncertainty.

You won the Lottery, and your check has arrived in the mail. But it is shredded! The envelope contains only a pile of tiny shreds. Some unexplained mistake must have happened. The fact is, you cannot collect your money. Your bank refuses to cash shredded paper, the Lottery refuses to send you another check even if you return the shreds because it cannot verify their source, and you don't even know what amount you have won.

Technically, all the information is still there. Each shred bears a tiny portion of the original text. One possible solution would be for you to try to put all the shreds back together again. If you did, you would have recovered the original text, at the cost of a month or two spent sorting shreds. But in the form you have it now, the information is useless. It is there, but it is useless. We say that the entropy of the information is very high or, equivalently, that the information you have is highly uncertain. High-entropy, low-certainty information lacks order, because the shreds are all mixed up; organization, because you don't know where each one came from; and understandability, because nobody can read any meaning from it.
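One rough way to put a number on that uncertainty (my framing, not the story's): if the check is cut into n shreds and you know nothing about their order, there are n! possible arrangements, so singling out the correct one takes log2(n!) bits. The text on the shreds is unchanged; only your uncertainty about its arrangement has exploded.

```python
import math

def shred_uncertainty_bits(n_shreds: int) -> float:
    """Bits needed to pick one ordering out of n! equally likely ones."""
    return math.log2(math.factorial(n_shreds))

# An intact check is one "shred": one arrangement, zero uncertainty.
assert shred_uncertainty_bits(1) == 0.0

# A hundred shreds: same ink, same text, but about 525 bits of
# uncertainty about how they fit together -- there, but useless.
print(round(shred_uncertainty_bits(100)))
```

Sorting the shreds by hand is exactly the slow, painful removal of those bits of uncertainty.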

This story may seem like fantasy, but it happens to you all the time. It is happening to you right now, as you read these lines. Your eyes do not "see" a document; they only see shreds of light on your retinas. Your optic nerves carry signals to your brain, but they do not carry the coordinates of the points on the retina where each signal came from, much less the coordinates of the points in the environment where the shreds of light originated. The brain is confronted with exactly the situation you faced with your shredded check. Yet, in about 0.1 seconds, your brain puts the shreds back together, and you "see" the document. Here is what Douglas Hofstadter had to say after seeing his mother's face:

"The major question of AI is this: What in the world is going on to enable you to convert 100,000,000 retinal dots into one single word `mother' in one tenth of a second?"

Those retinal dots are shreds of information. Hofstadter cannot put them in an envelope and prove that he has seen his mother. By looking at his mother and acquiring those 100,000,000 dots of light, the entropy of Hofstadter's brain has increased: his ideas have become less organized, and he has become more ignorant and more uncertain. At this precise point, Hofstadter has data, not knowledge.

There is not a shadow of doubt in my mind that the brain is an organ that has learned, through evolution, how to reduce its own entropy. In a fraction of a second, Hofstadter's brain removes the excess entropy and corrects the situation. He again becomes more knowledgeable, more certain, better organized. And he sees his mother.

I believe that pairs of adjacent neurons in the brain reduce entropy by switching their positions, either physically or logically. They do that to save energy; they know nothing about entropy, much less about intelligence. But as a side effect of their attempt to save energy, they also minimize entropy, and generate the associations and patterns, or invariant representations, that our brains use.
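A toy analogy of my own, not a model of real neurons: local adjacent swaps, each made greedily with no global knowledge, can still drive a global measure of disorder, the number of out-of-order pairs (inversions), all the way to zero.

```python
import random

def inversions(xs):
    """Count out-of-order pairs: a simple global measure of disorder."""
    return sum(1 for i in range(len(xs))
                 for j in range(i + 1, len(xs)) if xs[i] > xs[j])

random.seed(1)
state = list(range(20))
random.shuffle(state)          # start in a high-disorder configuration

swaps = 0
changed = True
while changed:
    changed = False
    for i in range(len(state) - 1):
        # Each adjacent pair looks only at itself. Swapping when out
        # of order removes exactly one inversion from the whole system.
        if state[i] > state[i + 1]:
            state[i], state[i + 1] = state[i + 1], state[i]
            swaps += 1
            changed = True

assert inversions(state) == 0  # global disorder fully removed
print("order restored after", swaps, "purely local swaps")
```

No element ever sees the whole sequence, yet global order emerges, which is the flavor of side effect the paragraph above is pointing at.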

One more issue remains to be addressed. I said that the brain receives signals but no geometric information about where on the retinas those signals came from. So how can the brain organize the signals? It organizes them simply by associating them. As it learns, it creates a map of their sources. The map does not look like a computer file; instead, it is associated and integrated with the rest of the information. The brain is self-calibrating. No coordinates are needed, as long as each signal always comes from the same retinal dot. The brain does the rest. This is what a toddler does when he is learning to grab and recognize objects. His brain is building the map he will use for the rest of his life.
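The self-calibration idea can be sketched concretely. In this toy simulation (my construction, assuming a one-dimensional retina and a simple sweeping stimulus), signals arrive on shuffled, unlabeled channels, yet pure co-occurrence counting recovers which channels are retinal neighbors, with no coordinates ever transmitted.

```python
import random

random.seed(0)
N, T = 8, 400   # 8 retinal dots, 400 time steps

# A bright spot covers two adjacent dots at each time step, so
# neighboring dots tend to be active at the same moments.
frames = []
for _ in range(T):
    pos = random.randrange(N - 1)
    frame = [0] * N
    frame[pos] = frame[pos + 1] = 1
    frames.append(frame)

# Deliver the signals with no geometric labels: shuffle the channels.
perm = list(range(N))
random.shuffle(perm)
shuffled = [[frame[perm[i]] for i in range(N)] for frame in frames]

# Associate: count how often each pair of channels fires together.
cooc = [[0] * N for _ in range(N)]
for frame in shuffled:
    for i in range(N):
        for j in range(N):
            if i != j and frame[i] and frame[j]:
                cooc[i][j] += 1

# Each channel's strongest associate turns out to be its true
# retinal neighbor: the map builds itself from association alone.
for i in range(N):
    best = max(range(N), key=lambda j: cooc[i][j] if j != i else -1)
    assert abs(perm[i] - perm[best]) == 1
print("retinal adjacency recovered for all", N, "unlabeled channels")
```

The channels never carried coordinates; regularity in the world, plus association, was enough to rebuild the geometry, which is exactly the kind of map-building described above.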