What you see is not what you get.

An excellent post on Developing Intelligence argues that our brains constantly re-interpret information from our senses in order to present a coherent picture of an event. This reinterpretation can be identified and retrained.

Using the example of timing, Chris Chatham points out that different signals arrive at different times. For instance, messages about touch take longer to travel from the foot to the brain than from the face to the brain. This comes back to the Binding Problem: how does the brain take several asynchronous streams of information from different senses and combine them so that we have one integrated experience at one instant, and another the next? It's a sort of endless packet-switching problem.

Chatham says: “In order to correctly perceive the temporal order of events in the world, our brain is constantly recalibrating the temporal relationship between the motor system and our perceptual systems. It does this by implementing a variable delay in the perceived onset of our own motor actions, so that we are able to dynamically adapt to changing environmental and sensory conditions.”

He cites research (“Illusory Reversal of Temporal Order and the Anterior Cingulate Cortex” by Chess Stetson, Xu Cui, P. Read Montague and David M. Eagleman, of the Department of Neurobiology and Anatomy, UT Houston, and the Department of Neuroscience, Baylor College of Medicine). Subjects pressed a key on a cue and saw a light flash. Adapting the artificial delay between the two led subjects to believe that the light had flashed before they pressed the key, when it actually flashed afterwards. Apparently the human brain starts to adapt within the first 20 experiences.
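The adaptation effect can be sketched in code. This is a toy model of my own, not the authors' analysis: it assumes the brain keeps a running estimate of the "normal" press-to-flash delay, updated as an exponential moving average, and judges a flash to have come before the press whenever it arrives earlier than that recalibrated expectation. The learning rate and trial numbers are illustrative assumptions.

```python
def adapt(delays, rate=0.2, baseline=0.0):
    """Update the expected press-to-flash delay over a series of trials.

    Each trial nudges the estimate toward the observed delay
    (a simple exponential moving average -- an assumption of this
    sketch, not the paper's model).
    """
    estimate = baseline
    for d in delays:
        estimate += rate * (d - estimate)
    return estimate

# Phase 1: ~20 trials with an artificial 100 ms delay injected
# between the key press and the flash.
expected = adapt([0.100] * 20)

# Phase 2: the experimenter removes the delay. The flash now arrives
# 35 ms after the press -- later in reality, but *earlier* than the
# recalibrated expectation, so its perceived order reverses.
probe = 0.035
perceived = "flash before press" if probe < expected else "press before flash"
```

After 20 adaptation trials the expected delay has drifted close to 100 ms, so the 35 ms probe flash falls well inside the window the model now attributes to "before my own action" — the illusory reversal the study reports.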

These are very small differences – of the order of 100 milliseconds – but important. (They use the example of a twig breaking in the forest: did it break when you put your foot down, or are you being followed? Exact temporal perception could save your life here, and the brain would almost certainly notice that 100 ms gap and draw conclusions from it. It may well be that this sort of split-second, unconscious judgement underlies a lot of what we call intuition.)

Apparently visual processing also slows down in low light conditions.

The implications for simulation are twofold:
1. The brain is used to getting signals that don't bind properly, which in a sense makes it easier to simulate an experience. (Though I suspect that simulations may be better bound than the real experience, since they come in through a limited range of senses and are artificially constructed from the start. Also, as they are usually not 100% live but prepared in advance, the simulator has the time and the ability to synchronise them properly.)
2. The brain can be retrained, accidentally or deliberately, which makes me wonder (again) about the effect of repeated use of simulated reality. Does this re-educate our perceptions as a side-effect? Do we have to switch from real to game modes of perception and back again? Does it therefore blunt our intuitions?
