What a one-eyed robot can teach us about the brain

    07 February 2018


    To understand how the brain processes and reacts to visual information, neuroscientists in the Human Brain Project are building a one-eyed robot capable of ‘deciding’ what it should look at.

    The work is being undertaken by the ‘Visuo-Motor Integration’ co-design project of the HBP.

    The task of building the neurorobotic closed-loop engine is carried out by a small team at Maastricht University headed by Mario Senden, working under Rainer Goebel and in collaboration with teams at Forschungszentrum Juelich, Technische Universitaet Muenchen and Forschungszentrum Informatik Karlsruhe in Germany.

    Together they are building a robot-brain system with an embedded loop from input to output that will model the way the brain makes decisions on where to move the eyes.

    “It is highly ambitious to replicate how our brain does this, so in the beginning we are taking small steps. Up until now we have usually had our brain models on a computer, but there are many things that we cannot test if we don’t have an environment. After all, our brains exist so we can interact with our environment,” says Senden.

    “The robot sees something, and based on what it sees in the environment that is interesting, moves its eye to look at it more intently,” he says.

    “It’s nice on the one hand for understanding the brain itself but also from the perspective of robotics because it may be useful, for example, in sending robots to Mars that can explore the environment.”

    The human eye and brain can only see a fairly small area of the visual field in crisp detail. So we are continually moving our eyes (around three times a second, though we barely notice these movements) to paint a broad picture of what we perceive to be around us. The brain receives a series of high-resolution snapshots centred on different points in a scene, all of which end up in the same small area of visual cortex.
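    The Cezanne image at the bottom of this article illustrates the effect: the fixated regions are crisp, everything else blurred. As a rough illustration (a toy sketch, not the project’s code), foveated vision can be mimicked on a grayscale image by blending a crisp original with a blurred copy according to distance from the fixation point; the function and parameter names here are hypothetical:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def foveate(image, fixation, fovea_radius=40, sigma=6):
            """Keep the region around `fixation` crisp and progressively blur
            the periphery, mimicking the fall-off in acuity away from the fovea."""
            blurred = gaussian_filter(image, sigma)
            rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]
            dist = np.hypot(rows - fixation[0], cols - fixation[1])
            # weight is 0 inside the fovea, ramping to 1 (fully blurred) outside
            w = np.clip((dist - fovea_radius) / fovea_radius, 0.0, 1.0)
            return (1.0 - w) * image + w * blurred

        # e.g. scene = np.random.rand(200, 200); foveate(scene, fixation=(100, 100))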

    Senden explains that in the brain, the neurons that take input from the eyes form an orderly map of the retina (referred to as a retinotopic map). This means that visual space is brought into a coordinate system centred on the current point of fixation.
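    In the simplest terms (a toy illustration rather than the lab’s actual model), a retinotopic coordinate is just a scene position expressed relative to wherever the eye is pointing:

        def to_retinal(point_scene, fixation):
            """Express a scene location in fixation-centred (retinotopic) coordinates."""
            return (point_scene[0] - fixation[0], point_scene[1] - fixation[1])

        fixation = (100, 100)               # where the eye currently points
        apple = (120, 80)                   # the apple's true position in the scene
        print(to_retinal(apple, fixation))  # (20, -20): the apple's offset from the fovea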

    “But the problem is that when we move the eyes, and so shift the coordinate system around, some of the relationships change. My eye can move around, but if you look to the left or right, the same brain area is still there while what the eye takes in changes. There is a completely different input, but the brain needs to know how this still makes sense. There are no good computer models of how this is done.”
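    The bookkeeping the brain has to do can at least be stated simply: if nothing in the world moves, every retinal coordinate should shift by exactly minus the eye movement. Continuing the toy example above (again hypothetical, not a model of the neural mechanism):

        def remap_after_saccade(point_retinal, saccade):
            """A stationary object's retinal position shifts by minus the saccade vector."""
            return (point_retinal[0] - saccade[0], point_retinal[1] - saccade[1])

        # A saccade aimed straight at the apple brings it onto the fovea:
        print(remap_after_saccade((20, -20), saccade=(20, -20)))  # (0, 0)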

    To test how the brain accounts for shifting the coordinate system, the robot is fitted with separate loops that model the different processes at work. The first looks at the visual field and detects salience – how much something pops out, such as a red apple on a grey table.

    The second model handles target selection: a decision is made on where to look based on the salience. An interface between the sensory and motor systems then translates this decision into a movement of the robot’s eye. The camera swivels to look at the interesting object and then, finally, the object recognition system identifies it.
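    Put together, one pass through the loop has roughly the shape sketched below. The contrast-based salience and the function names are hypothetical stand-ins for the team’s neural models, meant only to show how the pieces connect:

        import numpy as np

        def salience_map(image):
            """Toy salience: deviation from the mean intensity, so that a red
            apple on a grey table 'pops out' as a bright spot."""
            return np.abs(image - image.mean())

        def select_target(smap):
            """Winner-take-all target selection: look at the most salient point."""
            return np.unravel_index(np.argmax(smap), smap.shape)

        def closed_loop_step(camera_image, fixation):
            """One pass of the loop: sense, decide, move."""
            target = select_target(salience_map(camera_image))
            saccade = (target[0] - fixation[0], target[1] - fixation[1])
            # a motor interface would now rotate the camera by `saccade`,
            # and object recognition would run on the newly fixated region
            return saccade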

    “This will basically give us a robot with the same problem and therefore a fantastic opportunity to study potential solutions. We are neuroscientists and want to learn something about the brain. We are interested not just in perception but in its relationship with action, and this means you need something that can act. With a robot you can poke at everything: not just take its brain apart, but give it a different brain and see how that works. Of course you also need to test this against real people.”

    Senden says they hope to have the integrated model running on the robot by the end of April.

    Article written by Greg Meylan. Email: gregory.meylan@epfl.ch


    Images showing how external space is represented on the surface of the cortex (here the left hemisphere is shown as if inflated; normally the cortex is folded in on itself, but for visualisation the team inflates the surface, like a balloon).

    The Card Players by Paul Cezanne showing three fixation points (top) and the same image with the fixated regions crisp and the rest a bit blurry (bottom).