


Using Virtual Reality to Understand Human Perception


How do we visualize the world around us? To answer this question, researchers are turning to virtual reality to gain insight into how we perceive our surroundings. At the University of Reading, Dr. Andrew Glennerster and his international team are using this technology to study how people generate a three-dimensional representation of the world around them. His recently published PLOS ONE article, A Demonstration of ‘Broken’ Visual Space, tests whether there is a one-to-one match between reference points in our internal representation of the world and those in our actual surroundings.

In this author spotlight, Dr. Glennerster answers questions about his background, his research and his PLOS ONE manuscript.

Let’s start off with your background. How did you become interested in studying human vision, and what role does virtual reality play in assisting your research?

I originally studied medicine at Cambridge. It was an exciting time for vision research there, and in my 3rd year I was taught by lots of the big names in vision like Horace Barlow and John Robson. I found it fascinating and went back to studying vision after I had finished my clinical training.

I have set up a virtual reality lab because it allows us to study people’s 3D vision as they move around. It is a much more difficult technical challenge than studying 3D vision from binocular stereopsis, as in a 3D movie, but moving around is the main way that animals see in 3D. To study this process systematically, you need virtual reality.

In your paper, you mention that most theories on three-dimensional vision suggest that the way we represent space in our visual systems assumes that we generate a one-to-one model of space in our brains. Why did your team test this theory and what did you find?

We tested this theory because, for a long time, it has been the dominant one in the literature. It is very easy to believe that people have something like a ‘model of the world’ in their heads, but by itself that is not a good argument. We need to move away from accounts that are easy to imagine yet hard to explain at a neural level, and toward ones that are based on operations we know the brain can carry out, even if they are more conceptually challenging. We do not yet have a well-worked-out alternative to the one-to-one ‘reconstruction’ model, but there are some promising beginnings.

When a three-dimensional illusion is depicted in a two-dimensional picture, certain paradoxes can occur that would be impossible to replicate in real life. Given this, how does the Penrose staircase illusion, included in Figure 1 of your paper, compare to your experiment? Were there any similarities?

The similarity between the Penrose staircase and people’s representation of space is that if you tried to build a real 3D model of either you would fail. You cannot make sense of people’s responses in this experiment using a real 3D model. People believe they are in a stable room during the whole experiment. Anyone who suggests that a stable perception comes from the observer having an unchanging 3D ‘model’ of the environment in their head has a difficult time explaining these data. If you try to pick coordinates in some perceptual space for each of the objects in the experiment then you get tangled up in just the same way that you do with the Penrose staircase: you cannot say whether one object is in front of or behind another one. The solution is to give up trying to assign coordinates to each of the objects.
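This point about coordinates can be made concrete with a toy sketch (illustrative only, not from the paper): if pairwise “in front of” judgments form a cycle, as on the Penrose staircase, then no assignment of depth coordinates can satisfy them all. The objects A, B, C and the judgments below are hypothetical; a topological sort over the order constraints exposes the inconsistency.

from graphlib import TopologicalSorter, CycleError

# Hypothetical pairwise depth judgments: (front, behind) means the
# observer judged the first object to be in front of the second.
judgments = [("A", "B"), ("B", "C"), ("C", "A")]  # cyclic, Penrose-style

# Build a predecessor map: an object can only be assigned a depth
# after everything judged to be in front of it.
order_constraints = {}
for front, behind in judgments:
    order_constraints.setdefault(behind, set()).add(front)

try:
    depth_order = list(TopologicalSorter(order_constraints).static_order())
    print("Consistent depth order exists:", depth_order)
except CycleError:
    print("Cyclic judgments: no coordinate assignment satisfies them all.")

With an acyclic set of judgments the sort succeeds; here it fails, which mirrors why trying to assign a single set of perceptual coordinates to the objects in the experiment breaks down.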

How does your research help further our understanding of human perception? Does it have real world applications?

There is an increasing focus within visual neuroscience on the issue of stability: what is it in the brain that remains constant as we move our eyes around and, even more problematically, as we walk around?

What’s next? Where do you hope to go from here?

This paper attacks the current dominant model, but the next, more positive, stage is to build coherent alternative models. I believe that working with colleagues in computer vision is a good way to do this as robots have to deal with images arriving in real time and react accordingly. Currently in computer vision, new types of representation are being developed that are not at all like a reconstruction. These can act as an inspiration for testable models of human vision. Equally, if neuroscientists produce good evidence about how the brain represents a scene, it could influence the way that mobile robots are programmed.

To learn more about the University of Reading’s Virtual Reality Research Group and Dr. Andrew Glennerster’s work, visit the group’s website. More PLOS ONE research on human vision and virtual reality is available on the journal’s site.

Image credits: hiperia3d

