Enjoying the view from the VSS Conference in St. Pete Beach, Florida. Photo courtesy of Minjung Kim.
Going to conferences is one of my favorite aspects of being a scientist. As a PhD student, I spend a lot of my life in solitude: when I read new literature, program new experiments, or conduct new analyses, I am very much alone in my thoughts. Outside of scheduled meetings with my advisors, there is little reason to interact with other people. The loneliness and boredom that swallow up every graduate student – yes, it will happen to you, too – are an unfortunate, little-discussed aspect of being a PhD student. The isolation, together with the drive to succeed and the illusion that everyone else is doing better than you, can at times wreak havoc on mental health.
But, there is a cure: talking to people! Of course, the number one confidants are lab mates or department friends (this is why it’s important to have a friendly, accepting lab culture). Another, perhaps less obvious, outlet for fun and relief is conferences. At conferences, you will:
1) Meet people who find your research novel and interesting
2) Learn about other people’s research
3) Discover that other people have experienced the same graduate school woes that you may be experiencing – and survived.
In short, being at a conference allows you to step back from the details of your own graduate research and think again about why it is you decided to become a scientist in the first place. It’s stimulating. In fact, I cured my ennui with this year’s meeting of the Vision Sciences Society (VSS).
What is VSS?
VSS is an annual conference dedicated to studying the function of the human visual system; this year, it took place from May 15th to May 20th in idyllic St. Pete Beach, Florida. With 1400 attendees, VSS covered all things visual perception, including 3-dimensional (3D) perception, attention, and methodologies spanning from psychophysics to neuroimaging to computational modeling.
This year’s VSS was filled with interesting panels, exhibits, and demos devoted to visual perception, including the viral phenomenon #theDress. I’ve highlighted in my round-up below some of the talks and posters that I found the most interesting.
The research round-up
Optimal camouflage under different lighting conditions
As a person who studies lighting and shading, I loved Penacchio, Lovell, Sanghera, Cuthill, Ruxton and Harris’s poster on how the effectiveness of countershading, a type of camouflage, varies with lighting condition. Countershading refers to shading patterns commonly found in aquatic animals, where the belly is coloured a brighter shade than the back: viewed from below, the animal’s bright belly is camouflaged against the bright light from the sun above, and viewed from above, the animal’s dark back is camouflaged against the darkness of the water.
But, should the transition between back and belly be sharp or smooth? Penacchio et al.’s answer is that it depends on whether the lighting condition is sunny or cloudy. On a sunny day, the shading is characterized by high contrast and sharp shadows, whereas on a cloudy day, the shading is characterized by low contrast and soft shadows. Penacchio et al. found that, when the target item’s countershading was matched to the lighting (e.g., a sharply countershaded target in a sunny scene), people had difficulty finding the target, whereas when the target was mismatched to the lighting (e.g., a sharply countershaded target in a cloudy scene), people found it easily. Interestingly, birds behaved the same as humans, suggesting that optimal camouflage works across different species.
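The matched/mismatched logic can be sketched with a toy one-dimensional shading model. Everything below – the illumination profiles, the "paint inverts the shading" rule, the contrast measure – is an illustrative assumption of mine, not Penacchio et al.'s actual stimuli or analysis:

```python
import numpy as np

# Cross-section of a rounded body; theta = angle from the top (toy model)
theta = np.linspace(0, np.pi, 200)

# Assumed illumination profiles from overhead light:
sunny = np.clip(np.cos(theta), 0.05, None)     # collimated sun: high contrast
cloudy = (1 + np.cos(theta)) / 2 * 0.5 + 0.5   # diffuse sky: low contrast

def residual_contrast(paint, light):
    """Visible shading left over once countershading paint is applied.
    Perfect camouflage = flat luminance = zero contrast."""
    lum = paint * light
    return lum.std() / lum.mean()

# Countershading "tuned" to a given light: the paint inverts the shading,
# so the dark back compensates for the bright illumination from above.
paint_sunny = 1 / sunny

matched = residual_contrast(paint_sunny, sunny)      # shading cancels out
mismatched = residual_contrast(paint_sunny, cloudy)  # residual gradient remains
```

Under this toy model, the sunny-tuned paint makes the body perfectly flat under sunny light (zero residual contrast) but leaves a visible luminance gradient under cloudy light, mirroring why the mismatched targets were easy to find.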
I really enjoy ecologically based studies like this, because they help me understand how biological systems exploit constraints posed by the environment.
Can computers discriminate between glossy and matte surfaces?
Tamura and Nakauchi won the student poster prize for the Monday morning session for answering this question. Glossy materials are interesting, because unlike matte materials, they are characterized by white highlights. It’s an old painting trick: to make objects look glossy and shiny, add white highlights. However, the location of the highlights matters: if highlights are haphazardly placed on the surface without attention to surface geometry, they will look like streaks of paint (Anderson & Kim, 2009).
Tamura and Nakauchi examined whether a computer algorithm (a “classifier”) without sophisticated understanding of scene geometry could nevertheless learn to discriminate between images of matte, glossy, and textured (painted) surfaces. This does not mean that scene geometry is unimportant for glossiness perception, but rather, that the image representation they were using (Portilla & Simoncelli, 2000) implicitly conveys at least some information about surface geometry without explicitly encoding the shape.
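To give a flavour of the approach, here is a minimal sketch of classifying gloss from image statistics alone. The stimuli and the feature are stand-ins of my own: instead of the Portilla–Simoncelli texture statistics, I use a single, much simpler statistic (luminance-histogram skewness, since sparse bright highlights skew the histogram), and the "images" are synthetic noise, not rendered surfaces:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_surface(glossy, size=64):
    """Hypothetical toy stimulus: smooth mid-grey noise, plus sparse
    bright 'highlights' when glossy."""
    img = rng.normal(0.5, 0.1, (size, size))
    if glossy:
        mask = rng.random((size, size)) < 0.02  # sparse specular spots
        img[mask] = 1.0
    return np.clip(img, 0, 1)

def skewness(img):
    """One simple image statistic: glossy images tend to have
    positively skewed luminance histograms."""
    x = img.ravel()
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

# "Train": find a skewness threshold that separates the two classes
train = [(skewness(make_surface(g)), g) for g in [0, 1] * 50]
thresh = np.mean([s for s, _ in train])

# "Test": classify unseen images by thresholding their skewness
hits = [(skewness(make_surface(g)) > thresh) == g for g in [0, 1] * 50]
accuracy = np.mean(hits)
```

Even this one-number "classifier" separates the toy classes well, which is the spirit of the result: image statistics can carry gloss-relevant information without any explicit model of 3D shape.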
I think this study is an excellent example of combining knowledge in two different fields, in this case human vision and computer vision, to answer an interesting question.
The shrunken finger illusion
One of my favorite talks at VSS this year was Ekroll, Sayim, van der Hallen and Wagemans’s talk on the shrunken finger illusion, a novel illusion that demonstrates how something as basic as your knowledge of your own finger length can be overridden by visual cues.
Drawing by Rebecca Chamberlain. Reproduced from Ekroll, V., Sayim, B., Van der Hallen, R., & Wagemans, J. (2015). The shrunken finger illusion: Unseen sights can make your finger feel shorter. Manuscript in revision. Copyright by Ekroll et al. (2015).
Ekroll et al. had human observers wear hollow hemispherical shells (imagine a ping pong ball cut in half) on their fingertips. When viewed from the top, the observers experienced two illusions: (1) they saw a full sphere, not a hemisphere, and (2) they felt their fingers were shorter.
The explanation has to do with amodal completion, the mental “filling in” of object parts that are hidden behind another object. When my cat peeks out from behind a door, I know that her body has not been truncated in half (perish the thought!); my visual system knows – has a representation of – the rest of her body. Amazingly, humans are not born with amodal completion, acquiring this ability at around four to six months of age (Kellman & Spelke, 1983).
So, the observers were amodally completing the hemisphere, thus seeing a sphere (Ekroll, Sayim & Wagemans, 2015). But, they also knew that their fingers started behind the “sphere.” So, the observers’ brains were “making room” for the back half of the sphere by assuming that the finger was shorter than usual.
This is a bizarre but interesting illusion that is consistent with previous work on the flexibility of body representations.
Demo Night and #theDress
The second night of VSS is demo night, where researchers and exhibitors share a new illusion, or new software, or anything fun that might not fit in the ordinary proceedings of the conference. At this conference, #theDress was a popular topic, with three demos dedicated to the viral sensation. I presented a demo of my own, as well, based on a project that I am working on with Dr. Richard Murray and Dr. Laurie Wilcox of York University Centre for Vision Research.
Let there be light
When people think of glowing objects, they typically assume that the object must be exceptionally bright. Our demo, an extension of my master’s thesis work, showed that that’s not true – for some types of glow, it is the perceived shape of the object that determines whether it appears to glow.
We computer-rendered a random, bumpy disc under simulated cloudy lighting. From the front, this disc looks like an ordinary, solid, white object. However, as the disc rotates, revealing its underside, the disc takes on a translucent appearance, and appears to glow.
Note that the luminance of the disc is the same from the front and the back – it is only left-right reversed. But, critically, the correlation between the luminance and the depth changes between the front view and the back view: viewed from the front, the peaks of the discs are bright and the valleys are dark; viewed from the back, the peaks are dark and the valleys are bright. Why are the valleys so bright? It must be because there is a light source either inside or behind the object!
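That luminance–depth correlation, and how it flips sign between the front and back views, can be sketched numerically. The height map and the "peaks are bright under diffuse light" shading rule below are illustrative assumptions, not our actual renderer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bumpy surface: random height map, crudely smoothed
# into broad peaks and valleys
z = rng.normal(size=(64, 64))
for _ in range(10):
    z = (z + np.roll(z, 1, 0) + np.roll(z, -1, 0)
           + np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 5

# Toy shading model: under diffuse ("cloudy") light, peaks facing the
# viewer receive more light than valleys, so luminance tracks height.
luminance = (z - z.min()) / (z.max() - z.min())

def lum_depth_corr(lum, height):
    """Pearson correlation between luminance and height toward the viewer."""
    return np.corrcoef(lum.ravel(), height.ravel())[0, 1]

front = lum_depth_corr(luminance, z)  # peaks bright -> positive

# Seen from behind, the image is only left-right mirrored, but the
# depth relation inverts: what was a peak toward the viewer is now
# a valley, so the same luminance pattern now anti-correlates with depth.
back = lum_depth_corr(luminance[:, ::-1], -z[:, ::-1])
```

The luminance values are identical in both views; only the sign of the luminance–depth correlation changes, which is exactly the cue the visual system seems to use when it decides the back view must be lit from within or behind.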
The demo was very well received. For me, this was a highlight of the conference — talking to people about something that I am enthusiastic about, and convincing them that it is, in fact, cool. I imagine most scientists feel the same way about their work.
I still see it as white/gold
“The dress” refers to a photo of a dress that went viral in March 2015. To many people, the dress appeared to be white with gold fringes, whereas to others, it appeared to be blue with black fringes; a small minority reported seeing blue/brown. As an unrepentant white/gold perceiver, I was astounded to learn that the dress is, in real life, blue/black.
Most vision scientists agree that the dress “illusion” is an example of color constancy gone wrong. Color constancy is the visual system’s remarkable ability to “filter out” the effects of lighting. For example, my salmon-coloured shirt appears salmon-coloured regardless of whether I look at it indoors or outdoors, on a sunny day or a cloudy day – even though, if I were to take a photo of the shirt, the RGB value of the shirt would vary tremendously across the conditions. The predominant explanation of the dress is that different people’s visual systems are assuming different lighting conditions, and therefore filtering differently, resulting in different percepts. One of the dress demos (Rudd, Olkkonen, Xiao, Werner & Hurlbert, 2015) showed that, indeed, the same blue/black dress under different lighting can appear very different, and that, had the dress been white/black, the illusion would not have occurred. (I should note that there were two other dress demos – Shapiro, Flynn & Dixon, and Lafer-Sousa & Conway – but sadly I did not get to see them as I was busy with my own demo.)
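The "filtering out the lighting" idea can be sketched with a toy von Kries-style model, where the camera (or eye) signal is reflectance times illuminant, channel by channel. The particular RGB numbers below are invented for illustration:

```python
import numpy as np

# Toy linear-RGB model: sensor value = reflectance * illuminant, per channel
reflectance = np.array([0.90, 0.55, 0.45])  # a salmon-ish surface (assumed)
daylight = np.array([1.0, 1.0, 1.0])        # neutral outdoor illuminant
tungsten = np.array([1.0, 0.8, 0.5])        # warm indoor illuminant

rgb_outdoors = reflectance * daylight  # photographed outdoors
rgb_indoors = reflectance * tungsten   # photographed indoors: very different RGB!

# Von Kries-style constancy: divide out the *estimated* illuminant.
# If the estimate is correct, the recovered colour is stable...
recovered = rgb_indoors / tungsten

# ...but if the visual system assumes the wrong illuminant (as the
# dress explanation proposes), the recovered colour shifts.
wrong_assumption = rgb_indoors / daylight
```

Dividing by the correct illuminant recovers the same surface colour in both photos; dividing by the wrong one does not, which is the proposed mechanism behind the white/gold versus blue/black split.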
However, questions remain. Why is there such huge individual variability? Why are some people able to flip between percepts? I cannot answer all these questions, but I can direct you to this future issue of Journal of Vision dedicated to exploring the dress. If you have your own idea that you would like to test, the submission deadline is July 1, 2016. In the meantime, you can follow these links to see what vision scientists have said so far.
Gegenfurtner, Bloj & Toscani (2015)
Lafer-Sousa, Herman & Conway (2015)
Macknik, Martinez-Conde & Conway (2015)
Winkler, Spillman, Werner & Webster (2015)
David Knill Memorial Symposium
David Knill, renowned vision scientist, passed away suddenly in October last year, at age 53. He was known for his early work on Bayesian approaches to visual perception – that is, the notion that visual perception is the result of computation that optimally combines noisy information from the environment with loose prior knowledge about the environment. As Bayesian inference is such an important foundation of computational theories of vision, it is hard now to imagine that there was ever a time when the Bayesian perspective was a minority view in vision science.
In his memory, Wei Ji Ma – a former post-doctoral fellow of his who is now a professor at New York University – organized a symposium celebrating Dr. Knill’s life and work. Speaker after speaker talked about his dedication to science, and about his kind and gentle personality. Dr. Ma’s tribute, in particular, made me realize that I am lucky to have met Dave Knill when I did, when I applied to work with him for my PhD. The symposium was respectful and touching, and was the perfect way to commemorate a brilliant scientist.
Dr. Knill’s Forever Missed page is here.
The real reason for going to conferences
But the most memorable aspects of VSS were not part of any scheduled proceedings. I remember: catching up with old friends at the Tiki bar, cooking with my friends in the hotel room, complimenting a speaker on his talk, him complimenting me back on the question I asked, commiserating about the lack of job prospects and sharing new-hire stories… and of course, I can’t forget the annual night-time ocean dip that marks the end of VSS.
As banal as it sounds, scientists are what drive science. Science is not done in a vacuum, and some of the best collaborations come out of friendships that you forge at conferences. And even if nothing productive comes out of it – so what? Maybe it’s reward enough to know that there are friendly nerds out there who share your interests.
Anderson, B. & Kim, J. (2009). Image statistics do not explain the perception of gloss. Journal of Vision, 9(11):10, 1-17. doi:10.1167/9.11.10.
Ekroll, V., Sayim, B., van der Hallen, R. & Wagemans, J. (2015, May). The shrunken finger illusion: amodal volume completion can make your finger feel shorter. Talk presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.
Ekroll, V., Sayim, B. & Wagemans, J. (2015). Against better knowledge: the magical force of amodal volume completion. i-Perception, 4(8), 511–515. doi:10.1068/i0622sas.
Gegenfurtner, K.R., Bloj, M. & Toscani, M. (2015). The many colours of ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.04.043.
Kellman, P.J. & Spelke, E.S. (1983). Perception of partly occluded objects in infancy. Cognitive Psychology, 15(4), 483-524.
Kim, M., Wilcox, L. & Murray, R.F. (2015, May). Glow toggled by shape. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.
Lafer-Sousa, R., Hermann, K.L. & Conway, B.R. (2015). Striking individual differences in color perception uncovered by ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.04.053.
Lafer-Sousa, R. & Conway, B.R. (2015, May). A color constancy color controversy. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.
Macknik, S.L., Martinez-Conde, S. & Conway, B.R. (2015). How “the dress” became an illusion unlike any other. Scientific American Mind, 26(4). Retrieved from
Penacchio, O., Lovell, P.G., Sanghera, S., Cuthill, I.C., Ruxton, G. & Harris, J.M. (2015, May). Concealing cues to shape-from-shading using countershading camouflage. Poster presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.
Portilla, J. & Simoncelli, E.P. (2000). A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1), 49-71.
Rudd, M., Olkkonen, M., Xiao, B., Werner, A. & Hurlbert, A. (2015, May). The blue/black and gold/white dress pavilion. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.
Shapiro, A., Flynn, O. & Dixon, E. (2015, May). #theDress: an explanation based on simple spatial filter. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.
Tamura, H. & Nakauchi, S. (2015, May). Can the classifier trained to separate surface texture from specular infer geometric consistency of specular highlight? Poster presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.
Winkler, A.D., Spillman, L., Werner, J.S. & Webster, M.A. (2015). Asymmetries in blue-yellow color perception and in the color of ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.05.004.
Photo by Karen Meberg
Minjung (MJ) Kim is a PhD candidate at New York University (NYU) Department of Psychology, in the Cognition and Perception program. She studies the visual perception of light and colour, with a keen interest in material perception (e.g., what makes glowing objects appear to glow?). She is co-advised by Dr. Richard Murray at York University Centre for Vision Research and Dr. Laurence Maloney at NYU.