Seeing like a computer: What neuroscience can learn from computer science

Screenshot from the original “Metropolis” trailer. Image credit: Paramount Pictures, 1927. Public domain.

By Marta Kryven

What do computers and brains have in common? Computers are built to solve many of the same problems that brains solve. Computers, however, rely on drastically different hardware, which makes them good at different kinds of problem solving. For example, computers do much better than brains at chess, while brains do much better than computers at object recognition. A study published in PLOS ONE found that even bumblebee brains are remarkably good at selecting visual images whose color, symmetry and spatial frequency are suggestive of flowers. Despite these differences, computer science and neuroscience often inform each other.

In this post, I will explain how computer scientists and neuroscientists learn from each other’s research and review current applications of neuroscience-inspired computing.

Brains are good at object recognition

Can you see a figure among the dots? Image credit: Barbara Nordhjem, reprinted with permission.

Look at the figure (image 2), composed of seemingly random dots. At first, the object in the image blends into the background. Within seconds, however, you will see a curved line, and soon after that the full figure. What is it? (The spoiler is at the end of this post.)

Computer vision algorithms, which are computer programs designed to identify objects in images and video, fail to recognize images like image 2. Humans, it turns out, take time to recognize the hidden object, yet they start looking at the right location in the image very early, long before they can identify what it is. That does not mean that people somehow know which object they are seeing but cannot report the correct answer; rather, they experience a sudden perception of the object after continued observation. A 2014 study published in PLOS ONE found that this may be because the human visual system is remarkably good at picking out statistical irregularities in images.

Neuroscience-inspired computer applications

Like any experimental science, neuroscience starts by listing the possible hypotheses that could explain an experimental result. Researchers probe questions such as “How can a magician hide movement in plain sight?” or “What causes optical illusions?”

Computer science researchers, by contrast, begin by listing alternative ways to implement a behavior, such as vision, in a computer. Instead of discovering how vision works, computer scientists develop software to solve problems such as “How should a self-driving car respond to an apparent obstacle?” or “Are these two photographs of the same person?”

An acceptable computer-vision solution for artificial intelligence (AI), just like a living organism, must process information quickly and with limited knowledge. A self-driving car that reacts too slowly might kill a child, while one that overreacts might stop traffic over a pothole. The processing speed of current computer vision algorithms is far behind the speed of the visual processing humans rely on every day. Nevertheless, the technical solutions that computer scientists develop may be relevant to neuroscience, sometimes acting as a source of hypotheses about how biological vision might actually work.

Likewise, most AI, such as computer vision, speech recognition or robotic navigation, addresses problems already solved in biology. Computer scientists therefore often face a choice between inventing a new way for a computer to see and modeling their solution on a biological one. A biologically plausible solution has the advantage of being resource-efficient and tested by evolution. Probably the best-known example of a biomimetic technology is Velcro, an artificial fabric that recreates an attachment mechanism used by plants. Biomimetic computing, that is, recreating functions of biological brains in software, is just as ingenious, but much less well known outside the specialist community.

These interconnections compelled me to explore how computer scientists and neuroscientists learn from each other. After attending the International Conference on Perceptual Organization (ICPO) in June 2015, I made a list of trends in neuroscience-inspired computer applications that I will explore in more detail in this post:

1. Computer vision based on features of early vision
2. Gestalt-based image segmentation (Levinshtein, Sminchisescu, Dickinson, 2012)
3. Shape from shading and highlights — which is described in more detail in a recent PLOS Student Blog post
4. Foveated displays (Jacobs et al. 2015)
5. Perceptually-plausible formal shape representations

My favorite example of this interplay between neuroscience and computer science is computer vision based on features of early vision. (Note that there are also many other approaches to computer vision that are not biologically inspired.) I particularly like this example because here the discovery of seemingly simple principles, namely how the visual cortex processes information, informed a whole new trend in computer science research. To explain how computer vision borrows from biology, let’s begin by reviewing the basics of human vision.

Inside the visual cortex

Let me present a hypothetical situation. Suppose you are walking along a beach with palm trees, tropical flowers and brightly colored birds. As new objects enter your field of vision, they seem to enter your awareness instantly. In reality, however, shape perception emerges over the first 80-150 milliseconds of exposure (Wagemans, 2015). How long is 150 ms? For comparison, a car at highway speed travels about four meters in 150 ms, and a person strolling along a beach covers about 20 centimeters in the time it takes to form a mental representation of an object, such as a tree. Thus, as you observe the palm trees, the flowers and the birds, your brain is gradually assembling familiar percepts. During those first 80-150 ms, before awareness of the object has emerged, your brain is hard at work assembling shapes from short and long edges in various orientations, which are coded by location-specific neurons in the primary visual area, V1.

Today, we know a lot about the primary visual area V1 thanks to the pioneering research of Hubel and Wiesel, who shared a Nobel Prize for discovering scale- and orientation-specific neurons in the cat visual cortex, work that began in the late 1950s. As an aside, if you have not yet seen the original videos of their experiments demonstrating how a neuron in a cat’s visual cortex responds to a bar of light, I highly recommend viewing these classics!

Inside the computer

Approximation of a square wave with four Fourier components. Image credit: Jim.belk, Public Domain via Wikimedia Commons

At about the time Hubel and Wiesel made their breakthrough, mathematicians were looking for new signal processing tools to separate data from noise. For a mathematician, a signal may be a voice recording, encoding a change of frequency over time, or an image, encoding a change of pixel brightness over two dimensions of space.

When the signal is an image, signal processing is called image processing. Scientists care about image processing because it enables a computer “to see” a clear percept while ignoring the noise in sensors and in the environment, which is exactly what the brain does!

The classic tool of signal processing, the Fourier transform, was introduced by Joseph Fourier in the nineteenth century. A Fourier transform represents data as a weighted sum of sines and cosines; it represents the sound of your voice, for example, as a sum of single-frequency components. As illustrated in the figure above, the more frequency components are used, the better the approximation. Unfortunately, unlike the brain’s encoding, Fourier transforms do not explicitly encode the edges that define objects.
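To make the idea concrete, here is a minimal Python sketch (using NumPy; not part of the original post) that reproduces the figure’s square-wave approximation numerically: summing more sine components brings the partial sum closer to the ideal square wave.

```python
import numpy as np

def square_wave_fourier(x, n_terms):
    """Partial Fourier series of a square wave: sum of its first n_terms odd harmonics."""
    approx = np.zeros_like(x)
    for i in range(n_terms):
        k = 2 * i + 1                          # odd harmonics 1, 3, 5, ...
        approx += (4 / np.pi) * np.sin(k * x) / k
    return approx

x = np.linspace(0, 2 * np.pi, 1000)
target = np.sign(np.sin(x))                    # the ideal square wave
for n in (1, 2, 4, 16):
    error = np.abs(square_wave_fourier(x, n) - target).mean()
    print(f"{n:2d} components: mean absolute error = {error:.3f}")
```

With four components the square shape is already recognizable, which is exactly what the figure above illustrates.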

To solve this problem, scientists experimented with other sets of basis functions, chosen to encode images for specific applications. Square waves, for example, encode the low-resolution previews that are downloaded before a full-resolution image transfer is complete. Wavelet transforms built from other shapes are used for image compression, edge detection, and filtering out lens scratches captured on camera.
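As an illustration of the idea (my own sketch, not code from any of the cited papers), a single level of the square-wave (Haar) transform already produces both a half-resolution preview and edge-like detail bands:

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2D Haar (square-wave) transform: returns a half-resolution
    approximation (the 'preview') plus detail bands that respond to edges."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a, b = img[0::2, 0::2], img[0::2, 1::2]     # the four pixels of each 2x2 block
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    preview    = (a + b + c + d) / 4.0          # local average: low-resolution preview
    horizontal = (a + b - c - d) / 4.0          # responds to horizontal edges
    vertical   = (a - b + c - d) / 4.0          # responds to vertical edges
    diagonal   = (a - b - c + d) / 4.0
    return preview, (horizontal, vertical, diagonal)

image = np.zeros((64, 64))
image[15:47, 15:47] = 1.0                       # a bright square on a dark background
preview, details = haar2d_level(image)
print(preview.shape)                            # (32, 32): half the resolution
print(np.abs(details[0]).max())                 # nonzero only near the square's edges
```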

What do computers see?

It turns out that a carefully selected set of image transforms can model the scale- and orientation-specific neurons of the primary visual area, V1. The procedure can be visualized as follows. First, the image is processed by progressively lower spatial frequency filters. The result is a pyramid of image layers, equivalent to seeing the image from further and further away. Then, each layer is filtered for several edge orientations in turn. The result is a computational model of the initial stage of early visual processing, which assumes that the useful data (the signal in the image) are edges within a frequency interval. Of course, such a model captures only a tiny aspect of biological vision. Nevertheless, it is a first step towards modeling more complex features, and it answers an important theoretical question: if a brain could only see simple edges, how much would it see?
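The sketch below is my own illustration in Python (assuming the scikit-image library is available; it is not code from the post or the cited papers). It applies Gabor filters at several spatial frequencies and orientations, one common way to approximate this multi-scale, multi-orientation decomposition:

```python
import numpy as np
from skimage.filters import gabor

def v1_like_responses(image, frequencies=(0.25, 0.12, 0.06), n_orientations=4):
    """Crude model of early visual processing: oriented Gabor filters applied at
    several spatial frequencies (lower frequency ~ viewing the image from further away)."""
    responses = {}
    for freq in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations          # 0, 45, 90, 135 degrees
            real, imag = gabor(image, frequency=freq, theta=theta)
            responses[(freq, round(np.degrees(theta)))] = np.hypot(real, imag)
    return responses

image = np.zeros((64, 64))
image[:, 32:] = 1.0                                      # a single vertical luminance edge
responses = v1_like_responses(image)
strongest = max(responses, key=lambda key: responses[key].max())
print(strongest)   # the scale/orientation channel tuned to the edge responds most strongly
```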

To sample a few applications, our computational brain could tell:

1. Whether a photograph is taken indoors or outdoors (Guérin-Dugué & Oliva, 2000)
2. Whether the material is glossy, matte or textured
3. Whether a painting is a forgery, by recognizing an individual artist’s brushstrokes.

Moreover, a computer brain can also do something that a real brain cannot: analyze a three-dimensional signal, such as a video. You can think of video frames as slices, perpendicular to time, through a three-dimensional space-time volume. A computer interprets moving bright and dark patches in that space-time volume as edges in three dimensions.

Using this technique, MIT researchers discovered and amplified imperceptible motions and color changes captured by a video camera, making them visible to a human observer. The so-called motion microscope reveals changes as subtle as a face changing color with each heartbeat, a baby’s breathing, and a crane swaying in the wind. Probably the most striking demonstration presented at ICPO 2015 last month showed a pipe vibrating into different shapes when struck by a hammer. Visit the project webpage for demos and technical details.
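A heavily simplified sketch of the idea behind Eulerian video magnification (Wu et al., 2012) is shown below; it is my own illustration with made-up numbers, not the MIT code. Each pixel’s intensity over time is treated as a signal, band-pass filtered around the frequency of interest (here, a heartbeat-like 1.2 Hz), and the amplified band is added back:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_temporal_band(frames, fps, low_hz, high_hz, gain=20.0):
    """Band-pass each pixel's intensity over time and amplify that band.
    frames has shape (time, height, width); fps is the frame rate."""
    b, a = butter(2, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
    band = filtfilt(b, a, frames, axis=0)       # temporal filtering, pixel by pixel
    return frames + gain * band                 # otherwise invisible changes, amplified

# synthetic 'video': a faint 1.2 Hz (72 beats per minute) flicker buried in noise
fps = 30
t = np.arange(90) / fps
flicker = 0.002 * np.sin(2 * np.pi * 1.2 * t)
frames = 0.5 + flicker[:, None, None] + 0.01 * np.random.randn(90, 8, 8)
magnified = magnify_temporal_band(frames, fps, low_hz=0.8, high_hz=2.0)

before = np.ptp(frames.mean(axis=(1, 2)))       # peak-to-peak of average frame brightness
after = np.ptp(magnified.mean(axis=(1, 2)))
print(f"flicker roughly {after / before:.0f}x larger after magnification")
```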

So, how far are computer scientists from modeling the brain? In 1950, early AI researchers expected computing technology to pass the Turing test by 2000. Today, computers are still used as tools for solving narrowly specified technical problems; a computer program can behave like a human only to the extent that human behavior is understood. The motivation for computer models based on biology, however, is twofold.

First, computer scientists and computer users alike are much more likely to accept the output of a computer program as valid if its decisions are based on biologically plausible steps. An AI that recognizes objects using the same rules as humans will likely see the same categories and come to the same conclusions. Second, computer applications are a test bed for neuroscience hypotheses. A computer implementation can tell us not only whether a particular theoretical model is feasible; it may also, unexpectedly, reveal alternative ways in which evolution could have worked.

Answer to the riddle in image 2: a rabbit

Daubechies, Ingrid. Ten lectures on wavelets. Vol. 61. Philadelphia: Society for industrial and applied mathematics, 1992.

Elder, James H., et al. “On growth and formlets: Sparse multi-scale coding of planar shape.” Image and Vision Computing 31.1 (2013): 1-13.

Freeman, William T., and Edward H. Adelson. “The design and use of steerable filters.” IEEE Transactions on Pattern Analysis and Machine Intelligence 13.9 (1991): 891-906.

Gerhard HE, Wichmann FA, Bethge M (2013) How Sensitive Is the Human Visual System to the Local Statistics of Natural Images? PLoS Comput Biol 9(1): e1002873.

Guérin-Dugué, Anne, and Aude Oliva. “Classification of scene photographs from local orientations features.” Pattern Recognition Letters 21.13 (2000): 1135-1140.

Kryven, Marta, and William Cowan. “Why Magic Works? Attentional Blink With Moving Stimuli.” Proceedings of the International Conference on Perceptual Organization, York University Centre for Vision Research, 2015.

Levinshtein, Alex, Cristian Sminchisescu, and Sven Dickinson. “Optimal image and video closure by superpixel grouping.” International journal of computer vision 100.1 (2012): 99-119.

Lyu, Siwei, Daniel Rockmore, and Hany Farid. “A digital technique for art authentication.” Proceedings of the National Academy of Sciences of the United States of America 101.49 (2004): 17006-17010.

Murata T, Hamada T, Shimokawa T, Tanifuji M, Yanagida T (2014) “Stochastic Process Underlying Emergent Recognition of Visual Objects Hidden in Degraded Images.” PLoS ONE 9(12): e115658.

Nordhjem, Barbara, et al. “Eyes on emergence: Fast detection yet slow recognition of emerging images.” Journal of vision 15.9 (2015): 8-8.

Nordhjem B, Kurman Petrozzelli CI, Gravel N, Renken R, Cornelissen FW (2014) “Systematic eye movements during recognition of emerging images.” J Vis 14:1293–1293.

Orbán LL, Chartier S (2015) “Unsupervised Neural Network Quantifies the Cost of Visual Information Processing”. PLoS ONE 10(7): e0132218.

Said CP, Heeger DJ (2013) “A Model of Binocular Rivalry and Cross-Orientation Suppression.” PLoS Comput Biol 9(3): e1002991.

Tabei K-i, Satoh M, Kida H, Kizaki M, Sakuma H, Sakuma H, et al. (2015) “Involvement of the Extrageniculate System in the Perception of Optical Illusions: A Functional Magnetic Resonance Imaging Study.” PLoS ONE 10(6): e0128750.

Vandenbroucke ARE, Sligte IG, Fahrenfort JJ, Ambroziak KB, Lamme VAF (2012) “Non-Attended Representations are Perceptual Rather than Unconscious in Nature.” PLoS ONE 7(11): e50042.

Johan Wagemans, “Perceptual organization at object boundaries: More than meets the edge” Proceedings of International Conference on Perceptual Organization (2015)

Wu, Hao-Yu, et al. “Eulerian video magnification for revealing subtle changes in the world.” ACM Trans. Graph. 31.4 (2012): 65.


Using Modern Human Genetics to Study Ancient Phenomena

By Emma Whittington

We humans are obsessed with determining our origins, hoping to reveal a little of “who we are” in the process. It is relatively simple to trace one’s genealogy back a few generations, and many companies and products offer such services. But what if we wanted to trace our origins further back, on an evolutionary timescale, and study human evolution itself? In this case there are no written records or censuses. Instead, the study of human evolution has so far relied heavily on fossil specimens and archaeological finds. Now, genetic tools and approaches are frequently used to answer evolutionary questions and reveal patterns of divergence that reflect different selective pressures and geographical movements. This is particularly true for studies of human migrations out of Africa, global population divergence, and its consequences for human health.

Humans Originated in Africa

The current best hypothesis is that anatomically modern humans (AMH) arose in East Africa approximately 200,000 years before present (YBP). AMH migrated out of Africa around 100,000-60,000 YBP in a series of dispersals that expanded into Europe and Asia between 60,000 and 40,000 YBP. An East African origin is supported by both archaeological and genetic data: genetic diversity is greatest in East Africa and decreases in a stepwise fashion with distance from it, a pattern reflecting sequential founder populations and bottlenecks. Figure 1 shows three populations with decreasing genetic diversity (represented by the colored circles) from left to right. The first population, with the greatest genetic diversity, represents Africa. A second population is shown migrating away from ‘Africa’, taking with it a sample of the existing genetic diversity; this forms the founding population for the next migration. Each migrating population carries only a sample of the genetic variation present in its founding population, so sequential migrations (such as those in Figure 1) lead to a reduction in genetic diversity with increasing distance from the first population.

Figure 1. Diagrammatic representation of the serial founder effect model. Image by Emma Whittington.
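A toy simulation (my own illustration with arbitrary numbers, not taken from any study cited here) makes the serial founder effect in Figure 1 concrete: each migrating group is a small random sample of its source population, so the number of allele variants that survive shrinks with every step.

```python
import numpy as np

rng = np.random.default_rng(0)

def serial_founder_effect(source_size=10_000, n_variants=200, n_migrations=5, founders=100):
    """Track how many distinct allele variants survive a chain of founder events."""
    # each individual carries one of n_variants allele types at a single locus
    population = rng.integers(0, n_variants, size=source_size)
    diversity = [len(np.unique(population))]
    for _ in range(n_migrations):
        # a small founding group leaves, then the new settlement grows back to full size
        founding_group = rng.choice(population, size=founders, replace=False)
        population = rng.choice(founding_group, size=source_size, replace=True)
        diversity.append(len(np.unique(population)))
    return diversity

print(serial_founder_effect())   # the variant count drops with each successive migration
```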

Leaving Africa – Where do we go from here?

Although the location of human origins is broadly accepted, there is no consensus on the migration routes by which AMH left Africa and expanded globally. Many studies use genetic tools to identify likely migration routes; one of them is a recent PLOS ONE article by Veerappa et al. (2015). In this study, researchers characterized the global distribution of copy number variation, that is, variation in the number of copies of a particular gene, by high-resolution genotyping of 1,115 individuals from 12 geographic populations, identifying 44,109 copy number variants (CNVs). The CNVs carried by an individual define that individual’s CNV genotype, and by comparing CNV genotypes between all individuals from all populations, the authors determined the similarity and genetic distance between populations. The resulting phylogenetic relationships suggested a global migration map (Figure 2), in which an initial migration from the place of origin, Africa, formed a second settlement in East Asia, analogous to the founding population in Figure 1. At least five further branching events took place from this second settlement, forming populations across the globe. The migration routes identified in this paper largely support those already proposed; of particular interest, the paper also proposes a novel migration route from Australia, across the Pacific, toward the New World (shown in blue in Figure 2).

Figure 2. A global map showing CNV counts and possible migration routes. Photo courtesy of Veerappa et al (2015).
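The comparison step can be sketched in a few lines of Python (an illustration on synthetic copy numbers, not the study’s data or code, assuming NumPy and SciPy are available): summarize each population’s CNV genotypes, compute pairwise distances, and build a tree from those distances.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(1)
populations = ["Africa", "East Asia", "Europe", "Americas"]   # illustrative labels only

# toy genotypes: 30 individuals per population, copy number at 500 CNV loci
base = rng.integers(0, 5, size=500)
drift = rng.integers(-2, 3, size=(len(populations), 500))     # population-specific offsets
genotypes = {
    pop: np.clip(base + drift[i] + rng.integers(-1, 2, size=(30, 500)), 0, None)
    for i, pop in enumerate(populations)
}

# summarize each population by its mean copy-number profile, then compare the profiles
profiles = np.vstack([genotypes[pop].mean(axis=0) for pop in populations])
distances = pdist(profiles, metric="euclidean")
print(np.round(squareform(distances), 1))                # pairwise 'genetic distance' matrix

tree = linkage(distances, method="average")              # a simple stand-in for a phylogeny
print(dendrogram(tree, labels=populations, no_plot=True)["ivl"])   # order of the leaves
```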

Global Migration Leads to Global Variation

As AMH spread across the globe, populations diverged and encountered novel selective pressures to which they had to adapt. This is reflected in the phenotypic (observable) variation seen between geographically distant populations. At the genotype level, a number of these traits show evidence of positive selection, meaning they likely conferred some advantage in particular environments and were consequently favored by natural selection and increased in frequency. A well-cited example is global variation in skin color, which is thought to reflect a balance between vitamin D synthesis and photoprotection (Figure 3). Vitamin D synthesis requires UV radiation, and a deficiency in vitamin D can result in rickets, osteoporosis, pelvic abnormalities, and a higher incidence of other diseases. At higher latitudes, where UV radiation is low or seasonal, receiving enough UV radiation for sufficient vitamin D synthesis is a major concern. Presumably, as AMH migrated from Africa, they experienced reduced levels of UV radiation, insufficient vitamin D synthesis, and severe health problems, resulting in selection for increased vitamin D synthesis and lighter skin pigmentation. Consistent with this, a number of pigmentation genes underlying variation in skin color show evidence of positive selection in European and Asian populations relative to Africa. On the flip side, populations near the equator experience no shortage of UV radiation and thus synthesize sufficient vitamin D; however, the risk of UV damage is much greater. Melanin, the molecule determining skin pigmentation, acts as a photoprotective filter, reducing light penetration and the damage caused by UV radiation, so darkly pigmented skin offers greater photoprotection. Selective pressure to maintain dark pigmentation in regions with high UV radiation is evident from the lack of genetic variation in pigment genes in areas such as Africa, which suggests that selection has acted to remove mutations and maintain the function of these genes.

Figure 3. A map showing predominant skin pigmentation globally. Photo courtesy of Barsh (2003)

Can genetics and human evolution have a practical use in human health?

Beyond phenotypic consequences, genetic variation between populations has a profound impact on human health: it directly influences an individual’s predisposition to certain conditions or diseases. For example, Type 2 diabetes is more prevalent in African Americans than in Americans of European descent. Genome-wide association studies (GWAS) analyze common genetic variants in different individuals and assess whether particular variants are associated with certain traits or diseases more often than expected by chance. Comparing the distribution and number of disease-associated variants between populations can reveal whether genetic risk factors underlie disparities in disease susceptibility. In the case of Type 2 diabetes, African Americans carry a greater number of risk variants than Americans of European descent at genetic locations (loci) associated with the disease. It is clear that an individual’s ethnicity affects their susceptibility and likely response to disease, and as such it should be considered in human health policy. Understanding the genetic risk factors linking populations and disease can identify groups of individuals at greater risk of developing certain diseases, so that treatment and prevention can be prioritized.
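As a rough illustration of what comparing risk allele loads means in practice (a sketch with made-up allele frequencies, not data from the studies cited), one can count the risk alleles each individual carries across the associated loci and compare the averages between cohorts:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_risk_allele_load(genotypes):
    """Average number of risk alleles per individual, summed over all loci.
    genotypes: array (individuals x loci) of risk-allele counts (0, 1 or 2 per locus)."""
    return genotypes.sum(axis=1).mean()

# hypothetical cohorts: 20 disease-associated loci, with slightly higher
# risk-allele frequencies in cohort A than in cohort B
n_loci = 20
freq_a = rng.uniform(0.2, 0.5, n_loci)
freq_b = rng.uniform(0.1, 0.4, n_loci)
cohort_a = rng.binomial(2, freq_a, size=(1000, n_loci))   # two allele copies per locus
cohort_b = rng.binomial(2, freq_b, size=(1000, n_loci))

print(f"cohort A: {mean_risk_allele_load(cohort_a):.1f} risk alleles per person")
print(f"cohort B: {mean_risk_allele_load(cohort_b):.1f} risk alleles per person")
```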

Applying modern human genetics to human evolution has opened a door to studying ancient evolutionary phenomena and patterns. This area not only serves to quench our desire to understand our origins, but also profoundly impacts human health in a way that could revolutionize disease treatment and prevention. In this blog post, I have given a brief overview of how genetic approaches can tell us a great deal about human origins, migration, and variation between populations. In addition, I have outlined the complex genetic underpinnings of ethnicity and disease susceptibility, which suggest an important role for population genetics in human health policy. This post covers only a fraction of the vast amount of ongoing work in this field and its often groundbreaking findings. It is unclear exactly how far genetics will take us in understanding human evolution, but we are far from the end. The potential for genetics in this field and beyond feels limitless, and I for one am excited by the prospect.


Barsh, G.S. (2003). PLoS Biol. 1(1):e27.
Henn, B.M et al. (2012) The Great Human Expansion. PNAS. 109 (44): 17758-17764.
Ingman, M. et al. (2000). Mitochondrial genome variation and the origin of modern humans. Nature. 408: 708-713.
Jablonski, N.G. and Chaplin, G. (2000). The evolution of human skin coloration. Journal of Human Evolution. 39: 57–106.
Keaton, J.M. et al. (2014). A comparison of type 2 diabetes risk allele load between African Americans and European Americans. Human Genetics. 133:1487–1495.
Liu, F et al. (2013). Seminars in Cell & Developmental Biology. 24: 562-575.
Morand, S. (2012). Phylogeography helps with investigating the building of human parasite communities. Parasitology. 139: 1966–1974.
Parra, E.J. (2007). Human Pigmentation Variation: Evolution, Genetic Basis, and Implications for Public Health. Yearbook of Physical Anthropology. 50:85–105.
Veerappa, A.M. et al. (2015). Global Spectrum of Copy Number Variations Reveals Genome Organizational Plasticity and Proposes New Migration Routes. PLoS ONE 10(4): e0121846.


Three simple tips to survive grant writing

Tip 2: To survive grant writing, take a break from the computer and go on a walk with friends. Image via Core Athletics.

Like any field, research has its ups and downs. Ask any scientist and they will likely name the freedom to guide their own line of inquiry as an upside, and grant writing as one of the main downsides.

A recent PLOS ONE paper by Ted and Courtney von Hippel notes that the average principal investigator (PI) spends a minimum of 116 hours per grant submission. That’s roughly three weeks of full-time work for the PI alone! Add to that the fact that grant funding can make or break a career, and it’s no wonder that grant writing is stressful. To avoid burnout from writing a grant (or dissertation), try the following tips.

1. Give yourself a break


Keep your mind energized by taking breaks from work. Image courtesy: Marcel Oosterwijk from Amsterdam, The Netherlands [CC BY-SA 2.0], via Wikimedia Commons

Grant writing can be an all-encompassing process, in both positive and negative ways. It is a wonderful opportunity to take a deep dive into a body of literature, but it demands time you might rather spend doing something else (e.g., conducting research as opposed to writing about research conducted by others). To avoid turning into a dull boy or girl, I suggest you engage in at least one small pleasurable activity per day. The activity depends on your interests, but there’s a whole body of literature on the benefits of this approach, so choose whatever is right for you and be sure to stick to it. If you find that you’re making excuses to cancel fun activities, try asking yourself, “What makes more sense, a 15-minute break now or a 2-hour breakdown later?”

2. Give yourself an energy boost

When you’re on a deadline, it’s tempting to work around the clock, but this sort of schedule likely does more harm than good. The evidence shows, for example, that sleep deprivation reduces creative thinking. Without enough food or sleep, you’re unlikely to have enough energy for a task as cognitively complex as grant writing. There are at least three components to getting an energy boost. First, eat regularly: don’t go more than three to four hours without eating. Second, sleep regularly: go to sleep and wake up at the same time each day. Third, exercise regularly: engage in some physical activity every day; it can be as simple as going for a walk. You can even combine giving yourself a break with an energy boost; walking with a friend to grab a snack is a way to have fun, get some exercise, and get enough to eat.

3. Give yourself some props

Tip 3: Think positive by writing affirmations around your workspace. Image by Ignacio Palomo Duarte, via Flickr.

Getting feedback from mentors and peers is an important part of grant writing. It also exposes you to a near-constant stream of criticism, which, while (hopefully) constructive, can still take a toll on your confidence. To combat this, remind yourself of your past accomplishments and the exciting work you’ll do if the grant is awarded. It’s tempting to do this in your head, but it’s more effective to write down these positive statements and keep them on your phone or on a piece of paper by your computer; that way, if you’re feeling down and can’t think of many positive qualities, you’ll have a cheat sheet. If nothing else, the affirmations can improve your mood (though research shows the effect may depend on your initial level of self-esteem).

These tips may seem simple, but they’re often overlooked and undervalued. Even as a clinical psychologist, it took me weeks to realize that taking care of myself made grant writing easier. There is no guarantee that following these tips will increase the likelihood of getting funded (as Drs. von Hippel note, ever-diminishing funds and the excellent quality of many grant applications make winning funding a “roll of the dice”), but they are important for preserving your well-being and productivity. After all, having fun, eating and sleeping well, getting exercise, and building your confidence will probably improve your quality of life, which is ultimately more important than grant money. Right?

Dimidjian, S., Barrera Jr, M., Martell, C., Muñoz, R. F., & Lewinsohn, P. M. (2011). The origins and current status of behavioral activation treatments for depression. Annual review of clinical psychology, 7, 1-38.

Hames, J. L., & Joiner, T. E. (2012). Resiliency factors may differ as a function of self-esteem level: Testing the efficacy of two types of positive self-statements following a laboratory stressor. Journal of Social and Clinical Psychology, 31(6), 641-662.

Landmann, N., Kuhn, M., Maier, J. G., Spiegelhalder, K., Baglioni, C., Frase, L., … & Nissen, C. (2015). REM sleep and memory reorganization: Potential relevance for psychiatry and psychotherapy. Neurobiology of learning and memory.

von Hippel, T., & von Hippel, C. (2015). To Apply or Not to Apply: A Survey Analysis of Grant Writing Costs and Benefits. PloS one, 10(3), e0118494.


Snark-Hunters Once More: Rejuvenating the Comparative Approach in Modern Neuroscience

By Jeremy Borniger

Sixty-five years ago, the famed behavioral endocrinologist Frank Beach wrote an article in The American Psychologist entitled ‘The Snark was a Boojum’. The title refers to Lewis Carroll’s poem ‘The Hunting of the Snark’, in which several characters embark on a voyage to hunt species of the genus Snark. There are many different types of Snarks, some that have feathers and bite, and others that have whiskers and scratch. But, as we learn in Carroll’s poem, some Snarks are Boojums! Beach paraphrases Carroll’s lines outlining the problem with Boojums:

If your Snark be a Snark, that is right:
Fetch it home by all means—you may serve it
with greens
And it’s handy for striking a light.
But oh, beamish nephew, beware of the day,
If your Snark be a Boojum! For then,
You will softly and suddenly vanish away,
And never be met with again!


Instead of the Pied Piper luring the rats out of town with a magic flute, the tables have turned and the rat plays the tune and a large group of scientists follows. (From Beach, 1950)

Beach provides this metaphorical context to describe a problem facing comparative psychologists in the 1950s: an increasing focus on a few ‘model species’ at the cost of breadth in the field. The comparative psychologists were hunting a Snark called “Animal Behavior”, but this Snark, too, turned out to be a Boojum. Instead of finding many animals on which to test their hypotheses, they settled on one: the albino rat. It was there that “the comparative psychologist suddenly and softly vanished away”.

Even in the mid-1900s, Beach recognized the funneling of biological and psychological research efforts toward one or a few ‘model species’. He even went as far as to suggest that the premier journal in the field be renamed The Journal of Rat Learning, as its focus had almost entirely shifted to the rat. This trend has culminated in a true bottleneck, in which the vast majority of research now focuses on phenomena occurring in a small number of ‘model organisms’ like the laboratory mouse (Mus musculus), Norway rat (Rattus norvegicus), nematode worm (Caenorhabditis elegans), fruit fly (Drosophila melanogaster), or zebrafish (Danio rerio). Indeed, a 2008 analysis found that “75% of our research efforts are directed to the rat, mouse and human brain, or 0.0001% of the nervous systems on the planet”. Focusing on such a small fraction of the biological diversity available to us may be skewing or vastly restricting our conclusions.

The Genetics Revolution

In the last quarter of a century, incredible advances in genetic technology have pushed a few model organisms further towards the top. For example, the mouse was among the first mammals to have its genome sequenced, with the results published in 2002. Because sequence information was readily available, subsequent tools (primers, shRNA, oligonucleotides, etc.) and genetic techniques (conditional knockout and overexpression models) were developed specifically for use in the mouse. This further discouraged the use of other organisms in research, as most of the ‘cutting edge’ tools were being developed almost exclusively for the mouse. It also promoted the ‘shoe-horning’ of research questions that may not suit this model organism, simply to take advantage of the genetic tools available. This may be the case with research on the visual system, or on many mental disorders, in mouse models. The lab mouse primarily interprets environmental stimuli via olfactory (smell) cues rather than sight (as it is nocturnal), making it a suboptimal organism in which to study visual function. Also, because mental disorders are poorly understood, developing a robust animal model in which to test treatments remains a significant obstacle. Trying to force the mouse to become the bastion of modern psychiatry research is potentially hampering progress in a field that could benefit from the comparative approach. For example, wild white-footed mice (Peromyscus leucopus), which are genetically distinct from their laboratory mouse relatives, show seasonal variation in many interesting behaviors. In response to the short days of winter, they enhance their fear responses and alter the cellular structure of their amygdala, a key brain region in the regulation of fear. Because these changes are reversible and controlled by a discrete environmental signal (day length), these wild mice contribute to the development of translational models that involve amygdala dysfunction, such as post-traumatic stress disorder (PTSD).

What are the Benefits to the Comparative Approach?

Emphasis on a few model organisms became prevalent primarily because of their ease of use, rapid embryonic development, low cost, and accessible nervous systems. In recent decades, access to an organism’s genetic code has provided another incentive to use it for research. While these advantages are real, they encourage researchers to become short-sighted and neglect the contributions of diverse species to the advancement of science. As Brenowitz and Zakon write, “this myopia affects choice of research topic and funding decisions, and might cause biologists to miss out on novel discoveries”. Breakthroughs made possible by the comparative approach include the understanding of the ionic basis of the action potential (squid), the discovery of adult neurogenesis (canary), conditioned reflexes (dog), dendritic spines (chicken), and the cellular basis of learning and memory (sea slug). More recently, incredible advances in the temporal control of neuronal function (optogenetics) were made possible by the characterization of channelrhodopsins in algae.

The revolution in genetics ushered in new tools that could only be used in the few organisms whose genomes had been sequenced. The comparative approach, however, is now gaining the tools necessary to become part of the 21st-century genetic revolution. New gene-editing techniques, such as TALENs, TILLING, and CRISPR/Cas9, allow fast, easy, and efficient genome manipulation in a wide variety of species. Indeed, this has already been accomplished in Atlantic salmon, tilapia, goats, sea squirts, and silkworms. And as the price of sequencing an entire genome rapidly decreases, new tools will be developed for a wider variety of species than ever before. It is unlikely that many of the groundbreaking discoveries that stemmed from research on diverse and specialized organisms would be funded in the current ‘model species’ climate. We should not put all of our research ‘eggs’ in one model organism ‘basket’; instead, we should invest in a broad range of organisms, each fit for the question at hand. The time to revive the comparative approach has arrived. In the words of Brenowitz and Zakon, “Grad students, dust off your field boots!”


Beach, F. A. (1950). The Snark was a Boojum. American Psychologist, 5(4), 115.

Brenowitz, E. A., & Zakon, H. H. (2015). Emerging from the bottleneck: benefits of the comparative approach to modern neuroscience. Trends in Neurosciences, 38(5), 273-278.

Chinwalla, A. T., Cook, L. L., Delehaunty, K. D., Fewell, G. A., Fulton, L. A., Fulton, R. S., … & Mauceli, E. (2002). Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915), 520-562.

Edvardsen, R. B., Leininger, S., Kleppe, L., Skaftnesmo, K. O., & Wargelius, A. (2014). Targeted mutagenesis in Atlantic salmon (Salmo salar L.) using the CRISPR/Cas9 system induces complete knockout individuals in the F0 generation.

García-López, P., García-Marín, V., & Freire, M. (2007). The discovery of dendritic spines by Cajal in 1888 and its relevance in the present neuroscience. Progress in Neurobiology, 83(2), 110-130.

Goldman, S. A. (1998). Adult neurogenesis: from canaries to the clinic. Journal of Neurobiology, 36(2), 267-286.

Li, M., Yang, H., Zhao, J., Fang, L., Shi, H., Li, M., … & Wang, D. (2014). Efficient and heritable gene targeting in tilapia by CRISPR/Cas9. Genetics, 197(2), 591-599.

Ni, W., Qiao, J., Hu, S., Zhao, X., Regouski, M., Yang, M., … & Chen, C. (2014). Efficient gene knockout in goats using CRISPR/Cas9 system.

Manger, P. R., Cort, J., Ebrahim, N., Goodman, A., Henning, J., Karolia, M., … & Štrkalj, G. (2008). Is 21st century neuroscience too focussed on the rat/mouse model of brain function and dysfunction? Frontiers in Neuroanatomy, 2: 5.

Stolfi, A., Gandhi, S., Salek, F., & Christiaen, L. (2014). Tissue-specific genome editing in Ciona embryos by CRISPR/Cas9. Development, 141(21), 4115-4120.

Pavlov, I. P. (1941). Conditioned reflexes and psychiatry (Vol. 2). W. H. Gantt, G. Volborth, & W. B. Cannon (Eds.). New York: International Publishers.

Walton, J. C., Haim, A., Spieldenner, J. M., & Nelson, R. J. (2012). Photoperiod alters fear responses and basolateral amygdala neuronal spine density in white-footed mice (Peromyscus leucopus). Behavioural Brain Research, 233(2), 345-350.

Wei, W., Xin, H., Roy, B., Dai, J., Miao, Y., & Gao, G. (2014). Heritable genome editing with CRISPR/Cas9 in the silkworm, Bombyx mori.


Knowledge is where you find it: Leveraging the Internet’s unique data repositories

A user shares the music recommendation system’s representation of his listening habits over a month. Photo courtesy of Aldas Kirvaitis via Flickr.

By Chris Givens

Sometimes, data doesn’t look like data. But when circumstances conspire and the right researchers come along, interesting facets of human nature reveal themselves. Last.fm and World of Warcraft are two entities made possible by the Internet, both aimed at entertaining consumers. Through new means of social interaction and larger scales of data collection, however, they have also, perhaps unintentionally, advanced science. Scientific achievement may seem like a stretch for a music service and a video game, but these unlikely candidates for scientific study show that the information age constantly offers new ways to study human behavior. Last.fm and World of Warcraft are contemporary social constructions, part of the new way that humans interact in our rapidly changing digital world. By applying scientific rigor to the data unwittingly generated by two Internet-based companies, we see that knowledge is everywhere, but sometimes requires creative routes to coax it out of hiding.

Last.fm: more than a musical concierge

Last.fm is a music service that uses consumers’ listening data and genre tags to recommend new music to the user. It has a huge cache of song clips in its databases, which was not viewed as a data set until recently, when a group of computer scientists mined the songs for certain characteristics and created a phylogeny of popular music. The lead author on the study, Dr. Matthias Mauch, formerly worked on the Music Information Retrieval (MIR) team at Last.fm. MIR is essentially the automated analysis of musical data, usually from audio samples, and its uses include improved music search, organization, and recommendation. This kind of research has a clear benefit to a company like Last.fm, whose main goal is to catalog users’ listening habits and recommend music they would like based on past listening patterns. Dr. Mauch, however, is interested in more than simply improving musical recommendations; he wants to trace the evolution of music from around the world. In a recent study, he used a huge data set obtained during his time at Last.fm to start cracking the code of musical evolution.

Hip-hop is a confirmed revolution

When hip-hop burst into the public consciousness in the late 1980s, the music polarized Americans. Hip-hop originally centered on themes of social ills in inner-city America, providing a creative outlet for the frustration felt by many working-class African Americans at the time. Gangsta rap eventually grew out of hip-hop, characterized by at times violent, masculine lyrical themes. After the release of their seminal album, Straight Outta Compton, the hip-hop group N.W.A received a warning letter from the FBI as a result of controversial songs on the album. The explosive and politicized emergence of hip-hop created a new genre of popular music, thrusting a marginalized group of Americans into the pop culture spotlight. Starting from humble roots, hip-hop is now a multi-billion dollar industry. But even with all of the popular exposure and controversy, until Dr. Mauch’s study the degree to which hip-hop revolutionized popular music was hard to quantify.

See Dr. Mauch’s TED Talk about music informatics here.

A group of researchers led by Dr. Mauch used MIR techniques on the Last.fm data set and, in doing so, found previously unknown relationships between hip-hop and other types of twentieth-century popular music. After recognizing that the song clips obtained from Last.fm constituted a data repository, the group devised a method of classifying songs based on two categories of attributes: harmonic and timbral. Harmonic attributes are quantifiable, encompassing chord changes and the melodic aspects of songs; timbral attributes are more subjective and describe the quality of the sound, like bright vocals or aggressive guitar. The authors deemed these attributes “musically meaningful” and thus more appropriate for quantitative analysis than simple measures of loudness or tempo.

The researchers used modified text-mining techniques to carry out their analysis. They combined characteristics from the harmonic and timbral lists into “topics”, which could then be used to describe each song by the topics it contains. Next, the researchers analyzed 17,000 songs that appeared on the Billboard Hot 100 charts during the 50 years between 1960 and 2010. After analyzing the songs and clustering them by their harmonic and timbral characteristics, the researchers created a phylogenetic tree of popular music.
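A minimal sketch of the clustering step (my own illustration on synthetic topic proportions, assuming SciPy is available; it is not the study’s code or data) looks like this: each song is a vector of topic proportions, and hierarchical clustering of those vectors yields both broad style clusters and a tree.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)

# stand-in for the study's intermediate output: each song is described by the
# proportion of 16 harmonic and timbral 'topics' it contains (each row sums to 1)
n_songs, n_topics = 200, 16
raw = rng.gamma(shape=0.3, size=(n_songs, n_topics))
song_topics = raw / raw.sum(axis=1, keepdims=True)

# hierarchical clustering by topic profile: the resulting tree plays the role of the
# paper's 'phylogeny', and cutting it yields broad style clusters
tree = linkage(song_topics, method="ward")
styles = fcluster(tree, t=4, criterion="maxclust")   # ask for four broad clusters
print(np.bincount(styles)[1:])                       # number of songs in each cluster
```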

The tree empirically verified what we already knew: hip-hop is in a league of its own. Of the four clusters on the tree, hip-hop is the most divergent. Using the tree of life as an analogy, if the genres of rock, soul, and easy listening are animals, fungi, and plants, hip-hop would be musical bacteria.

These data make a far more detailed knowledge of musical history possible. The authors state in their paper that, instead of relying on anecdote and conjecture to understand musical evolution, their methods make it possible to pinpoint precisely where musical revolutions occurred. Thanks to their efforts, popular music now has a quantitative evolutionary history, and Dr. Mauch isn’t finished yet. He plans to do similar analyses on recordings of classical music and indigenous music from all over the world, in an attempt to trace the origins and spread of music predating the radio. I find the innovative techniques and range of this study remarkable. Dr. Mauch and colleagues adapted research methods frequently used to improve music delivery (already an interesting field) and used them to unlock a small amount of transcendent musical knowledge. This study shows that tens of thousands of song clips may not look like a typical scientific data set, until someone decides to treat them as one. By taking what was available and forging it into something workable, Dr. Mauch and colleagues applied scientific methods to Last.fm’s unrecognized and unexamined data repository.

Surviving a pandemic in the Wide World of Warcraft

World of Warcraft (WoW) is a highly social video game that connects players globally. It is also arguably the last place anyone would look for scientific insight. Launched in 2004, WoW is one of the most popular games ever created, with around ten million subscribers at its peak. WoW was designed as a “massively multiplayer online role-playing game”: when it launched, players from all over the world began interacting in real time throughout an intricately designed virtual world. That world was designed as a fantastical model of the real one, complete with dense urban areas and remote, sparsely populated zones. In 2005, a glitch that caused a highly contagious sickness to spread between players revealed the game to be an apt model of human behavior under pandemic conditions. The glitch drastically affected gameplay and piqued the interest of several epidemiologists.

The “Corrupted Blood Incident”

The glitch came to be known as the “Corrupted Blood Incident” in the parlance of the game. It originated from one of the many things present in WoW that are not present in the real world: “dungeons”. Dungeons in WoW are difficult areas populated by powerful “boss” characters that possess special abilities not normally found in the game. In 2005, one of these abilities, the “Corrupted Blood” spell, was modified by a glitch so that its effects persisted outside the zone in which it normally resided. Consequently, the highly contagious “Corrupted Blood” swept through WoW, killing many player characters and providing a surprisingly accurate simulation of real-world pandemic conditions. “Corrupted Blood” infected player characters, pets, and non-player characters, which aided transmission throughout the virtual landscape. Only one boss character in one remote zone cast this spell, so its spread was a surprise to players and developers alike, adding to the realism of the “simulation”.

The glitch stayed active for about a week, and during that time, gameplay changed dramatically. Because pets and non-player characters carried the disease without symptoms, reservoirs of the plague existed in the environment and helped nourish the outbreak. Players avoided cities for fear of contracting the disease. Some players who specialized in healing stayed in cities, helping especially weak players stay alive long enough to do business. Weaker, low-level players who wanted to lend a hand posted themselves outside of cities and towns, warning other players of the infection ahead. After a week of the pandemic in the game, the developers updated the code and reset their servers, “curing” the WoW universe of this scourge of a glitch.

Some epidemiologists took note after observing the striking similarities between real-world pandemics and the virtual pandemic in WoW. In the virtual pandemic, pets acted as an animal reservoir, as birds did in the case of avian flu. Additionally, air travel in WoW (which takes place on the backs of griffins) proved analogous to air travel in the real world, thwarting efforts to quarantine those affected by the disease. WoW is also a social game full of tight-knit communities, and at the time it had around 6.5 million subscribers, making it a reasonable virtual approximation of the social stratification that exists in real-world society.
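The role of a reservoir can be illustrated with a toy compartment model (a sketch with invented parameters, not a model fitted to the incident or taken from Dr. Fefferman’s work): an always-infectious pool of pets and non-player characters keeps reseeding infection among players and sustains the outbreak.

```python
import numpy as np

rng = np.random.default_rng(4)

def outbreak(days=30, players=5000, beta=0.4, recover=0.2, death=0.05, infected_pets=10):
    """Daily counts of infected players in a toy 'Corrupted Blood'-style outbreak.
    infected_pets is a reservoir that never clears; set it to 0 to remove the reservoir."""
    susceptible, infected = players - 1, 1
    history = []
    for _ in range(days):
        force = min(beta * (infected + infected_pets) / players, 1.0)
        new_cases = rng.binomial(susceptible, force)
        recovered = rng.binomial(infected, recover)
        deaths = rng.binomial(infected - recovered, death)
        susceptible -= new_cases
        infected += new_cases - recovered - deaths
        history.append(infected)
    return history

with_reservoir = outbreak(infected_pets=10)
without_reservoir = outbreak(infected_pets=0)
print(max(with_reservoir), max(without_reservoir))   # the reservoir typically sustains a larger outbreak
```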

See Dr. Fefferman’s 2010 TED talk here.

The behavior observed in WoW was not taken as a prescription for how to handle a pandemic or a prediction of what will happen. Rather, as Dr. Nina Fefferman put it in a 2010 TED talk, the event provided “inspiration about the sorts of things we should consider in the real world” when making epidemiological models. Dr. Fefferman’s group identified two behaviors displayed by players experiencing the virtual pandemic, empathy and curiosity, which are not normally taken into account by epidemiological models. Curiosity was the most notable, because it paralleled the behavior of journalists in real-world pandemics. Journalists rush into an infected site to report and then rush out, hopefully before becoming infected, which is exactly what many players did in the infected virtual cities of WoW.

The “Corrupted Blood Incident” is the first known time that an unplanned virtual plague spread in a way similar to a real-world plague. Though most at first saw it as simply an annoying video game glitch, some creative scientists decided to see what knowledge they could glean from the incident. Their observations suggest that sometimes the best agent-based model is the one in which actual people control the agents, and that simulations similar to computer games might “bridge the gap between real world epidemiological studies and large scale computer simulations.” Epidemiological models are now richer as a result. To learn more about how the “Corrupted Blood Incident” changed scientific modeling of pandemics, head over to the PLOS Public Health Perspectives blog to hear Atif Kukaswadia’s take on it.

Concluding Thoughts

The Last.fm study and the “Corrupted Blood Incident” show how scientists can use esoteric corners of the Internet to illuminate interesting pieces of human history and behavior. New means of social interaction and new methods of collecting information bring about interesting, if slightly opaque, ways to discover new knowledge and advance scientific discovery. It is to these scientists’ credit that they shed light on human history and interactions by looking past traditional sources and finding data in novel places.


PLOS Computational Biology Community Going All Out to Cover #ISMB15 with the ISCB Student Council

Calling All Bloggers for ISMB/ECCB 2015

ISMB/ECCB 2015 in Dublin, Ireland, is fast approaching and we invite you to be involved in the live coverage of the event.

If you can’t make it to Dublin, follow our live collaborative blog coverage at

In previous years, ISMB has been well ahead of the social media curve, with microblogging in 2008, one year after the US launch of the original Apple iPhone and just two years after Twitter was founded. At the last count, Twitter averaged 236 million users, three million blogs come online each month, and Tumblr owners publish approximately 27,778 new blog posts every minute. Social media is, in like fashion, a growing aspect of conferences (read more in our Ten Simple Rules for Live Tweeting at Scientific Conferences), and we think ISMB/ECCB 2015 is a great venue for progress.

How can you be involved?

We want you to take live blogging to ISMB/ECCB. If you are planning to attend the conference in Dublin and you blog or tweet, or even if you would like to try it for the first time, we want to hear from you; everyone can get involved.

Our invitation extends to attendees of all backgrounds and levels of experience who could contribute blog posts covering the conference. In addition, we are looking for a number of ‘super bloggers’ who can commit to writing two to three high-quality posts or who would be interested in interviewing particular speakers at the conference. If you are speaking at ISMB/ECCB and would like to be involved, please also get in touch.


For the “next generation computational biologists”

We will be working on this collaborative blog project with the ISCB Student Council.

In acknowledgment of the time and effort of all who work with us, each contributor will receive a select PLOS Computational Biology 10th Anniversary t-shirt (only available at ISMB/ECCB 2015), and your work will be shared on the PLOS Comp Biology Field Reports page, making it easier for all to contribute and collaborate.


Winning design for the PLOS Comp Biol 10th Anniversary T-shirt; get yours by blogging with us at ISMB15! PLOS/Kifayathullah Liakath-Ali

What are the next steps?

If you’re active on Twitter or the blogosphere and want to help us share the latest and greatest from ISMB/ECCB 2015 conference, please email us at with a bit about your background and how you’d like to contribute. See you in Dublin!

FOR MORE ON PLOS AT ISMB/ECCB 2015 read this post.


Walk the walk, talk the talk: Implications of dual-tasking on dementia research

A group of friends chat as they walk in St. Malo, France. Photo courtesy of Antoine K (Flickr).

By Ríona Mc Ardle

You turn the street corner and bump into an old friend. After the initial greetings and exclamations of “It’s so good to see you!” and “Has it been that long?”, your friend inquires as to where you are going. You wave your hands to indicate the direction of your desired location, and with a jaunty grin, your friend announces that they too are headed that way. And so, you both set off, exchanging news of significant others and perceived job performance, with the sound of your excited voices trailing behind you.

This scenario seems relatively normal, and indeed it happens to hundreds of people in everyday life. But have you ever truly marvelled at our ability to walk and talk at the same time? Most people dismiss gait as an automatic function. They spare no thought for the higher cognitive processes we engage in order to place one foot in front of the other. The act of walking requires substantial attentional and executive function resources, and it is only the sheer amount of practice we accumulate throughout our lives that makes it feel like such a mindless activity. Talking while walking recruits even more resources, with the additional requirement of splitting attention so that we don’t focus on one task to the detriment of the other. And it’s not just talking! We can take on all manner of activities while walking: carrying trays, waving our hands, checking ourselves out in the nearest shop window. But what does such multitasking cost us?

Dual-Tasking: How well can we really walk and talk?

Dual-tasking refers to the simultaneous execution of two tasks, usually a walking task and a secondary cognitive or motor task. It is commonly used to assess the relationship between attention and gait. Although there is no consensus yet on exactly how dual-tasking hinders gait, dual-task studies have demonstrated decreased walking speed and poorer performance on the secondary task. This is seen particularly in healthy older adults, who struggle more with the secondary task when prioritizing walking, most likely to maintain balance and avoid falls. Such findings support a role for executive function and attention in gait, as both are implicated in frontal lobe function. Age-related changes in the brain include focal atrophy of the frontal cortex, with reductions of 10-17% observed in the over-65 age group. Compared with the reported 1% atrophy of the other lobes, it can be postulated that changes to the frontal lobes contribute to the gait disturbances experienced by the elderly. Older adults must recruit many of their remaining attentional resources to maintain gait, and thus have few left over to carry out other activities at the same time.

It has been incredibly difficult to confirm the frontal cortex’s role in gait from a neurological perspective. Imaging techniques have largely proven unsatisfactory for this purpose, as most require the subject to lie still in a fixed position. However, a recent study in PLOS ONE has shed some light on the area. Lu and colleagues (2015) employed functional near-infrared spectroscopy (fNIRS) to capture activation in the brain’s frontal regions: specifically, the prefrontal cortex (PFC), the premotor cortex (PMC) and the supplementary motor area (SMA). Although fNIRS produces a cortical image comparable to that of functional magnetic resonance imaging (fMRI), it can acquire its measurements while the subject is in motion. The researchers laid out three investigatory aims:

  1. To assess if declining gait performance was due to different forms of dual task interference.
  2. To observe whether there were any differences in cortical activation in the PFC, PMC, and SMA during dual-task and normal walking.
  3. To evaluate the relationship between such activation and gait performance during dual-tasking.

The research team predicted that the PFC, PMC and SMA would be more active during dual-task due to the increased cognitive or motor demand on resources.

A waiter balances a tray of waters. Photo courtesy of Tom Wachtel (Flickr).

Lu and colleagues recruited 17 healthy young individuals to take part in the study. Each subject underwent each of the three conditions three times, with a resting condition in between. The conditions were as follows: a normal walking condition (NW), in which the participant was instructed to walk at their usual pace; a walking-with-cognitive-task condition (WCT), in which the subject had to engage in a subtraction task; and a walking-with-motor-task condition (WMT), in which the participant had to carry a bottle of water on a tray without spilling it. Results showed that both the WCT and WMT induced slower walking than the NW. Interestingly, the WMT produced a higher number of steps per minute and a shorter stride time – this could be attributed to the intentional alteration of gait in order not to spill the water. An analysis of the fNIRS data revealed that all three frontal regions were activated during the dual-task conditions. The PFC showed the strongest, most continuous activation during the WCT. As the PFC is highly implicated in attention and executive function, its increased role in the cognitive dual-task condition seems reasonable. The SMA and PMC were most strongly activated during the early stages of the WMT. Again, this finding makes sense, as both areas are associated with the planning and initiation of movement – in this case, the researchers suggested that this activity may reflect a demand for bodily stability in order to carry out the motor task. This study has successfully demonstrated the frontal cortex’s role in maintaining gait, particularly in the presence of a secondary task.
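To make the gait measures concrete, here is a toy Python sketch showing how cadence (steps per minute) and stride time can be computed from a sequence of heel-strike times. The timestamps are invented for illustration; they are not data from Lu and colleagues.

import numpy as np

# Hypothetical heel-strike times in seconds, alternating right and left foot.
heel_strikes = np.array([0.00, 0.55, 1.10, 1.66, 2.21, 2.77, 3.32, 3.88])

step_times = np.diff(heel_strikes)                    # time between successive steps
cadence = 60.0 / step_times.mean()                    # steps per minute
stride_times = heel_strikes[2:] - heel_strikes[:-2]   # same-foot to same-foot interval

print(f"Cadence: {cadence:.0f} steps/min")               # ~108 steps/min
print(f"Mean stride time: {stride_times.mean():.2f} s")  # ~1.11 s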

Why is this important?

Although gait disturbances are extremely rare in young people, their prevalence increases with age. Thirty percent of adults over 65 fall at least once a year, with the incidence rate climbing to 50% in the over-85 population. Falls carry a high risk of critical injuries in the elderly, and often occur during walking. This latest study has provided evidence for the pivotal roles of executive function and attention in maintaining gait, and has given us insight into the utility of dual-task studies. Now that frontal cortex activation has been linked to carrying out a secondary task while walking, poor performance on one of these tasks may reveal a subtle deficit in attention or executive function. Gait abnormalities may therefore be able to act as a predictor of mild cognitive impairment (MCI) and even dementia. Studies have reported that slowing of gait may occur up to 12 years prior to the onset of MCI, with longitudinal investigations observing that gait irregularities significantly increased individuals’ likelihood of developing dementia 6-10 years later.

But what does the relationship between gait and cognition tell us about dementia? Dementia is one of the most prevalent diseases to afflict the human population, with 8% of over-65s and over 35% of the over-85s suffering from it. It is incredibly important for researchers to strive to reduce the amount of time individuals must live their lives under the grip of such a crippling disorder. Our growing knowledge of gait and cognition can allow us to do so in two different ways: through early diagnosis of dementia and through developments in interventions.

For the former, a drive in research has begun to correlate different gait deficits with dementia subtypes. The principle is that the physical manifestation of a gait disturbance could give clinicians a clue as to the lesion site in the brain – for example, if a patient asked to prioritize an executive function task while walking displays a significant gait impairment, this may predict vascular dementia, because executive function relies on frontal networks, which are highly susceptible to vascular risk. Studies have shown that this type of dual task often causes a decrease in velocity and stride length and is associated with white matter disorders and stroke. As to the latter point, practice of either gait or cognition may benefit the other. It is recognized that individuals who go for daily walks have a significantly reduced risk of dementia. This is attributed to gait’s engagement of executive function and attention, which exercises the neural networks associated with them. Similarly, improving cognitive function may in turn help one to maintain a normal gait. Verghese and colleagues (2010) demonstrated this in a promising study, in which cognitive remediation improved older sedentary individuals’ mobility.

Closing Thoughts

As both a neuroscientist and a citizen of the world, one of my primary concerns is the welfare of the older generation. The aging population is growing significantly as life expectancy increases, and these individuals are susceptible to a range of medical issues. While any health problem is hard to see in our loved ones, dementia and failing cognition place a particularly heavy burden on the individual, their family and society as a whole. Our exploration of the relationship between gait and cognition has so far offered a glimmer of hope for advancing our understanding of, and fight against, dementia, and I hope this intriguing area of research continues to do just that.


Beurskens, R., & Bock, O. (2012). Age-related deficits of dual-task walking: a review. Neural Plasticity, 2012.

Holtzer, R., Mahoney, J. R., Izzetoglu, M., Izzetoglu, K., Onaral, B., & Verghese, J. (2011). fNIRS study of walking and walking while talking in young and old individuals. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences, glr068.

Li, K. Z., Lindenberger, U., Freund, A. M., & Baltes, P. B. (2001). Walking while memorizing: age-related differences in compensatory behavior. Psychological Science, 12(3), 230-237.

Lu, C. F., Liu, Y. C., Yang, Y. R., Wu, Y. T., & Wang, R. Y. (2015). Maintaining Gait Performance by Cortical Activation during Dual-Task Interference: A Functional Near-Infrared Spectroscopy Study. PLOS ONE, 10(6), e0129390.

Montero-Odasso, M., & Hachinski, V. (2014). Preludes to brain failure: executive dysfunction and gait disturbances. Neurological Sciences, 35(4), 601-604.

Montero-Odasso, M., Verghese, J., Beauchet, O., & Hausdorff, J. M. (2012). Gait and cognition: a complementary approach to understanding brain function and the risk of falling. Journal of the American Geriatrics Society, 60(11), 2127-2136.

Sparrow, W. A., Bradshaw, E. J., Lamoureux, E., & Tirosh, O. (2002). Ageing effects on the attention demands of walking. Human Movement Science, 21(5), 961-972.

Verghese, J., Mahoney, J., Ambrose, A. F., Wang, C., & Holtzer, R. (2010). Effect of cognitive remediation on gait in sedentary seniors. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences, 65(12), 1338-1343.

Yogev-Seligmann, G., Hausdorff, J. M., & Giladi, N. (2008). The role of executive function and attention in gait. Movement Disorders, 23(3), 329-342.


Highlights from the 2015 Meeting of the Vision Sciences Society

Enjoying the view from the VSS Conference in St. Pete Beach, Florida. Photo courtesy of Minjung Kim.

Going to conferences is one of my favorite aspects of being a scientist. As a PhD student, I spend a lot of my life in solitude: when I read new literature, when I program new experiments, or when I conduct new analyses, I am very much alone in my thoughts. Outside of scheduled meetings with my advisors, there is little reason to interact with other people. The loneliness and boredom that swallow up every graduate student – yes, it will happen to you, too – are an unfortunate, little-discussed aspect of being a PhD student. The isolation, together with the drive to succeed and the illusion that everyone else is doing better than you, can at times wreak havoc on mental health.
But there is a cure: talking to people! Of course, the number one confidants are lab mates and department friends (this is why it’s important to have a friendly, accepting lab culture). Another, perhaps less obvious, outlet for fun and relief is conferences. At conferences, you will:

1) Meet people who find your research novel and interesting
2) Learn about other people’s research
3) Discover that other people have experienced the same graduate school woes that you may be experiencing, and survived.

In short, being at a conference allows you to step back from the details of your own graduate research and think again about why it is you decided to become a scientist in the first place. It’s stimulating. In fact, I cured my ennui with this year’s meeting of the Vision Sciences Society (VSS).

What is VSS?
VSS is an annual conference dedicated to studying the function of the human visual system; this year it took place from May 15th to May 20th in idyllic St. Pete Beach, Florida. With 1400 attendees, VSS covered all things visual perception, including 3-dimensional (3D) perception and attention, with methodologies spanning psychophysics, neuroimaging, and computational modeling.

This year’s VSS was filled with interesting panels, exhibits, and demos devoted to visual perception, including the viral phenomenon #theDress. I’ve highlighted in my round-up below some of the talks and posters that I found the most interesting.

The research round-up

Optimal camouflage under different lighting conditions
As a person who studies lighting and shading, I loved Penacchio, Lovell, Sanghera, Cuthill, Ruxton and Harris’s poster on how the effectiveness of countershading, a type of camouflage, varies with lighting condition. Countershading refers to shading patterns commonly found in aquatic animals, where the belly is coloured a brighter shade than the back: viewed from below, the animal’s bright belly is camouflaged against the bright light from the sun above, and viewed from above, the animal’s dark back is camouflaged against the darkness of the water.

But should the transition between back and belly be sharp or smooth? Penacchio et al.’s answer is that it depends on whether the lighting is sunny or cloudy. On a sunny day, shading is characterized by high contrast and sharp shadows, whereas on a cloudy day, it is characterized by low contrast and soft shadows. Penacchio et al. found that when the target’s countershading was matched to the lighting (e.g., a sharply countershaded target in a sunny scene), people had difficulty finding the target, whereas when the countershading was mismatched to the lighting (e.g., a sharply countershaded target in a cloudy scene), people found it easily. Interestingly, birds behaved the same as humans, showing that this form of camouflage works across different species.

I really enjoy ecologically based studies like this, because they help me understand how biological systems exploit constraints posed by the environment.

Can computers discriminate between glossy and matte surfaces?
Tamura and Nakauchi won the student poster prize for the Monday morning session for answering this question. Glossy materials are interesting, because unlike matte materials, they are characterized by white highlights. It’s an old painting trick: to make objects look glossy and shiny, add white highlights. However, the location of the highlights matters: if highlights are haphazardly placed on the surface without attention to surface geometry, they will look like streaks of paint (Anderson & Kim, 2009).

Tamura and Nakauchi examined whether a computer algorithm (a “classifier”) with no sophisticated understanding of scene geometry could nevertheless learn to discriminate between images of matte, glossy, and textured (painted) surfaces – and it could. This does not mean that scene geometry is unimportant for glossiness perception, but rather that the image representation they used (Portilla & Simoncelli, 2000) conveys at least some information about the surface geometry without explicitly encoding the shape.
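To give a flavor of how such a pipeline might look, here is a schematic Python sketch: extract simple texture statistics from grayscale images and train a classifier to separate glossy, matte, and painted surfaces. The crude features below are a stand-in for the Portilla-Simoncelli statistics used in the actual study, and the images and labels are assumed to be supplied by the reader; this is not the authors’ code.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def texture_features(image):
    """Crude texture statistics for a 2-D grayscale image."""
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    return np.array([image.mean(), image.std(), grad_mag.mean(), grad_mag.std()])

def train_gloss_classifier(images, labels):
    """images: list of 2-D arrays; labels: 'glossy', 'matte', or 'painted'."""
    X = np.vstack([texture_features(im) for im in images])
    clf = SVC(kernel="rbf")
    accuracy = cross_val_score(clf, X, labels, cv=5).mean()  # rough cross-validated accuracy
    return clf.fit(X, labels), accuracy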

I think this study is an excellent example of combining knowledge in two different fields, in this case human vision and computer vision, to answer an interesting question.

The shrunken finger illusion
One of my favorite talks at VSS this year was the talk on the shrunken finger illusion by Ekroll, Sayim, van der Hallen and Wagemans, a novel illusion that demonstrates how something as basic as your knowledge of your finger length can be overridden by visual cues.

Drawing by Rebecca Chamberlain. Reproduced from Ekroll, V., Sayim, B., Van der Hallen, R., & Wagemans, J. (2015). The shrunken finger illusion: Unseen sights can make your finger feel shorter. Manuscript in revision. Copyright by Ekroll et al. (2015).

Ekroll et al. gave human observers hollow hemispherical shells (imagine a ping pong ball cut in half) to wear on their fingers. When the shell was viewed from the top, observers experienced two illusions: (1) they saw a full sphere, not a hemisphere, and (2) they felt that their fingers were shorter.

The explanation has to do with amodal completion, the mental “filling in” of object parts that are hidden behind another object. When my cat peeks out from behind a door, I know that her body has not been truncated in half (perish the thought!); my visual system knows – has a representation of – the rest of her body. Amazingly, humans are not born knowing amodal completion, acquiring this ability at around four-to-six months of age (Kellman & Spelke, 1983).

So, the observers were amodally completing the hemisphere, thus seeing a sphere (Ekroll, Sayim & Wagemans, 2015). But they also knew that their fingers started behind the “sphere.” Their brains therefore “made room” for the back half of the sphere by assuming that the finger was shorter than usual.

This is a bizarre but interesting illusion that is consistent with previous work on the flexibility of body representations.

Demo Night and #theDress
The second night of VSS is demo night, where researchers and exhibitors share a new illusion, or new software, or anything fun that might not fit in the ordinary proceedings of the conference. At this conference, #theDress was a popular topic, with three demos dedicated to the viral sensation. I presented a demo of my own, as well, based on a project that I am working on with Dr. Richard Murray and Dr. Laurie Wilcox of York University Centre for Vision Research.

Let there be light
When people think of glowing objects, they typically assume that the object must be exceptionally bright. Our demo, an extension of my old master’s thesis work, showed that that’s not true — for some types of glow, it is the perceived shape of the object that determines whether it appears to glow.

We computer-rendered a random, bumpy disc under simulated cloudy lighting. From the front, this disc looks like an ordinary, solid, white object. However, as the disc rotates to reveal its underside, it takes on a translucent appearance and appears to glow.


Note that the luminance of the disc is the same from the front and the back – it is only left-right reversed. But, critically, the correlation between the luminance and the depth changes between the front view and the back view: viewed from the front, the peaks of the discs are bright and the valleys are dark; viewed from the back, the peaks are dark and the valleys are bright. Why are the valleys so bright? It must be because there is a light source either inside or behind the object!
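The cue can be expressed in a few lines of code. The sketch below uses a synthetic bumpy surface rather than our rendered stimulus, but it shows the key point: the correlation between luminance and depth is positive for the front view and negative for the back view.

import numpy as np

rng = np.random.default_rng(0)
depth = rng.random((64, 64))   # stand-in height map: peaks have high values

# Front view: peaks bright, valleys dark. Back view: the reverse.
front_luminance = 0.2 + 0.8 * depth + 0.05 * rng.standard_normal((64, 64))
back_luminance = 1.0 - 0.8 * depth + 0.05 * rng.standard_normal((64, 64))

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

print(f"front view: r(luminance, depth) = {corr(front_luminance, depth):+.2f}")  # strongly positive
print(f"back view:  r(luminance, depth) = {corr(back_luminance, depth):+.2f}")   # strongly negative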

The demo was very well received. For me, this was a highlight of the conference — talking to people about something that I am enthusiastic about, and convincing them that it is, in fact, cool. I imagine most scientists feel the same way about their work.

I still see it as white/gold
“The dress” refers to a photo of a dress that went viral in March 2015. To many people, the dress appeared to be white with gold fringes, whereas to others it appeared to be blue with black fringes; a small minority reported seeing blue/brown. As an unrepentant white/gold perceiver, I was astounded to learn that the dress is, in real life, blue/black.

Most vision scientists agree that the dress “illusion” is an example of color constancy gone wrong. Color constancy is the visual system’s remarkable ability to “filter out” the effects of lighting. For example, my salmon-coloured shirt appears salmon-coloured regardless of whether I look at it indoors or outdoors, on a sunny day or a cloudy day – even though, if I were to take a photo of the shirt, the RGB values of the shirt would vary tremendously across the conditions. The predominant explanation of the dress is that different people’s visual systems assume different lighting conditions, and therefore filter differently, resulting in different percepts. One of the dress demos (Rudd, Olkkonen, Xiao, Werner & Hurlbert, 2015) showed that, indeed, the same blue/black dress can appear very different under different lighting, and that, had the dress been white/black, the illusion would not have occurred. (I should note that there were two other dress demos — Shapiro, Flynn & Dixon, and Lafer-Sousa & Conway — but sadly I did not get to see them, as I was busy with my own demo.)
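As a toy illustration of what “filtering out” an illuminant can mean computationally, here is a Python sketch of the simple gray-world correction. This is not the mechanism the visual system actually uses, and it is not an analysis of the dress photograph; the pixel values are invented.

import numpy as np

def gray_world_correct(image_rgb):
    """Rescale each channel so that the image's mean color becomes neutral gray."""
    means = image_rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means            # per-channel gains
    return np.clip(image_rgb * gains, 0.0, 1.0)

# A 'white' surface photographed under a bluish illuminant (hypothetical RGB values).
patch = np.ones((4, 4, 3)) * np.array([0.55, 0.60, 0.85])
print(gray_world_correct(patch)[0, 0])      # ~[0.667 0.667 0.667], i.e. neutral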
However, questions remain. Why is there such huge individual variability? Why are some people able to flip between percepts? I cannot answer all these questions, but I can direct you to this future issue of Journal of Vision dedicated to exploring the dress. If you have your own idea that you would like to test, the submission deadline is July 1, 2016. In the meantime, you can follow these links to see what vision scientists have said so far.
Gegenfurtner, Bloj & Toscani (2015)
Lafer-Sousa, Hermann & Conway (2015)
Macknik, Martinez-Conde & Conway (2015)
Winkler, Spillman, Werner & Webster (2015)

David Knill Memorial Symposium
David Knill, a renowned vision scientist, passed away suddenly in October last year, at age 53. He was known for his early work on Bayesian approaches to visual perception – that is, the notion that visual perception is the result of a computation that optimally combines noisy information from the environment with loose prior knowledge about the environment. Bayesian inference is now such an important foundation of computational theories of vision that it is hard to imagine there was ever a time when the Bayesian perspective was the minority view in vision science.
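For readers unfamiliar with the idea, here is a minimal sketch of Bayesian cue combination, the kind of computation associated with this line of work: two noisy Gaussian cues about the same quantity are combined by reliability-weighted averaging. The numbers are invented for illustration and do not come from any of Dr. Knill’s experiments.

def combine_gaussian_cues(mu1, sigma1, mu2, sigma2):
    """Minimum-variance combination of two independent Gaussian cues."""
    w1 = sigma2**2 / (sigma1**2 + sigma2**2)     # weight on cue 1 grows with its reliability
    mu = w1 * mu1 + (1.0 - w1) * mu2
    sigma = (sigma1**-2 + sigma2**-2) ** -0.5    # combined estimate is less noisy than either cue
    return mu, sigma

# Texture cue: slant 30 deg (sd 8). Stereo cue: slant 40 deg (sd 4).
print(combine_gaussian_cues(30.0, 8.0, 40.0, 4.0))   # ~(38.0, 3.6): pulled toward the reliable cue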

In his memory, Wei Ji Ma – a former post-doctoral fellow of his who is now a professor at New York University – organized a symposium celebrating Dr. Knill’s life and work. Speaker after speaker talked about his dedication to science, and about his kind and gentle personality. Dr. Ma’s tribute, in particular, made me realize that I am lucky to have met Dave Knill when I did, when I applied to work with him for my PhD. The symposium was respectful and touching, and was the perfect way to commemorate a brilliant scientist.
Dr. Knill’s Forever Missed page is here.

The real reason for going to conferences
But the most memorable aspects of VSS were not part of any scheduled proceedings. I remember: catching up with old friends at the Tiki bar, cooking with my friends in the hotel room, complimenting a speaker on his talk, him complimenting me back on the question I asked, commiserating about the lack of job prospects and sharing new-hire stories… and of course, I can’t forget the annual night-time ocean dip that marks the end of VSS.

As banal as it sounds, scientists are what drive science. Science is not done in a vacuum, and some of the best collaborations come out of friendships that you forge at conferences. And even if nothing productive comes of it – so what? Maybe it’s reward enough to know that there are friendly nerds out there who share your interests.

Anderson, B. & Kim, J. (2009). Image statistics do not explain the perception of gloss. Journal of Vision, 9(11):10, 1-17. doi:10.1167/9.11.10.

Ekroll, V., Sayim, B., van der Hallen, R. & Wagemans, J. (2015, May). The shrunken finger illusion: amodal volume completion can make your finger feel shorter. Talk presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Ekroll, V., Sayim, B. & Wagemans, J. (2015). Against better knowledge: the magical force of amodal volume completion. i-Perception, 4(8) 511–515. doi:10.1068/i0622sas.

Gegenfurtner, K.R., Bloj, M. & Toscani, M. (2015). The many colours of ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.04.043.

Kellman, P.J. & Spelke, E.S. (1983). Perception of partly occluded objects in infancy. Cognitive Psychology, 15(4). 483-524.

Kim, M., Wilcox, L. & Murray, R.F. (2015, May). Glow toggled by shape. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Lafer-Sousa, R., Hermann, K.L. & Conway, B.R. (2015). Striking individual differences in color perception uncovered by ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.04.053

Lafer-Sousa, R. & Conway, B.R. (2015, May). A color constancy color controversy. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Macknik, S.L., Martinez-Conde, S. & Conway, B.R. (2015). How “the dress” became an illusion unlike any other. Scientific American Mind, 26(4). Retrieved from

Penacchio, O., Lovell, P.G., Sanghera, S., Cuthill, I.C., Ruxton, G. & Harris, J.M. (2015, May). Concealing cues to shape-from-shading using countershading camouflage. Poster presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Portilla, J. & Simoncelli, E.P. (2000). A Parametric Texture Model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1), 49-71.

Rudd, M., Olkkonen, M., Xiao, B., Werner, A. & Hurlbert, A. (2015, May). The blue/black and gold/white dress pavilion. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Shapiro, A., Flynn, O. & Dixon, E. (2015, May). #theDress: an explanation based on simple spatial filter. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Tamura, H. & Nakauchi, S. (2015, May). Can the classifier trained to separate surface texture from specular infer geometric consistency of specular highlight? Poster presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Winkler, A.D., Spillman, L., Werner, J.S. & Webster, M.A. (2015). Asymmetries in blue-yellow color perception and in the color of ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.05.004.


Photo by Karen Meberg

Minjung (MJ) Kim is a PhD candidate at New York University (NYU) Department of Psychology, in the Cognition and Perception program. She studies the visual perception of light and colour, with a keen interest in material perception (e.g., what makes glowing objects appear to glow?). She is co-advised by Dr. Richard Murray at York University Centre for Vision Research and Dr. Laurence Maloney at NYU.


LAMP Diagnostics: The key to malaria elimination?

A medical researcher with the U.S. Army tests a patient for malaria in Kisumu, Kenya. Photo courtesy of the U.S. Army Africa.

By Patrick McCreesh

Malaria elimination is possible within a generation. But controlling malaria and eliminating it are different goals, and each poses its own challenges. Overcoming the unique challenges of elimination is essential to meeting this goal, and the barriers presented in elimination settings will require different strategies and policies. Researchers at the Malaria Elimination Initiative (MEI) at the University of California, San Francisco (UCSF) are gathering evidence to inform elimination efforts going forward. I am a Master’s candidate in the UCSF Global Health Sciences program, working in the field in collaboration with the University of Namibia on a cross-sectional survey of malaria prevalence.

Malaria 101

Malaria is caused by parasites of the genus Plasmodium, which infect people through the bite of female Anopheles mosquitoes. The parasites hide inside the body’s cells before bursting into the bloodstream, leaving fever, chills, and body aches in their wake. Most people recover from the infection, but in severe cases the parasites invade the brain or cause organ failure. Children under five years old and pregnant women are at highest risk of severe complications.

History and Global Impact

The world has made substantial progress toward reducing the burden of malaria in the past 15 years, thanks to new technological innovations in malaria prevention and treatment. For example, pyrethroid-impregnated bed nets are a low-cost and effective way to reduce malaria (for you basketball fans: Stephen Curry not only scores on the court, but assists around the world with his contributions to Nothing But Nets #gowarriors). Rapid diagnostic tests (RDTs) allow for point-of-care diagnosis of malaria. Artemisinin combination therapy (ACT) was a major breakthrough in malaria treatment. The global incidence of malaria has decreased by an average of 3.27% per year since 2000 thanks to expanded coverage of these interventions, and child mortality due to malaria in sub-Saharan Africa has decreased by 31.5% over the same period. Due to this drop in cases, many believe that permanently releasing the world from the grip of malaria is possible. At a seminal conference in 2007, Bill and Melinda Gates galvanized global leaders to commit to malaria eradication.

Despite this pledge to fight malaria around the world, about 3.2 billion people are still at risk of malaria infection. In 2013, there were an estimated 198 million cases and 584,000 deaths due to malaria. Transmission is most intense in Africa and severely affects those living in poverty. There is an urgent need to expand malaria control and elimination efforts to ultimately eradicate malaria. The current eradication strategy calls for expanding control efforts in high-transmission settings, eliminating malaria gradually at the fringes of transmission, and developing a preventive vaccine. The estimated cost of malaria elimination is $5.3 billion, but this investment will save millions of lives and dollars in the long run.

A mother and child rest under an insecticide-treated bed net in Zambia. Photo courtesy of the Gates Foundation.

Malaria Elimination
Elimination involves interrupting transmission in a defined geographical area and dropping incidence to zero. Eliminating malaria in low-transmission settings raises several unique concerns compared to high-transmission settings.

• Changing epidemiology

The epidemiology of malaria changes as incidence drops. Asymptomatic carriers, people who are infected by the malaria parasite but show no symptoms, are responsible for about 20-50% of transmission. Most of these carriers are adult men rather than children and pregnant women. The proportion of people with sub-microscopic infections increases, making diagnosis difficult. Malaria imported from highly endemic neighboring countries creates a source of epidemics in countries with low transmission.

• Increased importance of surveillance

In malaria control settings, local clinics see a large number of cases, far too many to investigate and intervene at the source of infection. Malaria surveillance in high-burden settings relies on passive case detection, in which cases are detected when patients with fever seek care at health facilities, and reporting consists of aggregate counts by district. As the number of malaria cases decreases, it becomes practical and necessary to investigate and find additional cases, which are often asymptomatic. The problem of asymptomatic cases illustrates the importance of understanding the difference between incidence and prevalence in achieving malaria elimination. Incidence is the number of new malaria cases in the total population over a period of time; prevalence is the number of people carrying malaria in the total population at a set point in time (a small worked example follows this list). Passive surveillance does not measure the true prevalence of malaria, nor does it address the epidemiological and diagnostic challenges in settings preparing for elimination. Effective surveillance in elimination settings requires active surveillance: finding and testing individuals in a selected geographic area, regardless of symptoms, to identify and treat additional cases of malaria.

• Diagnostic difficulties

There are limitations to the currently accepted malaria diagnostic tools. Blood smear microscopy is time consuming and requires highly trained staff to be accurate at low parasite densities. Rapid diagnostic tests are quick and easy to use, but their sensitivity declines for infections with fewer than 100 parasites per microliter. The most accurate method of diagnosing malaria is polymerase chain reaction (PCR), but it requires expensive equipment and reagents and highly trained laboratory personnel, and it has long turnaround times. Diagnosing asymptomatic malaria is especially challenging because asymptomatic people do not seek care and their infections are often undetectable by microscopy or RDTs. The changing epidemiology and increased importance of surveillance in elimination settings therefore call for a diagnostic tool that detects asymptomatic infections as well as PCR does, but costs less and gives results faster.
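Returning to the incidence/prevalence distinction raised above, here is the promised worked example, with made-up numbers:

population = 10_000

# Incidence: new cases arising over a period of time.
new_cases_this_year = 250
incidence = new_cases_this_year / population                      # 25 per 1,000 per year

# Prevalence: people carrying the parasite at one point in time,
# including asymptomatic infections that passive surveillance misses.
symptomatic_now, asymptomatic_now = 40, 110
prevalence = (symptomatic_now + asymptomatic_now) / population    # 1.5%

print(f"Annual incidence: {1000 * incidence:.0f} per 1,000")
print(f"Point prevalence: {100 * prevalence:.1f}%")
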
Loop-mediated isothermal amplification (LAMP)

A promising new nucleic-acid-based method is loop-mediated isothermal amplification (LAMP). Like PCR, LAMP amplifies a specific section of DNA. Unlike PCR, LAMP does not require a thermocycler, because the reaction takes place at a constant temperature. The process uses multiple primers and a DNA polymerase to create a stem-loop structure of DNA, which serves as a template for amplification. LAMP has a sensitivity similar to PCR at low parasite density: 92.7% at 10 parasites per microliter. LAMP has lower costs, faster turnaround times, and fewer technical requirements than PCR for malaria diagnosis. LAMP can detect malaria DNA from dried blood spots, which are commonly used to collect and test samples in active surveillance. In the lab at the University of Namibia, the speed of LAMP has enabled our group to process about 150 samples per day. Fast and accurate results make LAMP a good option for diagnosing malaria in elimination settings.

Closing thoughts

The new challenges presented in malaria elimination settings will require different surveillance and diagnostic tools. Researchers at the Malaria Elimination Initiative at UCSF are gathering evidence to inform elimination efforts going forward. As more countries transition from malaria control to elimination, elimination policy should be made with these challenges in mind. Most importantly, policy makers should ask: “Does our health system have the tools to quickly investigate every case, test everyone around the case, and provide treatment to positives in a timely manner?” A recent article in PLOS Neglected Tropical Diseases about malaria elimination in Mesoamerica and Hispaniola notes that LAMP could help more countries answer “yes” to this question, ultimately resulting in a malaria-free world.

Patrick McCreesh is a Master’s candidate in the Global Health Sciences program at University of California, San Francisco. His research focuses on infectious disease, with an emphasis on malaria elimination efforts in partnership with the UCSF Malaria Elimination Initiative.


Uncovering the biology of mental illness

The visual cortex of the brain is depicted here. Photo courtesy of Bryan Jones.

By James Fink

The human brain is capable of remarkably complex processes. The brain senses time and visualizes space. It allows us to communicate through language and create beautiful works of art. But what happens when these cognitive abilities go awry? The National Institute of Mental Health (NIMH) defines serious mental illness (SMI) as a mental, behavioral, or emotional disorder that interferes with or limits one or more major life activities. The cited survey estimated the prevalence of SMI in the United States at ~4%, with the estimated prevalence of any mental illness at ~18%.

Mental illnesses, also known as psychiatric disorders, encompass many diverse conditions, including anxiety disorders, obsessive-compulsive disorder (OCD), and post-traumatic stress disorder (PTSD). This group of illnesses presents a major global burden. In the 2010 Global Burden of Disease report, mental and substance use disorders accounted for 7.4% of total disability-adjusted life years (DALYs) globally and 8.6 million years of life lost (YLL), the single greatest cause of YLL worldwide. Psychiatric disorders also pose a significant burden to individuals and their families, and a challenge for clinicians and scientists.

For clinicians, many psychiatric disorders are difficult to treat. Even individuals with the same disorder present with a spectrum of symptoms and symptom severity, and many of the drugs currently used to treat these disorders have a multitude of undesirable side effects. For scientists, the mechanisms underlying this family of illnesses are still being unveiled, and reliable biological explanations for these disorders are still unclear, though it is known that biology and genetics play a role.

The lack of insight into the determinants of these disorders may relate to the difficulty of developing effective pharmacological treatments for them. Though various treatments for each of these disorders exist (ranging from drugs to cognitive behavioral therapy (CBT)), these treatments can be greatly improved. The field of neuroscience in particular is providing insight into the brain systems, cellular deficits, and genetics behind many of these disorders, which may help the development of new therapies. A limitation of current research efforts is that many of these insights come from animal models, and conflicting results are often found in the literature. Despite these obstacles, the future of neuroscience holds a wealth of promise for developing a better understanding of psychiatric illness, studying these disorders with a new set of model systems, and applying interesting new research techniques.

Schizophrenia – An example of what we know

Perhaps the most studied of the psychiatric disorders is schizophrenia, a disorder affecting about 1% of the general population and characterized by a variety of cognitive impairments, including loss of affect and motivation and, often, the presence of hallucinations and delusions. A PubMed search for articles with “schizophrenia” in the title yields more than 50,000 results, an indication of just how much research focuses on this condition. Researchers have identified several hereditary factors (genes), with diverse sets of functions, that may be tied to the disorder.

• DISC1 (disrupted in schizophrenia 1) is a gene that makes a protein with many interacting partners, and plays a role in a variety of pathways within cells — including the ability of cells to divide, mature, and move towards their final location within a tissue. Such processes have been shown to lead to neurological deficits and disorders if disturbed.
• Neuregulin 1 is a member of a protein family that includes three other neuregulins. Perhaps most interestingly (and to make matters more complex), neuregulin 1 itself can undergo a type of processing in the cell, called alternative splicing, that produces many alternative forms of neuregulin 1 (up to 31!), all performing slightly different functions. The main job of neuregulin 1 seems to be to aid in brain and nervous system development.
• The CACNA1C gene is responsible for making a protein that forms part of an important calcium channel, playing a role in a variety of functions of brain cells (neurons).
• SHANK3 is used by the cell as a scaffold, providing support for other cellular molecules that are important for the signaling that goes on between individual neurons of the brain.

Genes are not the only story, though; researchers have also identified deficits at the cellular and network levels of the brain. The brain is composed of neurons and supporting cells called glia, its two major cell types. But there are several classes of neurons (how many depends on the classification system you use), and each class plays its own important role in proper brain function. For instance, excitatory neurons use a chemical called glutamate to signal to other cells and are responsible for promoting the activity of their partners, whereas inhibitory neurons use a chemical called GABA and are responsible for quieting their partners. There have been many reports of disrupted function of both excitatory and inhibitory neurons in mouse models of schizophrenia, in multiple brain regions and at different ages. But there have also been reports that fail to find a disruption of either cell type.
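To see how excitation and inhibition can balance each other, here is a purely pedagogical two-population rate model in Python. It is a generic textbook-style toy, not a model of schizophrenia or of any specific finding mentioned above; all parameters are arbitrary.

import numpy as np

def simulate_ei(steps=2000, dt=0.001, drive=1.5):
    w_ee, w_ei, w_ie, w_ii = 1.2, 1.0, 1.0, 0.5   # synaptic weights (arbitrary units)
    tau_e, tau_i = 0.02, 0.01                     # time constants in seconds
    r_e = r_i = 0.0                               # firing rates of the E and I populations
    rates = []
    relu = lambda x: max(x, 0.0)
    for _ in range(steps):
        # Excitatory cells are driven by external input and recurrent excitation,
        # and quieted by the inhibitory population (the glutamate/GABA interplay).
        dr_e = (-r_e + relu(drive + w_ee * r_e - w_ei * r_i)) / tau_e
        # Inhibitory cells are driven by the excitatory population.
        dr_i = (-r_i + relu(w_ie * r_e - w_ii * r_i)) / tau_i
        r_e, r_i = r_e + dt * dr_e, r_i + dt * dr_i
        rates.append((r_e, r_i))
    return np.array(rates)

print("steady-state (E, I) rates:", simulate_ei()[-1].round(2))   # settles to a stable balance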

So what is the primary mechanism? This is exactly the problem outlined above: the complexity of mental illness makes it difficult for researchers to pin down a single biological explanation. Variations in mouse models, experimental approaches, animal age, or brain region being studied may be factors that contribute to inconsistencies across findings. The problem is that the brain changes over time and each brain region behaves a little differently from even its neighboring brain region. These factors complicate finding accurate and meaningful deficits in psychiatric disorders, a problem that may disappear in the near future.

A microscopy image of a mouse brain. One of the major barriers to greater understanding of the biological mechanisms of mental illness is reliance on animal models, which have different experiences of mental illness than humans. Image courtesy of Zeiss Microscopy.

The new toolbox
A recent article published by the New Yorker profiles Karl Deisseroth, a psychiatrist and neuroscientist at Stanford University. In the article, Deisseroth mentions the difficulty of treating and understanding neurological and psychiatric disorders, asking: “When we have the complexity of any biological system — but particularly the brain — where do you start?”

Dr. Deisseroth is known for more than just treating patients. He is one of the inventors of one of the most exciting and cutting-edge experimental techniques used in neuroscience today – optogenetics. Optogenetics is a technique that allows light-sensitive channels to be expressed in neurons. By combining this technique with genetic and viral approaches, researchers can insert these channels into very specific populations of neurons. Ultimately, this approach allows researchers to control distinct groups of neurons and individual circuits of the brain with flashes of light, providing unprecedented control over cellular and circuit function.

The study of the neural circuits underlying behavior has long been a main aim of neurobiology. Various circuits underlying many human behaviors and cognitive functions are now known, and the specific circuits affected in psychiatric illnesses are starting to be uncovered. Applying optogenetics to the study of these disorders will give researchers a much more precise way to probe how various circuits function in models of neuropsychiatric disorders, without affecting surrounding circuits. This is important, as non-specific circuit stimulation can confound results.

The advent of induced pluripotent stem cells (iPSCs), a method published by Shinya Yamanaka’s group in 2007 in which skin cells are reverted to a stem cell-like state via the expression of “reprogramming factors”, now provides a means for researchers to use human cells to study disease. iPSCs can be driven to form various cell types, including neurons, by exposing them to a cocktail of factors known to be important in driving the development of nervous tissue.

Before the discovery of iPSCs by the Yamanaka group, the only ways of studying human brain tissue were through post-mortem tissue and human embryonic stem cells. Now, iPSC technology allows researchers to collect skin cells from large groups of patients via skin biopsy, samples that can then be used to generate patient-specific neurons. Because these neurons are derived from actual patients with mental disorders, researchers can study these diseases using human neurons from the people affected.

Cerebral Organoids
Understanding the neural circuits that are disrupted in neuropsychiatric disorders remains a huge goal for neuroscience research. This is highlighted by the BRAIN Initiative, put forth by the Obama administration in 2013, which aims at understanding how individual cells and neural circuits work together so that researchers and clinicians can better understand brain disorders. A few years ago, approaches to studying and probing brain circuits, even with optogenetics, were limited to animal models, because cells grown in a culture dish in a lab fail to form the neural circuits observed in the brain.

But a paper published in 2013 by the Knoblich lab showed that iPSC-derived neurons can be used to create “cerebral organoids”: small bits (about 4 mm in diameter) of neural tissue that express markers characteristic of various brain regions, including cortical and hippocampal regions. Since the publication of this innovative technique, other groups have published similar methods, creating additional versions of 3-D neural cultures and even cerebellar-like structures (the cerebellum is a brain structure known to be important for movement and coordination). In fact, WIRED magazine recently published an article discussing a paper that created what the authors call “cortical spheroids” (and what WIRED calls “brain balls”), a different method for developing organoid-like structures. These techniques cannot yet be used to study neural circuits as they truly exist in the actual mouse or human brain (the circuits and brain-like regions observed in culture are very rudimentary). However, the advancement of these techniques over the next decade or two could provide new and exciting ways to probe actual human brain circuits using patient cells.

Though every day we gain greater insight into how the human brain works and how it might be disrupted in psychiatric disorders, we are still far from uncovering the exact circuits and mechanisms that underlie each of these disorders. It is clear that tools such as optogenetics, iPSC-derived neurons, and cerebral organoids can provide tremendous control over, and enable detailed study of, human neurons from patients. Together, these approaches may yield a better understanding of how human neurons and neural circuits go awry in these disorders, leading to the identification of novel targets for drug development, offering promise for patients, and finally allowing scientists and clinicians to uncover the biology behind mental illness.


1. Uhlhaas, P. J. & Singer, W. Abnormal neural oscillations and synchrony in schizophrenia. 1–14 (2010).
2. Brandon, N. J. & Sawa, A. Linking neurodevelopmental and synaptic theories of mental illness through DISC1. 1–16 (2011).
3. Green, E. K. et al. The bipolar disorder risk allele at CACNA1C also confers risk of recurrent major depression and of schizophrenia. Molecular Psychiatry 15, 1016–1022 (2009).
4. Mei, L. & Xiong, W.-C. Neuregulin 1 in neural development, synaptic plasticity and schizophrenia. Nature Reviews Neuroscience 9, 437–452 (2008).
5. Gauthier, J. et al. De novo mutations in the gene encoding the synaptic scaffolding protein SHANK3 in patients ascertained for schizophrenia. Proceedings of the National Academy of Sciences 107, 7863–7868 (2010).
6. Feng, Y. & Walsh, C. A. Protein-protein interactions, cytoskeletal regulation and neuronal migration. Nat Rev Neurosci 2, 408–416 (2001).
7. Lewis, D. A., Curley, A. A., Glausier, J. R. & Volk, D. W. Cortical parvalbumin interneurons and cognitive dysfunction in schizophrenia. Trends in Neurosciences 35, 57–67 (2012).
8. Takahashi, K. et al. Induction of Pluripotent Stem Cells from Adult Human Fibroblasts by Defined Factors. Cell 131, 861–872 (2007).
9. Dolmetsch, R. & Geschwind, D. H. The Human Brain in a Dish: The Promise of iPSC-Derived Neurons. Cell 145, 831–834 (2011).
10. Lancaster, M. A. et al. Cerebral organoids model human brain development and microcephaly. Nature 501, 373–379 (2013).
11. Paşca, A. M. et al. Functional cortical neurons and astrocytes from human pluripotent stem cells in 3D culture. Nature Chemical Biology (2015).
