Climate Capital: Assessing the hidden value of coastal ecosystems

By Gordon Ober

Le Nguyen throws a line to haul up a crab trap. The Alaskan King Crab industry has been threatened by increasing ocean acidification. Photo courtesy of the Seattle Times, 2013 (Steve Ringman).

Measuring the fiscal value of ecosystems

Ecosystems provide both direct and indirect services. Direct services are the ones we can essentially see and touch, and they are often assigned a financial value. Many conservationists cite this direct, tangible economic value of the environment in their fight, but it is only one way of valuing ecology. Often, the indirect, or "hidden," services of the environment are the most significant and compelling reasons to prioritize conservation. While economic arguments for conservation certainly have merit, the intrinsic functions of an ecosystem are often what is most valued. The International Union for Conservation of Nature (IUCN) has called on the global community to quantify the total value of an ecosystem, combining the values of both its direct and indirect services, to highlight the importance of protected ecosystems.

Arguably, the most critical service an ecosystem provides is its inherent ability to capture and store carbon. As the world faces pressure to reduce CO2 emissions and mitigate climate change, ascribing economic value to these critical indirect services becomes important, particularly as research has shown that the carbon captured by preserved ecosystems pays off in economic gains.

The Kyoto Protocol, a binding agreement set under the United Nations Framework Convention on Climate Change, put the Global Carbon Market into motion, which has led to evaluations of hidden and indirect ecosystem services. By putting a price on greenhouse gas emissions and on biologically stored units of carbon, it has created a financial incentive for conserving valuable ecosystems. At the moment, however, it applies only to terrestrial areas. The science and modeling for marine systems has lagged behind its terrestrial counterpart, but new efforts by scientists to quantify the indirect values of marine ecosystem function have helped the issue gain momentum.

The High Economic Value of Coastal Ecosystems

Coastal ecosystems have proven to be some of the most productive and valuable ecological repositories on the planet. These ecosystems are renowned for their biodiversity as well as their economic value. Fisheries, aquaculture, recreation, and eco-tourism are just some of the ways people make money from healthy coastal ecosystems. Yet even though healthy coastal ecosystems can be profitable, it should come as no surprise that man-made forces are threatening their survival, inevitably reducing their economic value.

Just one example of an ecosystem threatened by climate change is the iconic king crab fishery in Alaska. Despite being valued at $100 million USD, the fishery is threatened by ocean acidification. Acidification is the result of increasing atmospheric CO2, which lowers ocean pH and reduces the availability of the carbonate ions that calcifying organisms need. In this environment, shelled creatures like king crabs cannot fully develop their shells and so face stunted growth and increased predation risk, with an almost immediate impact on the size and quality of crabs caught. If the Alaskan crab industry were to crash, people would lose their jobs and the local economy would suffer the consequences. In this scenario, the economic value of the habitat is tangible and direct, and the quoted value of $100 million USD is enough to attract attention.

While the economic value of the Alaskan crab fisheries holds fiscal weight, coastal ecosystems also provide many other hidden, or indirect, benefits. For example, salt marshes help buffer the impact of storms and flooding and filter runoff before it reaches the aquatic system. While these benefits are recognized as important, their financial value is difficult to quantify, which makes it hard to build an economic argument around the indirect services of healthy ecosystems. As a result, many policymakers and resource allocators are less inclined to prioritize sustaining coastal ecosystems whose benefits are intangible.

The increasing value of blue carbon

Indirect services may be hard to quantify, but their importance is starting to attract attention, specifically when it comes to blue carbon. Blue carbon, the carbon captured by marine ecosystems, differs from the carbon captured by their terrestrial counterparts. In recent years, researchers like Murray and colleagues have started to study the role of blue carbon in combating climate change. Marine systems capture and store CO2 at high rates, making them among the largest carbon sinks in the world. Coastal marine systems, such as salt marshes, seagrass meadows, coral reefs, and mangroves, are especially active carbon sinks due to their high productivity: they host dense communities of both microscopic and macroscopic photosynthetic organisms that actively consume CO2 and can effectively store it. In another study, Nellemann and colleagues estimated that marine organisms capture roughly 55% of all the carbon captured by living organisms. The ability to absorb and store CO2 is a hidden but incredibly valuable aspect of these ecosystems, especially in the face of rapidly rising anthropogenic CO2 emissions driving climate change.

A model for costing marine ecosystems

In a May 2015 PLOS ONE paper, Tatiana Zarate-Barrera and Jorge Maldonado adapted and reconfigured an existing model to put a fiscal value on the indirect services of coastal marine ecosystems in their own region.

Globally, many countries have made efforts to protect their marine ecosystems and resources, often by establishing marine protected areas (MPAs), areas in which biodiversity is protected from further human influence. However, it is often hard to secure the funding and support needed to create new MPAs and to maintain those already established. The researchers investigated both the carbon-storage capacity of MPAs along the coast of Colombia and the potential economic value of that storage, with the ultimate goal of providing solid economic evidence for conserving and expanding MPAs.

Total Economic Value (TEV) and its components applied to marine and coastal ecosystems. Figure courtesy of PLOS ONE, "Valuing Blue Carbon: Carbon Sequestration Benefits Provided by the Marine Protected Areas in Colombia" (2015).

The authors adapted a model proposed in a 2012 PLOS ONE paper to fit their local system. Part of the model takes into account the rate of carbon capture by an ecosystem and the dominant biota, the size of the ecosystem, how carbon storage can be divided by sediments and living material, and the depth of the seabed. This part of the model generates an annual amount of carbon uptake by a specific ecosystem based on the size and the biota present. The model then incorporates the price of carbon per unit on the Global Carbon Market to generate a monetary value for the carbon storage for a known amount of coast. Using this model, the researchers were able to estimate that increasing the size and range of MPAs would have a significant and positive economic impact. This new model indicates that the value of these ecosystems is about 16 to 33 million EUR per year, and for the first time puts a concrete monetary value on an indirect service.
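
To make the shape of such a calculation concrete, here is a minimal sketch in Python of a blue-carbon valuation of the kind described above. It is not the authors' model: the function name and every number in the example (area, sequestration rate, carbon price) are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of a blue-carbon valuation of the kind described above.
# All numbers below are illustrative placeholders, not values from the paper.

def annual_carbon_value(area_ha, sequestration_rate_tC_per_ha_yr,
                        carbon_price_eur_per_tCO2):
    """Estimate the yearly monetary value of carbon captured by a coastal habitat."""
    C_TO_CO2 = 44.0 / 12.0  # convert tonnes of carbon to tonnes of CO2
    annual_carbon_t = area_ha * sequestration_rate_tC_per_ha_yr
    annual_co2_t = annual_carbon_t * C_TO_CO2
    return annual_co2_t * carbon_price_eur_per_tCO2

# Example: a hypothetical 10,000 ha seagrass meadow sequestering 1.5 tC/ha/yr,
# priced at 5 EUR per tonne of CO2 on a carbon market.
value_eur = annual_carbon_value(10_000, 1.5, 5.0)
print(f"Estimated value: {value_eur:,.0f} EUR per year")
```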

Conclusion

Models such as the TEV model pioneered by Pendleton and colleagues are pivotal to global conservation efforts and necessary to help bridge the gap between science and economics. These models can also be adapted to show how much money a country stands to lose by destroying an ecosystem, a powerful message for policymakers who might otherwise neglect coastal ecosystems.

As climate change tightens its grip, dealing with excess carbon and quelling its global effects become increasingly important. Developing economic incentives for preserving coastal ecosystems will not only help conservation, but will also help the scientific community address climate change.

References

Harley, Christopher DG, et al. “The impacts of climate change in coastal marine systems.” Ecology letters 9.2 (2006): 228-241.

IUCN, TNC, World Bank. How Much is an Ecosystem Worth? Assessing the Economic Value of Conservation. Washington, DC; 2004.

Murray BC, Pendleton L, Jenkins WA, Sifleet S. Green Payments for Blue Carbon Economic Incentives for Protecting Threatened Coastal Habitats. Durham, NC: Nicholas Institute for Environmental Policy Solutions; 2011.

Nellemann C, Corcoran E, Duarte C, Valdés L, De Young C, Fonseca L, et al. Blue Carbon. A Rapid Response Assessment. GRID-Arendal: United Nations Environment Programme; 2009.

Pendleton L, Donato DC, Murray BC, Crooks S, Jenkins WA, Sifleet S, et al. Estimating Global “Blue Carbon” Emissions from Conversion and Degradation of Vegetated Coastal Ecosystems. PLOS ONE 2012; 7(9): e43542. doi: 10.1371/journal.pone.0043542 PMID: 22962585

Welch, Craig. “Sea Change: Lucrative crab industry in Danger.” Seattle Times, September 11, 2013.

Zarate-Barrera TG, Maldonado JH (2015) Valuing Blue Carbon: Carbon Sequestration Benefits Provided by the Marine Protected Areas in Colombia. PLoS ONE 10(5): e0126627. doi:10.1371/journal.pone.0126627

Legionnaires' Disease in 2015: Cutting-Edge Research Clashing with Public Health Unpreparedness

By Meredith Wright

In 1976, the American Legion, a veterans group still active today, met in Philadelphia, PA for a three-day convention. Shortly after the convention ended, many of the Legionnaires became ill. By the end of the outbreak, 182 people had contracted pneumonia and 29 had died. Identification of the causative agent introduced the world to Legionella pneumophila, an intracellular bacterium that infects freshwater protozoa and causes bacterial pneumonia in humans. Almost 40 years later, New York City has just experienced its worst outbreak of the disease, which started on July 12, 2015, in the Bronx and grew to 124 reported cases, with 12 fatalities. While the outbreak is now considered over, questions remain about how a bacterium known to clinicians and public health officials since 1976 could cause such a large outbreak today. What do we know about L. pneumophila, and what is being done to prevent future outbreaks?

Image courtesy of the NYC Department of Health and Mental Hygiene

The Basics

L. pneumophila is an intracellular bacterium that naturally resides in warm water, where it is adapted to live inside protozoa, such as amoebae, and to form biofilms. Humans become infected by breathing in vapors or mist from contaminated water sources; the disease does not spread from person to person. Further, this bacterium is an opportunistic pathogen, meaning that it generally infects the elderly, those with chronic lung problems, or those who are otherwise immunocompromised.

In humans, L. pneumophila infects alveolar macrophages, a cell type important for degrading pathogens and other foreign particles that have entered the lungs. Macrophages perform this vital task mainly through a process called phagosome-lysosome fusion, which involves the uptake of a foreign body, its packaging into a compartment called the phagosome, and then the maturing of the phagosome into the lysosome. The lysosome's interior has harsher conditions than the phagosome and should result in the degradation of its contents (see Fig. 1b for a basic schematic of phagosome-lysosome fusion). Many pathogens have evolved methods for counteracting this human defense mechanism, including L. pneumophila. The Legionnaires' disease bacterium co-opts phagosome-lysosome fusion to create a compartment that shelters the bacterium, known as the Legionella Containing Vacuole (LCV), depicted in Fig. 1a. L. pneumophila also impacts human macrophages through lipid remodeling, autophagy, and changes to the ubiquitin pathway. These processes work together to allow the replication of the bacterium at the cost of normal macrophage function, resulting in the pneumonia symptoms experienced by patients.

Fig. 1: Legionella and phagosome-lysosome fusion.
A: Basic schematic of the formation of the Legionella Containing Vacuole (LCV).
B: Standard schematic of effective phagosome-lysosome fusion to contain a bacterium. From Isberg et al, 2009.

The Latest Research

The molecular details of how L. pneumophila redirects the host cell for its nefarious purpose of building the LCV are not completely understood, but this is an area of active research. For example, recent work has identified Lpg0393, a previously unknown protein produced by the bacterium that activates Rab proteins, a family of proteins that play a critical role in the phagosome-lysosome fusion process. Additionally, recent structural studies of SidC, a known L. pneumophila protein, have explained in better detail how this bacterial protein recruits the host proteins needed to sustain the LCV, and they lay the groundwork for using proteins from this bacterium as tools in other research. Perhaps most relevant for public health officials, it has recently been shown that not all decontamination techniques are created equal. A study published in PLOS ONE in August 2015 shows that suboptimal temperatures and chlorine concentrations reduce the effectiveness of treatments currently used to decontaminate water, and that these suboptimal conditions are typically found in the parts of water systems most likely to come into contact with human users.

Sources of Outbreaks

According to the CDC, common sources of the bacteria include hot tubs, cooling towers, hot water tanks, plumbing systems, and fountains. In the 1976 outbreak that gave L. pneumophila its name, the Legionnaires who became ill were all staying in the same hotel; while a precise source was never confirmed, it seems likely the bacteria came from contaminated water tanks connected to the air conditioning system, as the only hotel employee to become sick was an air conditioning repairman. According to testimony from Dr. Mary T. Bassett, Commissioner of the New York City Department of Health and Mental Hygiene, a total of 18 cooling towers tested positive for L. pneumophila in the South Bronx, an impoverished part of New York City. Cooling towers sit atop large buildings and are important for maintaining large air conditioning units. They had not previously been routinely tested for L. pneumophila in New York City, but this outbreak spurred a Commissioner's Order requiring all building owners to disinfect their cooling towers or provide evidence of prior disinfection. The city has also introduced legislation mandating regular testing of New York City cooling towers for the bacterium. The legislation, signed into law on August 18th, aims to prevent and prepare for future outbreaks in a number of ways, with a heavy focus on the cooling towers that harbored the bacteria behind this outbreak. In particular, a registry of cooling towers will be created, and these towers will be inspected quarterly using guidelines issued by the New York City Health Department. According to NYC Mayor Bill de Blasio, this legislation is the first of its kind in any large city. Mandated testing will likely become more important over time, as an aging population creates a larger pool of susceptible individuals and warmer temperatures in big cities demand more large cooling towers.

Looking Ahead

It is unclear why these preventative testing measures were not already in place in New York City. There have been attempts to institute regular testing in the past, but they were unsuccessful. This outbreak hearkens back to the early days of the ongoing Ebola outbreak in West Africa, where officials scrambled to contain the epidemic after it was already well underway. But unlike Ebola, L. pneumophila is relatively well understood by the scientific community, and there are simple measures that can prevent its growth in urban water sources. In New York City's defense, officials have been working to educate the public about Legionnaires' disease, holding a town hall meeting in the Bronx on August 3rd and taking to social media to spread information (follow the New York City Health Department on Twitter at @nycHealthy).

 

Hopefully, this outbreak can serve as an example for other municipalities that systems for preventing and containing infectious disease are an essential part of a city’s infrastructure. As research uncovers more details about how L. pneumophila and other infectious diseases live and spread, these findings must be incorporated into continuously-updated public health policies.

References

1. Fraser, D. W., Tsai, T. R., Orenstein, W., Parkin, W. E., Beecham, H. J., Sharrar, R. G., Harris, J., Mallison, G. F., Martin, S. M., McDade, J. E., et al. Legionnaires' Disease: Description of an Epidemic of Pneumonia. N. Engl. J. Med. 297, 1189–1197 (1977).

2. NYC Mayor: Legionnaires' Outbreak Has Claimed 12 Lives – ABC News.

3. Flannagan, R. S., Cosío, G. & Grinstein, S. Antimicrobial mechanisms of phagocytes and bacterial evasion strategies. Nat. Rev. Microbiol. 7, 355–366 (2009).

4. Newton, H. J., Ang, D. K. Y., Van Driel, I. R. & Hartland, E. L. Molecular pathogenesis of infections caused by Legionella pneumophila. Clin. Microbiol. Rev. 23, 274–298 (2010).

5. Hubber, A. & Roy, C. R. Modulation of host cell function by Legionella pneumophila type IV effectors. Annu. Rev. Cell Dev. Biol. 26, 261–283 (2010).

6. Sohn, Y.-S. et al. Lpg0393 of Legionella pneumophila Is a Guanine-Nucleotide Exchange Factor for Rab5, Rab21 and Rab22. PLoS One 10, e0118683 (2015).

7. Luo, X. et al. Structure of the Legionella Virulence Factor, SidC Reveals a Unique PI(4)P-Specific Binding Domain Essential for Its Targeting to the Bacterial Phagosome. PLOS Pathog. 11, e1004965 (2015).

8. Isberg, R. R., O’Connor, T. J. & Heidtman, M. The Legionella pneumophila replication vacuole: making a cosy niche inside host cells. Nat. Rev. Microbiol. 7, 13–24 (2009).

9. Cervero-Aragó, S., Rodríguez-Martínez, S., Puertas-Bennasar, A. & Araujo, R. M. Effect of Common Drinking Water Disinfectants, Chlorine and Heat, on Free Legionella and Amoebae-Associated Legionella. PLoS One 10, e0134726 (2015).

10. NYC Dept. of Health and Mental Hygiene, Testimony of Mary T. Bassett regarding Cooling Towers Registration, and Inspection and Testing for Microbes and Preconsidered Intro. (2015).

11. Legionnaires’ death toll swells to 12; bacteria found in two more cooling towers – New York’s PIX11 – WPIX-TV.

12. NYC Dept. of Health and Mental Hygiene. Testimony of Mary T. Bassett regarding Cooling Towers Registration, and Inspection and Testing for Microbes. 1–5 (2015).

13. A Belated Look at New York’s Cooling Towers, Prime Suspect in Legionnaires’ Outbreak – The New York Times.

14. De Blasio Signs Cooling Tower Regulations Into Law In Wake Of Legionnaires’ Outbreak – CBS New York.

15. Mayor Bill de Blasio Signs First-In-Nation Legislation for Cooling Tower Maintenance – NYC.gov.

In “My Virtual Dream”, art and science unite in unique study of neurofeedback

In 2013, art and science merged like never before at Toronto's Nuit Blanche art festival, where guests were given the opportunity to participate in a scientific experiment investigating neurofeedback. Following the initial success of the "My Virtual Dream" project, plans are being made to scale up the experiment as scientists take the project on a world tour. In 2015, the My Virtual Dream world tour will kick off in Amsterdam and travel to San Francisco. This blog post examines the original 2013 experiment more closely and highlights some potential concerns around this innovative new approach.

On October 5, 2013, visitors entered a large-scale art installation and participated in neurofeedback, a process in which participants see their brain activity in real time and, based on the reading, modulate their behavior. The event not only introduced people to the power of EEG headsets, but also demonstrated that neurofeedback can affect how people learn within one minute of initial interaction. EEG recordings are usually taken in a lab, an environment that does not mimic the complexity we encounter every day, but "My Virtual Dream" in Toronto proved that the collection of physiological data does not need to be limited to a researcher's workspace. Yet, as technology becomes more sophisticated and mobile, scientists will need to keep considering the ethical implications of recording physiological data from the general public.

The “My Virtual Dream” Experience  

Over the course of 12 hours, electroencephalography (EEG) data, a measure of the brain's electrical activity, were recorded and analyzed from a total of 523 adults. Groups of people were brought into a 60-foot geodesic dome to participate in a two-part interactive experience. The dreaming portion of the evening involved projecting animated video clips with prerecorded music onto the surface of the dome for the volunteers. There were four dream themes with different audio and video pairings, and the environment changed based on the EEG recordings from the participants. Only the gaming experience was reported in a 2015 PLOS ONE paper entitled "'My Virtual Dream': Collective Neurofeedback in an Immersive Art Environment". This blog post serves as a recap of the event based on the paper; I did not personally attend.

Volunteers were given a lesson about the EEG technology and the overall rules of the game; once inside the dome, they were divided into groups of five and hooked up to wireless Muse EEG headsets. Each group was positioned in front of an LCD screen that displayed the group's five brainwave signals. Electrical activity of the brain can be measured and displayed as a wave-like signal, and for this part of the experiment, participants saw the signals that corresponded to their own electrical activity (Figure 1, panel A). For the remainder of the experiment, brain activity was displayed as an orb or firework (Figure 1, panels B-D). The game was divided into solo and collective neurofeedback, and measured alpha and beta bands throughout. Alpha bands (8-12 Hz) are generally associated with wakeful relaxation, while beta bands (18-30 Hz) correlate with periods of active, busy thinking. After a tutorial, which recorded each individual's baseline thresholds for the alpha and beta bands to be used later in the evening, volunteers moved on to the game.

The first two games were individual and lasted 50 seconds each. Volunteers were instructed to relax, and the visual feedback was based on gathering particles into an orb (Figure 1, panel B): when participants were relaxed and their alpha power went beyond the baseline, particles were added to the center of the glowing orb. For the concentration period, the orb again acted as the visual feedback (Figure 1, panel C): when beta power, or concentration level, surpassed the threshold set during the tutorial, the orb glowed and brightened. To finish the cycle of relaxation and concentration, the volunteers watched their individual orbs explode like fireworks, with the height of the firework corresponding to the alpha achievement and the brightness to the beta achievement.

For the group portion of the session, every individual again had their own orb, but the orbs were arranged in a circle surrounding a group orb (Figure 1, panel D). The goal of the game was the same: participants were visually rewarded by gathering particles into an orb during the relaxation period and brightening the orb during the concentration period. The difference was that this time, the orb in the middle grew in size every time three participants were in the appropriate target state, either relaxation or concentration, depending on what the game was asking for. After the game, the group orb turned into a firework show that displayed the performance of the group as a whole. The final part of the game mimicked the previous group experience, except that there were no explicit instructions: the members of the group were told to try to synchronize with one another, and the middle orb again grew in size if three members of the group were in the same target state. It did not matter whether those three members were relaxing or concentrating, as long as they were synchronous.

Figure 1: Screenshots of (A) the initial training period and welcome message, where brain activity was displayed as a signal, (B) the solo game during the relaxation period, (C) the solo game during the concentration period, and (D) the group game, where the giant orb in the middle represents the participants' collective brain states. (Photo courtesy of Kovacevic et al.)
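
For readers curious about the mechanics, the following Python sketch shows one simple way to implement the kind of band-power feedback described above: estimate relative alpha and beta power from a short EEG window and compare each to a baseline threshold. It is not the study's actual processing pipeline; the sampling rate, function names, and synthetic test signal are all assumptions made for illustration.

```python
# Minimal sketch of band-power neurofeedback of the kind described above,
# not the study's actual pipeline. Assumes a single-channel EEG signal.
import numpy as np

FS = 256  # assumed sampling rate (Hz)

def band_power(signal, fs, low, high):
    """Relative power of the signal within a frequency band, via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = psd[(freqs >= low) & (freqs <= high)].sum()
    return band / psd.sum()

def feedback(signal, alpha_baseline, beta_baseline):
    """Return which target states (relaxation, concentration) exceed baseline."""
    alpha = band_power(signal, FS, 8, 12)   # relaxation band
    beta = band_power(signal, FS, 18, 30)   # concentration band
    return {"relaxed": alpha > alpha_baseline, "concentrating": beta > beta_baseline}

# Example with one second of synthetic data dominated by 10 Hz (alpha) activity.
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(FS)
print(feedback(eeg, alpha_baseline=0.3, beta_baseline=0.3))
```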

Results

EEG headsets are playing an increasingly prominent role both in the lab and as consumer devices; examples include the Muse headset, the Kickstarter-funded Melon headset, and the Emotiv headset. EEG headsets are non-invasive, allow for mobility, and generally require little preparation time. For both consumers and researchers studying EEG activity, it would be ideal if people could learn to modulate their brain activity using EEG headset feedback as quickly as possible. Based on earlier runs of the games with smaller samples (fewer than 10 people), the researchers had hypothesized that subtle but detectable changes in brain activity would be observed during the neurofeedback training period. After analyzing the data, they reported that participants learned to modulate their relative spectral power very quickly: within only 60 seconds for relaxation and 80 seconds for concentration. This was evident from gradual declines in the relevant frequency band during the alpha periods and gradual increases while beta activity was being monitored. The entire gaming session was only seven minutes long, but even within that short window, these changes in brain activity could be observed. The ability to draw this conclusion relies partly on having a large sample size of more than 500 people.

Lab environments often do not mimic the complex experiences we encounter every day. Although "My Virtual Dream" was still a research experiment, the environment inside the geodesic dome was completely unlike the muted, potentially bland, and less personable appearance of a lab. While we may not encounter geodesic domes as part of our average day, the experience is at least more complex and stimulating than a typical lab environment, and this could be one reason why the researchers chose this setting. "My Virtual Dream" functioned as a proof of concept that EEG headsets could feasibly be used to monitor brain activity in the environments we typically inhabit: our homes, social situations, and workplaces.

Crowdsourcing for Neuroscience  

As technology has become more sophisticated, crowdsourcing for neuroscience research has become more common. A number of programs and apps, including Lumosity and The Great Brain Experiment, rely on crowdsourcing; both, although approaching data collection in an unconventional way, have published papers in peer-reviewed journals. The game Eyewire is a citizen science project that allows any user to help map neurons in 3D, and websites like Kickstarter and Experiment.com are helping non-scientists play a larger role in funding, and often participating in, science. As exciting as these new methods of data collection are, gathering brainwaves and other physiological data could quickly become complicated. It is notable that the "My Virtual Dream" experiment tracked only the alpha and beta bands, and the data were de-identified. Volunteers were also given the option to remove their EEG recordings from the analysis, though only four participants did so. The event was organized by Baycrest, an academic center affiliated with the University of Toronto, and was conducted in compliance with its Research Ethics Board. Tracking physiological data across a large sample could become ethically problematic if we eventually reach a point where brain signals can serve as a signature of a neurological disorder. If EEG data could act as a biomarker for a disease state or even behavioral tendencies, the consequences for individuals could be severe.

Efforts have been made to protect genetic information under the Genetic Information Nondiscrimination Act (GINA), which states that genetic information cannot be used against someone for insurance or employment purposes. But no such protections yet exist for neurological data. In my opinion, research could become ethically fraught if EEG data suddenly became a clue to a person's capacities or neurological state. As it stands now, the insight gained from EEG research most likely outweighs any privacy concerns, but as technology becomes ever more sophisticated, researchers should continue to weigh the implications of collecting physiological data from the masses.

Plans are being made to take “My Virtual Dream” on a world tour beginning with Amsterdam, followed by Washington, D.C. and San Francisco, CA. Toronto’s Nuit Blanche art festival will again take place in 2015, and although the specific events have not been announced yet, more than 45 novel, interactive, and engaging art installations will be displayed.

Seeing like a computer: What neuroscience can learn from computer science

Screenshot from the original "Metropolis" trailer. Image credit: Paramount Pictures, 1927. Public domain.

By Marta Kryven

What do computers and brains have in common? Computers are made to solve the same problems that brains solve. Computers, however, rely on drastically different hardware, which makes them good at different kinds of problem solving. For example, computers do much better than brains at chess, while brains do much better than computers at object recognition. A study published in PLOS ONE found that even bumblebee brains are amazingly good at selecting visual images with the color, symmetry, and spatial frequency properties suggestive of flowers. Despite their differences, computer science and neuroscience often inform each other.

In this post, I will explain how computer scientists and neuroscientists learn from each other's research and review current applications of neuroscience-inspired computing.

Brains are good at object recognition

Can you see a figure among the dots? Image credit: Barbara Nordhjem, reprinted with permission.

Look at the figure (image 2) composed of seemingly random dots. Initially the object in the image is blended with the background. Within seconds, however, you will see a curved line, and soon after the full figure. What is it? (The spoiler is at the end of this post.)

Computer vision algorithms, which are computer programs designed to identify objects in images and video, fail to recognize images like image 2. As for humans, it turns out that although people take time to recognize the hidden object, they start looking at the right location in the image very early, long before they are able to identify it. That does not mean people somehow know which object they are seeing but cannot report the correct answer; rather, they experience a sudden perception of the object after continued observation. A 2014 study published in PLOS ONE suggests this may be because the human visual system is remarkably good at picking out statistical irregularities in images.

Neuroscience-inspired computer applications

Like any experimental science, neuroscience research starts by listing the possible hypotheses that could explain the results of experiments. Researchers probe questions such as "How can a magician hide movement in plain sight?" or "What causes optical illusions?"

Computer science researchers, by contrast, begin by listing alternative ways to implement a behavior, such as vision, in a computer. Instead of discovering how vision works, computer scientists develop software to solve problems such as "How should a self-driving car respond to an apparent obstacle?" or "Are these two photographs of the same person?"

An acceptable computer-vision solution for artificial intelligence (AI), just like a living organism, must process information quickly and with limited knowledge. For example, a slow reaction time in a self-driving car might result in the death of a child, or in traffic stopping because of a pothole. The processing speed of current computer vision algorithms is far behind the speed of the visual processes humans employ daily; however, the technical solutions that computer scientists develop may still have relevance to neuroscience, sometimes acting as a source of hypotheses about how biological vision might actually work.

Likewise, most AI, such as computer vision, speech recognition, or robotic navigation, addresses problems already solved in biology. Thus, computer scientists often face a choice between inventing a new way for a computer to see and modeling it on a biological approach. A solution that is biologically plausible has the advantage of being resource-efficient and tested by evolution. Probably the best-known example of a biomimetic technology is Velcro, an artificial fabric recreating an attachment mechanism used by plants. Biomimetic computing, that is, recreating functions of biological brains in software, is just as ingenious, but much less well known outside the specialized community.

The interplay between neuroscience and computer science compelled me to explore how the two fields learn from each other. After attending the International Conference on Perceptual Organization (ICPO) in June 2015, I made a list of trends in neuroscience-inspired computer applications that I will explore in more detail in this post:

1. Computer vision based on features of early vision
2. Gestalt-based image segmentation (Levinshtein, Sminchisescu, Dickinson, 2012)
3. Shape from shading and highlights — which is described in more detail in a recent PLOS Student Blog post
4. Foveated displays (Jacobs et al. 2015)
5. Perceptually-plausible formal shape representations

My favorite example of this interplay between neuroscience and computer science is computer vision based on features of early vision. (Note that there are also many other, not biologically inspired, approaches to computer vision.) I particularly like this example because here a neuroscience discovery about seemingly simple principles of how the visual cortex processes information informed a whole new trend in computer science research. To explain how computer vision borrows from biology, let's begin by reviewing the basics of human vision.

Inside the visual cortex

Let me present a hypothetical situation. Suppose you are walking along a beach with palm trees, tropical flowers and brightly colored birds. As new objects enter your field of vision, they seem to enter your awareness instantly. But in reality, shape perception emerges in the first 80-150 milliseconds of exposure (Wagemans, 2015). So how long is 150 ms? For comparison, in 150 ms a car travels four meters at highway speed, and a human walking along a beach travels about 20 centimeters in the time it takes to form a mental representation of an object, such as a tree. Thus, as you observe the palm trees, the flowers and the birds, your brain gradually assembles familiar percepts. During the first 80-150 ms, before awareness of the object has emerged, your brain is hard at work assembling shapes from short and long edges in various orientations, which are coded by location-specific neurons in primary visual area, V1.

Today, we know a lot about the primary visual area V1 thanks to the pioneering research of Hubel and Wiesel, who shared a Nobel Prize for discovering scale- and orientation-specific neurons in the cat visual cortex in the late 1950s. As an aside, if you have not yet seen the original videos of their experiments demonstrating how a neuron in a cat's visual cortex responds to a bar of light, I highly recommend viewing these classic videos!

Inside the computer

Approximation of a square wave with four Fourier components. Image credit: Jim.belk, Public Domain via Wikimedia Commons

At about the time that Hubel and Wiesel made their breakthrough, mathematicians were looking for new tools for signal processing, to separate data from noise in a signal. For a mathematician, a signal may be a voice recording encoding a change of frequency over time. It may also be an image, encoding a change of pixel brightness over two dimensions of space.

When the signal is an image, signal processing is called image processing. Scientists care about image processing because it enables a computer "to see" a clear percept while ignoring the noise in sensors and in the environment, which is exactly what the brain does!

The classic tools of signal processing, Fourier transforms, were introduced by Joseph Fourier in the nineteenth century. A Fourier transform represents data as a weighted sum of sines and cosines; for example, it represents the sound of your voice as a sum of single-frequency components. As illustrated in the figure above, the more frequency components are used, the better the approximation. Unfortunately, unlike the brain's encoding, Fourier transforms do not explicitly encode the edges that define objects.
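
The square-wave figure above can be reproduced with a few lines of Python. The sketch below sums the first four odd sine harmonics of a unit square wave; adding more terms sharpens the approximation.

```python
# Sketch of the square-wave approximation shown in the figure above:
# a square wave built from its first four (odd-harmonic) Fourier components.
import numpy as np

def square_wave_approx(t, n_terms=4):
    """Partial Fourier series of a unit square wave: a sum of odd sine harmonics."""
    approx = np.zeros_like(t)
    for i in range(n_terms):
        k = 2 * i + 1  # odd harmonics 1, 3, 5, 7
        approx += (4 / np.pi) * np.sin(2 * np.pi * k * t) / k
    return approx

t = np.linspace(0, 1, 1000)
y = square_wave_approx(t, n_terms=4)  # already close to the ideal square wave
```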

To solve this problem, scientists experimented with sets of arbitrary basis functions encoding images for specific applications. Square waves, for example, encode low-resolution previews downloaded before a full-resolution image transfer is complete. Wavelet transforms of other shapes are used for image compression, detecting edges and filtering out lens scratches captured on camera.

What do computers see?

It turns out that a specially selected set of image transforms can model the scale- and orientation-specific neurons in the primary visual area, V1. The procedure can be visualized as follows. First, the image is processed with progressively lower spatial frequency filters. The result is a pyramid of image layers, equivalent to seeing the image from further and further away. Then, each layer is filtered at several edge orientations in turn. The result is a computational model of the initial stage of early visual processing, which assumes that the useful data (the signal in the image) are edges within a frequency interval. Of course, such a model represents only a tiny aspect of biological vision. Nevertheless, it is a first step towards modeling more complex features, and it answers an important theoretical question: if a brain could only see simple edges, how much would it see?
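
A minimal sketch of this idea in Python might look like the following: build a pyramid of progressively coarser copies of an image and filter each copy with simple oriented (Gabor-like) kernels. This is an illustration of the general scheme, not any particular published model; the kernel parameters and function names are assumptions, and real systems such as steerable pyramids are considerably more refined.

```python
# Minimal sketch of the V1-like scheme described above: filter an image pyramid
# with oriented (Gabor-like) kernels. Illustrative only.
import numpy as np
from scipy.ndimage import zoom, convolve

def gabor_kernel(orientation, size=15, wavelength=6.0, sigma=3.0):
    """A simple odd-phase Gabor filter tuned to one edge orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.sin(2 * np.pi * xr / wavelength)

def v1_like_responses(image, n_scales=3,
                      orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Edge responses at several scales (an image pyramid) and orientations."""
    responses = {}
    for s in range(n_scales):
        layer = zoom(image, 0.5 ** s)  # progressively coarser copies of the image
        for theta in orientations:
            responses[(s, theta)] = convolve(layer, gabor_kernel(theta))
    return responses

# Example on a random "image"; real input would be a grayscale photograph.
responses = v1_like_responses(np.random.rand(128, 128))
```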

To sample a few applications, our computational brain could tell:

1. Whether a photograph is taken indoors or outdoors (Guérin-Dugué & Oliva, 2000)
2. Whether the material is glossy, matte or textured
3. Whether a painting is a forgery, by recognizing individual artist brushstrokes.

Moreover, a computer brain can also do something that a real brain cannot: it can analyze a three-dimensional signal, a video. You can think of video frames as slices perpendicular to time in a three-dimensional space-time volume. A computer interprets moving bright and dark patches in this space-time volume as edges in three dimensions.

Using this technique, MIT researchers discovered and amplified imperceptible motions and color changes captured by a video camera, making them visible to a human observer. The so-called motion microscope reveals changes as subtle as a face changing color with each heartbeat, a baby's breathing, and a crane swaying in the wind. Probably the most striking demonstration presented at ICPO 2015 last month showed a pipe vibrating into different shapes when struck by a hammer. Visit the project webpage for demos and technical details.
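
The core idea behind such video magnification can be sketched briefly: band-pass filter each pixel's intensity over time, amplify the filtered signal, and add it back to the original frames. The Python sketch below shows only this temporal step; the published method (Wu et al., 2012, in the references) also decomposes each frame spatially, and the parameter values here are illustrative assumptions.

```python
# Sketch of the temporal core of the "motion microscope" described above:
# band-pass filter each pixel's intensity over the video frames, amplify the
# filtered signal, and add it back. Spatial decomposition is omitted for brevity.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fps, low_hz, high_hz, amplification=20.0):
    """frames: array of shape (n_frames, height, width), grayscale intensities."""
    b, a = butter(2, [low_hz, high_hz], btype="band", fs=fps)
    filtered = filtfilt(b, a, frames, axis=0)  # per-pixel temporal filtering
    return frames + amplification * filtered

# Example: amplify ~1 Hz fluctuations (roughly heart-rate range) in a fake clip.
fake_video = np.random.rand(90, 64, 64)  # 3 seconds at 30 frames per second
out = magnify(fake_video, fps=30, low_hz=0.8, high_hz=1.5)
```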

So, how far are computer scientists from modeling the brain? In 1950, early AI researchers expected computing technology to pass the Turing test by 2000. Today, computers are still used as tools for solving technically specific problems; a computer program can behave like a human only to the extent that human behavior is understood. The motivation for computer models based on biology, however, is twofold.

First, computer scientists and computer users alike are much more likely to see the output of a computer program as valid if its decisions are based on biologically plausible steps. A computer AI recognizing objects using the same rules as humans will likely see the same categories and come to the same conclusions. Second, computer applications are a test bed for neuroscience hypotheses. A computer implementation can tell us not only whether a particular theoretical model is feasible; it may also, unexpectedly, reveal alternative ways in which evolution could have worked.

Answer to the image 2 riddle: a rabbit

References
Daubechies, Ingrid. Ten lectures on wavelets. Vol. 61. Philadelphia: Society for industrial and applied mathematics, 1992.

Elder, James H., et al. “On growth and formlets: Sparse multi-scale coding of planar shape.” Image and Vision Computing 31.1 (2013): 1-13.

Freeman, William T., and Edward H. Adelson. “The design and use of steerable filters.” IEEE Transactions on Pattern Analysis & Machine Intelligence 9 (1991): 891-906.

Gerhard HE, Wichmann FA, Bethge M (2013) How Sensitive Is the Human Visual System to the Local Statistics of Natural Images? PLoS Comput Biol 9(1): e1002873.

Guérin-Dugué, Anne, and Aude Oliva. “Classification of scene photographs from local orientations features.” Pattern Recognition Letters 21.13 (2000): 1135-1140.

Kryven, Marta, and William Cowan. “Why Magic Works? Attentional Blink With Moving Stimuli” Proceedings of International Conference on Perceptual Organization, York University Centre for Vision Research , 2015.

Levinshtein, Alex, Cristian Sminchisescu, and Sven Dickinson. “Optimal image and video closure by superpixel grouping.” International journal of computer vision 100.1 (2012): 99-119.

Lyu, Siwei, Daniel Rockmore, and Hany Farid. “A digital technique for art authentication.” Proceedings of the National Academy of Sciences of the United States of America 101.49 (2004): 17006-17010.

Murata T, Hamada T, Shimokawa T, Tanifuji M, Yanagida T (2014) “Stochastic Process Underlying Emergent Recognition of Visual Objects Hidden in Degraded Images.” PLoS ONE 9(12): e115658.

Nordhjem, Barbara, et al. “Eyes on emergence: Fast detection yet slow recognition of emerging images.” Journal of vision 15.9 (2015): 8-8.

Nordhjem B, Kurman Petrozzelli CI, Gravel N, Renken R, Cornelissen FW (2014) “Systematic eye movements during recognition of emerging images.” J Vis 14:1293–1293.

Orbán LL, Chartier S (2015) “Unsupervised Neural Network Quantifies the Cost of Visual Information Processing”. PLoS ONE 10(7): e0132218.

Said CP, Heeger DJ (2013) "A Model of Binocular Rivalry and Cross-orientation Suppression." PLoS Comput Biol 9(3): e1002991.

Tabei K-i, Satoh M, Kida H, Kizaki M, Sakuma H, Sakuma H, et al. (2015) “Involvement of the Extrageniculate System in the Perception of Optical Illusions: A Functional Magnetic Resonance Imaging Study.” PLoS ONE 10(6): e0128750.

Vandenbroucke ARE, Sligte IG, Fahrenfort JJ, Ambroziak KB, Lamme VAF (2012) “Non-Attended Representations are Perceptual Rather than Unconscious in Nature.” PLoS ONE 7(11): e50042.

Johan Wagemans, “Perceptual organization at object boundaries: More than meets the edge” Proceedings of International Conference on Perceptual Organization (2015)

Wu, Hao-Yu, et al. “Eulerian video magnification for revealing subtle changes in the world.” ACM Trans. Graph. 31.4 (2012): 65.

Using Modern Human Genetics to Study Ancient Phenomena

By Emma Whittington

We humans are obsessed with determining our origins, hoping to reveal a little of "who we are" in the process. It is relatively simple to trace one's genealogy back a few generations, and many companies and products offer such services. But what if we wanted to trace our origins further back, on an evolutionary timescale, and study human evolution itself? In this case there are no written records or censuses. Instead, the study of human evolution has so far relied heavily on fossil specimens and archaeological finds. Now, genetic tools and approaches are frequently used to answer evolutionary questions and reveal patterns of divergence that reflect different selective pressures and geographical movements. This is particularly true for studies of human migrations out of Africa, global population divergence, and their consequences for human health.

Humans Originated in Africa

The current best hypothesis is that anatomically modern humans (AMH) arose in East Africa approximately 200,000 YBP (years before present). AMH migrated out of Africa around 100,000-60,000 years ago in a series of dispersals that expanded into Europe and Asia between 60,000 and 40,000 YBP. An East African origin is supported by both archaeological and genetic data: genetic diversity is greatest in East Africa and decreases in a stepwise fashion with distance from it, in a pattern reflecting sequential founder populations and bottlenecks. Figure 1 shows three populations with decreasing genetic diversity (represented by the colored circles) from left to right. The first population, with the greatest genetic diversity, represents Africa. A second population is shown migrating away from 'Africa', taking with it only a sample of the existing genetic diversity; this forms the founding population for the next migration. Because each migrating population carries only a sample of the genetic variation present in its founding population, sequential migrations such as those in Figure 1 lead to a reduction in genetic diversity with increasing distance from the first population.

Figure 1. Diagrammatic representation of the serial founder effect model. Image by Emma Whittington.
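
The serial founder effect in Figure 1 is easy to simulate. The toy Python sketch below, which assumes a haploid population with allele counts chosen purely for illustration, founds each new population from a small random sample of the previous one and tracks how allelic diversity and expected heterozygosity fall with each step.

```python
# Toy simulation of the serial founder effect in Figure 1. Each new population
# is founded by a small random sample of the previous one, so allelic diversity
# and heterozygosity drop with every migration step. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def heterozygosity(alleles):
    """Expected heterozygosity: 1 minus the sum of squared allele frequencies."""
    _, counts = np.unique(alleles, return_counts=True)
    freqs = counts / counts.sum()
    return 1.0 - np.sum(freqs ** 2)

# Ancestral ("African") population carrying 50 allele types at one locus.
population = rng.integers(0, 50, size=10_000)
for step in range(4):  # four sequential migrations
    n_alleles = len(np.unique(population))
    print(f"step {step}: {n_alleles} alleles, H = {heterozygosity(population):.3f}")
    founders = rng.choice(population, size=30, replace=False)     # founder bottleneck
    population = rng.choice(founders, size=10_000, replace=True)  # population regrows
```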

Leaving Africa – Where do we go from here?

Although the location of human origin is generally accepted, there is less consensus on the migration routes by which AMH left Africa and expanded globally. Many studies use genetic tools to identify likely migration routes, one of which is a recent PLOS ONE article by Veerappa et al. (2015). In this study, researchers characterized the global distribution of copy number variation, the variation in the number of copies of a particular gene, by high-resolution genotyping of 1,115 individuals from 12 geographic populations, identifying 44,109 copy number variants (CNVs). The CNVs carried by an individual make up their CNV genotype, and by comparing CNV genotypes among all individuals from all populations, the authors determined the similarity and genetic distance between populations. The resulting phylogenetic relationships suggested a global migration map (Figure 2), in which an initial migration from the place of origin, Africa, formed a second settlement in East Asia, analogous to a founding population in Figure 1. At least five further branching events took place from this second settlement, forming populations globally. The migration routes identified in this paper largely support those already proposed, but, of particular interest, the paper also proposes a novel migration route from Australia, across the Pacific, towards the New World (shown in blue in Figure 2).

Figure 2. A global map showing CNV counts and possible migration routes. Figure courtesy of Veerappa et al. (2015).
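
As a toy illustration of the kind of comparison described above (and emphatically not the authors' actual method), the Python sketch below summarizes each population's simulated CNV genotypes as an average copy-number profile and computes pairwise distances between populations; such a distance matrix is the sort of input from which a phylogenetic tree can be built.

```python
# Toy illustration of comparing CNV genotypes between populations: summarize
# each population by its mean copy-number profile and compute pairwise
# distances. A simplification for illustration, not the authors' actual method.
import numpy as np

rng = np.random.default_rng(1)

# Fake data: copy numbers at 200 loci for individuals from three populations.
populations = {
    "Africa":    rng.poisson(2.0, size=(50, 200)),
    "East Asia": rng.poisson(2.0, size=(50, 200)) + rng.integers(0, 2, (1, 200)),
    "Europe":    rng.poisson(2.0, size=(50, 200)) + rng.integers(0, 2, (1, 200)),
}

# Mean copy-number profile per population, then Euclidean distance between them.
profiles = {name: geno.mean(axis=0) for name, geno in populations.items()}
names = list(profiles)
dist = np.array([[np.linalg.norm(profiles[a] - profiles[b]) for b in names]
                 for a in names])
print(names)
print(np.round(dist, 2))
```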

Global Migration Leads to Global Variation

As AMH spread across the globe, populations diverged and encountered novel selective pressures to which they had to adapt. This is reflected in the phenotypic (observable) variation seen between geographically distant populations. At the genotype level, a number of these traits show evidence of positive selection, meaning they likely conferred some advantage in particular environments and were consequently favored by natural selection and increased in frequency. A well-cited example is global variation in skin color, which is thought to reflect a balance between vitamin D synthesis and photoprotection (Figure 3). Vitamin D synthesis requires UV radiation, and a deficiency in vitamin D can result in rickets, osteoporosis, pelvic abnormalities, and a higher incidence of other diseases. At higher latitudes, where UV radiation is low or seasonal, receiving enough UV radiation for sufficient vitamin D synthesis is a major concern. Presumably, as AMH migrated out of Africa they experienced reduced levels of UV radiation, insufficient vitamin D synthesis, and severe health problems, resulting in selection for increased vitamin D synthesis and lighter skin pigmentation. Consistent with this, a number of pigmentation genes underlying variation in skin color show evidence of positive selection in European and Asian populations relative to African ones. On the flip side, populations near the equator experience no shortage of UV radiation and thus synthesize sufficient vitamin D; for them, however, the risk of UV damage is much greater. Melanin, the molecule that determines skin pigmentation, acts as a photoprotective filter, reducing light penetration and the damage caused by UV radiation, so darkly pigmented skin offers greater photoprotection. Selective pressure to maintain dark pigmentation in regions with high UV radiation is evident in the lack of genetic variation in pigmentation genes in areas such as Africa, suggesting that selection has acted to remove mutations and maintain the function of these genes.

Figure 3. A map showing predominant skin pigmentation globally. Figure courtesy of Barsh (2003).

Can genetics and human evolution have a practical use in human health?

Beyond phenotypic consequences, genetic variation between populations has a profound impact on human health, directly influencing an individual's predisposition to certain conditions or diseases. For example, Type 2 diabetes is more prevalent in African Americans than in Americans of European descent. Genome-wide association studies (GWAS) analyze common genetic variants in different individuals and assess whether particular variants are more often associated with certain traits or diseases. Comparing the distribution and number of disease-associated variants between populations can reveal whether genetic risk factors underlie disparities in disease susceptibility. In the case of Type 2 diabetes, African Americans carry a greater number of risk variants than Americans of European descent at genetic locations (loci) associated with the disease. An individual's ancestry clearly affects their susceptibility and likely response to disease, and should therefore be considered in human health policy. Understanding the genetic risk factors linking populations and disease can identify groups of individuals at greater risk of developing certain diseases, so that treatment and prevention can be prioritized.
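
To illustrate what comparing risk allele loads means in practice, here is a toy Python sketch that counts the risk alleles carried by each simulated individual across a set of loci and compares the group means. The allele frequencies, locus count, and group labels are invented for illustration; real analyses weight variants by effect size and control for many confounders.

```python
# Toy sketch of a risk-allele load comparison: count risk alleles carried by
# each individual across known risk loci, then compare group means.
# Purely illustrative; not a substitute for a real GWAS-based analysis.
import numpy as np

rng = np.random.default_rng(2)
n_loci = 60

# Genotypes coded as 0, 1, or 2 copies of the risk allele at each locus,
# simulated with slightly different risk-allele frequencies in the two groups.
group_a = rng.binomial(2, rng.uniform(0.20, 0.40, n_loci), size=(500, n_loci))
group_b = rng.binomial(2, rng.uniform(0.15, 0.35, n_loci), size=(500, n_loci))

load_a = group_a.sum(axis=1)  # risk allele load per individual
load_b = group_b.sum(axis=1)
print(f"mean load, group A: {load_a.mean():.1f}")
print(f"mean load, group B: {load_b.mean():.1f}")
```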

Applying modern human genetics to human evolution has opened a door to studying ancient evolutionary phenomena and patterns. This area not only serves to quench our desire to understand our origins, but also has profound implications for human health that could change how diseases are treated and prevented. In this post, I have given a brief overview of how genetic approaches can tell us a great deal about human origins, migration, and variation between populations. In addition, I have outlined the complex genetic underpinnings of ancestry and disease susceptibility, which suggest an important role for population genetics in human health policy. This post covers only a fraction of the vast amount of ongoing work in this field and its often groundbreaking findings. It is unclear exactly how far genetics will take us in understanding human evolution, but the end is far from near. The potential for genetics in this field and beyond feels limitless, and I for one am excited by the prospect.

References

Barsh, G.S. (2003). PLoS Biol. 1(1):e27.
Henn, B.M et al. (2012) The Great Human Expansion. PNAS. 109 (44): 17758-17764.
Ingman, M. et al. (2000). Mitochondrial genome variation and the origin of modern humans. Nature. 408: 708-713.
Jablonski, N.G. and Chaplin, G. (2000). The evolution of human skin coloration. Journal of Human Evolution. 39: 57–106.
Keaton, J.M. et al. (2014). A comparison of type 2 diabetes risk allele load between African Americans and European Americans. Human Genetics. 133:1487–1495.
Liu, F et al. (2013). Seminars in Cell & Developmental Biology. 24: 562-575.
Morand, S. (2012). Phylogeography helps with investigating the building of human parasite communities. Parasitology. 139: 1966–1974.
Parra, E.J. (2007). Human Pigmentation Variation: Evolution, Genetic Basis, and Implications for Public Health. Yearbook of Physical Anthropology. 50:85–105.
Veerappa, A.M. et al. (2015). Global Spectrum of Copy Number Variations Reveals Genome Organizational Plasticity and Proposes New Migration Routes. PLoS ONE 10(4): e0121846.

Three simple tips to survive grant writing

Tip 2: To survive grant writing, take a break from the computer and go on a walk with friends. Image via Core Athletics.

Like any field, working in research has its ups and downs. Ask any scientist and they will likely identify the opportunity to guide their own inquiries through research as an upside, and grant writing as one of the main downsides.

A recent PLOS ONE paper by Ted and Courtney von Hippel notes that the average principal investigator (PI) spends a minimum of 116 hours per grant submission. That's close to three full weeks of work for the PI alone! Add the fact that grant funding can make or break a career, and it's no wonder that grant writing is stressful. To avoid burnout while writing a grant (or dissertation), try the following tips.

1. Give yourself a break

Keep your mind energized by taking breaks from work. Image courtesy: Marcel Oosterwijk from Amsterdam, The Netherlands [CC BY-SA 2.0], via Wikimedia Commons

Grant writing can be an all-encompassing process, in both positive and negative ways. It is a wonderful opportunity to take a deep dive into a body of literature. However, it demands time you might rather spend doing something else (e.g., conducting research as opposed to writing about research conducted by others). To avoid turning into a dull boy or girl, I suggest you engage in at least one small pleasurable activity per day. The activity depends on your interests, but there's a whole body of literature on the benefits of this approach, so choose whatever is right for you and be sure to stick to it. If you find that you're making excuses to cancel fun activities, try asking yourself, "What makes more sense, a 15-minute break now or a 2-hour breakdown later?"

2. Give yourself an energy boost

When you're on a deadline, it's tempting to work around the clock, but this sort of schedule likely does more harm than good. For example, the evidence shows sleep deprivation reduces creative thinking. Without enough food or sleep, you're unlikely to have enough energy to engage in a task as cognitively complex as grant writing. There are at least three components to getting an energy boost. First, eat regularly: don't go for more than three to four hours without eating. Second, sleep regularly: go to sleep and wake up at the same time each day. Third, exercise regularly: engage in some physical activity every day, even if it's as simple as going for a walk. You can even combine a break with an energy boost; walking with a friend to grab a snack is a way to have fun, get some exercise, and get enough to eat.

3. Give yourself some props

Tip 3: Think positive by writing affirmations around your workspace. Image by Ignacio Palomo Duarte, via Flickr.

Getting feedback from mentors and peers is an important part of grant writing. It also exposes you to a near-constant stream of criticism, which, while (hopefully) constructive, can still take a toll on your confidence. To combat this, remind yourself of your past accomplishments and the exciting work you'll do if the grant is awarded. It's tempting to do this in your head, but it's more effective to write down these positive statements and keep them on your phone or on a piece of paper by your computer. That way, if you're feeling down and can't think of many positive qualities, you'll have a cheat sheet. If nothing else, the affirmations can improve your mood (though research shows the effect may depend on your initial level of self-esteem).

These tips may seem simple, but they're often overlooked and undervalued. Even as a clinical psychologist, it took me weeks to realize that taking care of myself made grant writing easier. While there's no guarantee that following these tips will increase the likelihood of getting funded (as Drs. von Hippel note, ever-diminishing funds and the excellent quality of many grant applications make winning funding a "roll of the dice"), they are important for preserving your well-being and productivity. After all, having fun, eating and sleeping well, getting exercise, and building your confidence will probably improve your quality of life, which is ultimately more important than grant money. Right?

References
Dimidjian, S., Barrera Jr, M., Martell, C., Muñoz, R. F., & Lewinsohn, P. M. (2011). The origins and current status of behavioral activation treatments for depression. Annual review of clinical psychology, 7, 1-38.

Hames, J. L., & Joiner, T. E. (2012). Resiliency factors may differ as a function of self-esteem level: Testing the efficacy of two types of positive self-statements following a laboratory stressor. Journal of Social and Clinical Psychology, 31(6), 641-662.

Landmann, N., Kuhn, M., Maier, J. G., Spiegelhalder, K., Baglioni, C., Frase, L., … & Nissen, C. (2015). REM sleep and memory reorganization: Potential relevance for psychiatry and psychotherapy. Neurobiology of learning and memory.

von Hippel, T., & von Hippel, C. (2015). To Apply or Not to Apply: A Survey Analysis of Grant Writing Costs and Benefits. PloS one, 10(3), e0118494.


Snark-Hunters Once More: Rejuvenating the Comparative Approach in Modern Neuroscience

By Jeremy Borniger

Sixty-five years ago, the famed behavioral endocrinologist Frank Beach wrote an article in The American Psychologist entitled 'The Snark was a Boojum'. The title refers to Lewis Carroll's poem 'The Hunting of the Snark', in which several characters embark on a voyage to hunt species of the genus Snark. There are many different types of Snarks, some that have feathers and bite, and others that have whiskers and scratch. But, as we learn in Carroll's poem, some Snarks are Boojums! Beach paraphrases Carroll's verse, outlining the problem with Boojums:

If your Snark be a Snark, that is right:
Fetch it home by all means—you may serve it with greens
And it's handy for striking a light.

**************

But oh, beamish nephew, beware of the day,
If your Snark be a Boojum! For then,
You will softly and suddenly vanish away,
And never be met with again!

Instead of the Pied Piper luring the rats out of town with a magic flute, the tables have turned: the rat plays the tune and a large group of scientists follows. (From Beach, 1950)

Beach provides this metaphorical context to describe a problem facing comparative psychologists in the 1950s: an increasing focus on a few 'model species' at the cost of reduced breadth in the field. The comparative psychologists were hunting a Snark called "Animal Behavior", but this Snark, too, turned out to be a Boojum. Instead of finding many animals on which to test their hypotheses, they settled on one: the albino rat. It was there that "the comparative psychologist suddenly and softly vanished away".

Even in the mid-1900s, Beach recognized the funneling of biological and psychological research efforts towards a single or a few 'model species'. He even went as far as to suggest that the premier journal in the field be renamed The Journal of Rat Learning, as its focus had almost entirely shifted to the rat. This trend has culminated in a true bottleneck, where the vast majority of research now focuses on phenomena occurring in a small number of 'model organisms' like the laboratory mouse (Mus musculus), Norway rat (Rattus norvegicus), nematode worm (Caenorhabditis elegans), fruit fly (Drosophila melanogaster), or zebrafish (Danio rerio). Indeed, a 2008 analysis found that "75% of our research efforts are directed to the rat, mouse and human brain, or 0.0001% of the nervous systems on the planet". Focusing on such a small fraction of the biological diversity available to us may be skewing or vastly restricting our conclusions.

The Genetics Revolution

In the last quarter of a century, incredible advances in genetic technology have pushed a few model organisms further towards the top. For example, the mouse was among the first mammals to have its genome sequenced, with the results published in 2002. Because sequence information was readily available, subsequent tools (primers, shRNA, oligonucleotides, etc.) and genetic techniques (conditional knockout/overexpression models) were developed specifically for use in the mouse. This further discouraged the use of other organisms in research, as most of the 'cutting edge' tools were being developed almost exclusively in the mouse. It also promoted the 'shoehorning' of research questions into a model organism that may not suit them, simply to take advantage of the genetic tools available. Indeed, this may be the case for research on the visual system, or on many mental disorders, in mouse models. The lab mouse primarily interprets environmental stimuli via olfactory (smell) cues rather than sight (as it is nocturnal), making it a suboptimal organism in which to study visual function. Also, because mental disorders are poorly understood, developing a robust animal model in which to test treatments remains a significant obstacle. Trying to force the mouse to become the bastion of modern psychiatry research is potentially hampering progress in a field that could benefit from the comparative approach. For example, wild white-footed mice (Peromyscus leucopus), which are genetically distinct from their laboratory mouse relatives, show seasonal variation in many interesting behaviors. In response to the short days of winter, they enhance their fear responses and alter the cellular structure of their amygdala, a key brain region in the regulation of fear. Because these changes are reversible and controlled by a discrete environmental signal (day length), these wild mice contribute to the development of translational models that involve amygdala dysfunction, such as post-traumatic stress disorder (PTSD).

What are the Benefits of the Comparative Approach?

Emphasis on a few model organisms became prevalent primarily because of their ease of use, rapid embryonic development, low cost, and accessible nervous systems. In recent decades, access to an organism's genetic code provided another incentive to use it for research purposes. While these advantages are useful, they encourage researchers to become short-sighted and to neglect the contributions diverse species have made to the advancement of science. As Brenowitz and Zakon write, "this myopia affects choice of research topic and funding decisions, and might cause biologists to miss out on novel discoveries". Breakthroughs made possible by the comparative approach include our understanding of the ionic basis of the action potential (squid), the discovery of adult neurogenesis (canary), conditioned reflexes (dog), dendritic spines (chicken), and the cellular basis of learning and memory (sea slug). More recently, incredible advances in the temporal control of neuronal function (optogenetics) were made possible by the characterization of channelrhodopsins in algae.

The revolution in genetics ushered in new tools that were available only in the few organisms whose genomes had been sequenced. The comparative approach, however, is now gaining the tools necessary to become part of the 21st-century genetic revolution. New gene-editing techniques such as TALENs, TILLING, and CRISPR/Cas9 allow fast, easy, and efficient genome manipulation in a wide variety of species. Indeed, this has already been accomplished in Atlantic salmon, tilapia, goats, sea squirts, and silkworms. As the price of sequencing an entire genome rapidly decreases, new tools will be developed for a wider variety of species than ever before. It is unlikely that many of the groundbreaking discoveries that stemmed from research on diverse and specialized organisms would be funded in the current 'model species' climate. We should not put all of our research 'eggs' in one model organism 'basket'; instead, we should invest in a broad range of organisms fit for each question at hand. The time to revive the comparative approach has arrived. In the words of Brenowitz and Zakon, "Grad students, dust off your field boots!"

References

Beach, F. A. (1950). The Snark was a Boojum. American Psychologist, 5(4), 115.

Brenowitz, E. A., & Zakon, H. H. (2015). Emerging from the bottleneck: benefits of the comparative approach to modern neuroscience. Trends in Neurosciences, 38(5), 273-278.

Chinwalla, A. T., Cook, L. L., Delehaunty, K. D., Fewell, G. A., Fulton, L. A., Fulton, R. S., … & Mauceli, E. (2002). Initial sequencing and comparative analysis of the mouse genome. Nature, 420(6915), 520-562.

Edvardsen, R. B., Leininger, S., Kleppe, L., Skaftnesmo, K. O., & Wargelius, A. (2014). Targeted mutagenesis in Atlantic salmon (Salmo salar L.) using the CRISPR/Cas9 system induces complete knockout individuals in the F0 generation.

García-López, P., García-Marín, V., & Freire, M. (2007). The discovery of dendritic spines by Cajal in 1888 and its relevance in the present neuroscience. Progress in Neurobiology, 83(2), 110-130.

Goldman, S. A. (1998). Adult neurogenesis: from canaries to the clinic. Journal of Neurobiology, 36(2), 267-286.

Li, M., Yang, H., Zhao, J., Fang, L., Shi, H., Li, M., … & Wang, D. (2014). Efficient and heritable gene targeting in tilapia by CRISPR/Cas9. Genetics, 197(2), 591-599.

Ni, W., Qiao, J., Hu, S., Zhao, X., Regouski, M., Yang, M., … & Chen, C. (2014). Efficient gene knockout in goats using CRISPR/Cas9 system.

Manger, P. R., Cort, J., Ebrahim, N., Goodman, A., Henning, J., Karolia, M., … & Štrkalj, G. (2008). Is 21st century neuroscience too focussed on the rat/mouse model of brain function and dysfunction? Frontiers in Neuroanatomy, 2: 5.

Stolfi, A., Gandhi, S., Salek, F., & Christiaen, L. (2014). Tissue-specific genome editing in Ciona embryos by CRISPR/Cas9. Development, 141(21), 4115-4120.

Pavlov, I. P. (1941). Conditioned reflexes and psychiatry (Vol. 2). W. H. Gantt, G. Volborth, & W. B. Cannon (Eds.). New York: International Publishers.

Walton, J. C., Haim, A., Spieldenner, J. M., & Nelson, R. J. (2012). Photoperiod alters fear responses and basolateral amygdala neuronal spine density in white-footed mice (Peromyscus leucopus). Behavioural Brain Research, 233(2), 345-350.

Wei, W., Xin, H., Roy, B., Dai, J., Miao, Y., & Gao, G. (2014). Heritable genome editing with CRISPR/Cas9 in the silkworm, Bombyx mori.


Knowledge is where you find it: Leveraging the Internet’s unique data repositories

A Last.fm user shares the music recommendation system’s representation of his listening habits over a month. Photo courtesy of Aldas Kirvaitis via Flickr.

By Chris Givens

Sometimes, data doesn't look like data. But when circumstances conspire and the right researchers come along, interesting facets of human nature reveal themselves. Last.fm and World of Warcraft are two entities made possible by the Internet, both aimed at entertaining consumers. However, through new means of social interaction and larger scales of data collection, they have also, perhaps unintentionally, advanced science. Scientific achievement may seem like a stretch for a music service and a video game, but these unlikely candidates for scientific study show that the information age constantly offers new ways to study human behavior. Last.fm and World of Warcraft are contemporary social constructions, part of the new way that humans interact in our rapidly changing digital world. By applying scientific rigor to the data unwittingly generated by two Internet-based companies, we see that knowledge is everywhere, but sometimes requires creative routes to coax it out of hiding.

Last.fm: more than a musical concierge

Last.fm is a music service that uses consumers' listening data and genre tags to recommend new music to the user. It has a huge cache of song clips in its databases, which was not viewed as a data set until recently, when a group of computer scientists mined the songs for certain characteristics and created a phylogeny of popular music. The lead author on the study, Dr. Matthias Mauch, formerly worked on the Music Information Retrieval (MIR) team at Last.fm. MIR is essentially the automated analysis of musical data, usually from audio samples. Uses for the data gleaned from audio samples include improved music search, organization, and recommendation. This kind of research has a clear benefit for a company like Last.fm, whose main goal is to catalog users' listening habits and recommend music they would like based on past listening patterns. Dr. Mauch, however, is interested in more than simply improving musical recommendations; he wants to trace the evolution of the variety of music from around the world. In a recent study, he used a huge data set obtained from his time at Last.fm to start cracking the code of musical evolution.

Hip-hop is a confirmed revolution

When hip-hop burst into the public consciousness in the late 1980s, the music polarized Americans. Hip-hop originally centered on themes of social ills in inner-city America, providing a creative outlet for the frustration felt by many working-class African Americans at the time. Gangsta rap eventually grew out of hip-hop, characterized by at times violent, masculine lyrical themes. After the release of their seminal album, Straight Outta Compton, the hip-hop group N.W.A received a warning letter from the FBI as a result of controversial songs on the album. The explosive and politicized emergence of hip-hop created a new genre of popular music, thrusting a marginalized group of Americans into the pop culture spotlight. From humble roots, hip-hop has grown into a multi-billion-dollar industry. But even with all of the popular exposure and controversy, until Dr. Mauch's study the degree to which hip-hop revolutionized popular music was hard to quantify.


See Dr. Mauch's TED Talk about music informatics here.

A group of researchers, led by Dr. Mauch, used MIR techniques on the Last.fm data set and, in doing so, found previously unknown relationships between hip-hop and other types of twentieth-century popular music. After recognizing that the song clips obtained from Last.fm constituted a rich data set, the group devised a method of classifying songs based on two categories of attributes: harmonic and timbral. Harmonic attributes are quantifiable, encompassing chord changes and the melodic aspects of songs; timbral attributes are more subjective and focus on the quality of sound, like bright vocals or aggressive guitar. The authors deemed these attributes "musically meaningful" and thus more appropriate for quantitative analysis than simple measures of loudness or tempo.

The researchers used modified text-mining techniques to carry out their analysis. They combined characteristics from the harmonic and timbral lists to create "topics", and then described each song by the mix of topics present in it. Next, the researchers analyzed 17,000 songs from the Billboard Hot 100 charts for the 50 years between 1960 and 2010. After analyzing and clustering the songs based on their harmonic and timbral characteristics, the researchers created a phylogenetic tree of popular music.
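To make the method concrete, here is a minimal sketch of the general idea (not the authors' actual pipeline): represent each song as a vector of topic proportions, compute pairwise distances between songs in that space, and build a tree by hierarchical clustering. The song names and topic values below are invented purely for illustration, and the sketch uses numpy and scipy rather than the authors' tools.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Rows are songs; columns are proportions of eight hypothetical
# harmonic/timbral "topics" (all values invented for illustration).
songs = ["soul_track", "rock_track", "easy_listening_track", "hiphop_track"]
topic_proportions = np.array([
    [0.30, 0.25, 0.10, 0.05, 0.10, 0.05, 0.10, 0.05],  # soul-like profile
    [0.10, 0.15, 0.35, 0.20, 0.05, 0.05, 0.05, 0.05],  # rock-like profile
    [0.25, 0.30, 0.05, 0.05, 0.15, 0.10, 0.05, 0.05],  # easy-listening profile
    [0.02, 0.03, 0.05, 0.05, 0.05, 0.10, 0.30, 0.40],  # divergent hip-hop profile
])

# Pairwise cosine distances between songs in topic space, then
# average-linkage hierarchical clustering; the linkage matrix encodes
# the "tree" relating the songs.
distances = pdist(topic_proportions, metric="cosine")
tree = linkage(distances, method="average")

# Cut the tree into two clusters and report which songs group together.
clusters = fcluster(tree, t=2, criterion="maxclust")
for name, cluster_id in zip(songs, clusters):
    print(f"{name}: cluster {cluster_id}")
```

In the actual study the topic proportions came from audio analysis of thousands of clips, but the clustering step that yields a "tree" of popular music follows the same logic.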

The tree empirically verified what we already knew — that hip-hop is in a league of its own. Out of four clusters on the tree, hip-hop is the most divergent. Using the tree of life as an analogy, if the genres of rock, soul, and easy listening are animals, fungi, and plants, hip-hop would be musical bacteria.

These data make a far more extensive account of musical history possible. The authors state in their paper that instead of using anecdote and conjecture to understand musical evolution, their methods make it possible to pinpoint precisely where musical revolutions occurred. Thanks to their efforts, popular music now has a quantitative evolutionary history, and Dr. Mauch isn't finished yet. He plans to do similar analyses on recordings of classical music and indigenous music from all over the world, in an attempt to trace the origins and spread of music predating the radio. I find the innovative techniques and range of this study remarkable. Dr. Mauch and colleagues adapted research methods frequently used to improve music delivery (already an interesting field) and used them to unlock a small amount of transcendent musical knowledge. This study shows that tens of thousands of song clips aren't a typical scientific data set until someone decides to treat them as one. By taking what was available and forging it into something workable, Dr. Mauch and colleagues applied scientific methods to Last.fm's unrecognized and unexamined data repository.

Surviving a pandemic in the Wide World of Warcraft

World of Warcraft (WoW) is a highly social video game that connects players globally. WoW is also arguably the last place anyone would look for scientific insight. Launched in 2004, WoW is one of the most popular games ever created, with around ten million subscribers at its peak popularity. WoW was designed as a "massively multiplayer online role-playing game": at launch, players from all over the world began interacting in real time throughout an intricately designed virtual world. The world was designed as a fantastical model of the real world, complete with dense urban areas and remote, low-population zones. In 2005, a glitch that caused a highly contagious sickness to spread between players revealed the game to be an apt model of human behavior under pandemic conditions. The glitch drastically affected gameplay and piqued the interest of several epidemiologists.

The “Corrupted Blood Incident”

The glitch came to be known as the "Corrupted Blood Incident" in the parlance of the game. It originated from one of the many things present in WoW that are not present in the real world: "dungeons". Dungeons in WoW are difficult areas populated by powerful "boss" characters that possess special abilities not normally found in the game. In 2005, one of these abilities, the "Corrupted Blood" spell, was modified by a glitch so that its effects persisted outside the zone in which it normally resided. Consequently, the highly contagious "Corrupted Blood" swept through WoW, killing many player characters and providing a surprisingly accurate simulation of real-world pandemic conditions. "Corrupted Blood" infected player characters, pets, and non-player characters, which aided transmission throughout the virtual landscape. Only one boss character in one remote zone cast this spell, so its spread was a surprise to players and developers alike, adding to the accuracy of the "simulation".

The glitch stayed active for about a week, and during that time, gameplay changed dramatically. Because pets and non-player characters carried the disease without symptoms, reservoirs of the plague existed in the environment and helped nourish the outbreak. Players avoided cities for fear of contracting the disease. Some players who specialized in healing stayed in cities, helping especially weak players stay alive long enough to do business. Weaker, low-level players who wanted to lend a hand posted themselves outside of cities and towns, warning other players of the infection ahead. After a week of the pandemic in the game, the developers updated the code and reset their servers, “curing” the WoW universe of this scourge of a glitch.

Some epidemiologists took note after observing the striking similarities between real-world pandemics and the virtual pandemic in WoW. In the virtual pandemic, pets acted as an animal reservoir, as birds did in the case of avian flu. Additionally, air travel in WoW (which takes place on the back of griffins) proved analogous to air travel in the real world, thwarting efforts to quarantine those affected by the disease. WoW is also a social game full of tight-knit communities, and at the time it had around 6.5 million subscribers, making it a reasonable virtual approximation of the social stratification that exists in real-world society.

See Dr. Fefferman’s 2010 TED talk here.

The behavior observed in WoW was not taken as a prescription for how to handle a pandemic or a prediction of what will happen. Rather, as Dr. Nina Fefferman put it in a 2010 TED talk, the event provided "inspiration about the sorts of things we should consider in the real world" when making epidemiological models. Dr. Fefferman's group identified two behaviors displayed by players experiencing the virtual pandemic, empathy and curiosity, which are not normally taken into account by epidemiological models. Curiosity was the most notable, because it paralleled the behavior of journalists in real-world pandemics. Journalists rush into an infected site to report, and then rush out, hopefully before becoming infected, which is exactly what many players did in the infected virtual cities of WoW.

The "Corrupted Blood Incident" is the first known time that an unplanned virtual plague spread in a way similar to a real-world plague. Though at first most saw the incident as simply an annoying video game glitch, it took some creative scientists to ask what knowledge could be gleaned from it. Their observations suggest that sometimes the best agent-based model is the one where actual people control the agents, and that simulations similar to computer games might "bridge the gap between real world epidemiological studies and large scale computer simulations." Epidemiological models are now richer as a result of this knowledge. To learn more about how the "Corrupted Blood Incident" changed scientific modeling for pandemics, head on over to the PLOS Public Health Perspectives blog to hear Atif Kukaswadia's take on it.
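For readers unfamiliar with agent-based models, the sketch below is a toy illustration (not the researchers' actual model) of the dynamic described above: infected players can recover, but silently infected pets never do, so they act as a reservoir that keeps the outbreak alive. All agent counts and probabilities are invented.

```python
import random

random.seed(42)  # reproducible toy run

N_PLAYERS, N_PETS = 200, 40
P_TRANSMIT = 0.03       # chance an infectious agent infects a given contact
P_RECOVER = 0.25        # chance an infected player recovers each time step
CONTACTS_PER_STEP = 5   # random contacts each infectious agent makes per step

# States: "S" = susceptible, "I" = infected. Pets never recover,
# so they behave as a silent reservoir.
players = ["S"] * N_PLAYERS
pets = ["S"] * N_PETS
pets[0] = "I"  # index case: a pet carries the disease out of the dungeon

for step in range(101):
    # Every infectious agent (player or pet) meets a few random agents.
    infectious_count = players.count("I") + pets.count("I")
    for _ in range(infectious_count * CONTACTS_PER_STEP):
        if random.random() < P_TRANSMIT:
            if random.random() < 0.8:              # most contacts are players
                players[random.randrange(N_PLAYERS)] = "I"
            else:                                  # the rest are pets
                pets[random.randrange(N_PETS)] = "I"
    # Players may recover (be resurrected) back to susceptible; pets never do.
    players = ["S" if s == "I" and random.random() < P_RECOVER else s
               for s in players]
    if step % 25 == 0:
        print(f"step {step:3d}: infected players = {players.count('I'):3d}, "
              f"infected pets = {pets.count('I'):2d}")
```

With these toy numbers, transmission is too weak to sustain the outbreak among players who recover; it persists only because the pets never clear the infection, which is the reservoir effect the epidemiologists highlighted.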

Concluding Thoughts

The Last.fm study and the "Corrupted Blood Incident" show how scientists can use esoteric corners of the Internet to illuminate interesting pieces of human history and behavior. New means of social interaction and new methods for collecting information bring about interesting, if slightly opaque, ways to discover new knowledge and advance scientific discovery. It is to these scientists' credit that they shed light on human history and interactions by looking past traditional sources and finding data in novel places.


PLOS Computational Biology Community Going All Out to Cover #ISMB15 with the ISCB Student Council

Calling All Bloggers for ISMB/ECCB 2015

ISMB/ECCB 2015 in Dublin, Ireland, is fast approaching and we invite you to be involved in the live coverage of the event.

If you can’t make it to Dublin, follow our live collaborative blog coverage at http://plos.io/CBFieldReports

In previous years, ISMB has been well ahead of the social media curve, with microblogging at the meeting as early as 2008, one year after the US launch of the original Apple iPhone and just two years after Twitter was founded. At the last count, Twitter averaged 236 million active users, three million new blogs come online each month, and Tumblr users publish approximately 27,778 new posts every minute. Social media is likewise a growing aspect of conferences (read more in our Ten Simple Rules for Live Tweeting at Scientific Conferences), and we think ISMB/ECCB 2015 is a great venue for progress.

How can you be involved?

We want you to take live blogging to ISMB/ECCB. If you are planning to attend the conference in Dublin and you blog or tweet, or even if you would like to try it for the first time, we want to hear from you: everyone can get involved.

Our invitation extends to attendees from all backgrounds and experience levels who could contribute blog posts covering the conference. In addition, we are looking for a number of 'super bloggers' who can commit to writing two to three high-quality posts or who would be interested in interviewing certain speakers at the conference. If you are speaking at ISMB/ECCB and would like to be involved, please do also get in touch.

For the “next generation computational biologists”

We will be working on this collaborative blog project with the ISCB Student Council.

In acknowledgment of the time and effort of all who work with us, each contributor will receive a PLOS Computational Biology 10th Anniversary t-shirt (only available at ISMB/ECCB 2015), and their work will be shared on the PLOS Comp Biol Field Reports page [http://plos.io/CBFieldReports], making it easier for all to contribute and collaborate.

Winning design for the PLOS Comp Biol 10th Anniversary T-shirt; get yours by blogging with us at ISMB15! PLOS/Kifayathullah Liakath-Ali

What are the next steps?

If you’re active on Twitter or the blogosphere and want to help us share the latest and greatest from ISMB/ECCB 2015 conference, please email us at ploscompbiol@plos.org with a bit about your background and how you’d like to contribute. See you in Dublin!

FOR MORE ON PLOS AT ISMB/ECCB 2015 read this post.


Walk the walk, talk the talk: Implications of dual-tasking on dementia research

A group of friends chat as they walk in St. Malo, France. Photo courtesy of Antoine K (Flickr).

By Ríona Mc Ardle

You turn the street corner and bump into an old friend. After the initial greetings and exclamations of “It’s so good to see you!” and “Has it been that long?”, your friend inquires as to where you are going. You wave your hands to indicate the direction of your desired location, and with a jaunty grin, your friend announces that they too are headed that way. And so, you both set off, exchanging news of significant others and perceived job performance, with the sound of your excited voices trailing behind you.

This scenario seems relatively normal, and indeed it plays out for hundreds of people in everyday life. But have you ever truly marvelled at our ability to walk and talk at the same time? Most people discount gait as an automatic function, sparing no thought for the higher cognitive processes we engage in order to place one foot in front of the next. The act of walking requires substantial attentional and executive function resources, and it is the sheer amount of practice we accumulate throughout our lives that makes it feel like such a mindless activity. Talking while walking recruits even more resources, because attention must be split so that we don't overemphasize one task to the detriment of the other. And it's not just talking! We can take on all manner of activities while walking: carrying trays, waving our hands, checking ourselves out in the nearest shop window. But what sort of costs does such multitasking impose on us?

Dual-Tasking: How well can we really walk and talk?

Dual-tasking refers to the simultaneous execution of two tasks, usually a walking task and a secondary cognitive or motor task, and is commonly used to assess the relationship between attention and gait. Although there is no consensus yet on exactly how dual-tasking hinders gait, dual-task studies have demonstrated decreased walking speed and poorer performance of the secondary task. This has been seen particularly in healthy older adults, who struggle more with the secondary task when prioritizing walking, likely in order to maintain balance and avoid falls. Such findings support the role of executive function and attention in gait, as both are implicated in frontal lobe function. Studies of age-related changes in the brain have demonstrated focal atrophy of the frontal cortex, with reductions of 10-17% observed in the over-65 age group. Compared with the roughly 1% atrophy reported for the other lobes, it is plausible that these frontal changes contribute to the gait disturbances experienced by the elderly. Older adults must recruit much of their remaining attentional capacity just to maintain gait, and thus have little left over to carry out other activities at the same time.

It has been incredibly difficult to confirm the frontal cortex's role in gait from a neurological perspective. Imaging techniques have largely been found unsatisfactory for such an endeavor, as many rely on an unmoving subject lying in a fixed position. However, a recent study in PLOS ONE has shed some light on the area. Lu and colleagues (2015) employed functional near-infrared spectroscopy (fNIRS) to capture activation in the brain's frontal regions: specifically, the prefrontal cortex (PFC), the premotor cortex (PMC) and the supplementary motor area (SMA). Although it yields an image of cortical activity similar to that of functional magnetic resonance imaging (fMRI), fNIRS can acquire its measurements while the subject is in motion. The researchers laid out three investigatory aims:

  1. To assess whether declining gait performance was due to different forms of dual-task interference.
  2. To observe whether there were any differences in cortical activation in the PFC, PMC and SMA during dual-task walking compared with normal walking.
  3. To evaluate the relationship between such activation and gait performance during dual-tasking.

The research team predicted that the PFC, PMC and SMA would be more active during dual-task due to the increased cognitive or motor demand on resources.

A waiter balances a tray of waters. Photo courtesy of Tom Wachtel (Flickr).

Lu and colleagues recruited 17 healthy, young individuals to take part in the study. Each subject performed each of three conditions three times, with a resting condition in between. The conditions were as follows: a normal walking condition (NW), in which the participant was instructed to walk at their usual pace; a walking-with-cognitive-task condition (WCT), in which the subject had to engage in a subtraction task; and a walking-with-motor-task condition (WMT), in which the participant had to carry a bottle of water on a tray without spilling it. Results showed that both the WCT and the WMT induced a slower walk than the NW. Interestingly, the WMT produced a higher number of steps per minute and a shorter stride time, which could be attributed to the intentional alteration of gait in order not to spill the water. An analysis of the fNIRS data revealed that all three frontal regions were activated during the dual-task conditions. The PFC showed the strongest, most continuous activation during the WCT. As the PFC is highly implicated in attention and executive function, its increased role in the cognitive dual-task condition seems reasonable. The SMA and PMC were found to be most strongly activated during the early stages of the WMT. Again, this finding makes sense, as both of these areas are associated with the planning and initiation of movement; the researchers postulated that this activity may reflect a demand for bodily stability in order to carry out the motor task. This study convincingly demonstrated the frontal cortex's role in maintaining gait, particularly when a secondary task is involved.
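To give a sense of how the behavioral side of such experiments is often quantified, here is a minimal sketch of the dual-task cost (DTC): the percentage decline in a gait measure under dual-task conditions relative to normal walking. The formula is a common convention in the dual-task literature rather than something reported by Lu and colleagues, and the gait speeds below are invented for illustration.

```python
def dual_task_cost(single_task: float, dual_task: float) -> float:
    """Percentage decline of a gait measure under dual-task conditions."""
    return (single_task - dual_task) / single_task * 100.0

# Hypothetical mean gait speeds (metres per second) for the three conditions.
gait_speed = {
    "NW (normal walking)": 1.30,
    "WCT (walking + cognitive task)": 1.10,
    "WMT (walking + motor task)": 1.18,
}

baseline = gait_speed["NW (normal walking)"]
for condition, speed in gait_speed.items():
    if condition.startswith("NW"):
        continue  # the baseline has no cost relative to itself
    print(f"{condition}: dual-task cost = {dual_task_cost(baseline, speed):.1f}%")
```

A positive cost means the added task slowed walking; comparing costs across conditions gives a simple behavioral index of how much each secondary task interferes with gait.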

Why is this important?

Although gait disturbances are extremely rare in young people, their prevalence increases with age. Thirty percent of adults over 65 fall at least once a year, with the incidence rate climbing to 50% in the over-85 population. Falls carry a high risk of critical injuries in the elderly, and often occur during walking. This latest study has provided evidence for the pivotal roles of executive function and attention in maintaining gait, and has given us insight into the utility of dual-task studies. Because frontal cortex activation is correlated with carrying out a secondary task while walking, poor performance on one of these tasks may reveal a subtle deficit in attention or executive function. Therefore, gait abnormalities may be able to act as a predictor of mild cognitive impairment (MCI) and even dementia. Studies have reported that a slowing of gait may occur up to 12 years prior to the onset of MCI, with longitudinal investigations observing that gait irregularities significantly increased individuals' likelihood of developing dementia 6-10 years later.

But what does the relationship between gait and cognition tell us about dementia? Dementia is one of the most prevalent diseases to afflict the human population, with 8% of over-65s and over 35% of over-85s suffering from it. It is incredibly important for researchers to strive to reduce the time individuals must live under the grip of such a crippling disorder. Our growing knowledge of gait and cognition can allow us to do so in two ways: through early diagnosis of dementia and through improved interventions.

For the former, a research drive has begun to correlate different gait deficits with dementia subtypes. The principle behind this is that the physical manifestation of a gait disturbance could lend clinicians a clue as to the lesion site in the brain. For example, if a patient asked to prioritize an executive function task while walking displays a significant gait impairment, this may point towards vascular dementia, because executive function relies on frontal networks, which are highly susceptible to vascular risk. Studies have shown that this type of dual task will often cause a decrease in velocity and stride length, and is associated with white matter disorders and stroke. For the latter, practice of either gait or cognition may benefit the other. It has been acknowledged that individuals who go for daily walks have a significantly reduced risk of dementia, which is attributed to gait's engagement of executive function and attention, exercising the neural networks associated with them. Similarly, improving cognitive function may in turn help one to maintain a normal gait. Verghese and colleagues (2010) demonstrated this in a promising study in which cognitive remediation improved older sedentary individuals' mobility.

Closing Thoughts

As both a neuroscientist and a citizen of the world, one of my primary concerns is the welfare of the older generation. The aging population is growing significantly as life expectancy increases, and these individuals are susceptible to a range of medical issues. While any health problem is hard to see in our loved ones, dementia and failing cognition are a particularly heavy burden on the individual, their family, and society as a whole. Our exploration of the relationship between gait and cognition has so far offered a glimmer of hope for progressing our understanding of, and fight against, dementia, and I personally hope this intriguing area continues to do just that.

References

Beurskens, R., & Bock, O. (2012). Age-related deficits of dual-task walking: a review. Neural Plasticity, 2012.

Holtzer, R., Mahoney, J. R., Izzetoglu, M., Izzetoglu, K., Onaral, B., & Verghese, J. (2011). fNIRS study of walking and walking while talking in young and old individuals. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences, glr068.

Li, K. Z., Lindenberger, U., Freund, A. M., & Baltes, P. B. (2001). Walking while memorizing: age-related differences in compensatory behavior. Psychological Science, 12(3), 230-237.

Lu, C. F., Liu, Y. C., Yang, Y. R., Wu, Y. T., & Wang, R. Y. (2015). Maintaining Gait Performance by Cortical Activation during Dual-Task Interference: A Functional Near-Infrared Spectroscopy Study. PLOS ONE, 10(6), e0129390.

Montero-Odasso, M., & Hachinski, V. (2014). Preludes to brain failure: executive dysfunction and gait disturbances. Neurological Sciences, 35(4), 601-604.

Montero-Odasso, M., Verghese, J., Beauchet, O., & Hausdorff, J. M. (2012). Gait and cognition: a complementary approach to understanding brain function and the risk of falling. Journal of the American Geriatrics Society, 60(11), 2127-2136.

Sparrow, W. A., Bradshaw, E. J., Lamoureux, E., & Tirosh, O. (2002). Ageing effects on the attention demands of walking. Human Movement Science, 21(5), 961-972.

Verghese, J., Mahoney, J., Ambrose, A. F., Wang, C., & Holtzer, R. (2010). Effect of cognitive remediation on gait in sedentary seniors. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences, 65(12), 1338-1343.

Yogev-Seligmann, G., Hausdorff, J. M., & Giladi, N. (2008). The role of executive function and attention in gait. Movement Disorders, 23(3), 329-342.
