Archaeology and Conservation in the Tropical Forests of the Central African Republic

By Chris Kiahtipes

The tropical forests and savannas of Africa play a central, symbolic role in our modern dialogue about wilderness, conservation, and land management. However, efforts to establish reserves, parks, and wilderness areas often create conflicts with local societies who use the territory for subsistence as laborers, cash-croppers, pastoralists, horticulturalists, or hunter-gatherers. One of the major stumbling blocks for biological conservation in the African tropics is that we know little about the history of the region, which has important implications for our understanding of its ecosystems.


FIGURE 1: View of Lobaye River in Ngotto Forest, CAR. Photo by C. Kiahtipes

Before a coup d’état toppled the government of the Central African Republic (CAR), I participated in three field projects in a forest reserve in the southern part of this little-known country, where I was collecting archaeological and ecological data for my dissertation. Our team of American and African researchers recently published a piece in PLOS ONE titled “On Intensive Late Holocene Iron Mining and Production in the Northern Congo Basin,” which describes the results of our work in the region. Our research documents the development of a previously unknown iron-producing industry in the rain forest zone of Central Africa, its impacts on rain forest vegetation, and its collapse during French colonization.

Bridging the Gap Between History and Prehistory

Our team of American and African archaeologists completed archaeological surveys and test excavations in the Ngotto Forest Reserve, located on the CAR’s southern border near the Republic of Congo (see Figure 2). This forest is a fascinating field site for a number of reasons: the region is still occupied by hunter-gatherer societies who live alongside subsistence agriculturalists; the forest contains enclosed “islands” of savanna woodland surrounded by dense rain forest; and satellite photos indicate that forested area increased by 5.6% between 1976 and 2002. Together, these features point to past disturbances in forest cover and social reorganizations among local populations. These unique cultural and ecological features have attracted ethnographers and ecologists to the region, but almost nothing is known about how or when the modern forest’s social and ecological milieu came to be.

FIGURE 2: Map of Study Region in CAR from Lupo et al. (2015) showing archaeological sites (black circles/ovals) and one of the coring locations (open circle).

I specialize in archaeological palynology, a field dedicated to the recovery and analysis of plant pollen from sedimentary sequences and/or archaeological sites. Plant pollen is made from sporopollenin, a durable organic compound that is well preserved in inundated conditions, such as swamps or lake bottoms. Because plants produce diagnostic pollen types, the preserved pollen can be used to reconstruct vegetation cover from sedimentary records. I hypothesized that because rivers are essential to both ecological and cultural activities, their marshy floodplains would provide both the appropriate conditions for pollen preservation and a locally-specific record of vegetation change sensitive to human impacts. This approach seeks to bridge ecological and cultural histories in the Ngotto Forest by documenting vegetation responses to prehistoric cultural developments, such as the adoption of slash-and-burn farming methods or iron metallurgy.
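
To make the approach concrete, here is a minimal sketch of how raw pollen counts from a sediment core are turned into the relative percentages shown in pollen diagrams like Figure 5. The taxa and counts below are invented for illustration; this is not our actual analysis code.

```python
# Illustrative only: converting raw pollen counts per core depth into
# relative percentages, the standard presentation in pollen diagrams.
# Taxa and counts are hypothetical, not data from the Ngotto cores.

counts_by_depth = {
    # depth below surface (cm): {taxon: grains counted}
    50:  {"Poaceae (grass)": 12, "Uapaca": 88, "other forest taxa": 150},
    150: {"Poaceae (grass)": 45, "Uapaca": 60, "other forest taxa": 105},
    250: {"Poaceae (grass)": 10, "Uapaca": 95, "other forest taxa": 160},
}

for depth, counts in sorted(counts_by_depth.items()):
    total = sum(counts.values())  # the "pollen sum" for this sample
    percents = {taxon: 100.0 * n / total for taxon, n in counts.items()}
    # A rising grass (Poaceae) percentage flags open or disturbed vegetation.
    summary = ", ".join(f"{t}: {p:.1f}%" for t, p in percents.items())
    print(f"{depth} cm: {summary}")
```

Deeper samples are older, so reading the output from bottom to top traces vegetation change through time.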

FIGURE 3: Examples of grass (Poaceae/Gramineae, left) and forest (Uapaca sp., right) pollen recovered from Ngotto sediment cores. Photos by C. Kiahtipes

Our research in CAR has produced some exciting results with regard to the history of the Ngotto Forest region. A previous study from our team describes changes in vegetation cover in a 1,200-year-old sediment core associated with archaeological sites in the northern part of the forest. This record indicates that the most dramatic changes in forest cover took place during the last 800 years, well after the region was settled by farming societies. Because the change from closed canopy forest to open forest environments takes place after the arrival of farming communities, we argue that the impacts of farming societies alone are insufficient to explain rapid deforestation events. The vegetation record shows very little change in the amount of grass pollen, which we would expect if local agriculturalists were degrading forest cover or soils and encouraging a transition to savanna. Analysis of stable carbon isotopes, which are fixed in organic molecules at different rates in tropical grasses and rain forest trees, shows a signature consistent with forest cover through the entire 1,200-year sequence. This evidence argues for some degree of forest resilience when populations are small and use low-intensity resource extraction methods.
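
For readers unfamiliar with the isotopic argument: tropical grasses use the C4 photosynthetic pathway and rain forest trees the C3 pathway, and the two fix carbon with distinct δ13C signatures. A standard two-end-member mixing model (a textbook formulation, not the specific calculation from our papers) estimates the fraction of grass-derived carbon in a sediment sample as

$$ f_{C4} = \frac{\delta^{13}C_{\text{sample}} - \delta^{13}C_{C3}}{\delta^{13}C_{C4} - \delta^{13}C_{C3}}, $$

with typical end-member values of roughly −27‰ for C3 vegetation and −13‰ for C4 grasses. A sequence of samples sitting near the C3 end-member, as in the northern Ngotto core, therefore signals persistent forest cover.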

The Rise and Fall of Iron Smelting at Bagbaya

Our most recent findings, published in PLOS ONE, describe the results of an archaeological and paleoecological study of the southeastern portion of the forest. This area yielded dramatically different results from our work in the northern part of the forest. Cooperating with the village elders of Bagbaya (or Bacbaya), we identified and mapped more than a dozen large mounds of iron slag. Slag is the byproduct of iron smelting, in which ore is heated in a furnace under chemically reducing conditions that separate the slag from the workable iron. The biggest of these mounds measured 2.5 meters tall and more than 10 meters in diameter. Nearby, there is a large, extensively mined ore deposit. Radiocarbon dates from charcoal in the mounds suggest this ore deposit was mined somewhere between AD 1700 and 1900, although some earlier dates (ca. AD 1600) come from nearby sites. Oral traditions among the community of Bagbaya attribute the mounds to their ancestors, who fled to this part of the forest to escape slave raiding in what is now the Democratic Republic of Congo. We hypothesized that because iron smelting requires a great deal of charcoal, these activities likely led to substantial disturbances of forest area and the development of the savanna “islands” near Bagbaya.

FIGURE 4: Photo of mound feature and DN Schmitt, by C. Kiahtipes.

In order to reconstruct the impacts of iron smelting on vegetation cover, I analyzed pollen remains in sediments from a filled river channel in the immediate vicinity of the iron smelting sites and the mined ore deposit. This core shows some dramatic changes in vegetation cover that overlap chronologically with the archaeological deposits and the oral traditions of the Bagbaya community. Mixed rain forest and riparian forests are best represented in the oldest deposits, dating to the 16th century, and in the youngest deposits, from the late 19th century. During the 17th and 18th centuries, there is a distinct increase in the representation of grasses and of trees that thrive in disturbed settings. These data suggest that after a period of increased ecological disturbance, closed rain forest vegetation made a substantial recovery during the last 50 years. The timing of forest regrowth suggests the observed expansion of forest area between 1970 and 1990 is part of an almost century-long trend of forest recovery in the wake of French colonial rule, driven by depopulation, forced labor programs, and the collapse of local iron-production centers in the early 20th century. Our results suggest that some of the modern ecological and cultural features of the Ngotto Forest appeared as recently as the last century, illustrating both the depth of colonial-era disturbances to African societies and ecosystems and the complexity of Africa’s prehistoric peoples.

FIGURE 5: Diagram showing relative % of pollen from major types of vegetation cover in the Ngotto Forest, from Lupo et al. (2015).

Resistance and Resilience in the Ngotto Forest

The most exciting finding from our research in the Ngotto Forest is the documentation of the 17th- and 18th-century iron smelting industry at Bagbaya and its environmental consequences. Iron smelting centers are known in other parts of West Africa, but few of them are located in or near the forest zone, and none have been paired with palynological studies of vegetation change. The environmental evidence for forest disturbance at this time suggests that the Bagbaya community did not produce iron for local consumption only. They produced significant amounts of iron, which may have been traded to other parts of Central Africa. Together, these lines of evidence indicate that the tropical forests of Central Africa are more than just wilderness. These forests are home to impressive indigenous industries and are sources of resilience and resistance for human and ecological communities alike.

Works Cited

Holl AFC (2009) Early West African Metallurgies: New Data and Old Orthodoxy. Journal of World Prehistory 22: 415-438. DOI: 10.1007/s10963-009-9030-6

Kiahtipes CA, Lupo KD, Schmitt DN, Ndanga J-P, Jones JG, Lee R (2011) Prehistory and the present: palaeoenvironments in the northern Congo Basin. Before Farming 2011(2): 1-14.

Lupo KD, Schmitt DN, Kiahtipes CA, Ndanga J-P, Young DC, et al. (2015) On Intensive Late Holocene Iron Mining and Production in the Northern Congo Basin and the Environmental Consequences Associated with Metallurgy in Central Africa. PLoS ONE 10(7): e0132632. DOI: 10.1371/journal.pone.0132632

Runge J (2008) Remote sensing based forest assessment: recent dynamics (1973-2002) of forest-savanna boundaries at Ngotto Forest, Central African Republic (CAR). In: Runge J, editor, Palaeoecology of Africa, vol. 28. pp. 237-244.




A new role for an old Alzheimer’s-related protein

Loss of memory is a well-established symptom of Alzheimer’s disease, and cognitive impairment and personality changes are also observed in these patients [1]. Many studies investigating the causes of Alzheimer’s have focused on the biochemical pathways and molecules involved in the pathology of the disease [2]. One of the main goals of such studies is to understand how amyloid plaques and neurofibrillary tangles, two of the biochemical hallmarks of Alzheimer’s disease pathology, lead to the brain degeneration typically observed in these patients. Plaques and tangles are abnormal aggregates of proteins and are believed to be toxic to nearby neurons, leading to the loss of both brain cells and their connections, which may play a major role in the onset and progression of Alzheimer’s. Another popular avenue of research focuses on changes at the microscopic scale that may relate to the loss of memory as the disease progresses. Such studies have focused on the connections between neurons, the anatomy of the brain regions involved, and the molecules that may participate in signaling between cells of the brain. Recent research published in PLOS ONE from the lab of Orly Lazarov at the University of Illinois at Chicago shows that Presenilin-1, a protein linked to familial Alzheimer’s and to plaque formation, plays an important role in the birth of new neurons in the hippocampus, a brain structure crucial for memory formation and implicated in the disease [3]. Moreover, this research shows how loss of Presenilin-1 in newborn hippocampal cells can cause deficits in behavioral tests associated with memory.

Fluorescently labeled cells in the hippocampus and dentate gyrus. Photo courtesy of Jason Snyder via Flickr.

The effects of Presenilin-1 on neurons and behavior

To determine the role of Presenilin-1 in the birth of new neurons in the hippocampus, the lead author of the study, Jacqueline Bonds, and her colleagues used a technique that allowed them to fluorescently tag neurons while also reducing levels of Presenilin-1 in a specific part of the hippocampus known as the dentate gyrus. To study the effects of reduced Presenilin-1, behavioral tasks were carried out at three months post-manipulation, a time point when many fluorescently labeled cells were observed. The first task measured whether an animal could distinguish between two similar contexts based on different cues provided by the researchers; for example, think of two rooms with similar layouts and dimensions but with different colored walls and different odors. Such tasks are known to depend on the generation of new neurons in the hippocampus. At the three-month mark, lower levels of Presenilin-1 did not seem to have much of an effect on the ability of the mice to perform the task, as both regular mice and mice with reduced Presenilin-1 were able to distinguish between the two contexts successfully. However, at six months post-injection, the injected animals showed impairments in their ability to distinguish the separate contexts. This suggests that Presenilin-1 is important for the performance of this task.

Since the scientists had shown that prolonged Presenilin-1 reduction affected a behavior that depends on the birth of new neurons in the hippocampus, they next wanted to test whether Presenilin-1 affects this process directly. Labeling cells with markers specific for newly generated neurons, Bonds et al. showed that there was a decrease in both mature neurons and newly generated mature neurons in the region of interest when Presenilin-1 was reduced using the same method mentioned earlier. Similar decreases were observed in the numbers of oligodendrocytes, a type of supporting cell in the brain that was also examined. Next, the researchers took the study a step further and analyzed the shape of the surviving neurons to see whether this feature was also changed by the reduction of Presenilin-1. The authors noted significant decreases in the amount of branching in neurons as a result of reduced Presenilin-1 levels. Branches of these types of neurons are known to display tiny protrusions known as dendritic spines, which act as the sites of contact and communication between two neurons. The authors also observed a decrease in the density of these dendritic spines, seen anywhere from 10 to 20 microns from the cell soma. Mean length, surface area, and volume of spines were not significantly different when Presenilin-1 was reduced. As mentioned, dendritic spines are important for communication between neurons, a process vital to normal brain function. Thus, the findings of this experiment suggest the ability of these neurons to signal to one another may be disrupted.

Evidence for the involvement of a signaling pathway

Up until this point, the paper had established clear effects of Presenilin-1 reduction on animal behavior (specifically behavior related to new hippocampal cells), on cell number and survival in the hippocampus (as measured by the number of new neurons), and on dendritic branching and dendritic spine number. Such observations are important data for understanding Presenilin-1’s involvement in neurogenesis and hippocampal-dependent memory function. However, they provide little insight into the biochemical pathways linking Presenilin-1 to the changes observed in this study. Perhaps one of the most impressive pieces of this research is that Bonds et al. were able to link Presenilin-1 to a specific pathway. To do this, the authors measured levels of molecules such as nestin, cyclin D1, EGFR, and neurofilament-L, and showed that they change in response to reduced Presenilin-1. Specifically, neurofilament-L levels were significantly increased and nestin levels were significantly decreased. These molecules are involved in cell cycle regulation, and thus the observed changes match previous experimental data generated during this study. The authors also mention they have previously observed changes in levels of beta-catenin and the Notch-1 ligand Delta when Presenilin-1 is reduced [3]. In this study, using western blotting to quantify protein levels, the authors found that when Presenilin-1 is reduced, levels of phospho-beta-catenin change even though total levels of this protein do not. Another protein, GSK3-beta, was also probed, as it is involved in the beta-catenin pathway; however, it was not altered. Finally, the authors also measured two additional players in the beta-catenin pathway, neither of which was convincingly altered, though seemingly consistent (but non-significant) trends were observed.

This study by Bonds et al. provides a set of interesting results linking Presenilin-1 to important factors of hippocampal function at the molecular, cellular, network, and behavioral levels. Examining how such a protein can play a role at multiple levels earns this research group applause; such a comprehensive study is not an easy task to complete. Though insights into the exact pathways mediating the cellular and behavioral deficits observed in the early experiments of this paper are incomplete, the authors have built a solid foundation for investigating such pathways in the future. Convincing cellular phenotypes (i.e., reduced dendritic branching and reduced dendritic spine density), as well as measurable outcomes in the behavioral task, provide Bonds et al. and other researchers with a set of deficits that can be used to measure how altering various molecules in the relevant pathways either worsens or rescues this set of deficits. As Presenilin-1 is a known Alzheimer’s protein, future studies of these pathways may help identify therapeutic targets for Alzheimer’s disease and other forms of dementia and neurodegeneration.

The big picture

Though overlooked for a long time, evidence for the birth of new neurons in the adult brain, especially in the hippocampus, has accumulated relatively recently. Though their exact role has not been fully worked out, it seems that these newborn cells may play a part in brain plasticity and in learning and memory [4]. As Alzheimer’s disease involves progressive loss of memory and cell death in the hippocampus, the process of hippocampal neurogenesis has been a topic of interest for many Alzheimer’s researchers. Interestingly, various Alzheimer-associated proteins have been linked to this process, including the enzyme BACE and Presenilin-1 [5]. Earlier studies have shown that the loss of Presenilin-1 in mice can impair neurogenesis and disrupt the structural architecture of brain areas associated with newborn neurons [5]. In relation to these previous findings, the Bonds study described above sheds some interesting light on the cellular and molecular effects that the loss of Presenilin-1 has on newborn neurons in the hippocampus. The deficits in neuron maturity and the finding of disrupted branching add a new layer of insight to this growing body of literature.

Overall, Alzheimer’s represents a complex disease with some obvious genetic aspects (i.e., Presenilin-1) that may be harnessed to uncover deficits or signaling pathways common to genetic and idiopathic forms of Alzheimer’s. Moreover, the neurodegenerative effects of the disease make curbing, stopping, or even curing its progression a hard problem for the fields of medicine and science. Ideally, targeting cellular or chemical processes before the onset of degeneration may be helpful, as the tissue may still be healthy and viable. This preventative approach is attractive, but before it can be realized, appropriate drug targets and disease processes must be pinned down. In this sense, studies like the ones discussed above provide a step forward in targeting useful cellular processes (such as neurogenesis) to treat disease. It will be interesting to see how other groups add to the set of data presented in the Bonds paper and others to potentially take advantage of such processes.


Bonds, J. A., Kuttner-Hirshler, Y., Bartolotti, N., Tobin, M. K., Pizzi, M., Marr, R., & Lazarov, O. (2015). Presenilin-1 Dependent Neurogenesis Regulates Hippocampal Learning and Memory. PLoS ONE, 10(6), e0131266.

Lazarov, O., & Marr, R. A. (2010). Neurogenesis and Alzheimer’s disease: At the crossroads. Experimental Neurology, 223(2), 267–281.

Rodríguez, J. J., & Verkhratsky, A. (2011). Neurogenesis in Alzheimer’s disease. Journal of Anatomy, 219(1), 78–89.

Sheng, M., Sabatini, B. L., & Sudhof, T. C. (2012). Synapses and Alzheimer’s Disease. Cold Spring Harbor Perspectives in Biology, 4(5), a005777.

Small, D. H. (2008). Network dysfunction in Alzheimer’s disease: does synaptic scaling drive disease progression? Trends in Molecular Medicine, 14(3), 103–108.



Is Facebook making you sad? Research evaluating social media use and impacts on mental well-being inconclusive

Researchers have been evaluating whether Facebook use has impacts on mental well-being, but results continue to be inconclusive and at times contradictory. Photo by Anonymous, courtesy of Flickr.

There are more than 500 million people interacting with Facebook from countries all over the world every single day, and that number is growing. On August 24, 2015 Facebook reached a milestone when, for the first time, one billion users logged onto the site in a 24-hour window, which equates to one out of seven people in the entire world being active on the social networking website. Facebook is the world’s largest online social network, and has revolutionized how people interact and communicate. Yet, sometimes Facebook feels like a fog, which can hover over all of our social judgements and interactions. In fact, Facebook has grown so prevalent that it has even joined its ever-popular cousin Google and progressed into the ‘verb territory’; for example, the mark of a serious relationship is when it finally becomes “Facebook official”.

There are also cases where Facebook’s influence can have serious consequences for an individual’s future. Who hasn’t had an unflattering picture taken of them? Or shared information publicly when the information is best kept private and off the Internet record? While there are many obvious benefits to this form of fingertip social networking, there may be some downsides, which need to be considered.

PLOS ONE study finds an increase in Facebook use leads to a decrease in self-reported well-being

Photo courtesy of Ze’ev Barkan, via Flickr.

A study published in PLOS ONE by Kross and colleagues found Facebook use predicted declines in self-reported well-being and life satisfaction among young adults. The study included 82 participants drawn from a sample of college students at the University of Michigan, Ann Arbor. Participants answered questions about their feelings and life satisfaction five times a day over the course of two weeks. The questions assessed subjective feelings, social interactions, and Facebook usage time. A Satisfaction With Life Questionnaire was completed before and immediately after the 14-day sampling period to investigate how interacting with Facebook influenced subjective cognitive well-being (life satisfaction) and affective well-being (feelings). Information about personal motivations for using Facebook and perceptions of Facebook support was also gathered.

The authors found that those who used Facebook more often over the two-week study period reported greater declines in both measures of well-being over time: a) how they felt moment-to-moment, and b) how satisfied they were with their own lives. Some alternative explanations were ruled out, as direct (face-to-face) interactions did not predict any decline in well-being, and those who already reported feeling down or bad were not particularly more likely to engage with Facebook.
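
As a rough illustration of the within-person logic behind this kind of experience-sampling analysis, the sketch below asks whether a participant’s affect declines more after heavy-use windows than after light-use ones. This is a simplified stand-in with invented numbers, not the multilevel lag analysis Kross and colleagues actually ran.

```python
# Simplified sketch of a lagged, within-person experience-sampling analysis.
# Data are hypothetical; the published study used multilevel regression.
from statistics import mean

# Per participant: chronological (facebook_minutes, affect_score) per prompt.
participants = {
    "p01": [(5, 70), (50, 71), (6, 64), (45, 66), (4, 60)],
    "p02": [(3, 80), (35, 78), (2, 72), (30, 74), (1, 67)],
}

def lagged_effect(records):
    """Mean affect change following heavy vs. light Facebook use."""
    fb_median = sorted(fb for fb, _ in records)[len(records) // 2]
    heavy, light = [], []
    for (fb, before), (_, after) in zip(records, records[1:]):
        (heavy if fb >= fb_median else light).append(after - before)
    return mean(heavy) - mean(light)

for pid, recs in participants.items():
    print(f"{pid}: affect shift after heavy vs. light use = {lagged_effect(recs):+.1f}")
```

A negative number means affect tended to drop more after heavy-use windows, which is the direction of the effect the study reported.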

Study design has limitations, but significant findings call for future research

As described in the “Future Research” section of this PLOS ONE paper, the study had a few limitations. First, the participants were university students, who represent a core Facebook user demographic but are not a sample generalizable to the global Facebook user population (after all, there are one billion users). A related caveat is that participants were recruited through fliers advertising the study on the University of Michigan campus; due to the specific location and demographic, this sample may over-represent people who are more likely to feel bad after heavy Facebook use, and it does not represent the general population overall. Second, the two-week time period is very narrow and may affect the results; for example, given the characteristics of this sample, it is possible these two weeks overlapped with high-stress periods on the academic calendar, such as midterms or finals. Finally, the researchers didn’t ask about other sources of stress, and because the study was not a randomized trial, we cannot assume these potential confounders were evenly distributed.

In the future, a larger randomized controlled trial or a longitudinal cohort study could strengthen the evidence for a detrimental effect of Facebook on well-being. It would also be beneficial to increase the number of participants, as the study is fairly small at fewer than 100 people.

Other social networking studies evaluating mental health impacts find contradictory results

In the discussion, Kross et al. suggest that one potential mechanism for the reported decline in well-being is that Facebook use triggers damaging social comparisons, an idea explored in more detail in a 2012 study published in a cyberpsychology journal. Damaging social comparison refers to people comparing themselves to others in their social network without taking into account that individuals share only the most favorable material online: the holidays, the nights out, and the lavish meals. We know this is a window onto only a small part of an individual’s world, and constantly comparing your own life to these celebratory moments can potentially be harmful.

But don’t hastily delete your Facebook account just yet; not all studies find that Facebook use has a negative effect on mental health. A recent study from the University of Wisconsin tested for a correlation between 190 university students’ social networking use and any signs of clinical depression. Clinical depression is a severe form of depression; to be diagnosed, you must meet the symptom criteria for major depressive disorder in the Diagnostic and Statistical Manual of Mental Disorders (DSM). This study found no evidence supporting a relationship between social network use and clinical depression.

It is notable that self-reported well-being, which was measured in the PLOS ONE study, and clinical depression, which was measured in the University of Wisconsin study, are two very different outcomes. Synthesizing the results of these studies, however, the evidence on how Facebook use affects mental well-being remains inconclusive, at least so far.

Facebook as a tool for discussion and awareness of depression

Alternatively, Facebook has also been championed as a useful tool for combating the stigma around mental illness. One investigation followed students’ public Facebook disclosures over one year, evaluating them for statements that met the DSM criteria for a depressive symptom or episode. Overall, 25% of the profiles were found to broadcast content that displayed symptoms of depression. So, rather than social networking contributing to mood disorders, there are also ways in which it may be useful for publicly discussing them and raising further awareness.

Concluding thoughts

Based on my review of studies examining Facebook use and impacts on mental well-being, it seems that too much time spent on Facebook may negatively impact a person’s mood, but it is highly unlikely to actually cause or contribute to any clinical symptoms. The effects of Facebook on mental health and well-being are far from clear. If anything, these conflicting results call for more research with more rigorous study designs to examine the effect of a relatively new force that has become almost omnipresent in our digital age. Some evidence also suggests young adults should put down their phones for a few hours of the day and leave the Facebook fog behind for more personal social interactions, like meeting some friends for walks in the great outdoors.

I believe it is important to find balance, and Facebook use can be part of a healthy and varied lifestyle, so long as it doesn’t compromise other important activities. We also see that when used effectively, Facebook is a platform for disseminating important information, including scientific findings, allowing individuals and researchers to educate a large and diverse audience.

On that note, please don’t forget to share this article on social media with your various and diverse social networks. I guess it perhaps isn’t quite that simple to give up on Facebook just yet…


Pathogens and Defense: Speakers recap from the 6th EMBO Meeting

By Meredith Wright

From September 5-8, scientists converged on Birmingham, UK for the 6th European Molecular Biology Organization Meeting. EMBO is an organization that aims to support multiple branches of science by holding courses, workshops, and conferences, and by publishing journals, including The EMBO Journal. This year’s meeting featured a diverse array of speakers as well as varied career development sessions. The meeting had three sets of plenary lectures; here, we highlight the speakers and their work from the ‘Pathogens and Defense’ set from Sunday, Sept. 6th.

PLOS had a booth at the EMBO meeting, and PLOS Student Blogger Meredith Wright recaps some of the key speakers from the 2015 meeting.

Gillian M. Griffiths

Gillian M. Griffiths, director of the Cambridge Institute for Medical Research and Professor of Immunology and Cell Biology at the University of Cambridge, was a speaker at the 6th EMBO meeting. Photo courtesy of University of Cambridge.

Griffiths is the director of the Cambridge Institute for Medical Research and Professor of Immunology and Cell Biology at the University of Cambridge. Her lab studies the immunological synapse; that is, the site of interaction between cytotoxic immune cells (such as T cells or NK cells) and antigen-presenting cells. The immunological synapse can be compared to the more widely known neuronal synapse in that both are locations where adjacent cells communicate. However, the neurons making up a synapse in the brain do not touch, and instead transmit signals via diffusible factors, whereas the immunological synapse is a true cell-cell junction, with both secretion of diffusible factors and binding of the two cells via cell surface receptors. Specifically, the immunological synapse consists of a central cluster of T cell receptors surrounded by adhesion molecules and further surrounded by an actin ring. These structures work together to direct secretion of effector molecules precisely at the target cell, with the goal of killing the infected cell.

The precise mechanics of synapse formation remain an active area of research. The most recently published work from the Griffiths lab looks at granule secretion at the immunological synapse. They used high-resolution 4D imaging to study the timing and order of the key events that lead to secretion of cytotoxic granules during immunological synapse formation. The microscopy techniques used are as impressive as the actual results, as the “4D” imaging is achieved by combining multiple types of microscopy: spinning-disc confocal microscopy and lattice light-sheet microscopy capture multiple components of cytotoxic cells over time.

David Holden

David Holden is the Director of the MRC Centre for Molecular Bacteriology and Infection at Imperial College London and was a speaker at the 2015 EMBO meeting. Photo courtesy of Imperial College London.

Holden is the Director of the MRC Centre for Molecular Bacteriology and Infection at Imperial College London. His lab studies bacterial virulence mechanisms, primarily in Salmonella. Salmonella is a genus of bacteria known for being a major cause of food poisoning, with Salmonella enterica being the species responsible for most human disease. S. enterica has a number of virulence factors, but it is perhaps most famous for its two Type III secretion systems (T3SS). The T3SS functions like a molecular needle; it injects effector proteins into the target-cell cytosol, which then alter signaling in the host cell to the benefit of the bacterium. The details of how and when the T3SS is of use to the bacterium are not completely understood, and the molecular mechanisms of the many different effector proteins are also still being studied.

The Holden lab is currently studying the effectors translocated by the T3SS encoded on Salmonella pathogenicity island 2 (SPI-2). This T3SS is essential for the bacterium to maintain the Salmonella-containing vacuole inside infected cells. Recent work from the lab shows that Salmonella effector SifA interferes with the lysosomal adaptor Pleckstrin homology domain-containing protein family member 1 (PLEKHM1). Depletion of PLEKHM1 leads to defective Salmonella-containing vacuole formation, which they term a “bag of Salmonella,” and reduced Salmonella growth. In addition to this work with effectors, the Holden lab has recently shown that Salmonella is able to induce changes to host cells that are associated with gallbladder carcinoma. This has global health implications, as this indicates that parts of the world where S. enterica infections are extremely common are at a higher risk for gallbladder cancer.

Mary N. Carrington

Mary N. Carrington is the Director of the Basic Science Program at the Frederick National Laboratory for Cancer Research at the NIH. Photo courtesy of NIH.

Carrington is the Director of the Basic Science Program at the Frederick National Laboratory for Cancer Research at the National Institutes of Health in the United States. Her work focuses on the connection between human leukocyte antigen (HLA) class I genes and human immunodeficiency virus (HIV) infection. It is already known that variation in HLA class I genes can lead to different outcomes of HIV infection; that is, certain HLA alleles result in slower progression from HIV infection to AIDS. However, work in her lab is demonstrating that variation near the HLA class I genes and in the regulation of the HLA class I genes also plays an important role in HIV infection.

For example, in a 2014 PLOS Genetics paper, her lab showed that polymorphisms in the HLA class I genes can alter how innate immune cells fight HIV. Specifically, different HLA-B alleles have different binding capabilities with leukocyte immunoglobulin-like receptors (LILR), which are found on dendritic cells. The binding strength of LILR to HLA class I types in patients was positively associated with viral replication. More recently, her lab studied the connection between HLA class I and HIV in pediatric versus adult patients, and surprisingly found that the protective effects granted by some HLA genes in adults are not present in pediatric patients. This further supports the lab’s assertion that HLA class I alleles alone cannot predict HIV outcome for all patients, and that additional work on nearby and regulatory genes is needed.


Better surveillance and improved sampling tactics of wild animal populations could impact public health

By Joslyn Neiderer

L’hoest’s monkey in Bwindi Impenetrable Forest region, Uganda, with nylon oral swab rope and attached retrieval string. (Photo by T. Smiley Evans)

What do Ebola, rabies and bovine tuberculosis have in common? The answer is that they are all zoonotic diseases, which are diseases that originate in one species and then spill over into another. The Centers for Disease Control and Prevention estimates that six out of 10 infections that occur in humans first appeared in animals. In fact, diseases that we would commonly think of as specialized human diseases, such as measles and HIV/AIDS, first originated in animals. Oftentimes, zoonotic disease outbreaks seem to appear practically out of nowhere. Monitoring the health of animals that live near and interact with humans is therefore important. But while monitoring the health of domestic animals is easy (they are used to us and are more tolerant of our poking and prodding), sampling wild animal populations presents a challenge.

Increased interaction between humans and wild animals leads to more opportunities for infection

The spread of disease from animals to humans and vice versa is due to species interaction, which can happen a number of ways. When wild animal populations lose their habitat, they have no choice but to learn to live in close contact with humans. In the case of animals that are genetically quite similar to humans, such as Old World Monkeys, which include baboons and macaques, diseases can adapt pretty quickly to flow between species, and they flow both ways. Human diseases can also infect animal populations, and monitoring the health of animals is important for protecting the biodiversity of the planet as well. A pathogen does not care who or what it infects; it is just looking for a host, and it can evolve quickly.

Old World Monkeys are very intelligent creatures, and traditional sampling techniques for gathering blood and saliva present a host of problems. Collecting these types of samples often involves sedating the animals, which often evade researchers and will rarely let scientists collect a sample more than once, even when a scientist is able to subdue them. This is a problem because repeated sampling is necessary to get a full picture of disease trends and the overall health of a population. On top of that, Old World Monkeys often teach their offspring and other members of their communities to avoid researchers. Because of the difficulty of field sedation and sampling, the most common method for monitoring the health of these animals is the collection of excrement; but this method does not tell the whole story, since many pathogens are transmitted through bodily fluids such as saliva and blood.

Saliva from wild animals spreads disease

Rhesus macaque from Thapatali temple complex with nylon oral swab rope. (Photo by T. Smiley Evans)

Wild animals bite, and saliva is a perfect vehicle for infecting humans or other animals with disease. Rabies, for example, is not transmitted by contact with blood, urine, or feces; it is transmitted orally, when saliva comes into contact with mucosal surfaces. Many diseases can be present in an animal that appears to be healthy, or asymptomatic, yet be fatal when they enter the human population. This represents an important public health issue.

Sampling populations of animals that live near humans in these potential hot zones may provide clues as to where the next human outbreak of disease may occur. It is critical that reliable, cheap, and easy techniques be developed to allow the monitoring of these animal populations.

PLOS NTDs article introduces an adapted approach

Nylon oral swab rope disguised inside a banana for distribution to baboons in Queen Elizabeth National Park, Uganda. (Photo by O.R. Okello)

An article recently published in PLOS Neglected Tropical Diseases by Smiley Evans et al. details how a technique typically used in the United States to monitor the health of domestic pigs has been adapted for sampling primates. This non-invasive, interactive method actually involves the animals sampling themselves. The researchers introduced a rope that the animals could chew on; in most cases, they threw a rope covered in strawberry jam to the animal and let it chew until the rope was voluntarily discarded. The rope was then collected and tested for any potential pathogens present in the animal’s saliva. The goal of this study was to evaluate the rope distribution technique and the quality of the oral samples from the ropes by looking for both DNA and RNA viruses.

Viruses were chosen for this study because of their fragility in the environment, especially in tropical areas, and their sensitivity to handling compared with bacteria. Optimizing this technique for RNA viruses is especially important because these viruses have a high mutation rate, which makes it more likely that a virus will jump to another species.

The researchers tested their method by evaluating rope samples collected from laboratory animals, which they could also sample with traditional oral swabbing methods, and comparing them to ropes collected from wild animal populations. While the controlled oral swabbing method provided more complete data, DNA and RNA viruses could also be detected on the ropes. The researchers caution that, as with any new technique, additional experiments are warranted, but their work indicates that this method could be used for disease surveillance in free-ranging primates and potentially other wildlife species when traditional sampling techniques are not feasible.

Better data on wild animal populations could give way to greater knowledge

Human and wild animal interaction will become more prevalent as animal habitats continue to be destroyed by human activity. Habitat loss is caused not just by direct destruction of habitat, but also by pollution and by changes in ecological structure driven by climate change (but that’s a different blog post). Loss of habitat means loss of wild animal food sources, which can lead to an increase in wild animal interactions with humans. More interactions between humans and wild animals raise the probability of pathogens jumping between species. From a public health perspective, more crossover between humans and animals can increase the prevalence of diseases with an animal vector, including Ebola and the plague, among others.

Places where these two communities intersect are prime locations for emerging zoonotic disease epidemics, and as the 2013-2014 Ebola outbreak in West Africa showed, lack of surveillance and preparation can lead to devastating consequences.

Understanding animal health is essential to protecting human health, and monitoring species that serve as vectors for zoonotic diseases can help researchers keep track of pathogens that humans could be exposed to and study how these diseases are transmitted between species. Monitoring wild animal populations and gathering stronger baseline data are also important for understanding how these diseases may evolve in the future.

Beyond public health, wild animal populations can also serve as an indicator of shifts in the environment, because they can detect subtle changes humans may not be aware of yet; it helps to think of wild animals as the “canary in the coal mine”. Interactive sampling techniques that do not stress or intimidate the animals could significantly enhance our understanding of the health of humans, of wild animals, and of our environment.

Zoonoses Fact Sheet. World Health Organization.

Animal Bites Fact Sheet. World Health Organization.

Primates. Smithsonian National Zoological Park.

Zoonotic Diseases Fact Sheet. Centers for Disease Control and Prevention.

Zoonotic Disease Facts. Centers for Disease Control and Prevention.

Non-human Primates. Virginia Department of Health.

Smiley Evans T, et al. (2015) Optimization of a Novel Non-invasive Oral Sampling Technique for Zoonotic Pathogen Surveillance in Nonhuman Primates. PLOS Neglected Tropical Diseases. Published June 5, 2015. DOI: 10.1371/journal.pntd.0003813


9-2-15 PLOS Science Wednesday AMA Preview: River blindness programs improve health outcomes: Evidence for increased prioritization of NTDs in post-2015 global health agenda

A man, 30-40 years old, blind due to onchocerciasis. Photo by Pak Sang Lee, originally published in Revue de Santé Oculaire Communautaire Vol. 1 No. 1, 2004. Photo courtesy of Flickr.

By Sara Kassabian

River blindness (onchocerciasis) is an onerous neglected tropical disease (NTD) and the second-leading cause of preventable blindness worldwide. Onchocerciasis is transmitted by the bite of the black fly, which triggers an inflammatory response that leads to blindness and other adverse outcomes. Today, more than 100 million people are at high risk of onchocerciasis infection, the majority of whom live in sub-Saharan African countries [7].

In spite of its prevalence and high burden, onchocerciasis, like all NTDs, receives substantially less attention and funding than the “big three” diseases: HIV/AIDS, malaria, and tuberculosis. But emerging evidence shows that co-infection with one or more NTDs may dramatically worsen an individual’s prognosis for one of these priority diseases.

A recent article in PLOS Neglected Tropical Diseases titled “The Contributions of Onchocerciasis Control and Elimination Programs toward the Achievement of the Millennium Development Goals” examined, through data analysis and literature review, how onchocerciasis control and elimination programs have produced benefits beyond disease-specific outcomes.

To discuss the success of river blindness control and elimination programs and new priorities for NTDs, authors Caitlin Dunn, Kelly Callahan, and Dr. Deborah McFarland will be participating in this week’s ‘PLOS Science Wednesday’ redditscience ‘Ask Me Anything’ (AMA). They will be answering your questions about onchocerciasis control, NTDs, the global health impacts of these diseases, and their positioning on the global health agenda on redditscience at 1pm ET (10am PT) on Wednesday, September 2, 2015. You can register on redditscience in preparation for this upcoming AMA (or on the day of), so you’ll be able to add your questions and comments to the live conversation.

The authors will discuss how onchocerciasis programs, particularly the Community-Directed Treatment with Ivermectin (CDTI) system, strengthened the capacity for community-directed delivery systems, improved disease surveillance, and served as a model for public-private partnerships with the pharmaceutical industry.

Caitlin Dunn and colleagues found that programs to control and eliminate river blindness were among the most high-impact, low-cost public health campaigns ever. The authors’ findings showed that while many onchocerciasis programs intersected with the health and development outcomes sought by the United Nations Millennium Development Goals (MDGs), a disproportionate amount of resources continues to be directed to the “big three” diseases in global health. The disparity in funding is illustrated by the 2010 numbers for international development assistance, with HIV/AIDS programs receiving 37% of total assistance and NTDs receiving 0.6% of total funds [57].

As the global health community transitions from the MDGs to the launch of the post-2015 United Nations Sustainable Development Goals (SDGs), the NTD research community is pressuring policymakers to prioritize these neglected diseases, including river blindness, in the new health agenda.

Selected Q&A with co-author Kelly Callahan, Director of the Trachoma Control Program at The Carter Center.

Kelly Callahan is the Director of the Trachoma Control Program at The Carter Center, and a co-author on the PLOS NTDs article: “The Contributions of Onchocerciasis Control and Elimination Programs toward the Achievement of the Millennium Development Goals.”

Q: The resources invested in “the big three” diseases of global health (HIV, tuberculosis, and malaria) through The Global Fund as well as PEPFAR had a huge impact on scaling up community programs and national prevention strategies in high-burden settings. Considering how frequently NTDs coincide with these priority diseases, it seems that it would make sense to integrate treatment for NTDs afflicting local communities into disease response programs for the “big three”. Are you aware of any discussion or efforts to integrate NTD response in “big three” community programs? Also, what are some common barriers to scaling up onchocerciasis programs in endemic settings?

A: Great question – The Carter Center has integrated when and where possible to achieve the greatest impact.

The Carter Center’s Hispaniola Initiative works with ministries of health in Haiti and the Dominican Republic to accelerate the elimination of malaria and lymphatic filariasis from the countries’ shared island, Hispaniola, by 2020.

The Carter Center also carried out integrated malaria and onchocerciasis and malaria and trachoma activities in Ethiopia with great success. In 2007 these integrated programs assisted the Federal Ministry of Health in the distribution of over 20 million bed nets in the fight against malaria. The integrated programs also carried out health education and treatment activities. The malaria and trachoma activities were combined to treat the entire regional population for trachoma with the antibiotic Zithromax® (donated by Pfizer Inc), a population approaching 20 million people annually. At the same time, we screened every person for malaria; if they had a fever, they were tested with the best rapid diagnostic tests available and provided treatment. It was also an opportunity to reinforce the ongoing health education messages for preventing the two diseases.

Despite success in malaria and other NTD integration, The Carter Center is very careful about integration… Integration is a fantastic concept; however, it needs to be reviewed carefully by the ministry of health and partners to ensure that decisions to integrate are data driven, focused on the greatest impact, make sense, and are carried out in the most reasonable and cost-efficient manner.

Q: The Carter Center has made huge gains toward eradicating Guinea worm. In what ways has the Guinea Worm Eradication Program served as a blueprint for control, elimination, and eradication efforts for other NTDs, like river blindness?

A: The NTD Programs at The Carter Center are modeled after the Guinea Worm Eradication Program. Each of these programs is data driven and outcome oriented. The progress of each program is measured annually during Program reviews, just like the Guinea Worm Eradication Program.

The International Task Force for Disease Eradication (ITFDE), formed at The Carter Center, reviews and assesses diseases that can potentially be eradicated. The ITFDE reviews the progress of diseases selected for eradication or control and makes recommendations based upon these reviews.

NOTE: The ITFDE identified onchocerciasis as one of three NTDs that are eligible for elimination or better control, but not eradication. The ITFDE provides more information about the key factors that determine whether a disease is eligible for eradication.

Do you have more questions about river blindness? Kelly Callahan and co-authors Caitlin Dunn and Dr. Deborah McFarland will be taking your questions about river blindness, neglected tropical diseases, and post-2015 global health priorities on redditscience on September 2 at 1pm ET (10am PT)!


Climate Capital: Assessing the hidden value of coastal ecosystems

By Gordon Ober

Le Nguyen throws a line to haul up a crab trap. The Alaskan king crab industry has been threatened by increasing ocean acidification. Photo courtesy of the Seattle Times, 2013 (Steve Ringman).

Measuring the fiscal value of ecosystems

Ecosystems provide both direct and indirect services to the environment. Direct services are the ones we can essentially see, and they are often given financial value. Many conservationists cite the direct, tangible economic value of the environment in their advocacy, but this is just one way of valuing an ecosystem. Oftentimes, the indirect services, or “hidden” values, of the environment are the most significant and compelling reasons for prioritizing conservation. While economic arguments for conservation certainly have merit, the intrinsic functions of an ecosystem are often the most valuable. The International Union for Conservation of Nature (IUCN) has called upon the global community to quantify the total value of an ecosystem by combining the values of both the direct and indirect services it offers, in order to highlight the importance of protected ecosystems.

Arguably, the most critical service an ecosystem provides is its inherent ability to capture and store carbon. As the world faces pressure to reduce CO2 emissions and mitigate climate change, ascribing economic value to these critical indirect services is important, particularly as research has shown that the carbon captured by preserved ecosystems pays off in economic gains.

The Kyoto Protocol, a binding agreement set under the United Nations Framework Convention on Climate Change, put the global carbon market into motion, which has led to evaluations of hidden and indirect ecosystem services. This has created a financial incentive for the conservation of valuable ecosystems by putting a price on greenhouse gas emissions and on biologically stored units of carbon. However, at the moment it applies only to terrestrial areas. In the past, the science and modeling for marine systems lagged behind their terrestrial counterparts, but new efforts by scientists to quantify the indirect values of marine ecosystem function have helped the issue gain momentum.
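
The arithmetic behind such incentives is simple: the annual carbon value of a protected area is its extent times its sequestration rate times the market price of a tonne of CO2. The sketch below uses placeholder rates and an assumed price purely for illustration; real valuations rely on site-specific measurements and prevailing market prices.

```python
# Illustrative carbon-credit arithmetic for protected coastal habitats.
# Sequestration rates and the carbon price are hypothetical placeholders.

SEQUESTRATION_T_CO2_PER_HA_YR = {  # tonnes of CO2 per hectare per year
    "salt marsh": 8.0,
    "mangrove": 6.0,
    "seagrass meadow": 4.0,
}
CARBON_PRICE_USD_PER_T_CO2 = 15.0  # assumed market price per tonne

def annual_carbon_value(habitat: str, area_ha: float) -> float:
    """Yearly value (USD) of the CO2 a protected area sequesters."""
    rate = SEQUESTRATION_T_CO2_PER_HA_YR[habitat]
    return area_ha * rate * CARBON_PRICE_USD_PER_T_CO2

for habitat, area in [("salt marsh", 1_000), ("mangrove", 2_500)]:
    print(f"{habitat}: {area:,} ha -> ${annual_carbon_value(habitat, area):,.0f} per year")
```

Under these made-up numbers, 1,000 hectares of salt marsh would be worth $120,000 per year in avoided emissions alone, before counting any of its other services.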

The High Economic Value of Coastal Ecosystems

Coastal ecosystems have proven to be some of the most productive and valuable ecological repositories on the planet. These ecosystems are renowned for their biodiversity as well as their economic value: fisheries, aquaculture, recreation, and eco-tourism are just some of the ways people earn money from healthy coastal ecosystems. Given how profitable healthy coastal ecosystems can be, it should come as no surprise that the human activities threatening their survival also inevitably reduce their economic value.

Just one example of an ecosystem threatened by climate change is Alaska's iconic king crab fishery. Though valued at roughly $100 million USD, the fishery is threatened by ocean acidification. Acidification results from rising atmospheric CO2, which lowers ocean pH and depletes the dissolved carbonate that shelled creatures need for calcification. Under these conditions, animals like king crabs cannot fully develop their shells, and so face stunted growth and increased predation risk. This has an almost immediate impact on the size and quality of crabs caught. If the Alaskan crab industry were to crash, people would lose their jobs and the local economy would suffer. In this scenario, the economic value of the habitat is tangible and direct, and the quoted value of $100 million USD is enough to attract attention.
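For readers who want the mechanism, the carbonate chemistry behind this is textbook (the equilibria below are standard chemistry, not results from the studies discussed here). Dissolved CO2 forms carbonic acid, which releases hydrogen ions and lowers pH; those hydrogen ions then bind free carbonate ions, leaving fewer available for animals to precipitate the calcium carbonate (CaCO3) in their shells:

CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3−

H+ + CO3^2− ⇌ HCO3−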

While the economic value of crab fisheries in Alaska holds fiscal weight, coastal ecosystems have many other hidden, or indirect, values. For example, salt marshes help buffer the impact of storms and flooding, and they filter runoff before it reaches the aquatic system. While these benefits are recognized as important, their financial value is difficult to quantify, which makes it hard to build an economic argument for the indirect benefits of healthy ecosystems. As a result, many policymakers and resource allocators are less inclined to prioritize sustaining coastal ecosystems whose benefits are intangible.

The increasing value of blue carbon

Indirect services may be hard to quantify, but their importance is starting to attract attention, specifically when it comes to blue carbon. Blue carbon, the carbon captured by marine ecosystems, differs from the carbon captured by their terrestrial counterparts. In recent years, researchers like Murray and colleagues have started to study the role of blue carbon in combating climate change. Marine systems capture and store CO2 at high rates, making them the largest carbon sinks in the world. Coastal marine systems such as salt marshes, seagrass meadows, coral reefs, and mangroves are active carbon sinks due to their high productivity: these zones host high densities of both microscopic and macroscopic photosynthetic organisms that actively consume CO2 and can effectively store it. In another study, Nellemann and colleagues found that marine ecosystems could capture 55% of all atmospheric CO2. The ability to absorb and store CO2 is a hidden but incredibly valuable aspect of these ecosystems, especially in the face of rapidly increasing anthropogenic CO2 emissions.

A model for costing marine ecosystems

In a May 2015 PLOS ONE paper, Tatiana Zarate-Barrera and Jorge Maldonado adapted and reconfigured an existing model to put a fiscal value on the indirect services of the coastal marine ecosystems in their study region.

Globally, many countries have made efforts to protect their marine ecosystems and resources, often by establishing marine protected areas (MPAs), areas in which biodiversity is protected from further human influence. However, it is often hard to get the funding and support to create more MPAs, and maintain MPAs that are already established. The researchers began to investigate both the carbon-storage ability and potential economic value of that storage within MPAs along the coast of Colombia, with the ultimate goal of providing solid economic evidence for conserving and expanding MPAs.

Total Economic Value (TEV) and its components applied to marine and coastal ecosystems. Figure courtesy of PLOS ONE, "Valuing Blue Carbon: Carbon Sequestration Benefits Provided by the Marine Protected Areas in Colombia" (2015).
The authors adapted a model proposed in a 2012 PLOS ONE paper to fit their local system. Part of the model takes into account the rate of carbon capture by an ecosystem and its dominant biota, the size of the ecosystem, how carbon storage is divided between sediments and living material, and the depth of the seabed. This part of the model generates an annual amount of carbon uptake by a specific ecosystem based on its size and the biota present. The model then incorporates the price of carbon per unit on the Global Carbon Market to generate a monetary value for the carbon stored along a known stretch of coast. Using this model, the researchers estimated that increasing the size and range of MPAs would have a significant, positive economic impact. The model puts the value of these ecosystems at about 16 to 33 million EUR per year, and for the first time attaches a concrete monetary value to an indirect service.
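The arithmetic at the heart of this valuation is straightforward to sketch. The snippet below illustrates the uptake-times-price logic described above in Python; every number in it (areas, sequestration rates, and the carbon price) is an invented placeholder for illustration, not a figure from Zarate-Barrera and Maldonado.

```python
# Sketch of the blue-carbon valuation logic described above.
# All parameter values are invented placeholders, NOT figures from
# Zarate-Barrera and Maldonado (2015).

# Hypothetical MPA ecosystems: area in hectares and annual carbon
# sequestration rate in tonnes of CO2 per hectare per year.
ecosystems = {
    "seagrass_meadow": {"area_ha": 12_000, "tco2_per_ha_yr": 1.5},
    "mangrove":        {"area_ha":  8_000, "tco2_per_ha_yr": 6.0},
    "salt_marsh":      {"area_ha":  3_000, "tco2_per_ha_yr": 2.5},
}

CARBON_PRICE_EUR = 8.0  # assumed market price per tonne of CO2

def annual_value_eur(area_ha, rate_tco2, price_eur):
    """Annual uptake (area x rate) priced on the carbon market."""
    return area_ha * rate_tco2 * price_eur

total = sum(
    annual_value_eur(e["area_ha"], e["tco2_per_ha_yr"], CARBON_PRICE_EUR)
    for e in ecosystems.values()
)
print(f"Estimated carbon-capture value: {total:,.0f} EUR per year")
```

Sensitivity to the carbon price falls straight out of the same arithmetic, which is one reason carbon-market prices matter so much to conservation incentives.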


Models such as the TEV model pioneered by Pendleton and colleagues are pivotal in global conservation efforts and necessary to help bridge the gap between science and economics. These models can also be adapted to show how much money a country could lose by destroying an ecosystem, conveying a powerful message to policymakers who may otherwise neglect coastal ecosystems.

As climate change tightens its grip, dealing with excess carbon and quelling the global effects are increasingly important. Developing a way to give economic incentive for preserving coastal ecosystems will not only help conservation, but will also help the scientific community address climate change.


Harley, Christopher D.G., et al. "The impacts of climate change in coastal marine systems." Ecology Letters 9.2 (2006): 228-241.

IUCN, TNC, World Bank. How Much is an Ecosystem Worth? Assessing the Economic Value of Conservation. Washington, DC; 2004.

Murray BC, Pendleton L, Jenkins WA, Sifleet S. Green Payments for Blue Carbon Economic Incentives for Protecting Threatened Coastal Habitats. Durham, NC: Nicholas Institute for Environmental Policy Solutions; 2011.

Nellemann C, Corcoran E, Duarte C, Valdés L, De Young C, Fonseca L, et al. Blue Carbon. A Rapid Response Assessment. GRID-Arendal: United Nations Environment Programme; 2009.

Pendleton L, Donato DC, Murray BC, Crooks S, Jenkins WA, Sifleet S, et al. Estimating Global “Blue Carbon” Emissions from Conversion and Degradation of Vegetated Coastal Ecosystems. PLOS ONE 2012; 7(9): e43542. doi: 10.1371/journal.pone.0043542 PMID: 22962585

Welch, Craig. “Sea Change: Lucrative crab industry in Danger.” Seattle Times, September 11, 2013.

Zarate-Barrera TG, Maldonado JH (2015) Valuing Blue Carbon: Carbon Sequestration Benefits Provided by the Marine Protected Areas in Colombia. PLOS ONE 10(5): e0126627. doi:10.1371/journal.pone.0126627


Legionnaires' in 2015: Cutting Edge Research Clashing with Public Health Unpreparedness

By Meredith Wright

In 1976, the American Legion, a veterans group still active today, met in Philadelphia, PA for a three-day convention. Shortly after the convention ended, many of the Legionnaires became ill. By the end of the outbreak, 182 people had contracted pneumonia and 29 had died. Identification of the causative agent introduced the world to Legionella pneumophila, an intracellular bacterium that infects freshwater protozoa and causes bacterial pneumonia in humans. Almost 40 years later, New York City has just experienced its worst outbreak of the disease, which started on July 12, 2015, in the Bronx and grew to 124 reported cases, with 12 fatalities. While the outbreak is now considered over, questions remain about how a bacterium known to clinicians and public health officials since 1976 can cause such a large outbreak today. What do we know about L. pneumophila, and what is being done to prevent future outbreaks?

Image courtesy of NYC Department of Health and Mental Hygiene

The Basics

L. pneumophila is an intracellular bacterium that naturally resides in warm water, where it is adapted to live inside protozoa, such as amoebae, and forms biofilms. Humans become infected by breathing in vapors or mist from contaminated water sources; it is not spread from human to human. Further, this bacterium is considered an opportunistic pathogen, meaning that it generally infects the elderly, those with chronic lung problems, or those who are otherwise immunocompromised.

In humans, L. pneumophila infects alveolar macrophages, a cell type important for degrading pathogens and other foreign particles that enter the lungs. Macrophages perform this vital task mainly through a process called phagosome-lysosome fusion, which involves the uptake of a foreign body, its packaging into a compartment called the phagosome, and the maturation of the phagosome into the lysosome. The lysosome's interior has harsher conditions than the phagosome and should degrade its contents (see Fig. 1b for a basic schematic of phagosome-lysosome fusion). Many pathogens have evolved methods for counteracting this human defense mechanism, including L. pneumophila. The Legionnaires' disease bacterium co-opts phagosome-lysosome fusion to create a compartment that shelters the bacterium, known as the Legionella Containing Vacuole (LCV), depicted in Fig. 1a. L. pneumophila also impacts human macrophages through lipid remodeling, autophagy, and changes to the ubiquitin pathway. Together, these processes allow the bacterium to replicate at the cost of normal macrophage function, producing the pneumonia symptoms patients experience.


Fig. 1: Legionella and phagosome-lysosome fusion.
A: Basic schematic of the formation of the Legionella Containing Vacuole (LCV).
B: Standard schematic of effective phagosome-lysosome fusion to contain a bacterium. From Isberg et al, 2009.

The Latest Research

The molecular details of how L. pneumophila redirects the host cell for its nefarious purpose of building the LCV are not completely understood, but this is an area of active research. For example, recent work has identified Lpg0393, a previously unknown protein produced by the bacterium, which activates Rab proteins, a family of proteins that play a critical role in phagosome-lysosome fusion. Additionally, recent structural studies of SidC, a known L. pneumophila protein, have explained in better detail how this bacterial protein recruits the host proteins needed to sustain the LCV, and they lay the groundwork for using proteins from this bacterium as tools in other research. Perhaps most relevant for public health officials, it has recently been shown that not all decontamination techniques are created equal. A study published in PLOS ONE in August 2015 shows that suboptimal temperatures and chlorine concentrations reduce the benefits of treatments currently used to decontaminate water, and that these suboptimal conditions are typically found in the parts of water systems most likely to come into contact with human users.

Sources of Outbreaks

According to the CDC, common sources of the bacteria include hot tubs, cooling towers, hot water tanks, plumbing systems, and fountains. In the 1976 outbreak that gave L. pneumophila its name, the Legionnaires who became ill were all staying in the same hotel; while a precise source was never confirmed, it seems likely the bacteria came from contaminated water tanks connected to the air conditioning system, as the only hotel employee to become sick was an air conditioning repairman. According to testimony from Dr. Mary T. Bassett, Commissioner of the New York City Department of Health and Mental Hygiene, a total of 18 cooling towers tested positive for L. pneumophila in the South Bronx, an impoverished part of New York City. Cooling towers sit atop large buildings and are important for maintaining large air conditioning units. They had not previously been routinely tested for L. pneumophila in New York City, but this outbreak spurred a Commissioner's Order requiring all building owners to disinfect their cooling towers or provide evidence of prior disinfection. The city has also introduced legislation mandating regular testing of New York City cooling towers for the bacterium. The legislation, signed into law on August 18th, aims to prevent and prepare for future outbreaks in a number of ways, with a heavy focus on the cooling towers that harbored the bacteria behind this outbreak. In particular, a registry of cooling towers will be created, and these towers will be inspected quarterly using guidelines issued by the New York City Health Department. According to NYC Mayor Bill de Blasio, this legislation is the first of its kind in any large city. Mandated testing will likely become more important over time, as an aging population creates a larger pool of susceptible individuals and warmer temperatures in big cities demand more large cooling towers.

Looking Ahead

It is unclear why these preventative testing measures were not already in place in New York City; there have been attempts to institute regular testing in the past, but they were unsuccessful. This outbreak hearkens back to the early days of the ongoing Ebola outbreak in West Africa, where officials scrambled to contain the disease after it was already well underway. But unlike Ebola, L. pneumophila is relatively well understood by the scientific community, and there are simple measures that can prevent its growth in urban water sources. In New York City's defense, officials have been making efforts to educate the public about Legionnaires' disease, holding a town hall meeting in the Bronx community on August 3rd and taking to social media to spread information (follow the New York City Health Department on Twitter at @nycHealthy).


Hopefully, this outbreak can serve as an example for other municipalities that systems for preventing and containing infectious disease are an essential part of a city’s infrastructure. As research uncovers more details about how L. pneumophila and other infectious diseases live and spread, these findings must be incorporated into continuously-updated public health policies.


1. Fraser DW, Tsai TR, Orenstein W, Parkin WE, Beecham HJ, Sharrar RG, Harris J, Mallison GF, Martin SM, McDade JE, et al. Legionnaires' Disease: Description of an Epidemic of Pneumonia. N. Engl. J. Med. 297, 1189–1197 (1977).

2. NYC Mayor: Legionnaires' Outbreak Has Claimed 12 Lives – ABC News.

3. Flannagan, R. S., Cosío, G. & Grinstein, S. Antimicrobial mechanisms of phagocytes and bacterial evasion strategies. Nat. Rev. Microbiol. 7, 355–366 (2009).

4. Newton, H. J., Ang, D. K. Y., Van Driel, I. R. & Hartland, E. L. Molecular pathogenesis of infections caused by Legionella pneumophila. Clin. Microbiol. Rev. 23, 274–298 (2010).

5. Hubber, A. & Roy, C. R. Modulation of host cell function by Legionella pneumophila type IV effectors. Annu. Rev. Cell Dev. Biol. 26, 261–283 (2010).

6. Sohn, Y.-S. et al. Lpg0393 of Legionella pneumophila Is a Guanine-Nucleotide Exchange Factor for Rab5, Rab21 and Rab22. PLoS One 10, e0118683 (2015).

7. Luo, X. et al. Structure of the Legionella Virulence Factor, SidC Reveals a Unique PI(4)P-Specific Binding Domain Essential for Its Targeting to the Bacterial Phagosome. PLOS Pathog. 11, e1004965 (2015).

8. Isberg, R. R., O’Connor, T. J. & Heidtman, M. The Legionella pneumophila replication vacuole: making a cosy niche inside host cells. Nat. Rev. Microbiol. 7, 13–24 (2009).

9. Cervero-Aragó, S., Rodríguez-Martínez, S., Puertas-Bennasar, A. & Araujo, R. M. Effect of Common Drinking Water Disinfectants, Chlorine and Heat, on Free Legionella and Amoebae-Associated Legionella. PLoS One 10, e0134726 (2015).

10. NYC Dept. of Health and Mental Hygiene, Testimony of Mary T. Bassett regarding Cooling Towers Registration, and Inspection and Testing for Microbes and Preconsidered Intro. (2015).

11. Legionnaires’ death toll swells to 12; bacteria found in two more cooling towers – New York’s PIX11 – WPIX-TV.

12. NYC Dept. of Health and Mental Hygiene. Testimony of Mary T. Bassett regarding Cooling Towers Registration, and Inspection and Testing for Microbes. 1–5 (2015).

13. A Belated Look at New York’s Cooling Towers, Prime Suspect in Legionnaires’ Outbreak – The New York Times.

14. De Blasio Signs Cooling Tower Regulations Into Law In Wake Of Legionnaires’ Outbreak – CBS New York.

15. Mayor Bill de Blasio Signs First-In-Nation Legislation for Cooling Tower Maintenance.


In “My Virtual Dream”, art and science unite in unique study of neurofeedback

In 2013, art and science merged like never before at Toronto's Nuit Blanche art festival, where guests were given the opportunity to participate in a scientific experiment investigating neurofeedback. Following the initial success of the "My Virtual Dream" project, plans are being made to scale up the experiment as scientists take the project on a world tour. In 2015, the My Virtual Dream world tour will kick off in Amsterdam and travel to San Francisco. This blog post examines the original 2013 experiment more closely and highlights some potential concerns around this innovative new approach.

On October 5, 2013, visitors entered a large-scale art installation and participated in neurofeedback, a process in which participants see their brain activity in real time and, based on the reading, modulate their behavior. The event not only introduced people to the power of EEG headsets, but also demonstrated that neurofeedback can affect how people learn within one minute of initial interaction. EEG recordings are usually taken in a lab, an environment that does not mimic the complexity we encounter every day, but "My Virtual Dream" proved that the collection of physiological data need not be limited to a researcher's workshop. Yet, as technology becomes more sophisticated and mobile, scientists will need to keep considering the ethical implications of recording physiological data from the general public.

The “My Virtual Dream” Experience  

Over the course of 12 hours, electroencephalography (EEG) data, a measure of the brain's electrical activity, were recorded and analyzed from a total of 523 adults. Groups of people were brought into a 60-foot geodesic dome to participate in a two-part interactive experience. The dreaming portion of the evening involved projecting animated video clips with prerecorded music over the surface of the dome; there were four dream themes with different audio and video pairings, and the environment changed based on the EEG recordings from the participants. Only the gaming experience was reported in the 2015 PLOS ONE paper entitled "'My Virtual Dream': Collective Neurofeedback in an Immersive Art Environment". This blog post recaps the event based on that paper; I did not attend in person.

Volunteers were given a lesson about the EEG technology and the overall tenets of the game; once inside the dome, they were divided into groups of five and hooked up to wireless Muse EEG headsets. Each group was positioned in front of an LCD screen that displayed the five brainwaves. Electrical activity of the brain can be measured and displayed as a wave-like signal, and for this part of the experiment, participants saw the signals that corresponded to their own electrical activity (Figure 1, panel A). For the remainder of the experiment, brain activity was displayed as an orb or firework (Figure 1, panels B-D). The game was divided into solo and collective neurofeedback and measured alpha and beta bands throughout. Alpha bands (8-12 Hz) are generally associated with wakefulness and relaxation, while beta bands (18-30 Hz) correlate with periods of active, busy thinking.

After a tutorial, which recorded individual baseline thresholds for the alpha and beta bands to be used later in the evening, volunteers moved on to the game. The first two games were individual and lasted 50 seconds each. Volunteers were instructed to relax, and the visual feedback consisted of gathering particles into an orb (Figure 1, panel B): when participants were relaxed and their alpha power rose above baseline, particles were added to the center of the glowing orb. For the concentration period, the same orb acted as the visual feedback (Figure 1, panel C); when beta power, or concentration level, surpassed the threshold set during the tutorial, the orb glowed and brightened. To finish the cycle of relaxation and concentration, the volunteers watched their individual orbs explode like fireworks, with the height of the firework corresponding to the alpha achievement and the brightness to the beta achievement.

For the group portion of the session, each individual again had their own orb, but the orbs were arranged in a circle surrounding a group orb (Figure 1, panel D). The goal of the game was the same: participants were visually rewarded by gathering particles into their orb during the relaxation period and brightening it during the concentration period. The difference was that this time, the orb in the middle grew every time three participants were in the appropriate target state, either relaxation or concentration, depending on what the game was asking for. After the game, the group orb turned into a firework show that displayed the performance of the group as a whole.

The final part of the game mimicked the previous group experience, except that there were no explicit instructions. The members of the group were told to attempt to synchronize with one another, and the middle orb again grew if three members of the group were in the same target state. It did not matter whether those three members were relaxing or concentrating, as long as they were synchronous.


Figure 1: Screenshots of (A) the initial training period and welcome message, where brain activity was displayed as a signal; (B) the solo game during the relaxation period; (C) the solo game during the concentration period; (D) the group game, where the giant orb in the middle represents the participants' collective brain states. (Photo courtesy of Kovacevic et al.)
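Stripped to its essentials, the feedback loop estimates band power over a short window of EEG samples and compares it to the baseline recorded during the tutorial. The sketch below is a minimal, hypothetical reconstruction of that logic in Python, assuming a generic 256 Hz EEG stream; it is not the pipeline the "My Virtual Dream" team actually used, and the Muse headset interface is omitted.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz, typical for consumer EEG headsets

def band_power(samples: np.ndarray, low: float, high: float) -> float:
    """Integrate the Welch power spectral density over one frequency band."""
    freqs, psd = welch(samples, fs=FS, nperseg=FS)
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])

def feedback(samples, alpha_baseline, beta_baseline):
    """Check which target states exceed their tutorial-recorded baselines."""
    alpha = band_power(samples, 8, 12)   # relaxation band (8-12 Hz)
    beta = band_power(samples, 18, 30)   # concentration band (18-30 Hz)
    return {"relaxed": alpha > alpha_baseline,
            "concentrating": beta > beta_baseline}

# One second of synthetic data standing in for a headset stream:
# a strong 10 Hz rhythm should trip the relaxation target only.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(FS)
print(feedback(eeg, alpha_baseline=0.05, beta_baseline=0.05))
```

In the group games, the same check would simply run for each of the five headsets, with the central orb growing whenever at least three participants exceeded their thresholds at once.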


EEG headsets are playing a more prominent role both in the lab and as consumer devices; examples include the Muse headset, the Kickstarter-funded Melon headset, and the Emotiv headset. EEG headsets are non-invasive, allow for mobility, and generally require little preparation time. For both consumers and researchers studying EEG activity, it would be ideal if people could learn to modulate brain activity via EEG headsets as quickly as possible. Based on running the games with smaller samples (fewer than 10 people), the researchers had hypothesized that subtle but detectable changes in brain activity would be observed during the neurofeedback training period. After analyzing the data, they reported that participants learned to modulate their relative spectral power very quickly: within only 60 seconds for relaxation and 80 seconds for concentration. This was clear because there were gradual declines during the alpha period and gradual increases while beta activity was being monitored. The entire gaming session was only seven minutes long, but even within that short window these changes in brain activity could be observed. The ability to draw this conclusion relies partly on the large sample size of more than 500 people.

Lab environments often do not mimic the complex experiences we encounter every day. Although "My Virtual Dream" was still a research experiment, the environment inside the geodesic dome was completely unlike the muted, potentially bland, and less personable appearance of a lab. While we may not encounter geodesic domes as part of an average day, the experience is at least more complex and stimulating than a typical lab environment, and this could be one reason the researchers chose this setting. "My Virtual Dream" functioned as a proof of concept that EEG headsets could feasibly be used to monitor brain activity in the environments we typically inhabit: our homes, social situations, and workplaces.

Crowdsourcing for Neuroscience  

As technology has become more sophisticated, crowd-sourcing for neuroscience research has become more common. A number of programs and apps, including Lumosity and The Great Brain Experiment, rely on crowd-sourcing, and both, despite approaching data collection in an unconventional way, have published papers in peer-reviewed journals. The game Eyewire is a citizen science project that lets any user help map 3D neurons in the brain, and websites like Kickstarter are helping non-scientists play a larger role in funding, and often participating in, science. As exciting as these new methods of data collection are, the collection of brainwaves and physiological data could quickly become complicated. Notably, the "My Virtual Dream" experiment tracked only the alpha and beta bands, and the data were de-identified. Volunteers were also given the option to remove their EEG recordings from the analysis, but only four participants did so. The event was organized by Baycrest, an academic center affiliated with the University of Toronto, and was in compliance with its Research Ethics Board. Tracking physiological data over a large sample could become ethically problematic if we ever reach a point where brain signals can serve as a signature for a neurological disorder. If EEG data could act as a biomarker for a disease state or even behavioral tendencies, the consequences for individuals could be severe.

Efforts have been made to protect genetic information under the Genetic Information Nondiscrimination Act (GINA), which states that genetic information cannot be used against someone for insurance or employment purposes, but these protections do not yet exist for neurological data. In my opinion, research could become ethically fraught if EEG data suddenly became a clue to a person's capacities or neurological state. As it stands now, the insight gained from EEG research most likely outweighs any privacy concerns, but as technology grows more sophisticated, researchers should continue to consider the implications of collecting physiological data from the masses.

Plans are being made to take “My Virtual Dream” on a world tour beginning with Amsterdam, followed by Washington, D.C. and San Francisco, CA. Toronto’s Nuit Blanche art festival will again take place in 2015, and although the specific events have not been announced yet, more than 45 novel, interactive, and engaging art installations will be displayed.


Seeing like a computer: What neuroscience can learn from computer science

Screenshot from the original "Metropolis" trailer. Image credit: Paramount Pictures, 1927. Public domain.

By Marta Kryven

What do computers and brains have in common? Computers are made to solve the same problems that brains solve. Computers, however, rely on drastically different hardware, which makes them good at different kinds of problem solving. For example, computers do much better than brains at chess, while brains do much better than computers at object recognition. A study published in PLOS ONE found that even bumblebee brains are amazingly good at selecting visual images with the color, symmetry, and spatial frequency properties suggestive of flowers. Despite their differences, computer science and neuroscience often inform each other.

In this post, I will explain how computer scientists and neuroscientists learn from each other's research and review current applications of neuroscience-inspired computing.

Brains are good at object recognition

Can you see a figure among the dots? Image credit: Barbara Nordhjem, reprinted with permission.

Look at the figure (image 2) composed of seemingly random dots. Initially the object in the image is blended with the background. Within seconds, however, you will see a curved line, and soon after the full figure. What is it? (The spoiler is at the end of this post.)

Computer vision algorithms, computer programs designed to identify objects in images and video, fail to recognize images like image 2. As for humans, it turns out that although people take time to recognize a hidden object, they start looking at the right location in the image very early, long before they are able to identify what they see. That does not mean that people somehow know which object they are seeing but cannot report the correct answer; rather, people experience a sudden perception of the object after continued observation. A 2014 study published in PLOS ONE found that this may be because the human visual system is amazingly good at picking out statistical irregularities in images.

Neuroscience-inspired computer applications

Like any experimental science, neuroscience starts with listing all possible hypotheses to explain the results of experiments. Researchers probe questions such as "How can a magician hide movement in plain sight?" or "What causes optical illusions?"

Computer science researchers, by comparison, begin by listing alternative methods to implement a behavior, such as vision, in a computer. Instead of discovering how vision works, computer scientists develop software to solve problems such as "How should a self-driving car respond to an apparent obstacle?" or "Are these two photographs of the same person?"

An acceptable computer-vision solution for artificial intelligence (AI), just like a living organism, must process information quickly and with limited knowledge. For example, a slow reaction time might mean a self-driving car kills a child, or stops traffic because of a pothole. The processing speed of current computer vision algorithms is far behind that of the visual processes humans employ daily. Nevertheless, the technical solutions that computer scientists develop may have relevance to neuroscience, sometimes acting as a source of hypotheses about how biological vision might actually work.

Likewise, most AI, such as computer vision, speech recognition, or robotic navigation, addresses problems already solved in biology. Thus, computer scientists often face a choice between inventing a new way for a computer to see and modeling it on a biological approach. A solution that is biologically plausible has the advantage of being resource-efficient and tested by evolution. Probably the best-known example of a biomimetic technology is Velcro, an artificial fabric recreating an attachment mechanism used by plants. Biomimetic computing, that is, recreating functions of biological brains in software, is just as ingenious, but much less well known outside the specialized community.

This interplay between neuroscience and computer science compelled me to explore how the two fields learn from each other. After visiting the International Conference on Perceptual Organization (ICPO) in June 2015, I made a list of trends in neuroscience-inspired computer applications that I will explore in more detail in this post:

1. Computer vision based on features of early vision
2. Gestalt-based image segmentation (Levinshtein, Sminchisescu, Dickinson, 2012)
3. Shape from shading and highlights — which is described in more detail in a recent PLOS Student Blog post
4. Foveated displays (Jacobs et al. 2015)
5. Perceptually-plausible formal shape representations

My favorite example of this interplay is computer vision based on features of early vision. (Note that there are also many other approaches to computer vision that are not biologically inspired.) I particularly like this example because here the discovery in neuroscience of seemingly simple principles of how the visual cortex processes information informed a whole new trend in computer science research. To explain how computer vision borrows from biology, let's begin by reviewing the basics of human vision.

Inside the visual cortex

Let me present a hypothetical situation. Suppose you are walking along a beach with palm trees, tropical flowers and brightly colored birds. As new objects enter your field of vision, they seem to enter your awareness instantly. But in reality, shape perception emerges in the first 80-150 milliseconds of exposure (Wagemans, 2015). So how long is 150 ms? For comparison, in 150 ms a car travels four meters at highway speed, and a human walking along a beach travels about 20 centimeters in the time it takes to form a mental representation of an object, such as a tree. Thus, as you observe the palm trees, the flowers and the birds, your brain gradually assembles familiar percepts. During the first 80-150 ms, before awareness of the object has emerged, your brain is hard at work assembling shapes from short and long edges in various orientations, which are coded by location-specific neurons in primary visual area, V1.

Today, we know a lot about the primary visual area V1 thanks to the pioneering research of Hubel and Wiesel, who discovered scale- and orientation-specific neurons in cat visual cortex in the late 1950s and later shared a Nobel Prize for this work. As an aside, if you have not yet seen the original videos of their experiments demonstrating how a neuron in a cat's visual cortex responds to a bar of light, I highly recommend viewing these classics!

Inside the computer

Approximation of a square wave with four Fourier components. Image credit: Jim.belk, Public Domain via Wikimedia Commons

Approximation of a square wave with four Fourier components. Image credit: Jim.belk, Public Domain via Wikimedia Commons

At about the time that Hubel and Wiesel made their breakthrough, mathematicians were looking for new tools for signal processing, to separate data from noise in a signal. For a mathematician, a signal may be a voice recording encoding a change of frequency over time. It may also be an image, encoding a change of pixel brightness over two dimensions of space.

When the signal is an image, signal processing is called image processing. Scientists care about image processing because it enables a computer "to see" a clear percept while ignoring noise from the sensors and the environment, which is exactly what the brain does!

The classic tools of signal processing, Fourier transforms, were introduced by Joseph Fourier in the nineteenth century. A Fourier transform represents data as a weighted sum of sines and cosines; it can represent the sound of your voice, for example, as a sum of single-frequency components. As illustrated in the figure above, the more frequency components are used, the better the approximation. Unfortunately, unlike brain encodings, Fourier transforms do not explicitly encode the edges that define objects.
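As a worked example of the figure above: the Fourier series of a square wave contains only odd sine harmonics, f(x) = (4/π) [sin(x) + sin(3x)/3 + sin(5x)/5 + ...], and summing just the first few terms already yields a recognizable approximation. A short sketch:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Sum the first n_terms odd sine harmonics of a unit square wave:
    f(x) = (4/pi) * sum over odd k of sin(k*x)/k."""
    total = np.zeros_like(x)
    for i in range(n_terms):
        k = 2 * i + 1  # odd harmonics only
        total += np.sin(k * x) / k
    return 4 / np.pi * total

x = np.linspace(0, 2 * np.pi, 1000)
approx = square_wave_partial_sum(x, n_terms=4)  # four components, as in the figure
ideal = np.sign(np.sin(x))  # +1 on (0, pi), -1 on (pi, 2*pi)
print("max absolute error:", np.abs(approx - ideal).max())  # largest near the jumps
```

Notice where the error concentrates: at the jumps. The sharp edges are exactly what a truncated sum of smooth sinusoids represents worst, because the edge information is smeared across many frequency components instead of being encoded explicitly.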

To solve this problem, scientists experimented with sets of alternative basis functions to encode images for specific applications. Square waves, for example, encode the low-resolution previews downloaded before a full-resolution image transfer completes. Wavelet transforms of other shapes are used for compressing images, detecting edges, and filtering out lens scratches captured on camera.

What do computers see?

It turns out that a carefully selected set of image transforms can model the scale- and orientation-specific neurons of primary visual area V1. The procedure can be visualized as follows. First, we process the image with progressively lower spatial-frequency filters. The result is a pyramid of image layers, equivalent to seeing the image from further and further away. Then each layer is filtered at several edge orientations in turn. The result is a computational model of the initial stage of early visual processing, which assumes that the useful data (the signal in the image) are edges within a frequency interval. Of course, such a model represents only a tiny aspect of biological vision. Nevertheless, it is a first step towards modeling more complex features, and it answers an important theoretical question: if a brain could only see simple edges, how much would it see?
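A toy version of this pyramid-plus-orientations idea can be written with standard tools. The sketch below uses hand-rolled Gabor filters (a standard model of V1 simple-cell receptive fields) over a coarser-and-coarser image pyramid; it is a generic illustration of the approach, not an implementation of any particular published model, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.ndimage import convolve, zoom

def gabor_kernel(theta, freq, sigma=3.0, size=15):
    """Oriented Gabor filter: a sinusoid at angle theta under a Gaussian
    envelope, a standard model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * x_rot)

def v1_like_responses(image, n_scales=3, n_orients=4):
    """Filter an image pyramid at several orientations per scale.
    Returns a dict mapping (scale, orientation index) to a response map."""
    responses = {}
    layer = image.astype(float)
    for scale in range(n_scales):
        for i in range(n_orients):
            theta = i * np.pi / n_orients
            responses[(scale, i)] = convolve(layer, gabor_kernel(theta, freq=0.25))
        layer = zoom(layer, 0.5)  # next pyramid level: lower spatial frequency
    return responses

# Example: a synthetic image with a single vertical edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
maps = v1_like_responses(img)
print(len(maps), "response maps")  # 3 scales x 4 orientations = 12
```

The vertical edge in the example lights up the vertically tuned maps far more than the others, which is exactly the kind of selective response Hubel and Wiesel recorded from single neurons.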

To sample a few applications, our computational brain could tell:

1. Whether a photograph is taken indoors or outdoors (Guérin-Dugué & Oliva, 2000)
2. Whether the material is glossy, matte or textured
3. Whether a painting is a forgery, by recognizing an individual artist's brushstrokes.

Moreover, a computer brain can do something that a real brain cannot: it can analyze a three-dimensional signal, a video. You can think of video frames as slices perpendicular to time in a three-dimensional space-time volume. A computer interprets moving bright and dark patches in the space-time volume as edges in three dimensions.

Using this technique, MIT researchers discovered and amplified imperceptible motions and color changes captured by a video camera, making them visible to a human observer. The so-called motion microscope reveals changes as subtle as a face changing color with each heartbeat, a baby's breath, and a crane swaying in the wind. Probably the most striking demonstration, presented at ICPO 2015 last month, showed a pipe vibrating into different shapes when struck by a hammer. Visit the project webpage for demos and technical details.
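For the curious, the core trick can be sketched in a few lines: treat each pixel's intensity as a time series, bandpass it around the motion frequency of interest, amplify, and add it back. This is a drastic simplification of the Eulerian video magnification method of Wu and colleagues, which also decomposes each frame into a spatial pyramid; the parameters below are arbitrary.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(video, fs, low, high, alpha):
    """Eulerian-style magnification sketch: bandpass each pixel's intensity
    over time (axis 0) and add the amplified variation back in."""
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    variation = filtfilt(b, a, video, axis=0)  # temporal filtering per pixel
    return video + alpha * variation

# Synthetic clip: a faint 1 Hz flicker, nearly invisible at this amplitude.
fs = 30.0  # frames per second
t = np.arange(90) / fs
flicker = 0.001 * np.sin(2 * np.pi * 1.0 * t)
video = 0.5 + flicker[:, None, None] * np.ones((90, 4, 4))

out = magnify(video, fs, low=0.5, high=2.0, alpha=50)
print(video[:, 0, 0].std(), "->", out[:, 0, 0].std())  # variation amplified ~50x
```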

So, how far are computer scientists from modeling the brain? In 1950, early AI researchers expected computing technology to pass the Turing test by 2000. Today, computers are still used as tools for solving technically specific problems; a computer program can behave like a human only to the extent that human behavior is understood. The motivation for computer models based on biology, however, is twofold.

First, both computer scientists and computer users are much more likely to accept the output of a computer program as valid if its decisions are based on biologically plausible steps; a computer AI recognizing objects using the same rules as humans will likely see the same categories and come to the same conclusions. Second, computer applications are a test bed for neuroscience hypotheses. A computer implementation can tell us not only whether a particular theoretical model is feasible; it may also, unexpectedly, reveal alternative ways in which evolution could have worked.

Answer to the image 2 riddle: a rabbit.

Daubechies, Ingrid. Ten lectures on wavelets. Vol. 61. Philadelphia: Society for industrial and applied mathematics, 1992.

Elder, James H., et al. “On growth and formlets: Sparse multi-scale coding of planar shape.” Image and Vision Computing 31.1 (2013): 1-13.

Freeman, William T., and Edward H. Adelson. “The design and use of steerable filters.” IEEE Transactions on Pattern Analysis & Machine Intelligence 9 (1991): 891-906.

Gerhard HE, Wichmann FA, Bethge M (2013) How Sensitive Is the Human Visual System to the Local Statistics of Natural Images? PLoS Comput Biol 9(1): e1002873.

Guérin-Dugué, Anne, and Aude Oliva. “Classification of scene photographs from local orientations features.” Pattern Recognition Letters 21.13 (2000): 1135-1140.

Kryven, Marta, and William Cowan. “Why Magic Works? Attentional Blink With Moving Stimuli” Proceedings of International Conference on Perceptual Organization, York University Centre for Vision Research , 2015.

Levinshtein, Alex, Cristian Sminchisescu, and Sven Dickinson. “Optimal image and video closure by superpixel grouping.” International journal of computer vision 100.1 (2012): 99-119.

Lyu, Siwei, Daniel Rockmore, and Hany Farid. “A digital technique for art authentication.” Proceedings of the National Academy of Sciences of the United States of America 101.49 (2004): 17006-17010.

Murata T, Hamada T, Shimokawa T, Tanifuji M, Yanagida T (2014) “Stochastic Process Underlying Emergent Recognition of Visual Objects Hidden in Degraded Images.” PLoS ONE 9(12): e115658.

Nordhjem, Barbara, et al. “Eyes on emergence: Fast detection yet slow recognition of emerging images.” Journal of vision 15.9 (2015): 8-8.

Nordhjem B, Kurman Petrozzelli CI, Gravel N, Renken R, Cornelissen FW (2014) “Systematic eye movements during recognition of emerging images.” J Vis 14:1293–1293.

Orbán LL, Chartier S (2015) “Unsupervised Neural Network Quantifies the Cost of Visual Information Processing”. PLoS ONE 10(7): e0132218.

Said CP, Heeger DJ (2013) "A Model of Binocular Rivalry and Cross-orientation Suppression." PLoS Comput Biol 9(3): e1002991.

Tabei K-i, Satoh M, Kida H, Kizaki M, Sakuma H, Sakuma H, et al. (2015) “Involvement of the Extrageniculate System in the Perception of Optical Illusions: A Functional Magnetic Resonance Imaging Study.” PLoS ONE 10(6): e0128750.

Vandenbroucke ARE, Sligte IG, Fahrenfort JJ, Ambroziak KB, Lamme VAF (2012) “Non-Attended Representations are Perceptual Rather than Unconscious in Nature.” PLoS ONE 7(11): e50042.

Johan Wagemans, “Perceptual organization at object boundaries: More than meets the edge” Proceedings of International Conference on Perceptual Organization (2015)

Wu, Hao-Yu, et al. “Eulerian video magnification for revealing subtle changes in the world.” ACM Trans. Graph. 31.4 (2012): 65.
