Walk the walk, talk the talk: Implications of dual-tasking on dementia research

A group of friends chat as they walk in St. Malo, France. Photo courtesy of Antoine K (Flickr).

By Ríona Mc Ardle

You turn the street corner and bump into an old friend. After the initial greetings and exclamations of “It’s so good to see you!” and “Has it been that long?”, your friend inquires as to where you are going. You wave your hands to indicate the direction of your desired location, and with a jaunty grin, your friend announces that they too are headed that way. And so, you both set off, exchanging news of significant others and perceived job performance, with the sound of your excited voices trailing behind you.

This scenario seems relatively normal, and indeed it plays out for hundreds of people in everyday life. But have you ever truly marvelled at our ability to walk and talk at the same time? Most people discount gait as an automatic function, sparing no thought for the higher cognitive processes we engage in order to place one foot in front of the next. The act of walking demands substantial attentional and executive function resources, and it is the sheer amount of practice we accumulate throughout our lives that makes it feel like such a mindless activity. Talking while walking recruits even more resources, because attention must be split between the tasks so that neither is emphasized to the detriment of the other. And it’s not just talking! We can take on all manner of activities while walking – carrying trays, waving our hands, checking ourselves out in the nearest shop window. But what costs does such multitasking impose on us?

Dual-Tasking: How well can we really walk and talk?

Dual-tasking refers to the simultaneous execution of two tasks – usually a walking task and a secondary cognitive or motor task – and is commonly used to assess the relationship between attention and gait. Although there is no consensus yet on exactly how dual-tasking hinders gait, dual-task studies have demonstrated decreased walking speed and poorer performance on the secondary task. This is particularly evident in healthy older adults, who struggle more with the secondary task when prioritizing walking – likely in order to maintain balance and avoid falls. Such findings support the role of executive function and attention in gait, as both are implicated in frontal lobe function. Studies of age-related changes in the brain have demonstrated focal atrophy of the frontal cortex, with reductions of 10-17% observed in the over-65 age group. Compared to the reported 1% atrophy of the other lobes, these changes to the frontal lobes plausibly contribute to the gait disturbances experienced by the elderly: older adults must recruit much of their remaining attentional capacity just to maintain gait, leaving little with which to carry out other activities at the same time.
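
Researchers often summarize this interference as a “dual-task cost”: the percentage drop in performance from single-task to dual-task conditions. A minimal sketch in Python – the formula is a standard one in the gait literature, but the sample speeds below are invented purely for illustration:

```python
def dual_task_cost(single_task, dual_task):
    """Percentage decline in performance from single- to dual-task conditions.

    A positive cost means performance worsened under dual-tasking.
    """
    return (single_task - dual_task) / single_task * 100.0

# Hypothetical gait speeds (m/s) for a healthy older adult.
speed_walking_only = 1.20
speed_walking_while_talking = 0.96

cost = dual_task_cost(speed_walking_only, speed_walking_while_talking)
print(f"Dual-task cost on gait speed: {cost:.1f}%")  # 20.0%
```

The same formula applies symmetrically to the secondary task (e.g., correct subtraction responses per minute), which is how studies detect which task a participant is prioritizing.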

It has been incredibly difficult to confirm the frontal cortex’s role in gait from a neurological perspective. Imaging techniques have largely proven unsatisfactory for this purpose, as most require an unmoving subject lying in a fixed position. However, a recent study in PLOS ONE has shed some light on the area. Lu and colleagues (2015) employed functional near-infrared spectroscopy (fNIRS) to capture activation in the brain’s frontal regions: specifically, the prefrontal cortex (PFC), the premotor cortex (PMC) and the supplementary motor area (SMA). Although fNIRS obtains an image of the cortex similar to that of functional magnetic resonance imaging (fMRI), it can do so while the subject is in motion. The researchers laid out three investigatory aims:

  1. To assess whether declining gait performance was due to different forms of dual-task interference.
  2. To observe whether there were any differences in cortical activation in the PFC, PMC and SMA during dual-task versus normal walking.
  3. To evaluate the relationship between such activation and gait performance during dual-tasking.

The research team predicted that the PFC, PMC and SMA would be more active during dual-tasking due to the increased cognitive or motor demands on resources.

A waiter balances a tray of waters. Photo courtesy of Tom Wachtel (Flickr).

Lu and colleagues recruited 17 healthy young individuals. Each subject completed three conditions, three times each, with a resting condition in between: normal walking (NW), in which participants walked at their usual pace; walking while carrying out a cognitive task (WCT), a subtraction task; and walking while carrying out a motor task (WMT), carrying a bottle of water on a tray without spilling it. Both WCT and WMT induced slower walking than NW. Interestingly, WMT produced a higher number of steps per minute and a shorter stride time – possibly an intentional alteration of gait to avoid spilling the water. Analysis of the fNIRS data revealed that all three frontal regions were activated during the dual-task conditions. The PFC showed the strongest, most continuous activation during WCT; as the PFC is highly implicated in attention and executive function, its increased role in the cognitive dual-task condition seems reasonable. The SMA and PMC were most strongly activated during the early stages of WMT – again sensible, as both areas are associated with the planning and initiation of movement. Here, the researchers postulated that this activity may reflect a demand for bodily stability in order to carry out the motor task. The study thus demonstrates the frontal cortex’s role in maintaining gait, particularly when a secondary task is involved.
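
The gait measures reported in the study – speed, steps per minute, stride time – all derive from the timing of footfalls. A minimal sketch of how two of them could be computed, assuming invented, evenly spaced heel-strike times for one foot:

```python
def cadence_and_stride_time(heel_strikes):
    """Compute cadence (steps/min) and mean stride time (s) from
    heel-strike timestamps (in seconds) of ONE foot.

    One stride = two steps (left + right), so each interval between
    successive same-foot heel strikes contains two steps.
    """
    strides = [b - a for a, b in zip(heel_strikes, heel_strikes[1:])]
    mean_stride = sum(strides) / len(strides)
    steps_per_min = 2 * 60.0 / mean_stride  # two steps per stride
    return steps_per_min, mean_stride

# Hypothetical right-foot heel strikes during a walking trial.
times = [0.0, 1.0, 2.0, 3.0, 4.0]
cadence, stride = cadence_and_stride_time(times)
print(f"cadence = {cadence:.0f} steps/min, stride time = {stride:.2f} s")
```

A shorter stride time with a higher cadence, as in the WMT condition, corresponds to quicker, smaller steps – exactly the cautious gait you would adopt while carrying a full tray.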

Why is this important?

Although gait disturbances are extremely rare in young people, their prevalence increases with age: 30% of adults over 65 fall at least once a year, with the rate climbing to 50% in the over-85 population. Falls carry a high risk of critical injury in the elderly, and often occur during walking. This latest study provides evidence for the pivotal roles of executive function and attention in maintaining gait, and offers insight into the utility of dual-task studies. Because frontal cortex activation correlates with carrying out a secondary task while walking, poor performance on either task may reveal a subtle deficit in attention or executive function. Gait abnormalities may therefore act as a predictor of mild cognitive impairment (MCI) and even dementia. Studies have reported that slowing of gait may occur up to 12 years prior to the onset of MCI, with longitudinal investigations observing that gait irregularities significantly increased individuals’ likelihood of developing dementia 6-10 years later.

But what does the relationship between gait and cognition tell us about dementia? Dementia is one of the most prevalent diseases afflicting the human population, with 8% of over-65s and over 35% of over-85s suffering from it. It is incredibly important for researchers to strive to reduce the time individuals must live under the grip of such a crippling disorder. Our growing knowledge of gait and cognition can help in two ways: through early diagnosis of dementia and through developments in interventions.

For the former, a drive in research has begun to correlate different gait deficits with dementia subtypes. The principle is that the physical manifestation of a gait disturbance could give clinicians a clue to the lesion site in the brain. For example, if a patient asked to prioritize an executive function task while walking displays a significant gait impairment, this may predict vascular dementia, because executive function relies on frontal networks, which are highly susceptible to vascular risk. Studies have shown that this type of dual task often causes a decrease in velocity and stride length, changes associated with white matter disorders and stroke. As to the latter point, practice of either gait or cognition may benefit the other. Individuals who go for daily walks have a significantly reduced risk of dementia, an effect attributed to gait’s engagement of executive function and attention, which exercises the associated neural networks. Similarly, improving cognitive function may in turn help one maintain a normal gait: Verghese and colleagues (2010) demonstrated this in a promising study in which cognitive remediation improved sedentary older individuals’ mobility.

Closing Thoughts

As both a neuroscientist and a citizen of the world, one of my primary concerns is the welfare of the older generation. The aging population is growing significantly as life expectancy increases, and these individuals are susceptible to a range of medical issues. While any health problem is hard to see in our loved ones, dementia and failing cognition place a particularly heavy burden on the individual, their family, and society as a whole. Our exploration of the relationship between gait and cognition has so far offered a glimmer of hope in progressing our understanding of, and fight against, dementia, and I hope this intriguing area continues to do just that.


Beurskens, R., & Bock, O. (2012). Age-related deficits of dual-task walking: a review. Neural Plasticity, 2012.

Holtzer, R., Mahoney, J. R., Izzetoglu, M., Izzetoglu, K., Onaral, B., & Verghese, J. (2011). fNIRS study of walking and walking while talking in young and old individuals. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences, glr068.

Li, K. Z., Lindenberger, U., Freund, A. M., & Baltes, P. B. (2001). Walking while memorizing: age-related differences in compensatory behavior. Psychological Science, 12(3), 230-237.

Lu, C. F., Liu, Y. C., Yang, Y. R., Wu, Y. T., & Wang, R. Y. (2015). Maintaining gait performance by cortical activation during dual-task interference: A functional near-infrared spectroscopy study. PLoS ONE, 10(6), e0129390.

Montero-Odasso, M., & Hachinski, V. (2014). Preludes to brain failure: executive dysfunction and gait disturbances. Neurological Sciences, 35(4), 601-604.

Montero-Odasso, M., Verghese, J., Beauchet, O., & Hausdorff, J. M. (2012). Gait and cognition: a complementary approach to understanding brain function and the risk of falling. Journal of the American Geriatrics Society, 60(11), 2127-2136.

Sparrow, W. A., Bradshaw, E. J., Lamoureux, E., & Tirosh, O. (2002). Ageing effects on the attention demands of walking. Human Movement Science, 21(5), 961-972.

Verghese, J., Mahoney, J., Ambrose, A. F., Wang, C., & Holtzer, R. (2010). Effect of cognitive remediation on gait in sedentary seniors. The Journals of Gerontology Series A: Biological Sciences and Medical Sciences, 65(12), 1338-1343.

Yogev-Seligmann, G., Hausdorff, J. M., & Giladi, N. (2008). The role of executive function and attention in gait. Movement Disorders, 23(3), 329-342.


Highlights from the 2015 Meeting of the Vision Sciences Society

Enjoying the view from the VSS Conference in St. Pete Beach, Florida. Photo courtesy of Minjung Kim.

Going to conferences is one of my favorite aspects of being a scientist. As a PhD student, I spend a lot of my life in solitude: when I read new literature, program new experiments, or conduct new analyses, I am very much alone with my thoughts. Outside of scheduled meetings with my advisors, there is little reason to interact with other people. The loneliness and boredom that swallow up every graduate student – yes, it will happen to you, too – are an unfortunate, little-discussed aspect of the PhD. The isolation, together with the drive to succeed and the illusion that everyone else is doing better than you, can at times wreak havoc on mental health.
But there is a cure: talking to people! Of course, the number one confidants are lab mates and department friends (this is why a friendly, accepting lab culture matters). Another, perhaps less obvious, outlet for fun and relief is conferences. At conferences, you will:

1) Meet people who find your research novel and interesting
2) Learn about other people’s research
3) Discover that other people have experienced the same graduate school woes that you may be experiencing, and survived.

In short, being at a conference allows you to step back from the details of your own graduate research and think again about why it is you decided to become a scientist in the first place. It’s stimulating. In fact, I cured my ennui with this year’s meeting of the Vision Sciences Society (VSS).

What is VSS?
VSS is an annual conference dedicated to studying the function of the human visual system, and took place this year from May 15th to May 20th in idyllic St. Pete Beach, Florida. With 1400 attendees, VSS focused on all things visual perception, including 3-dimensional (3D) perception, attention, and methodologies spanning from psychophysics to neuroimaging to computational modeling.

This year’s VSS was filled with interesting panels, exhibits, and demos devoted to visual perception, including the viral phenomenon #theDress. I’ve highlighted in my round-up below some of the talks and posters that I found the most interesting.

The research round-up

Optimal camouflage under different lighting conditions
As a person who studies lighting and shading, I loved Penacchio, Lovell, Sanghera, Cuthill, Ruxton and Harris’s poster on how the effectiveness of countershading, a type of camouflage, varies with lighting condition. Countershading refers to shading patterns commonly found in aquatic animals, where the belly is coloured a brighter shade than the back: viewed from below, the animal’s bright belly is camouflaged against the bright light from the sun above, and viewed from above, the animal’s dark back is camouflaged against the darkness of the water.

But should the transition between back and belly be sharp or smooth? Penacchio et al.’s answer is that it depends on whether the lighting is sunny or cloudy. On a sunny day, shading is characterized by high contrast and sharp shadows; on a cloudy day, by low contrast and soft shadows. Penacchio et al. found that when the target’s countershading was matched to the lighting (e.g., a sharply countershaded target in a sunny scene), people had difficulty finding the target, whereas when the two were mismatched (e.g., a sharply countershaded target in a cloudy scene), people found it easily. Interestingly, birds behaved the same way as humans, suggesting that optimal camouflage works across different species.

I really enjoy ecologically based studies like this, because they help me understand how biological systems exploit constraints posed by the environment.

Can computers discriminate between glossy and matte surfaces?
Tamura and Nakauchi won the student poster prize for the Monday morning session for answering this question. Glossy materials are interesting, because unlike matte materials, they are characterized by white highlights. It’s an old painting trick: to make objects look glossy and shiny, add white highlights. However, the location of the highlights matters: if highlights are haphazardly placed on the surface without attention to surface geometry, they will look like streaks of paint (Anderson & Kim, 2009).

Tamura and Nakauchi examined whether a computer algorithm (a “classifier”) without a sophisticated understanding of scene geometry could nevertheless learn to discriminate between images of matte, glossy, and textured (painted) surfaces. It could. This does not mean that scene geometry is unimportant for glossiness perception, but rather that the image representation they used (Portilla & Simoncelli, 2000) conveys at least some information about surface geometry without explicitly encoding shape.

I think this study is an excellent example of combining knowledge in two different fields, in this case human vision and computer vision, to answer an interesting question.

The shrunken finger illusion
One of my favorite talks at VSS this year was the talk on the shrunken finger illusion by Ekroll, Sayim, van der Hallen and Wagemans, a novel illusion that demonstrates how something as basic as your knowledge of your finger length can be overridden by visual cues.

Drawing by Rebecca Chamberlain. Reproduced from Ekroll, V., Sayim, B., Van der Hallen, R., & Wagemans, J. (2015). The shrunken finger illusion: Unseen sights can make your finger feel shorter. Manuscript in revision. Copyright by Ekroll et al. (2015).

Ekroll et al. gave human observers hollow hemispherical shells (imagine a ping pong ball cut in half) to wear on their fingertips. Viewed from the top, the observers experienced two illusions: (1) they saw a full sphere, not a hemisphere, and (2) they felt that their fingers were shorter.

The explanation has to do with amodal completion, the mental “filling in” of object parts that are hidden behind another object. When my cat peeks out from behind a door, I know that her body has not been truncated in half (perish the thought!); my visual system knows – has a representation of – the rest of her body. Amazingly, humans are not born with amodal completion, acquiring the ability at around four to six months of age (Kellman & Spelke, 1983).

So, the observers amodally completed the hemisphere, seeing a sphere (Ekroll, Sayim & Wagemans, 2015). But they also knew that their fingers started behind the “sphere.” Their brains therefore “made room” for the back half of the sphere by assuming that the finger was shorter than usual.

This is a bizarre but interesting illusion that is consistent with previous work on the flexibility of body representations.

Demo Night and #theDress
The second night of VSS is demo night, where researchers and exhibitors share a new illusion, or new software, or anything fun that might not fit in the ordinary proceedings of the conference. At this conference, #theDress was a popular topic, with three demos dedicated to the viral sensation. I presented a demo of my own, as well, based on a project that I am working on with Dr. Richard Murray and Dr. Laurie Wilcox of York University Centre for Vision Research.

Let there be light
When people think of glowing objects, they typically assume that the object must be exceptionally bright. Our demo, an extension of my master’s thesis work, showed that that’s not true — for some types of glow, it is the perceived shape of the object that determines whether it appears to glow.

We computer-rendered a random, bumpy disc under simulated cloudy lighting. From the front, this disc looks like an ordinary, solid, white object. However, as the disc rotates, revealing its underside, the disc takes on a translucent appearance and appears to glow.


Note that the luminance of the disc is the same from the front and the back – it is only left-right reversed. But, critically, the correlation between the luminance and the depth changes between the front view and the back view: viewed from the front, the peaks of the discs are bright and the valleys are dark; viewed from the back, the peaks are dark and the valleys are bright. Why are the valleys so bright? It must be because there is a light source either inside or behind the object!
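
The sign flip described above can be made concrete as a correlation between surface height and luminance: positive from the front (peaks bright), negative from the back (valleys bright). The sketch below uses made-up samples, not the demo’s actual rendering data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical surface heights (peaks high, valleys low) and luminances.
height = [0.9, 0.2, 0.7, 0.1, 0.8]
lum_front = [0.8, 0.3, 0.7, 0.2, 0.75]   # peaks bright: ordinary solid object
lum_back = [1 - l for l in lum_front]    # valleys bright: reads as glow

print(pearson_r(height, lum_front))  # strongly positive
print(pearson_r(height, lum_back))   # strongly negative
```

A negative height–luminance correlation is physically odd for an opaque surface under diffuse light, which is plausibly why the visual system resolves it by inferring a light source inside or behind the object.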

The demo was very well received. For me, this was a highlight of the conference — talking to people about something that I am enthusiastic about, and convincing them that it is, in fact, cool. I imagine most scientists feel the same way about their work.

I still see it as white/gold
“The dress” refers to a photo of a dress that went viral in March 2015. To many people, the dress appeared to be white with gold fringes, whereas to others, it appeared to be blue with black fringes; a small minority reported seeing blue/brown. As an unrepentant white/gold perceiver, I was astounded to learn that the dress is, in real life, blue/black.

Most vision scientists agree that the dress “illusion” is an example of color constancy gone wrong. Color constancy is the visual system’s remarkable ability to “filter out” the effects of lighting. For example, my salmon-coloured shirt appears salmon-coloured regardless of whether I look at it indoors or outdoors, on a sunny day or a cloudy day – even though, if I were to take a photo of the shirt, the RGB value of the shirt would vary tremendously across the conditions. The predominant explanation of the dress is that different people’s visual systems assume different lighting conditions, and therefore filter differently, resulting in different percepts. One of the dress demos (Rudd, Olkkonen, Xiao, Werner & Hurlbert, 2015) showed that, indeed, the same blue/black dress under different lighting can appear very different, and that, had the dress been white/black, the illusion would not have occurred. (I should note that there were two other dress demos — Shapiro, Flynn & Dixon, and Lafer-Sousa & Conway — but sadly I did not get to see them as I was busy with my own demo.)
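
The “filtering” idea can be caricatured as dividing each color channel by an assumed illuminant (a von Kries-style adaptation). The pixel and illuminant values below are invented for illustration; this is a toy, not a model of the actual dress percepts:

```python
def discount_illuminant(rgb, illuminant):
    """Von Kries-style adaptation: divide each channel by the assumed
    illuminant to estimate an illuminant-free surface color."""
    return tuple(c / i for c, i in zip(rgb, illuminant))

# One hypothetical pixel from the photo.
pixel = (0.5, 0.5, 0.7)

# Assume a bluish shadow -> the surface comes out whitish (white/gold camp).
print(discount_illuminant(pixel, (0.5, 0.5, 0.7)))  # (1.0, 1.0, 1.0)

# Assume warm, bright light -> the surface comes out bluish (blue/black camp).
print(discount_illuminant(pixel, (0.9, 0.8, 0.7)))
```

The same pixel yields two different surface estimates depending on the assumed lighting – which is the crux of the predominant explanation.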
However, questions remain. Why is there such huge individual variability? Why are some people able to flip between percepts? I cannot answer all these questions, but I can direct you to this future issue of Journal of Vision dedicated to exploring the dress. If you have your own idea that you would like to test, the submission deadline is July 1, 2016. In the meantime, you can follow these links to see what vision scientists have said so far.
Gegenfurtner, Bloj & Toscani (2015)
Lafer-Sousa, Herman & Conway (2015)
Macknik, Martinez-Conde & Conway (2015)
Winkler, Spillman, Werner & Webster (2015)

David Knill Memorial Symposium
David Knill, renowned vision scientist, passed away suddenly in October last year, at age 53. He was known for his early work on Bayesian approaches to visual perception – that is, the notion that visual perception results from computation that optimally combines noisy information from the environment with loose prior knowledge about it. As Bayesian inference is such an important foundation of computational theories of vision, it is hard now to imagine that there was ever a time when the Bayesian perspective was a minority view in vision science.

In his memory, Weiji Ma – a former post-doctoral fellow of his who is now a professor at New York University – organized a symposium celebrating Dr. Knill’s life and work. Speaker after speaker talked about his dedication to science, and about his kind and gentle personality. Dr. Ma’s tribute, in particular, made me realize how lucky I am to have met Dave Knill when I did, when I applied to work with him for my PhD. The symposium was respectful and touching – the perfect way to commemorate a brilliant scientist.
Dr. Knill’s Forever Missed page is here.

The real reason for going to conferences
But the most memorable aspects of VSS were not part of any scheduled proceedings. I remember: catching up with old friends at the Tiki bar, cooking with my friends in the hotel room, complimenting a speaker on his talk, him complimenting me back on the question I asked, commiserating about the lack of job prospects and sharing new-hire stories… and of course, I can’t forget the annual night-time ocean dip that marks the end of VSS.

As banal as it sounds, scientists are what drive science. Science is not done in a vacuum, and some of the best collaborations come out of friendships forged at conferences. And even if nothing productive comes of it – so what? Maybe it’s reward enough to know that there are friendly nerds out there who share your interests.

Anderson, B. & Kim, J. (2009). Image statistics do not explain the perception of gloss. Journal of Vision, 9(11):10, 1-17. doi:10.1167/9.11.10.

Ekroll, V., Sayim, B., van der Hallen, R. & Wagemans, J. (2015, May). The shrunken finger illusion: amodal volume completion can make your finger feel shorter. Talk presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Ekroll, V., Sayim, B. & Wagemans, J. (2015). Against better knowledge: the magical force of amodal volume completion. i-Perception, 4(8), 511–515. doi:10.1068/i0622sas.

Gegenfurtner, K.R., Bloj, M. & Toscani, M. (2015). The many colours of ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.04.043.

Kellman, P.J. & Spelke, E.S. (1983). Perception of partly occluded objects in infancy. Cognitive Psychology, 15(4). 483-524.

Kim, M., Wilcox, L. & Murray, R.F. (2015, May). Glow toggled by shape. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Lafer-Sousa, R., Hermann, K.L. & Conway, B.R. (2015). Striking individual differences in color perception uncovered by ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.04.053

Lafer-Sousa, R. & Conway, B.R. (2015, May). A color constancy color controversy. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Macknik, S.L., Martinez-Conde, S. & Conway, B.R. (2015). How “the dress” became an illusion unlike any other. Scientific American Mind, 26(4). Retrieved from

Penacchio, O., Lovell, P.G., Sanghera, S., Cuthill, I.C., Ruxton, G. & Harris, J.M. (2015, May). Concealing cues to shape-from-shading using countershading camouflage. Poster presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Portilla, J. & Simoncelli, E.P. (2000). A Parametric Texture Model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1), 49-71.

Rudd, M., Olkkonen, M., Xiao, B., Werner, A. & Hurlbert, A. (2015, May). The blue/black and gold/white dress pavilion. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Shapiro, A., Flynn, O. & Dixon, E. (2015, May). #theDress: an explanation based on simple spatial filter. Demo presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Tamura, H. & Nakauchi, S. (2015, May). Can the classifier trained to separate surface texture from specular infer geometric consistency of specular highlight? Poster presented at the annual meeting of the Vision Sciences Society, St. Pete Beach, FL.

Winkler, A.D., Spillman, L., Werner, J.S. & Webster, M.A. (2015). Asymmetries in blue-yellow color perception and in the color of ‘the dress.’ Current Biology. doi:10.1016/j.cub.2015.05.004.


Photo by Karen Meberg

Minjung (MJ) Kim is a PhD candidate at New York University (NYU) Department of Psychology, in the Cognition and Perception program. She studies the visual perception of light and colour, with a keen interest in material perception (e.g., what makes glowing objects appear to glow?). She is co-advised by Dr. Richard Murray at York University Centre for Vision Research and Dr. Laurence Maloney at NYU.


LAMP Diagnostics: The key to malaria elimination?

A medical researcher with the U.S. Army tests a patient for malaria in Kisumu, Kenya. Photo courtesy of the U.S. Army Africa.

By Patrick McCreesh

Malaria elimination is possible within a generation. But controlling malaria and eliminating malaria are different goals, and each poses its own challenges. Overcoming the unique challenges of elimination is essential to meeting this goal, and the barriers present in elimination settings will require different strategies and policies. Researchers at the Malaria Elimination Initiative (MEI) at the University of California San Francisco (UCSF) are gathering evidence to inform elimination efforts going forward. I am a Masters candidate in the UCSF Global Health Sciences program, working in the field in collaboration with the University of Namibia on a cross-sectional survey of malaria prevalence.

Malaria 101

Malaria is caused by parasites of the genus Plasmodium. The parasites infect people through the bite of female Anopheles mosquitoes. The parasites hide inside the body’s cells before bursting into the bloodstream, leaving fever, chills, and body aches in their wake. Most people recover from the infection, but in some severe cases the parasites invade the brain or cause organ failure. Children under five years old and pregnant women are at highest risk of severe complications.

History and Global Impact

The world has made substantial progress toward reducing the burden of malaria in the past 15 years, thanks to new technological innovations in malaria prevention and treatment. For example, pyrethroid-impregnated bed nets are a low-cost and effective way to reduce malaria (for you basketball fans, Stephen Curry not only scores on the court, but assists around the world with his contributions to Nothing But Nets #gowarriors). Rapid diagnostic tests (RDTs) allow for point-of-care diagnosis of malaria, and artemisinin combination therapy (ACT) was a major breakthrough in treatment. The global incidence of malaria has decreased by an average of 3.27% per year since 2000 thanks to expanded coverage of these interventions, and child mortality due to malaria in sub-Saharan Africa has decreased by 31.5% over the same period. Due to this drop in malaria cases, many believe that releasing the world from the grip of malaria permanently is possible. At a seminal conference in 2007, Bill and Melinda Gates galvanized global leaders to commit to malaria eradication.

Despite this pledge to fight malaria around the world, about 3.2 billion people are still at risk of malaria infection. In 2013, there were an estimated 198 million cases and 584,000 deaths due to malaria. Transmission is most intense in Africa and severely affects those in poverty. There is an urgent need to expand malaria control and elimination efforts to ultimately eradicate malaria. The current eradication strategy calls for expanding control efforts in high transmission settings, eliminating malaria gradually at the fringes of transmission, and developing a preventive vaccine. The estimated cost of malaria elimination is $5.3 billion, but this investment will save millions of lives and dollars in the long run.

A mother and child rest under an insecticide-treated bed net in Zambia. Photo courtesy of the Gates Foundation.

Malaria Elimination
Elimination involves interrupting transmission in a defined geographical area and dropping incidence to zero. Eliminating malaria in low transmission settings raises several unique concerns compared to high transmission settings.

• Changing epidemiology

The epidemiology of malaria changes as malaria incidence drops. Asymptomatic carriers, people who are infected by the malaria parasite but who do not show symptoms, are responsible for about 20-50% of transmission. Most of these carriers are adult men rather than children and pregnant women. The proportion of people with sub-microscopic infections increases, making diagnosis difficult. Imported malaria from highly endemic neighboring countries creates a source of epidemics in countries with low transmission.

• Increased importance of surveillance

In malaria control settings, malaria surveillance relies on passive case detection: cases are detected when patients with fever seek care at health facilities. The high volume of cases makes investigating every one impractical, so reporting consists of aggregate counts by district. As the number of malaria cases decreases, it becomes both practical and necessary to investigate each case and search for additional ones. These additional cases are often asymptomatic. The problem of asymptomatic cases illustrates why the difference between incidence and prevalence matters for malaria elimination. Incidence is the number of new malaria cases out of the total population over a period of time. Prevalence is the number of people with malaria out of the total population at a set point in time. Passive surveillance does not measure the true prevalence of malaria, nor does it address the epidemiological and diagnostic challenges in settings preparing for elimination. Effective surveillance in elimination settings requires active surveillance: finding and testing individuals in a selected geographic area, regardless of symptoms, to identify and treat additional cases of malaria.
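The incidence/prevalence distinction above can be made concrete with a toy calculation (all numbers here are hypothetical, chosen only to illustrate the definitions):

```python
# Toy illustration of incidence vs. prevalence (all numbers hypothetical).
population = 10_000

# Incidence: NEW cases over a period of time.
new_cases_this_year = 120
incidence = new_cases_this_year / population          # new cases per person per year

# Prevalence: EXISTING cases at a single point in time.
# Asymptomatic carriers count toward prevalence even though
# passive surveillance never sees them.
symptomatic_now = 30
asymptomatic_now = 70
prevalence = (symptomatic_now + asymptomatic_now) / population

print(f"Annual incidence: {incidence:.1%}")    # 1.2%
print(f"Point prevalence: {prevalence:.1%}")   # 1.0%
print(f"Cases passive surveillance misses: {asymptomatic_now}")
```

In this sketch, passive case detection would find at most the 30 symptomatic cases, so the measured prevalence would badly underestimate the true reservoir of infection, which is exactly why elimination settings need active surveillance.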

• Diagnostic difficulties

There are limitations to the currently accepted malaria diagnostic tools. Blood smear microscopy is time-consuming and requires highly trained staff to be accurate at low parasite densities. Although rapid diagnostic tests are quick and easy to use, their sensitivity drops for infections with fewer than 100 parasites per microliter. The most accurate method to diagnose malaria is polymerase chain reaction (PCR), but it requires expensive equipment and reagents and highly trained laboratory personnel, and has long turnaround times. Diagnosing asymptomatic cases is doubly challenging: asymptomatic people do not seek care, and their infections are often undetectable by microscopy or RDTs. The changing epidemiology and increased importance of surveillance in elimination settings require a diagnostic tool that detects asymptomatic infections as well as PCR does, but costs less and gives results faster.

Loop-mediated isothermal amplification (LAMP)

A promising new nucleic acid-based method is loop-mediated isothermal amplification (LAMP). Like PCR, LAMP amplifies a specific section of DNA. Unlike PCR, LAMP does not require a thermocycler; the reaction takes place at a constant temperature. The process uses multiple primers and a DNA polymerase to create a stem-loop structure of DNA, which serves as a template for amplification. LAMP has a sensitivity similar to PCR at low parasite density: 92.7% at 10 parasites per microliter. Compared to PCR, LAMP offers lower costs, faster turnaround times, and fewer technical requirements for malaria diagnosis. LAMP can detect malaria DNA from dried blood spots, which are commonly used to collect and test samples in active surveillance. In the lab at the University of Namibia, the speed of LAMP has enabled our group to process about 150 samples per day. Fast and accurate results make LAMP a good option for diagnosing malaria in elimination settings.

Closing thoughts

The new challenges presented in malaria elimination settings will require different surveillance and diagnostic tools. Researchers at the Malaria Elimination Initiative at UCSF are gathering evidence to inform elimination efforts going forward. As more countries transition from malaria control to elimination, elimination policy should be made with these challenges in mind. Most importantly, policy makers should ask, "Does our health system have the tools to quickly investigate every case, test everyone around the case, and provide treatment to positives in a timely manner?" A recent article in PLOS Neglected Tropical Diseases about malaria elimination in Mesoamerica and Hispaniola notes that LAMP could help more countries answer "yes" to this question, ultimately resulting in a malaria-free world.

Patrick McCreesh is a Master’s candidate in the Global Health Sciences program at University of California, San Francisco. His research focuses on infectious disease, with an emphasis on malaria elimination efforts in partnership with the UCSF Malaria Elimination Initiative.

Uncovering the biology of mental illness

The visual cortex of the brain is depicted here. Photo courtesy of Bryan Jones.

By James Fink

The human brain is capable of remarkably complex processes. It senses time and visualizes space. It allows us to communicate through language and to create beautiful works of art. But what happens when these cognitive abilities go awry? The National Institute of Mental Health (NIMH) defines serious mental illness (SMI) as a mental, behavioral, or emotional disorder that interferes with or limits one or more major life activities. The survey NIMH cites estimates the prevalence of SMI in the United States at ~4%, with the estimated prevalence of any mental illness at ~18%.

Mental illnesses, also known as psychiatric disorders, encompass many diverse conditions, including anxiety disorders, obsessive-compulsive disorder (OCD), and post-traumatic stress disorder (PTSD). This group of illnesses presents a major global burden. In the 2010 Global Burden of Disease report, mental and substance use disorders accounted for 7.4% of total disability-adjusted life years (DALYs) globally and 8.6 million years of life lost (YLL), and were the single greatest cause of years lived with disability worldwide. Psychiatric disorders also pose a significant burden to individuals and their families, and a challenge for clinicians and scientists.

For clinicians, many psychiatric disorders are difficult to treat. Even individuals with the same disorder present with a spectrum of symptoms and symptom severity, and many of the drugs currently used to treat these disorders have a multitude of undesirable side effects. For scientists, the mechanisms underlying this family of illnesses are still being unveiled, and reliable biological explanations for these disorders are still unclear, though it is known that biology and genetics play a role.

The lack of insight into the determinants of these disorders may explain the difficulty of developing effective pharmacological treatments for them. Though treatments for each of these disorders exist (ranging from drugs to cognitive behavioral therapy (CBT)), they leave much room for improvement. The field of neuroscience in particular is providing insight into the brain systems, cellular deficits, and genetics behind many of these disorders, which may aid the development of new therapies. One limitation of current research is that many of these insights come from the study of animal models; another is that conflicting results are common in the literature. Despite these obstacles, neuroscience holds a wealth of promise for better understanding psychiatric illness, thanks to a new set of model systems and intriguing new research techniques.

Schizophrenia – An example of what we know

Perhaps the most studied of the psychiatric disorders is schizophrenia, a disorder affecting about 1% of the general population and characterized by a variety of cognitive impairments, including a loss of affect and motivation and, often, the presence of hallucinations and delusions. A search in PubMed for articles with "schizophrenia" in the title yields more than 50,000 results, an indication of just how much research focuses on this disorder. Researchers have identified several hereditary factors (genes), with diverse sets of functions, that may be tied to schizophrenia.

• DISC1 (disrupted in schizophrenia 1) is a gene that makes a protein with many interacting partners and plays a role in a variety of pathways within cells, including the ability of cells to divide, mature, and move toward their final location within a tissue. Such processes have been shown to lead to neurological deficits and disorders if disturbed.

• Neuregulin 1 is a member of a protein family that includes three other neuregulins. Perhaps most interestingly (and to make matters more complex), neuregulin 1 itself can undergo a type of processing in the cell, called alternative splicing, that produces many alternative forms of neuregulin 1 (up to 31!), all of which perform slightly different functions. The main job of neuregulin 1 seems to be to aid brain and nervous system development.

• The CACNA1C gene makes a protein that forms part of an important calcium channel, which plays a role in a variety of neuronal functions.

• SHANK3 is used by the cell as a scaffold, providing support for other cellular molecules that are important for the signaling between individual neurons of the brain.

Genes are not the only story, though; researchers have also identified deficits at the cellular and network levels of the brain. The brain is composed of neurons and supporting cells called glia, its two major cell types. Neurons themselves fall into several classes (how many depends on the classification system you use), and each class plays its own important role in proper brain function. For instance, excitatory neurons use a chemical called glutamate to signal to other cells and promote the activity of their partners, whereas inhibitory neurons use a chemical called GABA and quiet their partners. There have been many reports of disrupted function of both excitatory and inhibitory neurons in mouse models of schizophrenia. These disruptions have been found in multiple brain regions and at different ages. But there have also been reports that fail to find a disruption of either of these cell types.

So what is the primary mechanism? This is exactly the problem outlined above: the complexity of mental illness makes it difficult for researchers to pin down a single biological explanation. Variations in mouse models, experimental approaches, animal age, or the brain region being studied may all contribute to inconsistencies across findings. The brain changes over time, and each brain region behaves a little differently from even its neighbors. These factors complicate the search for accurate and meaningful deficits in psychiatric disorders, a problem that new tools may soon help resolve.

A microscopy image of a mouse brain. One of the major barriers to greater understanding of the biological mechanisms of mental illness is reliance on animal models, which have different experiences of mental illness than humans. Image courtesy of Zeiss Microscopy.

The new toolbox
A recent article in The New Yorker profiles Karl Deisseroth, a psychiatrist and neuroscientist at Stanford University. In the article, Deisseroth describes the difficulty of treating and understanding neurological and psychiatric disorders, asking: "When we have the complexity of any biological system — but particularly the brain — where do you start?"

Dr. Deisseroth is known for more than treating patients. He is one of the inventors of optogenetics, one of the most exciting and cutting-edge experimental techniques in neuroscience today. Optogenetics allows the expression of light-sensitive channels in neurons. By combining this technique with genetic and viral approaches, researchers can insert these channels into very specific populations of neurons. Ultimately, this approach lets researchers control distinct groups of neurons and individual circuits of the brain using flashes of light, providing unprecedented control over cellular and circuit function.

The study of the neural circuits underlying behavior has long been a central aim of neurobiology. Various circuits that underlie many human behaviors and cognitive functions are now known, and the specific circuits affected in psychiatric illnesses are starting to be uncovered. Applying optogenetics to the study of these disorders will give researchers a much more precise way to probe how various circuits function in models of neuropsychiatric disorders without affecting surrounding circuits. This precision matters, as non-specific circuit stimulation can produce confounding results.

The advent of induced pluripotent stem cells (iPSCs), a method published by Shinya Yamanaka's group in 2007 in which skin cells are reverted to a stem cell-like state via the expression of "reprogramming factors", now allows researchers to use human cells to study disease. iPSCs can be driven to form various cell types, including neurons, by exposing them to a cocktail of factors known to be important in driving the development of nervous tissue.

Before the Yamanaka group's discovery of iPSCs, the only ways to study human brain tissue were post-mortem tissue and human embryonic stem cells. Now, iPSC technology allows researchers to collect skin cells from large groups of patients via skin biopsy, samples that can then be used to generate patient-specific neurons. Because these neurons can be derived from patients with mental disorders, researchers can study these diseases in human neurons from the patients themselves.

Cerebral Organoids
Understanding the neural circuits that are disrupted in neuropsychiatric disorders remains a huge goal for neuroscience research. This is highlighted by the BRAIN Initiative, put forth by the Obama administration in 2013, which aims to understand how individual cells and neural circuits work together so that researchers and clinicians can better understand brain disorders. A few years ago, approaches to studying and probing brain circuits, even by optogenetics, were limited to animal models, because cells grown in a culture dish fail to form the neural circuits observed in the brain. But a paper published in 2013 from the Knoblich lab showed that iPSC-derived neurons can be used to create "cerebral organoids": small bits of neural tissue (about 4 mm in diameter) that express markers characteristic of various brain regions, including cortical and hippocampal regions. Since the publication of this innovative technique, other groups have published similar methods, creating additional versions of 3-D neural cultures and even making cerebellar-like structures (the cerebellum is a brain region important for movement and coordination). In fact, WIRED magazine recently published an article discussing a recent paper that created what the authors call "cortical spheroids" (and what WIRED calls "brain balls"), a different method for developing organoid-like structures. These techniques cannot yet be used to study neural circuits as they truly exist in the actual mouse or human brain (the circuits and brain-like regions observed in culture are very rudimentary). However, the advancement of these techniques over the next decade or two could provide new and exciting ways to probe actual human brain circuits using patient cells.

Though every day we gain greater insight into how the human brain works and how it might be disrupted in psychiatric disorders, we are far from uncovering the exact circuits and mechanisms that underlie each of these disorders. It is clear that tools such as optogenetics, iPSC-derived neurons, and cerebral organoids allow tremendous control over, and detailed study of, human neurons from patients. Together, these approaches may yield a better understanding of how human neurons and neural circuits go awry in these disorders, leading to the identification of novel drug targets, providing promise for patients, and finally allowing scientists and clinicians to uncover the biology behind mental illness.


1. Uhlhaas, P. J. & Singer, W. Abnormal neural oscillations and synchrony in schizophrenia. 1–14 (2010).
2. Brandon, N. J. & Sawa, A. Linking neurodevelopmental and synaptic theories of mental illness through DISC1. 1–16 (2011).
3. Green, E. K. et al. The bipolar disorder risk allele at CACNA1C also confers risk of recurrent major depression and of schizophrenia. Molecular Psychiatry 15, 1016–1022 (2009).
4. Mei, L. & Xiong, W.-C. Neuregulin 1 in neural development, synaptic plasticity and schizophrenia. Nat Rev Neurosci 9, 437–452 (2008).
5. Gauthier, J. et al. De novo mutations in the gene encoding the synaptic scaffolding protein SHANK3 in patients ascertained for schizophrenia. Proceedings of the National Academy of Sciences 107, 7863–7868 (2010).
6. Feng, Y. & Walsh, C. A. Protein-protein interactions, cytoskeletal regulation and neuronal migration. Nat Rev Neurosci 2, 408–416 (2001).
7. Lewis, D. A., Curley, A. A., Glausier, J. R. & Volk, D. W. Cortical parvalbumin interneurons and cognitive dysfunction in schizophrenia. Trends in Neurosciences 35, 57–67 (2012).
8. Takahashi, K. et al. Induction of Pluripotent Stem Cells from Adult Human Fibroblasts by Defined Factors. Cell 131, 861–872 (2007).
9. Dolmetsch, R. & Geschwind, D. H. The Human Brain in a Dish: The Promise of iPSC-Derived Neurons. Cell 145, 831–834 (2011).
10. Lancaster, M. A. et al. Cerebral organoids model human brain development and microcephaly. Nature 501, 373–379 (2013).
11. Paşca, A. M. et al. Functional cortical neurons and astrocytes from human pluripotent stem cells in 3D culture. Nature Methods (2015).

Deepwater Horizon: Five years later, research continues but headlines fade

An aerial view of the oil leaked from Deepwater Horizon, taken May 6, 2010. Photo courtesy of The Boston Globe.

By Kristan Uhlenbrock

June 8th marked another World Oceans Day, which has come and gone with Presidential proclamations and local advocacy events. But the fanfare about the ocean doesn't seem to last. It has been five years since the Deepwater Horizon oil spill in the Gulf of Mexico, and as scientific research continues to unveil more details about the impacts of the spill, the public pays less attention as time moves on. Yet many questions still go unanswered. Where did all the oil go? How is it impacting the environment? What does this mean for future oil spill response?

Interest in the worst oil spill disaster in U.S. history has diminished as measured by the number of web searches performed over the past five years (See Figure 1).

Web search interest in “Deepwater Horizon oil spill” over time, as measured by Google Trends. Data from Google 2015.

Yet research is still unveiling interesting new findings about the impacts of the spill, and the legal process to determine how much BP will pay continues to play out in the courts.

4.9 million

This is how many barrels of oil spilled into the ocean during the Deepwater Horizon disaster in the summer of 2010. For comparison, this is more than seven times the amount of oil that enters the Gulf of Mexico annually from natural seepage and other activities, like transportation.

As the oil spill was happening, researchers and other experts struggled to put an exact number on the amount of oil flowing out of the wellhead. Further complicating matters was the question of where the oil was going and where it would end up. Tourism to the Gulf region suffered as nervous travelers cancelled their trips, which would have a lingering impact on the local economy for years after the spill.

Nonetheless, scientists today are making strides on some of those questions. In a recent PLOS ONE article, researchers from the University of South Florida found elevated hydrocarbon concentrations in the DeSoto Canyon, a 1500-m-deep trench in the Gulf of Mexico northeast of the spill site, in sediment samples taken from December 2010 to February 2011. This deep-sea environment is home to a unique ecosystem, but impacts from the oil spill on the corals and other organisms have been difficult to study. Therefore, finding hydrocarbon levels elevated above normal in this deep-sea region sheds some light on where some of the oil ended up and what this potentially means for the ecosystem.

Modeling where the oil went is a complicated process due to multiple modes of mixing and transport from currents, eddies, and three-dimensional circulation patterns. The researchers identified a couple of possible pathways by which hydrocarbons could be transported from the well blowout site to the DeSoto Canyon. One is the interaction of the oil with microbes and particulates that bind to the oil and carry it to the sea floor as they sink, commonly referred to as "marine snow". The other is direct contact between the subsurface oil plume spilling out of the wellhead and the sediments on the trench slope.


Many foraminifera species exist in the modern record, with the majority found on the sea floor. Forams, short for foraminifera, are an essential part of the marine food web and can serve as indicators of coral health and other environmental conditions. Many forams have calcium carbonate shells, similar in composition to the structure of coral reefs, so how forams react to changing ocean chemistry can be a marker for other species. Forams are also sensitive to hydrocarbons and other toxins.

Another recent study, conducted by many of the same researchers and released in March 2015, examined how forams were affected by the oil spill. They found that the density of forams on the deep-sea floor of the Gulf of Mexico (1000-1200 m) decreased by 80-93% at two sample sites in December 2010. Another set of samples taken in February 2011 showed that forams closest to the wellhead may have begun to recover, while the site located farther away showed foram populations declining to nearly zero. The researchers encouraged subsequent sampling of forams to determine whether recovery continued or occurred at either site.

The eastern region of the Gulf of Mexico, including the deepest parts of the sea floor, was not well mapped before the disaster, but is considered a hot spot of abundance and diversity. As oil spilled into the ocean for months, researchers scrambled to establish baseline estimates for the ecosystem, including sediment cores to examine pre-spill bottom habitat. They also used remotely operated vehicles to compare sites contaminated with oil to sites that were not. Located in the deep darkness of the DeSoto Canyon are corals and invertebrates like the brittle star. These unique ecosystems serve as feeding grounds, nurseries, and homes for commercially valuable fisheries, so understanding how they are affected by the spill will help determine how recovery funds get distributed.

13.7 billion

This is the amount of money that BP is facing in penalties for spill-related violations of the Clean Water Act. Eighty percent of these penalties will go to a trust fund to support research, restoration, and recovery efforts in the region, as directed by the RESTORE Act, passed by Congress in 2012.

This large sum of money directed toward the Gulf region will fund research for many years to come. And although public attention may no longer be focused on the aftermath of the Deepwater Horizon oil spill, scientists are gaining a little more understanding every day to deal with the decades-long process of recovery.


McNutt, M.K., et al., 2012. Applications of science and engineering to quantify and control the Deepwater Horizon oil spill, Proc. Nat. Acad. Sci., doi:10.1073/pnas.1214389109

Paris, C.B. et al., 2012. Evolution of the Macondo Well Blowout: Simulating the Effects of the Circulation and Synthetic Dispersants on the Subsea Oil Transport, Environ. Sci. Technol., doi:10.1021/es303197h

Romero, I.C., et al., 2015. Hydrocarbons in Deep-Sea Sediments following the 2010 Deepwater Horizon Blowout in the Northeast Gulf of Mexico, PLOS ONE, doi:10.1371/journal.pone.0128371

Schwing, P.T. et al., 2015. A Decline in Benthic Foraminifera following the Deepwater Horizon Event in the Northeastern Gulf of Mexico, PLOS ONE, doi:10.1371/journal.pone.0120565

Kristan Uhlenbrock is a marine scientist and graduate student in the Johns Hopkins science writing program. Follow her @oceankris.

Celebrating 10 years of Athena SWAN Charter advancing women in science

A scientist performs some tests in a beaker for AIDS research. Photo courtesy of World Bank Photo Collection.

By Sara Carvalhal

Gender inequality in science has been in the news lately, particularly around the fallout from Sir Tim Hunt's biased comments about female scientists. His comments do not stand in isolation; rather, they indicate the need for greater efforts to promote gender equality and advance women's roles in the scientific workplace. The Athena SWAN Charter, launched in the UK 10 years ago as of June 2015, is one such effort that merits recognition. The charter is a policy framework designed to establish greater opportunities for, and representation of, women in science, technology, engineering, mathematics, and medicine (STEMM) fields. In recognition of this milestone, I will take a closer look at how Athena SWAN has worked to advance gender equality in STEMM.

Women are underrepresented in STEMM

In 2002, Patricia Hewitt, then Secretary of State for Trade and Industry, was concerned that women were not appropriately represented in science, technology, and engineering (SET). Baroness Susan Greenfield CBE was appointed to examine the representation of women in SET and to advise the UK government on best practices to address women's underrepresentation along the scientific career path, in both the private and public sectors. Baroness Greenfield is a scientist, and at the time she was director of the Royal Institution, an organization devoted to science education and research.

In her 2002 report, SET Fair, she concluded: "The problem is not just a social and cultural one, although inequity should be addressed in all its forms, but also economic, and as such cannot be ignored. If Britain is to remain a nation with successful and developing businesses of all sizes, it must make the most of its workforce".

The British government recognized that STEMM subjects were an important part of the UK economy and that women were not adequately represented in STEMM, an issue affecting innovation, economic growth, and productivity. In response to the report, and to promote women's engagement in STEMM subjects, the government partnered with non-governmental organizations (NGOs) to launch a series of policies addressing the position of women in SET and to promote Science & Innovation Investment.

A joint effort to help elevate women in STEMM

In 2005, two organizations, the Athena Project and the Scientific Women's Academic Network (SWAN), signed a charter, known as the Athena SWAN Charter. The main goal of this charter was to reverse the consistent loss of women employed in STEMM.

This charter is particularly novel because it formally recognizes achievements in recruiting, retaining, and promoting women at each career stage in STEMM with an official award.

In general, organizations have relied on enthusiasm-driven programs based on events, websites, and/or handbooks to tackle the problems of retaining and recruiting women in STEMM subjects. The Athena SWAN Charter instead rewards institutions that adopt policies supporting the career development of women in STEMM. The official recognition, a bronze, silver, or gold award based on performance, also functions as (indirect) advertising of an institution's internal policies.

Any British institution from the private or public sector can become an Athena SWAN member. By signing the charter, the institution promises to take action to address the six principles of the charter. In summary, all Athena SWAN members should address gender inequalities at all levels of the organization by changing cultures and attitudes. Key institutional policies can be reviewed, for example to favor long-term contracts over short-term employment, and institutions can improve pathways from PhD to a sustainable academic career in science.

Each member may submit an application based on a self-assessment and a plan for future actions. A peer review panel evaluates the assessment and determines how many policies exist to support women in STEMM. Even after an institution receives an award, reassessments are made routinely to ensure that policies supporting women's progress in STEMM are maintained.

The first awards were presented in 2006. Currently, 253 universities and departments across the 129 Athena SWAN members hold awards, representing more than half of all higher education institutions in the UK.

University of Dundee — a bronze Athena SWAN Award

I am a PhD student at the College of Life Sciences at the University of Dundee in Scotland, a recipient of a bronze Athena SWAN award. Though we are only in the early stages, I have been inspired by the significant changes in the academic environment since the adoption of Athena SWAN policies. The university frequently hosts talks by female and male scientists about their career paths, as well as Q&A sessions where we can raise more personal issues. I also feel that departments have been more open to hiring women since adopting the charter. The Athena SWAN policies have had a significant effect on the recruitment of qualified women in science; the number of women applying to SET departments increased by almost 10% in a year's time after the University of Dundee adopted more flexible working patterns and family-friendly policies. During this period, the number of women selected for promotion in the SET departments increased by more than 30%, suggesting that Athena SWAN policies can lead to measurable progress toward gender equality in the sciences.

Female scientists have been taking to Twitter to poke fun at Sir Tim Hunt’s comments and bring attention to sexism in science under #distractinglysexy.

Athena SWAN Charter: The first step to changing social norms

When the Athena SWAN Charter was first created, statistics showed that the majority of graduate and post-graduate biology students were women, but that fewer than 10% of these women reached senior levels in higher education. Through the self-assessments, the national scheme found that many women lacked career support between postdoc and tenure-track positions, and it is during this period that many women drop out of SET careers. This period also coincides with the time many women elect to start families, which suggests that STEMM institutions are not adequately accommodating families.

In addition, many young scientists believe that it is not feasible to be both a scientist and a mother. Athena SWAN members have helped change this notion by promoting and supporting more flexible working patterns for taking care of dependents, including spouses, children, and even parents. Athena SWAN members also provide funding for family support programs (e.g., covering the cost of childcare during a conference or other commitments).

Expanding Athena SWAN Charter to other subjects

Recently, the Athena SWAN Charter policies were expanded to 10 key principles. Building on the initial six principles, Athena SWAN members recognize that academia cannot reach its full potential unless it benefits from the talents of all individuals. New policies at institutions can be established to address the gender pay gap and discrimination based on gender identity. Women's lack of representation in senior roles is not limited to STEMM; women are also underrepresented in senior roles in the arts, humanities, social sciences, business, and law (AHSSBL). I believe that these academic disciplines would see measured progress toward gender equality if the principles of the Athena SWAN Charter were extended to these professional communities.

Some see the scheme as a feminist, or women-centered, movement and attempt to discredit programs designed to level the playing field. But I believe that, by promoting greater equality, the principles of the Athena SWAN Charter work to the benefit of all people.

Official Athena SWAN Charter: http://www.ecu.ac.uk/

WES (Women’s Engineering Society): http://www-womeninengineering.eng.cam.ac.uk/Athenaswan/history

British Department for Business Innovation & Skills: https://www.gov.uk/government/organisations/department-for-business-innovation-skills

Athena SWAN University of Dundee website: http://www.dundee.ac.uk/hr/athenaswan/

Baroness Greenfield. Set Fair. A Report on Women in SET, 2002. [ONLINE]:

Sarah Hawkes. Report on Athena SWAN Charter for Women in Science, 2011. [ONLINE]:

Joe Cullen, Kerstin Junge, Chris Ramsden. Evaluation of the UK Resource Centre for Women in Science, Engineering and Technology, 2008. [ONLINE]:

The Royal Society, Leading the way: Increasing diversity in the scientific workforce. [ONLINE]:

Alison Kingston-Smith. Wisdom, justice and skill in science, engineering and technology: Are the objectives of the Athena Project mythology? Bulletin The Society for Experimental Biology, March 2008. [ONLINE]:


Let’s talk cancer: New live imaging shows how cancer communicates with other cells

Extracellular vesicles carry RNA molecules which may ferry messages among cells. Photo courtesy of National Institutes of Health, Wired.

By Aditi Qamra

The ability to track and observe live cells in the body has offered the scientific community unprecedented opportunities to understand key biological processes. Until now, reporter systems to track cells, especially in diseases like cancer, have largely been non-specific and difficult to implement. A team led by Dr. Jacco van Rheenen at the Hubrecht Institute in Utrecht, Netherlands became the first in the world to capture high-resolution in vivo images of cancer cells interacting with other cells in the body. What they observed on film was even more interesting: cancer cells were capable of transferring "malignancy" to pre-cancerous cells, making them behave like malignant cells. The study was published in the May issue of the journal Cell.

How do cancer cells communicate?

Cells in our body, including cancer cells, are known to release small membrane-bound sacs called extracellular vesicles (EVs) that carry proteins and genetic material in the form of DNA and RNA. EVs released by one cell can be absorbed by surrounding cells in the body, and they are known to play a vital role in cell-cell communication and signaling by delivering macromolecular messages. In cancer, many studies had observed transfer of EVs from tumor cells to other recipient cells. But all of these studies were conducted in controlled culture media outside the true biological microenvironment of tumors.

Van Rheenen and colleagues injected highly metastatic breast cancer cells into a mouse model to study the transfer of EVs released from the injected cells to surrounding cells at various stages of malignancy. With their reporter system, cells that took up a cancer-cell EV turned green, and this uptake was recorded on film. They observed that less malignant tumor cells engulfed the released EVs and then displayed increased cell migration and metastasis, a well-established hallmark of cancer, due to expression of EV-derived messenger RNA. This increased both the motility of recipient cells and their ability to spread to other organs. Interestingly, this vesicle transfer occurred both within the same tumor and between distant tumors. The findings imply that cancer cells are capable of long-range interaction and transfer of phenotypic traits, meaning, for example, that chemotherapy-resistant cells in a localized tumor could pass their traits to distant cells, undermining therapies aimed at one tumor site alone.

The authors also showed that uptake of motility-inducing mRNA was not biased towards faster cells, establishing a level playing field on which cells can interact and influence one another's behavior. This not only means that tumors increase their complexity by sharing genetic material, but also offers opportunities to exploit this association: transferred genetic material could serve as a biomarker for studying cancer staging and progression.

This study comes in quick succession to the recent landmark study by Berridge & Neuzil, who established mitochondrial DNA movement between cells in breast cancer and melanoma mouse models. Cancer cells lacking mitochondrial DNA obtained it from nearby normal cells, something previously assumed to be impossible. Mitochondrial defects, established in cancer as well as in more than 200 other diseases, could now potentially be treated with synthetic mitochondrial DNA targeted to inhibit diseased cells.

Van Rheenen et al. uncovered a new vantage point for observing cells and also laid the groundwork for EV-driven cancer therapies. Researchers can now envision interfering with or inhibiting EV transfer to treat cancer. Cell-cell communication is important for a variety of biological processes, and decoding this communication from one cell type to another in the context of living tissue is key to understanding disease biology.

For the actual videos of cancer cell EV transfer, you can visit the online manuscript and view the beauty of cell-cell communication for yourself.


  1. Zomer, A. et al. In Vivo imaging reveals extracellular vesicle-mediated phenocopying of metastatic behavior. Cell 161, 1046-57 (2015).
  2. O’Brien, K. et al. Exosomes from triple-negative breast cancer cells can transfer phenotypic traits representing their cells of origin to secondary cells. Eur J Cancer 49, 1845-59 (2013).
  3. Hanahan, D. & Weinberg, R.A. The hallmarks of cancer. Cell 100, 57-70 (2000).
  4. Hanahan, D. & Weinberg, R.A. Hallmarks of cancer: the next generation. Cell 144, 646-74 (2011).
  5. Tan, A.S. et al. Mitochondrial genome acquisition restores respiratory function and tumorigenic potential of cancer cells without mitochondrial DNA. Cell Metab 21, 81-94 (2015).
  6. Tuppen, H.A., Blakely, E.L., Turnbull, D.M. & Taylor, R.W. Mitochondrial DNA mutations and human disease. Biochim Biophys Acta 1797, 113-28 (2010).

Just Skin Deep — Your Immune System at the Surface

The skin is the human body’s largest organ. At 1.8 square meters for the average adult, skin covers about as much area as a large closet, and accounts for 12-15% of total body weight. The incredible variation in skin — oily, moist, or dry, exposed to light and cold, or dark and warm — even on an individual, creates unique habitats for the thousands of bacterial and fungal species (called commensal microbiota) that live on our skin. The skin immune system may control skin microbes, but our skin commensals can also educate our immune system. How our skin orchestrates this dialogue with microorganisms and physical insult is integral to its function, and to our health.

The skin is an immunologic organ. There are an estimated 20 billion T cells in human skin — far greater than the number of T cells in the blood — suggesting that immune defense in the skin is a high priority. The interaction among skin microorganisms and the immune system is likely not adversarial most of the time. Interestingly, the incidence of inflammatory skin conditions like atopic dermatitis in children has about doubled in the last thirty years, in parallel with the decreased exposure to microorganisms in early life.

How we understand the dynamic interactions among microbes and immune cells in human skin will have important implications not only for the treatment of autoimmune disorders, skin allergies, and skin malignancies, but also for the creation of better vaccine adjuvants exploiting skin immunity.

Major Players in the Skin Immune Landscape

Human skin is home not only to T cells and microorganisms, but also to a diverse group of cells with innate or innate-like functions. These include keratinocytes and Langerhans cells in the epidermis, and dermal dendritic cells, macrophages, and innate lymphoid cells in the dermis (Figure 1). While much work up to this point has elucidated the individual roles of these cells, describing how the collective interactions among these cells maintain health, or produce dysfunction in disease, will shape the ongoing skin research dialogue.

Keratinocytes are the major cell type making up the epidermis, and while not of myeloid or lymphoid origin, they play important immune defense roles. Keratinocytes produce antimicrobial peptides that control resident microorganisms on the skin, and also express pattern recognition receptors, like toll-like receptors, that allow these cells to be activated upon pathogen recognition or cell damage. Keratinocytes can produce pro-inflammatory cytokines to activate their neighbors, the Langerhans cells.

Langerhans cells are the first immune cells that any skin-invading pathogen or commensal will come in contact with, and are also activated in response to cell damage and UV light. Langerhans cells are the primary antigen-presenting cell in the epidermis, and are identified by the receptor Langerin, and the lipid-presenting molecule CD1a in humans. CD1a is highly abundant in human skin on Langerhans cells and dermal dendritic cells, and has been shown to bind skin oils and engage reactive T cells. Mice lack this antigen-presentation molecule, however, and the functional significance of CD1a in human skin is continuing to be explored by scientists.


Figure 1. Immune cells in the skin occupy distinct locations and functional roles. Figure created by Rachel Cotton, adapted from Pasparakis M et al, 2014

Dermal dendritic cells. A skin dendritic cell "samples" its surroundings, picking up antigen from a damaged cell, a pathogen, or a commensal microorganism, and then travels to the skin-draining lymph node. There, the dendritic cell activates and instructs T lymphocytes to come back to the skin and carry out functions like secreting cytokines. There are several distinct dendritic cell subsets in the epidermis and dermis, but the DCs with the best-defined roles so far are the CD141+ DCs and CD1c+ DCs. Interestingly, these subsets are relatively functionally equivalent in humans and in mice. The CD141+ DCs in humans are best at migrating and presenting antigen to T cells in the draining lymph node. These DCs are also good "cross-presenters," meaning that they can process and present exogenous antigen to CD8 T cells, which are abundant in the epidermis. The CD1c+ dendritic cell subset, by contrast, is better at "turning on" CD4+ T helper cells in the dermis. CD1c+ DCs can produce a broad range of cytokines that fine-tune the T cell immune response in the skin, and they have also been reported to induce T regulatory cells, which would help maintain tolerance to commensal skin microorganisms. Dermal dendritic cells are the key "middle men" between sensing commensal microbes and instructing a T cell response.

Skin-resident T cells. The skin is home to roughly 20 billion T cells, making it the largest reservoir of T cells in the body. The hallmark of T lymphocytes is their specificity and memory for a given antigen. While the skin dendritic cell populations are generally functionally equivalent between humans and mice, skin T cell populations differ considerably between mouse and man, limiting the conclusions we can draw from mouse studies.

In human skin, T cells are described on several metrics: expression of CD4 (T helper cells) or CD8 (cytotoxic T lymphocytes), whether they stay put in the skin or migrate to and from it, what cytokines they produce, their T cell receptor (innate-like, αβ, or γδ), and how they behave during health and disease. More than 95% of T cells in the skin have a "memory" phenotype, meaning that these cells have already encountered their cognate antigen and are poised to rapidly respond to it again. On the other hand, this site-specific memory means that misdirected T cell responses during autoimmune disease cause skin lesions that, even after treatment, recur in the same place.

CD8+ T cells, also known as cytotoxic T lymphocytes (Tc), live almost exclusively in the human epidermis, presumably to rapidly respond to viral infection or tissue damage. CD4+ T cells, called T helper (Th) cells, reside predominantly in the dermis and carry out a variety of effector functions. Both CD4+ and CD8+ T cells can produce key cytokines that mediate skin health or disease, like IL-17 (produced by Th17 or Tc17 cells), IL-22 (produced by Th22 or Tc22 cells), IFNγ (produced by Th1 cells), and IL-10, among others. The cells that exclusively produce IL-22 (Tc22 and Th22 cells) are unique to humans, and are hugely expanded in psoriasis. This overproduction of IL-22 activates epithelial cells and contributes to the red and flaky skin characteristic of psoriasis. A population of CD4+ cells called T regulatory cells (Tregs) on the other hand, help to limit inflammation in the skin, by the production of IL-10 and TGF-β.

Interestingly, the T cell receptor repertoire of human skin is restricted (meaning there are fewer unique T cell receptors) relative to blood, despite the large cell number. A major question that remains is the identity of the cognate antigens for these cells. Do they respond to a protein or lipid from a commensal microorganism? A self antigen? Recently, αβ T cells that recognize the human lipid antigen-presenting molecule CD1a and certain skin oils were found to home to skin, and such cells are relatively common in human skin and blood. These cells point to a new mechanism of antigen recognition and skin immunity that is still being explored. Other innate-like T cells with a defined or restricted TCR and antigen, like natural killer T cells, are also found in human skin, but their functions and roles in skin homeostasis and inflammation are still being worked out.

Innate lymphoid cells (ILCs) are cells of lymphoid origin, like T cells, but they lack a T cell receptor and therefore cannot respond to a specific cognate antigen. ILCs are rare but potent cells residing in the dermis and subcutaneous fat that orchestrate tissue homeostasis and inflammation. ILCs are divided into three groups based on the cytokines they produce, parallel to how CD4 T helper cells are classified. Group 1 ILCs preferentially produce the pro-inflammatory cytokines TNFα and IFNγ, whereas group 2 ILCs produce the type 2 cytokines IL-4, IL-5, and IL-13 and are involved in allergy and immunity to helminths. Group 2 ILCs are the predominant ILCs in the human dermis. Group 3 ILCs produce IL-17 and IL-22 and are enriched in psoriatic skin compared to normal skin, suggesting that they play a role in pathology. However, it was recently demonstrated in the gut that group 3 ILCs are responsible for the negative selection of T lymphocytes that recognize gut commensal microbes. The distinct roles of ILCs in human skin at homeostasis and in skin disease are active areas of research.

The Skin Microbiome and Immunity

The incredible variation in skin at different sites on the human body creates unique habitats for the over 1,000 bacterial and fungal species (called commensal microbiota) that live on our skin. Human skin commensals have been extensively characterized in recent years by 16S rRNA sequencing through the Human Microbiome Project and surveys of human skin sites, revealing striking site-specific microbial signatures that depend on physical factors like oiliness, dampness, and exposure to sunlight. In other words, the skin commensals on the bottom of your foot are a completely different population from the commensals on your forearm, which in turn look completely different from the microbial populations on your face.


Photo credit: Skin Microbiome Blog

Commensal microbes in general have been proposed to promote skin immunity by inducing a basal level of immune activation, mediated by IL-1, that protects against other infections. Indeed, studies in mouse models have demonstrated that skin microbiota can modulate skin-resident T cell populations. For example, Staphylococcus epidermidis (S. epi) colonization of mouse epidermis leads to the accumulation of Tc17 T cells in the epidermis that are protective against fungal infection. However, injection of the same S. epi into the dermis results in inflammation and a visible wound, underscoring that, for the mammalian skin immune system, the location of a microbe matters.

In humans, changes in skin microbiota also track with skin diseases. For example, Staphylococcus aureus has been associated with atopic dermatitis flares in children. It is tempting to speculate that skin pathologies that show a predilection for certain tissue sites, like atopic dermatitis in the creases of the elbows and knees, are in part driven by the unique microbial signature at that site, or a shift in the population at that site.


The skin is packed with immune cells, in constant dialogue with commensal microorganisms that live on us and in us. There is still a lot to be understood about how specific microbes shape the skin immune system, how and why we are “tolerant” to our skin microorganisms in health, and how immune responses to these microorganisms contribute to skin disease.

Understanding how immune cells function in diverse skin environments with distinct microbial communities will have implications for the treatment of inflammatory skin diseases and skin cancers, and for the creation of vaccine adjuvants that take advantage of the unique qualities of human skin-resident immunity.

For more updates on the interactions among skin bacteria and your immune system from dermatologists, immunologists, and microbiologists, check out the Skin Microbiome Blog at skinmicrobiome.wordpress.com.

Rachel Cotton is a PhD student in the Immunology Program at Harvard, and is interested in global health and science policy. Her past research includes how parasites modulate skin immunity. Follow her on Twitter @RachCotton.


Scientists Behaving Badly (On Social Media)

By Brett Buttliere

Operant conditioning is well established and suggests individuals will continue behavior that is rewarded (for instance with favorites, retweets, or replies). Image courtesy of Psychology Notes Headquarters.

It is generally undisputed that Twitter and other social information exchange websites are changing the landscape of science and communication. The value these platforms offer is probably best evidenced by how much time the average user spends on them: the average Twitter user is on the site 14 minutes per day, while the average Facebook user spends more than 55 minutes per day on the site.

While Twitter and Facebook have mostly been touted as excellent tools for discovering new literature and collaborations, it is frustrating when scientists personally attack one another on social media rather than actually debating the scientific ideas.

Though this aspect of these services has been talked about far less, some have begun asking for civility. You can see my own pleas on Facebook and on Twitter.

The goal here is to more officially and explicitly point out that these problems exist and to suggest some ways by which we can solve these problems for the improvement of all involved.

The problem
It is far too often that discussions between scientists break down into petty debate about how an idea is expressed, or where it is expressed, rather than the idea itself. The practice does not seem to be limited to any single field, as I have seen it everywhere (just look in your own newsfeed and you will find examples of this).

The problem is that attacking how, or where, an individual expressed an idea is not a good argument; these are #AcademicAdHominems, one of the fallacies of irrelevance. Attacking the individual expressing the idea is neither constructive nor productive, and I would go so far as to say that such attacks are actually harmful to the scientific social media "environment". Unfortunately, these tweets are oftentimes the ones getting the most discussion!

Though I know of no strong research on this relationship in academic Twitter thus far (I am working toward it, please email or comment if you know of one or want to help!), there are other examples from the literature that are suggestive of this notion. For instance, it has been shown that the longest discussions in the BBC forums are generally the most negative and that the most active users are also the most negative. It has also been shown that negative reviews of books and movies are longer and contain (subjectively rated) more useful information for readers, with readers even reading more negative reviews. See here and here.

There are arguments to be made that negative information is more useful than positive information, especially in science; but I would say there are at least three reasons why this sort of discourse is a problem for science:

  1. It hinders the discussion’s progress, distracting from the real (scientific) issues.
  2. It deters others from joining the discussion, because they are afraid of speaking up.
  3. It sets a bad precedent for others who do join on how to discuss productively.

Science is not politics. It is not the media industry. I believe science has a responsibility, as the greatest endeavor humans have ever taken part in, to do better by being better, more effective, communicators.

Probably the main thing is to remember that we are setting a behavioral example for everyone else, all of the time. It is important that the ‘leaders’ of the community set a good example from the beginning, rather than trying to go back and fix the bad norms later.

We can solve these problems… by using science!

I don’t think it’s useful to just complain about something without putting forward a potential solution to the problem. Here the main thing is that we can apply scientific theory, especially Psychology, to optimally understand the problems in science communication (on social media), and to design the most effective solutions.

Although the application of psychological science to the process of science has been happening in small ways for some time, it is starting to happen to a greater extent, applying more fundamental theories to the process rather than merely demonstrating problems.

For instance, one of the basic tenets of psychological science is operant conditioning: the idea that individuals will repeat and continue a behavior that is rewarded. Humans are intelligent, and people (at least unconsciously) recognize that they are getting favorites, retweets, and replies for being mean or petty. We see this playing out in real time on Twitter. Even a week later, Taleb's prejudgements of psychology are still being favorited and retweeted, and I am still talking about Gilbert's faux pas, even though it happened over a year ago.

There are many ways to apply science to the situation, but here, I will make three suggestions that are relatively well established in the literature.

  1. Notice this behavior in your own tweets. Nobody is perfect, but we can all work to reduce the number of times we attack the scientist instead of the science. Recognize that people are taking cues as to how to act from you, and if you are not a good example, you are a bad example.
  2. Notice #AcademicAdHominems in other researchers' tweets, and be intentional with your Twitter behavior. This behavior is often not undertaken without reason, so pay attention to why the individual is engaging in it: Is it because the person finds it easier to disparage the person rather than their viewpoints? Is it due to jealousy (for instance, after a PLOS article gets featured or a paper is published in Nature)? Could they be trying to impress another by ridiculing a common enemy? There are numerous reasons for this suboptimal behavior, and understanding the motivations can help us inform our response.
  3. Be a constructive, positive, example. You are an example for others and others are taking cues on how to behave from you.

This is not always easy, but the research suggests that ridicule is simply not the most effective form of persuasion. To quote a recently published review of the literature on the subject, the most important antecedents of effective and persuasive online communication are 'argument quality, source credibility, source attractiveness, source perception and source style'.

Overall, it seems like the best way to convince another person of your idea or suggestion is to state the idea so simply and clearly that they wonder how they ever thought differently. This may not get the most favorites or retweets right now, but if we all are good examples, it will improve scientific discourse on social media.

The bottom line is that your scientific arguments should be able to speak for themselves, without needing to attack where, how, or who expressed the idea.

Have some thoughts about how science could be applied to improve science? I would very much enjoy hearing about it! Let’s talk below, on Twitter @BrettButtliere, or email me at brettbuttliere@gmail.com. You can read more of my writing on using science to improve science here or more general thoughts here.

Bennet, S., (2015, May 23). This is How Much Time we Spend on Social Networks Every Day. Retrieved from http://www.adweek.com/socialtimes/social-media-minutes-day/503160

Leonard, H., (2015, May 23). This Is What An Average User Does On Facebook. Retrieved from http://www.businessinsider.com/what-does-an-average-facebook-user-do-2013-3#ixzz3aynzN2NX

Costello, V., (2015, March 30). How Articles Get Noticed and Advance the Scientific Conversation. Retrieved from http://blogs.plos.org/plos/2015/03/get-paper-noticed-join-current-scientific-conversation/

Bert, A., (2015, February 25). How to use social media for science — 3 views. Tips from science and journalism pros at the American Association for the Advancement of Science (AAAS) annual meeting. Retrieved from http://www.elsevier.com/connect/how-to-use-social-media-for-science

Vazire, S., (2015, May) Both Ways is the Only Way I Want it. Retrieved from http://sometimesimwrong.typepad.com/wrong/2015/05/index.html

Neuroskeptic, (May, 2015). Personal Communication. Retrieved from: https://twitter.com/Neuro_Skeptic/status/602093266109132800.

Buttliere, B. (personal communications, 2015, March 24). https://www.facebook.com/groups/247058398672379/permalink/952329431478602/?comment_id=952644334780445&offset=0&total_comments=126&comment_tracking=%7B%22tn%22%3A%22R9%22%7D#_=_.

Buttliere, B. (18, May, 2015). Personal Communications. https://twitter.com/BrettButtliere/status/600259689306271744.

Ad Hominem (n.d.) In Wikipedia. Retrieved May 28, 2015, from http://en.wikipedia.org/wiki/Ad_hominem

Chmiel, A., Sobkowicz, P., Sienkiewicz, J., Paltoglou, G., Buckley, K., Thelwall, M., & Hołyst, J. A. (2011). Negative emotions boost user activity at BBC forum. Physica A: Statistical Mechanics and its Applications, 390(16), 2936-2944. http://arxiv.org/abs/1011.5459

Chen, C., Ibekwe-SanJuan, F., SanJuan, E., & Weaver, C. (2006, October). Visual analysis of conflicting opinions. In Visual Analytics Science And Technology, 2006 IEEE Symposium On (pp. 59-66). IEEE. https://hal.archives-ouvertes.fr/hal-00636138/document

Sen, S., & Lerman, D. (2007). Why are you telling me this? An examination into negative consumer reviews on the web. Journal of Interactive Marketing, 21(4), 76-94. http://www.sba.oakland.edu/faculty/kim/2009/e-wom.pdf

Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82(1), 1.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of general psychology, 2(2), 175.

Kazdin, A. E. (2012). Behavior modification in applied settings. Waveland Press.

Taleb, N. [NicholasTaleb]. (2015, May 18). If I said astrologists were BS artists, you’d agree. I see social “scientists” the same @BrettButtliere @StuartJRitchie @BryanAppleyard. Retrieved from https://twitter.com/nntaleb/status/600266835259326464

Gilbert, D. [DanTGilbert]. (2014, May 24). Psychology’s replication police prove to be shameless little bullies: http://www.psychol.cam.ac.uk/cece/blog (corrected link). Retrieved from https://twitter.com/dantgilbert/status/470199929626193921

Stanford University, [Psychology Everywhere]. (2012, August, 28). Bandura’s Bobo doll experiment. Retrieved from https://www.youtube.com/watch?v=dmBqwWlJg8U

Teng, S., Wei Khong, K., Wei Goh, W., & Yee Loong Chong, A. (2014). Examining the antecedents of persuasive eWOM messages in social media. Online Information Review, 38(6), 746-768. http://www.emeraldinsight.com/doi/full/10.1108/OIR-04-2014-0089

Updated 6/2/15; 3:28: title changed; previously “Academic Ad Hominems & other Problems with Science in Social Media (& how to fix them)”


It’s time for universities to rethink what counts as field school

Excavation of a Roman villa. Photo courtesy of Capture the Uncapturable on Flickr.

By Liam Zachary

Field school season is approaching for anthropology and earth science undergraduate students, and while some students have already enrolled in an exciting field school program, many are still scrambling to find a spot, and even more students are priced out of the experience altogether.

Field schools in archaeology, geology, palaeontology, and other fields of anthropology and earth science were initially envisioned to complement classroom- and lab-based undergraduate science instruction, but today field school is a luxury for students. As the cost of undergraduate education continues to rise, especially in the United States, it has become more difficult for some students to justify attending field school instead of taking a summer job or paid internship. I believe universities need to do more to support student field training, which has tremendous benefits for the professional and academic development of students. Many undergraduate students share my experience of first connecting with a subject of passion while working in the field, which can shape the trajectory of future research careers.

My field school experience

My first experience with archaeological field work was during an internship on the University of California, Santa Cruz (UCSC) campus during my senior year in 2013. We excavated a portion of the Cowell Limeworks, which was in use during the 19th and early 20th centuries, on weekends for 10 weeks.

Though my field work experience began in essentially UCSC's own backyard, it has since taken me around the world. After graduating from UCSC, I attended the Zamartze Mortuary Archaeology field school in Navarra, Spain, where we excavated burials from a medieval church cemetery for three weeks. In 2014, during my MSc, I excavated a medieval hospital cemetery and a Merovingian settlement in the Netherlands. Working at these sites catalyzed my interest in bioarchaeology, the study of human remains from archaeological sites. Before then, I was unsure what I wanted to specialize in within archaeology; field school settled that question, and today I am pursuing a PhD in the subject.

Why is field school important?

My professors reiterated that field school is the most important experience for an archaeology undergraduate, yet university coursework does not reflect this sentiment. In field schools, students receive practical training in their discipline and meet students from other universities who share their passion for the subject. Field schools enable students to develop practical skills, such as archaeological excavation or geologic survey, that cannot be learned in the classroom. Finally, field schools connect undergraduates with early career scientists and senior faculty for mentorship. I would suggest that students looking for a traditional academic field school experience visit the Institute for Field Research (IFR). IFR also offers two types of field school scholarships, one merit-based and one based on financial need.

Field schools have many benefits, but are also prohibitively expensive. University field schools can cost more than US$3000, which equates to more than a semester of tuition at some state colleges in the United States.

The public university system was designed to make education affordable and accessible to all students. It troubles me that field schools are not integrated into the public higher education system when so many professionals identify fieldwork training as critical to success. Many students receive significant financial aid to attend university, but federal student aid (FAFSA) will not fund summer field schools. Without such aid, field school is accessible only to students who can pay the fees out-of-pocket. Consequently, students from low-income backgrounds are less likely to pursue an education in science disciplines where field training is crucial. Until universities make field schools part of the anthropology and earth sciences curriculum, many bright students will be discouraged from pursuing these disciplines.

Fortunately, there are affordable alternatives to university-organized field schools, but they are not well publicized to students.

Passport in Time (PIT): An alternative to field school

Passport in Time (PIT) is a volunteer archaeology and historic preservation program of the United States Forest Service. Volunteers work with Forest Service archaeologists and historians on projects in National Forests nationwide. PIT projects are free for volunteers, and housing (usually camping) is provided. Currently, two PIT projects have openings for students: the Hudson-Meng Bison Bone Bed Interpretation Project and a dinosaur fossil project in South Dakota.

The Hudson-Meng Project has three overlapping sessions from June to September 2015, with three openings per session. The Hudson-Meng Education and Research Center (HMEC) is open to the public and hosts university field schools during the summer months. HMEC is looking for PIT volunteers to educate the general public during the summer field season about a variety of topics relating to the site; volunteers are tasked with designing interactive materials on these topics and will also help develop education materials about Hudson-Meng for local public K-12 schools. If you are interested, you can apply on the project page on the PIT website. The project can sponsor undergraduate or graduate students who would like to earn academic credit by completing an independent internship.

Another PIT project with openings is a dinosaur fossil-hunting project on the Nebraska National Forest in South Dakota. The project is searching for more fossils of mosasaurs and plesiosaurs in a Late Cretaceous sea bed. PIT volunteers, working with the Forest Service, will help with surface survey and limited excavation led by a graduate student. Volunteers will also receive training on how to document finds. The project starts the week of July 13, and applications for one of the 15 spots are due by June 8, 2015.

Though PIT offers great programs for students, there are not enough projects to meet the need for affordable field training. Therefore, it is imperative that universities make field schools more affordable and accessible by incorporating them into the curriculum.

Student Resources Listed in Post
Institute for Field Research: https://www.ifrglobal.org/
Passport in Time: http://www.passportintime.com/
