This is a guest post by Amy Freitag. She is a science studies scholar interested broadly in how different ways of knowing might contribute to water quality conservation. You can find her other musings at Southern Fried Science or chat with her on twitter @bgrassbluecrab.
Citizen science seems to be in vogue now – it even goes by different names, like ‘public participation in scientific research’, ‘co-produced knowledge’, ‘collaborative research’, and ‘democratized science’. A handful of scholars have tried to classify the different types, while many others expound the myriad benefits this new type of science might bring.
However, I always got the feeling that even with concerted attempts to compare across citizen science efforts, there was still some underlying disconnect between program expectations and what might qualify them as ‘successful’. So I decided to interview program coordinators and compare what they had to say about recommendations for success with those in the published literature. Lo and behold, my suspicion was correct. Now, my work with Max Pfeffer, recently published in PLoS ONE, describes the many aspects of success and provides a new lens through which to look at the diversity of citizen science programs.
Fundamentally, citizen science taps into a wealth of information and learning potential outside of traditional science. We’ve all heard the terms for these alternative ways of knowing the world – traditional, indigenous, local, citizen, etc. – that fundamentally describe how to learn rather than what is learned. For instance, a hunter will learn the daily rhythms of his prey through tracking, while a scientist might set out hair traps. Both will yield similar information, but within different contexts. According to some philosophies, multiple perspectives such as these are required to fully describe the world.
What counts as legitimate information is a question that has bounced around the philosophical literature for a while. Philosophers have largely concluded that the public tends to accept information as legitimate (and therefore potentially act on it) if they trust the process by which that information was discovered. Scientific credentials alone may not be enough. Theorizing knowledge production and nesting citizen science within these broader philosophies are recent phenomena, so most programs don’t systematically think about their purpose and successes along these lines – which is where my disconnect comes in. The wide variety of social and scientific benefits is likely nearly impossible to achieve through the course of a single citizen science program.
So, like the citizens of the world who legitimate information by its process, I set out to see how well citizen science data (the product) meet the goals of the program (the process) in order to get a handle on this nebulous concept of ‘success’.
The top recommendations for success at a broad level matched, though in a slightly different order, between the literature and the program coordinator interviews: collaborating with experts, consistent methodologies, and presenting data to policymakers. Where the rubber hits the road, however, these generic recommendations weren’t enough to direct program management. The literature also stressed standardized volunteer training, while program managers treated it as a no-brainer – necessary for consistency from program inception.
Where the generic recommendations fell flat was when program goals differed. It may not be surprising that recommendations need to be tailored to program context and missions, but this is the missing link that I suspected all along. The mission of each program was not merely to create scientific information but also to advance education, restoration, stewardship, and community-building. In these contexts, the questions ‘do you consider yourself successful’ and ‘is your data reliable’ do not mean the same thing – and respondents answered them differently. While 84% of program coordinators considered their program successful, only 64% stated confidence in their data. In addition, 97% of the literature articles considered their program successful, reflecting the overall positive bias in scientific reporting.
Overall, success in citizen science programs is as wide-ranging and diverse as their missions and community contexts, which are more likely to focus on the scientific process, rather than purely on the results, than published recommendations give them credit for. The survey recommendations were more sensitive to aspects of missions beyond data production and incorporated more struggling programs that could speak from experience. These voices should be made explicit when making recommendations, advertising, or fundraising for citizen science groups. The data are important, but the process is more so.
Photo credit: US Army