Mapping Life – Quality Assessment of Novice vs. Expert Georeferencers

Authors

  • Elizabeth R. Ellwood, Department of Biological Science, Florida State University, Tallahassee, FL 32306
  • Henry L. Bart, Jr., Tulane University Biodiversity Research Institute, Belle Chasse, LA 70037
  • Michael H. Doosey, Tulane University Biodiversity Research Institute, Belle Chasse, LA 70037
  • Dean K. Jue, Florida Resources and Environmental Analysis Center, Florida State University, Tallahassee, FL 32306
  • Justin G. Mann, Tulane University Biodiversity Research Institute, Belle Chasse, LA 70037
  • Gil Nelson, Department of Biological Science, Florida State University, Tallahassee, FL 32306, and Institute for Digital Information, Florida State University, Tallahassee, FL 32306
  • Nelson Rios, Tulane University Biodiversity Research Institute, Belle Chasse, LA 70037
  • Austin R. Mast, Department of Biological Science, Florida State University, Tallahassee, FL 32306

DOI:

https://doi.org/10.5334/cstp.30

Keywords:

Benchmarking, biodiversity specimens, georeferencing, natural history collections, quality assessment, volunteered geographic information

Abstract

The majority of the world’s billions of biodiversity specimens are tucked away in museum cabinets with only minimal, if any, digital records of the information they contain. Global efforts to digitize specimens are underway, yet the scale of the task is daunting. Fortunately, many activities associated with digitization do not require extensive training and could benefit from the involvement of citizen science participants. However, the quality of the data generated in this way is not well understood. In the two experiments presented here, we examine the efficacy of citizen science participants in georeferencing specimen collection localities. In the absence of an online citizen science georeferencing platform and community, students served as a proxy for the larger citizen science population. At Tulane University and Florida State University, undergraduate students and experts used the GEOLocate platform to georeference fish and plant specimen localities, respectively. Our results provide a first approximation of what can be expected from citizen science participants with minimal georeferencing training, as a benchmark for future innovations. After outliers were removed, distances between student- and expert-georeferenced points ranged from <1.0 km to ca. 40.0 km in both the fish and plant experiments, with overall means of 8.3 km and 4.4 km, respectively. Engaging students in the process improved results beyond those of GEOLocate’s algorithm alone. Calculating a median point from replicate points improved results further, as did recognizing skilled georeferencers (e.g., creating median points from the contributions of the best 50% of participants). We provide recommendations for further improving accuracy, and we call for the creation of an online citizen science georeferencing platform.
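
The abstract’s headline numbers rest on two simple computations: the great-circle distance between a student’s georeferenced point and the expert’s point, and a median point aggregated from replicate student points. The sketch below (Python) illustrates both, assuming the median point is the component-wise median of latitude and longitude; the aggregation choice and the example coordinates are illustrative assumptions, not values or methods taken from the study.

```python
import math
from statistics import median

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def median_point(points):
    """Component-wise median of replicate (lat, lon) points.
    One simple way to aggregate replicates; the study's exact
    aggregation method may differ."""
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]
    return (median(lats), median(lons))

# Hypothetical replicate student points and one expert point for a locality;
# the third student point is an outlier the median largely resists.
students = [(30.10, -84.20), (30.12, -84.18), (30.50, -84.90)]
expert = (30.11, -84.19)

agg = median_point(students)
print(f"median point: {agg}")
print(f"offset from expert: {haversine_km(*agg, *expert):.2f} km")
```

Evaluating each contributor’s offsets this way is also how one could identify the “best 50% of participants” mentioned in the abstract: rank contributors by their mean offset from expert points, then recompute median points from only the top half.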

Published

2016-05-20

Section

Research Papers