To effectively protect biodiversity in an era of climate change, ecologists first have to know where animal and plant species are located and then be able to predict what habitats will be available to them in the future.
To help with these tasks, scientists use species distribution models, which infer species' habitats from observational data and climate scenarios.
Trouble is, these models are often severely limited.
They are often poor at accounting for uncertainty: when a species is not well documented, when the relevant climatic conditions are poorly understood, or when the model itself is imprecise, its predictions can be unreliable.
So when these models are used to guide public policy or to assess the effectiveness of conservation decisions, it becomes crucial to flag when their predictions might be flawed.
This is the methodological problem addressed by Timothée Poisot, a professor in Université de Montréal's Department of Biological Sciences.
In a study published in Advances in Ecological Research, he adapts conformal prediction, a well-established machine-learning method that had not yet been used in biodiversity research, to propose a new approach to mapping the uncertainty of biodiversity scenarios.
How? By using data from sightings of a rather unusual (and fanciful) species: Bigfoot (also known as Sasquatch), the large, hairy, mythical creature that's said to inhabit forests in North America, particularly in the Pacific Northwest.
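The study itself details how conformal prediction is adapted to species distribution models; as a rough illustration of the general idea, the sketch below applies textbook split conformal prediction to a presence/absence classifier. Everything here is an assumption for illustration: the calibration probabilities are made up, not drawn from the Bigfoot dataset, and the `prediction_set` helper is hypothetical, not part of the study's code.

```python
import math

# Hypothetical calibration data: (model probability of "presence", true label).
# These numbers are illustrative only, not from the study's Bigfoot dataset.
calibration = [
    (0.92, 1), (0.85, 1), (0.10, 0), (0.30, 0), (0.75, 1),
    (0.20, 0), (0.45, 1), (0.40, 0), (0.88, 1), (0.15, 0),
]

alpha = 0.1  # target miscoverage: sets should contain the truth ~90% of the time

# Nonconformity score: 1 minus the probability the model gave the true label.
scores = sorted(1 - (p if y == 1 else 1 - p) for p, y in calibration)

# Conformal quantile of the scores, with the finite-sample correction.
n = len(scores)
k = math.ceil((n + 1) * (1 - alpha))
qhat = scores[min(k, n) - 1]

def prediction_set(p_presence):
    """Return every label whose nonconformity score falls within the threshold."""
    candidates = {1: 1 - p_presence, 0: p_presence}
    return {label for label, s in candidates.items() if s <= qhat}

print(prediction_set(0.95))  # {1}: confidently "presence"
print(prediction_set(0.05))  # {0}: confidently "absence"
print(prediction_set(0.50))  # {0, 1}: both labels plausible, flagged as uncertain
```

The appeal for uncertainty mapping is the last case: instead of forcing a single presence/absence call everywhere, cells where the prediction set contains both labels can be drawn on the map as explicitly uncertain.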
"When developing a new method, we often use simulated data, and that always frustrates me because the simulations are too clean," Poisot explained.
"But the community that believes in the existence of Bigfoot has a database of all the sightings, and it’s a dataset that’s perfectly suited to this exercise. So by demonstrating how the method works on realistic data, we’re taking a step back from the biology itself."