In light of a couple of recent posts on parks and conservation lands (on which I hope to comment soon), some of you might be interested in participating in or monitoring the discussion being held this week here. The Designing the Parks initiative is sponsored by the National Park Service, UVA, and a mess of NGOs. In a conference earlier this year they attempted “to assess lessons learned through an examination of park planning and design history” and, come December, they will reconvene in San Francisco to hash out a picture of the future look of our parklands. The latter is the topic of the current online forum. While it’s aimed primarily at landscape architects, we certainly can’t leave them to their own devices.
Testing Methods to Estimate Abundance in a Magellanic Penguin Colony Using GIS – Cecilia Villanueva et al.
How do you estimate the population size of a highly abundant species like the Magellanic penguin? Traditionally, with systematic surveys or complete censuses. Villanueva simulated a colony spatially by interpolating survey data (counts from circular plots) across the study site, then tested various survey methods against the simulation to assess their accuracy. In this way she could estimate the costs and benefits of different survey methods and levels of effort. For this site, systematic sampling was the most accurate. What a nice way of analyzing an important question.
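The logic of testing survey designs against a simulated colony is easy to sketch. This is not Villanueva's actual model, just a toy version with an invented, radially declining density surface standing in for the interpolated one, comparing random and systematic plot sampling:

```python
import random
import statistics

random.seed(42)

# Hypothetical 100 x 100 cell colony: nest density declines from a core at
# the grid centre, standing in for a surface interpolated from plot counts.
SIZE = 100

def density(x, y):
    d = ((x - SIZE / 2) ** 2 + (y - SIZE / 2) ** 2) ** 0.5
    return max(0.0, 8.0 - 0.12 * d)  # nests per cell (invented numbers)

true_total = sum(density(x, y) for x in range(SIZE) for y in range(SIZE))

def estimate(cells):
    """Scale the mean count in sampled cells up to the whole grid."""
    return statistics.mean(density(x, y) for x, y in cells) * SIZE * SIZE

# Random sampling: 100 cells drawn uniformly.
random_cells = [(random.randrange(SIZE), random.randrange(SIZE))
                for _ in range(100)]

# Systematic sampling: a regular 10 x 10 grid of cells, same total effort.
systematic_cells = [(x, y) for x in range(5, SIZE, 10)
                           for y in range(5, SIZE, 10)]

for name, cells in [("random", random_cells),
                    ("systematic", systematic_cells)]:
    err = abs(estimate(cells) - true_total) / true_total
    print(f"{name:>10}: {err:.1%} error")
```

Because the "true" total is known from the simulation, each design's error can be measured directly, which is exactly what makes this approach so useful for costing out survey effort.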
Ordonez gave an overview of species distribution modeling and had a nice slide on all the different programs that help model such things: Bioclim, Domain, Biomapper, Maxent, Garp, Grasp, Species, Biomod, and others. He also referred to a paper (Elith et al., Ecography 2006) that tested the various programs. My own work these days focuses somewhat on species distribution modeling, but I have to admit the whole practice makes me nervous. We already have a word for this science: it’s called “ecology,” the study of the distribution and abundance of species. The fact that a smaller set of ecologists has developed a new word for it suggests that we’ve decided to ignore much of ecology — and evolution — in order to simplify the process. And that’s exactly what we’ve done.
The basic approach in species distribution modeling / habitat suitability modeling is to collect occurrence data for the target species, collect spatially explicit environmental variables that might determine the limits of the species’ range, and then model where you would expect to find the species in your study area. In general, it works okay (no models are real, some are good, &c &c). It can be useful, especially in conservation, when rigorous surveys aren’t feasible. But people have been using these methods to model current distribution, and then predict range shifts based on climate change. As with yesterday’s biophysics approach to modeling flyways, such a study ignores things like behavior and adaptation. What limits the current generation might not do so in other conditions.
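Since Bioclim came up: the simplest version of this approach is a climate envelope, where a cell counts as suitable if every environmental variable falls inside the range observed at the known occurrences. A minimal, purely illustrative sketch (species data and values all invented):

```python
# Bioclim-style envelope model. Each point is a pair of hypothetical
# environmental values: (mean annual temperature in C, annual precip in mm).
occurrences = [(12.1, 800), (13.4, 950), (11.8, 720), (14.0, 880)]

# Build the envelope: per-variable min and max across the occurrences.
lo = tuple(min(p[i] for p in occurrences) for i in range(2))
hi = tuple(max(p[i] for p in occurrences) for i in range(2))

def suitable(cell):
    """A cell is suitable only if every variable lies inside the envelope."""
    return all(lo[i] <= cell[i] <= hi[i] for i in range(2))

# Score some candidate grid cells: inside, too warm, too wet.
cells = [(12.5, 850), (16.0, 700), (11.9, 1100)]
print([suitable(c) for c in cells])  # → [True, False, False]
```

The nervous-making part is visible right in the code: the envelope is built entirely from where the species is observed today, so behavior, biotic interactions, and adaptation never enter the prediction.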
Modeling Bird Migration in Changing Habitats: Space-Based Ornithology Using Satellites and GIS – James Smith, NASA Goddard Space Flight Center
Smith tracked potential shifts in migration due to changing wetlands conditions. He built a mechanistic model of bird migration (i.e. the physics and energetics of flying) and coupled that with the impacts of changing resource availability along flyways. First he defined a “climate space” (based on air temperature and solar radiation) that shows where and when migration can occur; then he added a component based on the energetics of flying to create an individual-based model of daily migration routes that “simulates the migration routes, timing and energy budgets of individual birds under dynamic weather and land surface conditions.” He also incorporated “evolutionary learning” by including behavioral responses to the landscape, and allowing the top performers to become the breeders at the end of a season.
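The top-performers-become-breeders idea is simple to sketch. This is not Smith's model, just a toy individual-based version with one heritable behavioral trait, the fuel load at which a bird leaves a stopover, and entirely invented stopover rates and flight costs:

```python
import random
import statistics

random.seed(1)

STOPOVERS = [5.0, 3.0, 1.0, 4.0]   # refuelling rate at each site (invented)
LEG_COST = 6.0                      # energy burned per flight leg (invented)

def migrate(threshold):
    """Return energy remaining at the end of the route (higher = better)."""
    energy = 10.0
    for rate in STOPOVERS:
        days = 0
        while energy < threshold and days < 10:   # refuel until threshold
            energy += rate
            days += 1
        energy -= LEG_COST + 0.2 * days           # fly on; waiting costs too
    return energy

# "Evolutionary learning": rank the population by migration performance
# each season and let the top performers breed the next generation.
population = [random.uniform(6, 20) for _ in range(50)]  # initial thresholds
for generation in range(20):
    ranked = sorted(population, key=migrate, reverse=True)
    breeders = ranked[:10]
    population = [random.choice(breeders) + random.gauss(0, 0.5)
                  for _ in range(50)]

print(f"evolved departure threshold ≈ {statistics.mean(population):.1f}")
```

Even this crude version shows the appeal: change the resource availability in `STOPOVERS` and the behavioral trait that evolves changes with it, which is exactly the kind of question the real model asks about degrading wetlands.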
It surely says something about me that (with reservations) I think this is very cool! It’s always neat to have this sort of cross-disciplinary stuff: modeling the energetics of an individual is such a physicist way of thinking about movement ecology. It also doesn’t hurt to have the computing power of NASA. There’s been plenty said about the coming convergence of computing power, neural networks, and the human brain. As computers become more powerful, and our ability to model evolutionary and ecological processes improves, I think we’ll be provided with incredibly useful tools to study conservation.
Of course, field biologists will be, rightfully, crying foul: nature is incredibly complex and while it may follow certain patterns, the details are nearly impossible to define. Smith’s model doesn’t include multi-species interactions (e.g. competition and predation), and it didn’t appear to predict the fact that his study species, the Pectoral Sandpiper, is a frequent migrant to the UK. No doubt that tug of war between folks in the field and modelers will continue, but there’s a balance to be struck. Models like this, by people removed from the traditions of the field, can help to expose previously-ignored pressure points in ecology. I think the word I’m looking for is “refreshing.”
Hamilton presented his work in the James Reserve and the recently-created Blue Oak Ranch Reserve, University of California research stations near the San Jacinto Mountains and in San Jose, respectively. He has worked over the past 20 years to deploy insane amounts of networked monitoring tools to create a “macroscope” view of the ecological landscape: microclimate sensors, cold-air-drainage monitors, nest boxes, pitfall traps, camera traps, towers with video, nitrogen sensing, microphone arrays, underground root-observing microscopes, autonomous underwater and flying robots. Uh, yeah, you read right. Autonomous underwater robots.
Rybock presented information on a new effort between NatureServe and National Geographic to create an on-line atlas of conservation areas in the U.S. LandScope “presents detailed geospatial data, along with compelling writing, photography, and video describing conservation priorities, threats, and protected areas.”
NatureServe, as I understand it, grew out of a TNC initiative to develop spatial datasets and provide them to researchers on-line. It seems that, throughout their existence, they’ve struggled with presenting conservation content on the internet: who is their audience? How do they get the data they need, and how is it updated? Much like the rest of the web, a lot of these projects try to do too much (“Let’s get researchers to upload all of their spatial data to one place!”), and the effort dies out. The really useful sites are ones that present novel data-sets for specific purposes. If you need a land-cover dataset, it’s much easier to find it at the host institution rather than going through some aggregated site. “One-stop shops” tend not to carry the stock that they promise. Regardless, it’s National Geographic, so the site is definitely beautiful. I hope they push through that nebulous barrier and keep this thing going.
[Edit: Oops! I forgot that NatureServe developed its NatureServe Explorer and InfoNatura, presenting datasets of species ranges in North and South America, which I've used in the past. They are useful -- but, again, not comprehensive.]
Identifying Species at Risk from Environmental Change: Habitat Specificity of Amazonian Plants – Hannah Stevens
Stevens used climatic and soil data to delineate habitat types in the Amazon, and then used data from the Global Biodiversity Information Facility to identify plants in the Amazon that are limited by the number of habitats in which they’re found. She found that about 10% of plant species are habitat restricted. As the climate changes and these habitats shift, those ~4,500 plants may be in trouble.
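The habitat-restriction screen itself is just bookkeeping over occurrence records. A minimal sketch (species names and habitat labels made up, flagging species confined to a single habitat rather than whatever threshold Stevens actually used):

```python
# Occurrence records as (species, habitat type) pairs, GBIF-style but invented.
records = [
    ("Inga sp. A", "white-sand forest"),
    ("Inga sp. A", "white-sand forest"),
    ("Protium sp. B", "terra firme"),
    ("Protium sp. B", "varzea"),
    ("Pouteria sp. C", "varzea"),
]

# Collect the set of habitat types each species occurs in.
habitats = {}
for species, habitat in records:
    habitats.setdefault(species, set()).add(habitat)

# Flag species found in exactly one habitat type.
restricted = [s for s, h in habitats.items() if len(h) == 1]
fraction = len(restricted) / len(habitats)
print(restricted, fraction)
```

Scale the same tally up to GBIF's Amazonian plant records and you get the ~10% figure she reported.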
As a scholar of the checkerspot butterfly, Weiss is concerned that macroclimatic models of climate change are irrelevant to species with small home ranges. His talk presented methods for downscaling climate data to model mesoclimates (1–100 km), topoclimates (10 m–1 km), and microclimates. By using iButtons (tiny, cheap climate sensors — we’re using them at my field site in Carrizo and, despite being kind of finicky to get started, they are very cool) and a multiple regression on topographic features, he can model quite accurately (r² = 0.93) how temperature is distributed across the site. By extending this model across the mountain range, temperature can be modeled at much higher resolution than in the current country-wide climate change models.
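The regression step is standard multiple regression: temperature observed at the sensors, predicted from topographic covariates. A self-contained sketch with invented data (elevation and a made-up aspect index as the two predictors; Weiss's actual covariates were presumably richer):

```python
import random

random.seed(0)

def fit_ols(X, y):
    """Ordinary least squares via the normal equations (tiny dense solver)."""
    n, k = len(X), len(X[0])
    # Build X^T X and X^T y.
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Gauss-Jordan elimination (fine for a handful of predictors).
    for i in range(k):
        piv = xtx[i][i]
        xtx[i] = [v / piv for v in xtx[i]]
        xty[i] /= piv
        for r in range(k):
            if r != i:
                f = xtx[r][i]
                xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[i])]
                xty[r] -= f * xty[i]
    return xty  # regression coefficients

# Synthetic sensor "observations": T = 30 - 6.5*elev(km) - 2*aspect + noise.
X, y = [], []
for _ in range(40):
    elev, aspect = random.uniform(0, 2), random.uniform(-1, 1)
    X.append((1.0, elev, aspect))  # leading 1.0 is the intercept column
    y.append(30 - 6.5 * elev - 2 * aspect + random.gauss(0, 0.3))

b0, b_elev, b_aspect = fit_ols(X, y)
print(f"T ≈ {b0:.1f} + {b_elev:.1f}*elev + {b_aspect:.1f}*aspect")
```

Once fitted at the sensor locations, the same equation can be evaluated at every cell of a fine-resolution terrain grid, which is the downscaling trick.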
Paul Beier gave the keynote this morning and talked about his work in southern California on connectivity issues for wildlife (originally and mostly mountain lions / cougars / pumas / catamounts). He spoke, of course, about the importance of maintaining connectivity among wildlands — none of the mountain ranges in the Los Angeles area are big enough on their own to sustain mountain lion populations — and the utility of involving key stakeholders, etc. But the most important thing, to me, was his final slide. He showed a picture where four highways, a railroad, the California aqueduct, and high-voltage power lines all crossed, and then pointed out that above them all stood a key wildlife corridor. He indirectly suggested that in highly-disturbed environments we push for “greenness” to be considered an element of our infrastructure. This is the sort of mental jiu-jitsu that can succeed in pulling important governmental agencies into working toward, and taking responsibility for, conservation. Convincing Caltrans (which Beier’s done!) to participate in conservation programs is a brilliant way of increasing the number of stakeholders.