Posted on 23 April 2018
By Emelin Gasparrini, WWF Forest and Climate
From above, tropical forests are carpets of green rolling across the landscape. Produced by the chlorophyll in a plant’s leaves, that green is what some satellites detect when scanning forests for signs of deforestation. This optical data can be crucial for forest monitoring, especially in hard to reach landscapes. However, the simple presence of green in a landscape can unintentionally obscure our ability to track one particular driver of deforestation – oil palm.
A joint WWF-NASA Jet Propulsion Laboratory project seeks to pierce through that obscurity with an innovative mapping algorithm, to be published later this year. Rather than simply detecting chlorophyll’s distinctive green, this new approach responds to the structure of a forest canopy, separating mixed natural forests from homogeneous oil palm plantations.
The algorithm itself is not the only innovative part of this story. Project leaders Naiara Pinto of JPL and Naikoa Aguilar-Amuchastegui of WWF Forest and Climate took a highly participatory approach to its development, crowdsourcing the validation process with field partners from WWF-Indonesia to improve the algorithm’s results.
“This is an aspect that the remote sensing community has strong opinions about,” says Aguilar-Amuchastegui. “Some are against the use of crowdsourced information to validate products because not everyone knows how to interpret the data, but we challenge this because people on the ground know what’s there.”
He adds, “If things are done well and there are protocols to follow it could be a very powerful wave for algorithm developers to enhance products. We see it all the time to improve user end products when people sign up as beta testers, and developers use their information to make improvements. We are showing how science can also do that.”
Pinto agrees. “What most people don’t realize is that map making is very iterative – maps should not be static products. With remote sensing, we should use as much as we know about physics, but also allow local information to refine results.”
To facilitate the refinement, Pinto and Aguilar-Amuchastegui worked with Christoph Perger from Spatial Focus using IIASA’s online platform LACO-Wiki, which is designed “for validating land cover and land use maps.” But first they had to generate data for their partners to validate.
The first iteration started with existing Synthetic Aperture Radar data for an area with known oil palm presence, and the model produced an oil palm presence likelihood map layer, known as a soft prediction. This kind of prediction produces a color gradient map, each color on the spectrum indicating the model’s estimated likelihood of the presence of oil palm. From there, the information was converted into a presence/absence map, or hard prediction, which was used by LACO-Wiki to generate random points for the campaign. Now they were ready for validation.
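The soft-to-hard conversion and random point generation described above can be sketched in a few lines. This is a minimal illustration, not the project’s code: the 0.5 threshold, grid size, and point count are assumptions chosen for the example, and the likelihood map here is random data standing in for real model output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical soft prediction: a per-pixel likelihood of oil palm
# presence (0 to 1), standing in for the model's output over SAR imagery.
soft_prediction = rng.random((100, 100))

# Convert to a hard presence/absence map by thresholding.
# The 0.5 cutoff is an illustrative assumption, not the project's value.
THRESHOLD = 0.5
hard_prediction = soft_prediction >= THRESHOLD

# Draw random pixel locations from the hard prediction as validation
# points, mimicking LACO-Wiki's random point generation for a campaign.
n_points = 20
rows = rng.integers(0, hard_prediction.shape[0], n_points)
cols = rng.integers(0, hard_prediction.shape[1], n_points)
validation_points = [
    {"row": int(r), "col": int(c), "predicted_oil_palm": bool(hard_prediction[r, c])}
    for r, c in zip(rows, cols)
]
```

Each point then carries the model’s claim for that location, ready for a validator to confirm or reject.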
LACO-Wiki took those validation points and displayed high spatial resolution satellite imagery of each point’s location, drawn from existing online sources such as Bing Maps, Google Maps, or Google Earth, alongside the model’s classification of that point as containing oil palm or not. The user was then prompted to mark the classification as correct or incorrect. This process served two functions, and the first was fairly standard: to assess the performance of the hard prediction produced by the algorithm.
But the second was decidedly less so: to collect information for model recalibration based on the Indonesian teams’ local knowledge of the area. Not only did this strengthen the results of the model, it allowed additional areas with known oil palm presence to be collected, building a larger training data set and improving the results. In this way, the partners in the field – the end users – become involved in the actual development of the algorithm and the maps instead of simply being recipients of the information produced.
It also makes the process of assessing the confidence in the map results more quantifiable. Pinto adds, “Sometimes it is hard to quantify the errors, but through this process we can see – if I add the local information, how much did my model improve?”
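Pinto’s question – “if I add the local information, how much did my model improve?” – boils down to comparing the hard prediction’s agreement with validator labels before and after recalibration. A minimal sketch, with entirely invented labels and predictions for illustration:

```python
# Hypothetical validator labels and model predictions at sampled points
# (True = oil palm present). All values here are made up for the example.
validator_labels     = [True, True, False, False, True, False, True, False]
before_recalibration = [True, False, False, True, False, False, True, False]
after_recalibration  = [True, True, False, False, True, False, True, True]

def accuracy(predictions, labels):
    """Fraction of points where the hard prediction matches the validator."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

improvement = (accuracy(after_recalibration, validator_labels)
               - accuracy(before_recalibration, validator_labels))
print(f"before:      {accuracy(before_recalibration, validator_labels):.3f}")
print(f"after:       {accuracy(after_recalibration, validator_labels):.3f}")
print(f"improvement: {improvement:+.3f}")
```

The same comparison works with any agreement metric; simple accuracy is used here only to keep the sketch short.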
The additional data was provided by the testing team after a capacity-building workshop, where Pinto and Aguilar-Amuchastegui showed them how to set up their own data and run the algorithm without external assistance. “This makes it iterative and turns the process into a two-way street,” says Aguilar-Amuchastegui. “Local experts are not just validating an existing – static – final product, they are also given the tools to improve it if they want to.”
And those improvements have already produced results. In further iterations the mapping area was expanded to include all of southern Central Kalimantan using Sentinel-1 data, and new oil palm areas have already been detected. In addition to publishing the results, the team is also planning a second capacity-building workshop to further empower partners in the field and ensure the algorithm can be put to use.
Usability is crucial, because deforestation from oil palm expansion is an important issue beyond conservation, too. Global corporations that have committed to removing deforestation from their supply chains are also concerned about natural forests being converted into oil palm plantations.
As companies commit to more forest-friendly sourcing practices, they need better information about the commodities they are buying to make sure they are following through on those commitments. This algorithm will provide an additional layer of that information, which will support better conservation outcomes and commodity sourcing decisions to protect natural forests from the threat of oil palm expansion.