In spatial predictive mapping, models are often applied to make predictions far beyond the sampling locations (e.g. field observations used to map a variable even on a global scale), where new locations may differ considerably in their environmental properties. However, areas in the predictor space without support of training data are problematic. The model has not been enabled to learn about relationships in these environments, and predictions for such areas must be considered highly uncertain.
In CAST, we implement the methodology described in Meyer & Pebesma (2021) to estimate the “area of applicability” (AOA) of (spatial) prediction models. The AOA is defined as the area where we enabled the model to learn about relationships based on the training data, and where the estimated cross-validation performance holds. To delineate the AOA, first a dissimilarity index (DI) is calculated, based on distances to the training data in the multidimensional predictor variable space. To account for the relevance of predictor variables responsible for prediction patterns, we weight the variables by the model-derived importance scores prior to distance calculation. The AOA is then derived by applying a threshold based on the DI observed in the training data using cross-validation.
This tutorial shows an example of how to estimate the area of applicability of spatial prediction models.
For further information see: Meyer, H., & Pebesma, E. (2021). Predicting into unknown space? Estimating the area of applicability of spatial prediction models. Methods in Ecology and Evolution, 12, 1620– 1633. [https://doi.org/10.1111/2041-210X.13650]
library(CAST)
library(caret)
library(terra)
#library(sp)
library(sf)
library(viridis)
library(latticeExtra)
library(gridExtra)
As predictor variables, a set of bioclimatic variables is used (https://www.worldclim.org). For this tutorial, they were originally downloaded using the getData function from the raster package and then cropped to an area in central Europe. The cropped data are provided in the CAST package.
predictors <- rast(system.file("extdata","bioclim.grd",package="CAST"))
plot(predictors,col=viridis(100))
To be able to test the reliability of the method, we’re using a simulated prediction task. We therefore simulate a virtual response variable from the bioclimatic variables.
generate_random_response <- function(raster, predictornames =
names(raster), seed = sample(seq(1000), 1)){
  operands_1 = c("+", "-", "*", "/")
  operands_2 = c("^1","^2")

  expression <- paste(as.character(predictornames, sep=""))
  # assign random power to predictors
  set.seed(seed)
  expression <- paste(expression,
                      sample(operands_2, length(predictornames),
                             replace = TRUE),
                      sep = "")

  # assign random math function between predictors (except after the last one)
  set.seed(seed)
  expression[-length(expression)] <- paste(expression[-length(expression)],
                                           sample(operands_1,
                                                  length(predictornames)-1, replace = TRUE),
                                           sep = " ")
  print(paste0(expression, collapse = " "))

  # collapse
  e = paste0("raster$", expression, collapse = " ")

  response = eval(parse(text = e))
  names(response) <- "response"
  return(response)
}

response <- generate_random_response(predictors, seed = 10)
## [1] "bio2^1 * bio5^1 + bio10^2 - bio13^2 / bio14^2 / bio19^1"
plot(response,col=viridis(100),main="virtual response")
To simulate a typical prediction task, field sampling locations are randomly selected. Here, we randomly select 20 points. Note that this is a very small data set, but used here to avoid long computation times.
mask <- predictors[[1]]
values(mask)[!is.na(values(mask))] <- 1
mask <- st_as_sf(as.polygons(mask))
mask <- st_make_valid(mask)

set.seed(15)
samplepoints <- st_as_sf(st_sample(mask,20,"random"))
plot(response,col=viridis(100))
plot(samplepoints,col="red",add=T,pch=3)
Next, a machine learning algorithm will be applied to learn the relationships between predictors and response.
Therefore, predictors and response are extracted for the sampling locations.
trainDat <- extract(predictors,samplepoints,na.rm=FALSE)
trainDat$response <- extract(response,samplepoints,na.rm=FALSE,ID=FALSE)$response
trainDat <- na.omit(trainDat)
Random Forest is applied here as the machine learning algorithm (others can be used as well, as long as variable importance is returned). The model is validated by default cross-validation to estimate the prediction error.
set.seed(10)
model <- train(trainDat[,names(predictors)],
               trainDat$response,
               method="rf",
               importance=TRUE,
               trControl = trainControl(method="cv"))
print(model)
## Random Forest
##
## 20 samples
## 6 predictor
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 18, 18, 18, 18, 18, 18, ...
## Resampling results across tuning parameters:
##
## mtry RMSE Rsquared MAE
## 2 3712.212 1 3227.393
## 4 3032.032 1 2614.909
## 6 2880.079 1 2477.548
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was mtry = 6.
The estimation of the AOA will require the importance of the individual predictor variables.
plot(varImp(model,scale = F),col="black")
The trained model is then used to make predictions for the entire area of interest. Since a simulated area-wide response is used, it’s possible in this tutorial to compare the predictions with the true reference.
prediction <- predict(predictors,model,na.rm=T)
truediff <- abs(prediction-response)
plot(rast(list(prediction,response)),main=c("prediction","reference"))
The visualization above shows the predictions made by the model. In the next step, the DI and AOA will be calculated.
The AOA calculation takes the model as input to extract the importance of the predictors, which is used as weights in the multidimensional distance calculation. Note that the AOA can also be calculated without a trained model (i.e. using training data and new data only). In this case, all predictor variables are treated as equally important (unless weights are given in form of a table).
AOA <- aoa(predictors, model)
class(AOA)
## [1] "aoa"
names(AOA)
## [1] "parameters" "DI" "AOA"
print(AOA)
## DI:
## class : SpatRaster
## dimensions : 102, 123, 1 (nrow, ncol, nlyr)
## resolution : 14075.98, 14075.98 (x, y)
## extent : 3496791, 5228136, 2143336, 3579086 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +units=m +no_defs
## source(s) : memory
## name : DI
## min value : 0.000000
## max value : 3.381338
## AOA:
## class : SpatRaster
## dimensions : 102, 123, 1 (nrow, ncol, nlyr)
## resolution : 14075.98, 14075.98 (x, y)
## extent : 3496791, 5228136, 2143336, 3579086 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=laea +lat_0=52 +lon_0=10 +x_0=4321000 +y_0=3210000 +ellps=GRS80 +units=m +no_defs
## source(s) : memory
## name : AOA
## min value : 0
## max value : 1
##
##
## Predictor Weights:
## bio2 bio5 bio10 bio13 bio14 bio19
## 1 3.858957 19.81684 15.89291 3.55824 0.7102592 0
##
##
## AOA Threshold: 0.4528229
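As noted above, the AOA can also be estimated without a trained model, based on the training data and new data only. A hedged sketch of such a call (the argument name `train` is an assumption and may differ between CAST versions; see `?aoa`):

```r
# Sketch: AOA without a model; variables are then treated as equally
# important unless a weight table is supplied (check ?aoa for your version)
AOA_no_model <- aoa(predictors, train = trainDat[,names(predictors)])
```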
Plotting the aoa object shows the distribution of DI values within the training data and the DI of the new data.
plot(AOA)
The most important output of the aoa function are two rasters: the first is the DI, the normalized and weighted minimum distance to a nearest training data point divided by the average distance within the training data. The AOA is derived from the DI by using a threshold: the (outlier-removed) maximum DI observed in the training data, where the DI of the training data is calculated by considering the cross-validation folds. The threshold used and all relevant information about the training data DI are returned in the parameters list entry.
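The DI described above can be sketched for a single new location as follows. This is a simplified illustration only: it omits the scaling and importance weighting of the predictors and the cross-validation-aware threshold that CAST applies, and all names are hypothetical.

```r
# Simplified sketch of the DI for one new location (illustration only;
# CAST additionally scales predictors, weights them by importance and
# derives the threshold from the CV folds)
di_sketch <- function(new_x, train_x){
  # Euclidean distance to the nearest training point
  d_min  <- min(sqrt(rowSums((t(t(train_x) - new_x))^2)))
  # average pairwise distance within the training data
  d_mean <- mean(dist(train_x))
  d_min / d_mean
}
```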
We can plot the DI as well as the predictions only within the AOA:
plot(truediff,col=viridis(100),main="true prediction error")
plot(AOA$DI,col=viridis(100),main="DI")
plot(prediction, col=viridis(100),main="prediction for AOA")
plot(AOA$AOA,col=c("grey","transparent"),add=T,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
The patterns in the DI are in general agreement with the true prediction error. Very high values are present in the Alps, as they have not been covered by training data but feature very distinct environmental conditions. Since the DI values for these areas are above the threshold, we regard this area as outside the AOA.
The example above had randomly distributed training samples. However, sampling locations might also be highly clustered in space. In this case, random cross-validation is not meaningful (see e.g. Meyer et al. 2018, Meyer et al. 2019, Valavi et al. 2019, Roberts et al. 2018, Pohjankukka et al. 2017, Brenning 2012). The threshold for the AOA is then not reliable either, because it is based on the distance to a nearest data point within the training data (which is usually very small when data are clustered). Instead, cross-validation should be based on a leave-cluster-out approach, and the AOA estimation on distances to a nearest data point not located in the same spatial cluster.
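The idea of measuring distances only to training points outside a point's own spatial cluster can be sketched as follows. This is a hypothetical helper for illustration, not part of CAST's API; CAST performs this computation internally based on the cross-validation folds.

```r
# Hedged sketch: minimum distance from training point i to training points
# NOT in the same cluster ("cluster" is a hypothetical fold/cluster id
# vector; the real computation happens inside CAST)
min_dist_outside_cluster <- function(i, X, cluster){
  other <- X[cluster != cluster[i], , drop = FALSE]
  min(sqrt(rowSums((t(t(other) - X[i, ]))^2)))
}
```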
To show what this looks like, we use 15 spatial locations and simulate 5 data points around each location.
set.seed(25)
samplepoints <- clustered_sample(mask,75,15,radius=25000)
plot(response,col=viridis(100))
plot(samplepoints,col="red",add=T,pch=3)
trainDat <- extract(predictors,samplepoints,na.rm=FALSE)
trainDat$response <- extract(response,samplepoints,na.rm=FALSE)$response
trainDat <- data.frame(trainDat,samplepoints)
trainDat <- na.omit(trainDat)
We first train a model with (in this case) inappropriate random cross-validation.
set.seed(10)
model_random <- train(trainDat[,names(predictors)],
                      trainDat$response,
                      method="rf",
                      importance=TRUE,
                      trControl = trainControl(method="cv"))
prediction_random <- predict(predictors,model_random,na.rm=TRUE)
print(model_random)
## Random Forest
##
## 75 samples
## 6 predictor
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 68, 67, 68, 68, 68, 67, ...
## Resampling results across tuning parameters:
##
## mtry RMSE Rsquared MAE
## 2 1176.557 0.9949208 872.2160
## 4 986.404 0.9961500 743.4851
## 6 1046.026 0.9952013 782.2645
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was mtry = 4.
…and a model based on leave-cluster-out cross-validation.
folds <- CreateSpacetimeFolds(trainDat, spacevar="parent",k=10)
set.seed(15)
model <- train(trainDat[,names(predictors)],
               trainDat$response,
               method="rf",
               importance=TRUE,
               tuneGrid = expand.grid(mtry = c(2:length(names(predictors)))),
               trControl = trainControl(method="cv",index=folds$index))
print(model)
## Random Forest
##
## 75 samples
## 6 predictor
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 65, 70, 65, 70, 70, 65, ...
## Resampling results across tuning parameters:
##
## mtry RMSE Rsquared MAE
## 2 3044.489 0.9424818 2572.770
## 3 2500.727 0.9640092 2087.498
## 4 2462.658 0.9630713 2050.879
## 5 2549.798 0.9486584 2121.558
## 6 2641.278 0.9423514 2206.040
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was mtry = 4.
prediction <- predict(predictors,model,na.rm=TRUE)
The AOA is then calculated twice for comparison: first using the model validated by random cross-validation, and second by taking the spatial clusters into account and calculating the threshold based on minimum distances to a nearest training point not located in the same cluster. This is done in the aoa function, where the folds used for cross-validation are automatically extracted from the model.
AOA_spatial <- aoa(predictors, model)

AOA_random <- aoa(predictors, model_random)
plot(AOA_spatial$DI,col=viridis(100),main="DI")
plot(prediction, col=viridis(100),main="prediction for AOA \n(spatial CV error applies)")
plot(AOA_spatial$AOA,col=c("grey","transparent"),add=TRUE,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
plot(prediction_random, col=viridis(100),main="prediction for AOA \n(random CV error applies)")
plot(AOA_random$AOA,col=c("grey","transparent"),add=TRUE,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
Note that the AOA is much larger for the spatial CV approach. However, the spatial cross-validation error is considerably larger, hence also the area for which this error applies is larger. The random cross-validation performance is very high, but the area to which that performance applies is small. This is also apparent if you plot the aoa objects, which display the distribution of the DI of the training data as well as the DI of the new data. For random CV, most of the predictionDI is larger than the AOA threshold determined by the trainDI. Using spatial CV, the predictionDI is well within the DI of the training samples.
grid.arrange(plot(AOA_spatial) + ggplot2::ggtitle("Spatial CV"),
plot(AOA_random) + ggplot2::ggtitle("Random CV"), ncol = 2)
Since we used a simulated response variable, we can now compare the prediction error within the AOA with the model error, assuming that the model error applies inside the AOA but not outside.
###for the spatial CV:
RMSE(values(prediction)[values(AOA_spatial$AOA)==1],
values(response)[values(AOA_spatial$AOA)==1])
## [1] 3272.494
RMSE(values(prediction)[values(AOA_spatial$AOA)==0],
values(response)[values(AOA_spatial$AOA)==0])
## [1] 9816.195
model$results
## mtry RMSE Rsquared MAE RMSESD RsquaredSD MAESD
## 1 2 3044.489 0.9424818 2572.770 2372.231 0.07664328 1967.022
## 2 3 2500.727 0.9640092 2087.498 1920.640 0.03588806 1518.062
## 3 4 2462.658 0.9630713 2050.879 1848.606 0.03378018 1433.924
## 4 5 2549.798 0.9486584 2121.558 1794.440 0.05572419 1316.847
## 5 6 2641.278 0.9423514 2206.040 1807.013 0.06276768 1340.278
###and for the random CV:
RMSE(values(prediction_random)[values(AOA_random$AOA)==1],
values(response)[values(AOA_random$AOA)==1])
## [1] 1436.841
RMSE(values(prediction_random)[values(AOA_random$AOA)==0],
values(response)[values(AOA_random$AOA)==0])
## [1] 3947.403
model_random$results
## mtry RMSE Rsquared MAE RMSESD RsquaredSD MAESD
## 1 2 1176.557 0.9949208 872.2160 701.6381 0.006071611 423.2547
## 2 4 986.404 0.9961500 743.4851 564.1099 0.004930192 331.6885
## 3 6 1046.026 0.9952013 782.2645 676.8563 0.006977909 365.3408
The results indicate a high agreement between the model CV error (RMSE) and the true prediction RMSE. This is the case for both the random and the spatial model.
The relationship between error and DI can be used to limit predictions to an area (within the AOA) where a required performance (e.g. RMSE, R2, Kappa, Accuracy) applies. This can be done using the result of calibrate_aoa, which analyzes the relationship in a moving window of DI values. The corresponding model (here the default: shape-constrained additive models, i.e. monotone increasing P-splines with a basis dimension of 6 for the smooth term and a 2nd-order penalty) can be used to estimate the performance on a pixel level, which then allows limiting predictions using a threshold. Note that we used a multi-purpose CV to estimate the relationship between the DI and the RMSE here (see details in the paper).
AOA_calib <- calibrate_aoa(AOA_spatial,model,window.size = 5,length.out = 5, multiCV=TRUE,showPlot=FALSE)
AOA_calib$plot
plot(AOA_calib$AOA$expected_RMSE,col=viridis(100),main="expected RMSE")
plot(AOA_calib$AOA$AOA,col=c("grey","transparent"),add=TRUE,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
The example above used simulated data, which allowed us to analyze the reliability of the AOA. However, a simulated area-wide response is not available in usual prediction tasks. Therefore, as a second example, the AOA is estimated for a dataset that has only point observations as a reference.
To do so, we will work with the cookfarm dataset, described e.g. in Gasch et al. 2015. The dataset included in CAST is a re-structured version; find more details in the vignette “Introduction to CAST”. We will use soil moisture (VW) as the response variable here. Hence, we're aiming at a spatially continuous prediction based on limited measurements from data loggers.
dat <- get(load(system.file("extdata","Cookfarm.RData",package="CAST")))
# calculate average of VW for each sampling site:
dat <- aggregate(dat[,c("VW","Easting","Northing")],by=list(as.character(dat$SOURCEID)),mean)
# create sf object from the data:
pts <- st_as_sf(dat,coords=c("Easting","Northing"))
##### Extract Predictors for the locations of the sampling points
studyArea <- rast(system.file("extdata","predictors_2012-03-25.grd",package="CAST"))
st_crs(pts) <- crs(studyArea)
trainDat <- extract(studyArea,pts,na.rm=FALSE)
pts$ID <- 1:nrow(pts)
trainDat <- merge(trainDat,pts,by.x="ID",by.y="ID")
# The final training dataset with potential predictors and VW:
head(trainDat)
## ID DEM TWI BLD NDRE.M NDRE.Sd Bt Easting Northing
## 1 1 788.1906 4.304258 1.42 -0.051189531 0.2506899 0.0000 493384 5180587
## 2 2 788.3813 3.863605 1.29 -0.046459336 0.1754623 0.0000 493514 5180567
## 3 3 790.5244 3.947488 1.36 -0.040845532 0.2225785 0.0000 493574 5180577
## 4 4 775.7229 5.395786 1.55 -0.004329725 0.2099845 0.0501 493244 5180587
## 5 5 796.7618 3.534822 1.31 0.027252737 0.2002646 0.0000 493624 5180607
## 6 6 795.8370 3.815516 1.40 -0.123434804 0.2180606 0.0000 493694 5180607
## MinT_wrcc MaxT_wrcc Precip_cum cday Precip_wrcc Group.1 VW
## 1 1.1 36.2 10.6 15425 0 CAF003 0.2938029
## 2 1.1 36.2 10.6 15425 0 CAF007 0.2737227
## 3 1.1 36.2 10.6 15425 0 CAF009 0.2723993
## 4 1.1 36.2 10.6 15425 0 CAF019 0.3134010
## 5 1.1 36.2 10.6 15425 0 CAF031 0.2751161
## 6 1.1 36.2 10.6 15425 0 CAF033 0.2602674
## geometry
## 1 POINT (493383.1 5180586)
## 2 POINT (493510.7 5180568)
## 3 POINT (493574.6 5180573)
## 4 POINT (493246.6 5180590)
## 5 POINT (493628.3 5180612)
## 6 POINT (493692.2 5180610)
A set of variables is used as predictors for VW in a Random Forest model. The model is validated with leave-one-out cross-validation. Note that the model performance is very low, due to the small dataset used here (and, for this small dataset, a low ability of the predictors to model VW).
predictors <- c("DEM","NDRE.Sd","TWI","Bt")
response <- "VW"

model <- train(trainDat[,predictors],trainDat[,response],
               method="rf",tuneLength=3,importance=TRUE,
               trControl=trainControl(method="LOOCV"))
model
## Random Forest
##
## 42 samples
## 4 predictor
##
## No pre-processing
## Resampling: Leave-One-Out Cross-Validation
## Summary of sample sizes: 41, 41, 41, 41, 41, 41, ...
## Resampling results across tuning parameters:
##
## mtry RMSE Rsquared MAE
## 2 0.03878560 0.001416717 0.03071621
## 3 0.03905656 0.003966208 0.03126685
## 4 0.03920408 0.003327775 0.03137010
##
## RMSE was used to select the optimal model using the smallest value.
## The final value used for the model was mtry = 2.
Next, the model is used to make predictions for the entire study area.
#Predictors:
plot(stretch(studyArea[[predictors]]))
#prediction:
prediction <- predict(studyArea,model,na.rm=TRUE)
Next we’re limiting the predictions to the AOA. Predictions outside the AOA should be excluded.
AOA <- aoa(studyArea,model)
#### Plot results:
plot(AOA$DI,col=viridis(100),main="DI with sampling locations (red)")
plot(pts,zcol="ID",col="red",add=TRUE)
plot(prediction, col=viridis(100),main="prediction for AOA \n(LOOCV error applies)")
plot(AOA$AOA,col=c("grey","transparent"),add=TRUE,plg=list(x="topleft",box.col="black",bty="o",title="AOA"))
Meyer, H., & Pebesma, E. (2022): Machine learning-based global maps of ecological variables and the challenge of assessing them. Nature Communications. Accepted.
Meyer, H., & Pebesma, E. (2021). Predicting into unknown space? Estimating the area of applicability of spatial prediction models. Methods in Ecology and Evolution, 12, 1620– 1633. [https://doi.org/10.1111/2041-210X.13650]
Tutorial (https://youtu.be/EyP04zLe9qo) and Lecture (https://youtu.be/OoNH6Nl-X2s) recording from OpenGeoHub summer school 2020 on the area of applicability. As well as talk at the OpenGeoHub summer school 2021: https://av.tib.eu/media/54879