Clipping several rasters with a multi-polygon shapefile

The problem!

Imagine this situation: you have several global raster files and a shapefile with a few areas (e.g. natural parks). You want to generate a raster file using each polygon in the shapefile as a mask, for each of the original rasters. So, if you have 5 global rasters and a shapefile with 10 polygons, the output would be 50 rasters (a smaller raster for each polygon, cut from each of the 5 larger rasters).
Well, I had this problem, as you might have guessed. I looked for existing solutions but could not find any. I'm sure there are some out there… but let me show you mine!

This is a simple task, but if it is not automated it's a grueling one… So the natural step, for me, was to use R to automate the work.

The solution!

First we need to load the raster package and the shapefile:

library(raster)
polygon_areas <- raster::shapefile("C:/yourshapefile.shp")

This is the code for the function I created, called crop_save:

crop_save <- function(origin_folder, pattern, destination_folder,
                      name_sub_folder, crop_areas, name_crop_areas){
  #Rasters to be clipped
  file_list <- list.files(path = origin_folder, pattern = pattern)
  #Create the output sub-folder
  dir.create(paste0(destination_folder, "/", name_sub_folder))
  #File paths (origin_folder must end with a slash)
  paths1 <- paste0(origin_folder, file_list)
  #Load all rasters into a stack
  raster_stack <- stack()
  for(i in 1:length(file_list)){
    raster_stack <- stack(raster_stack, raster(paths1[i]))
  }
  #Unique names for the polygons (name_crop_areas is evaluated,
  #e.g. "polygon_areas$Id")
  names_list <- eval(parse(text = name_crop_areas))
  names_list <- paste0(seq_along(names_list), "_polygon_", names_list)
  #One single-polygon object per feature
  polyRR_list <- list()
  for(x in 1:nrow(crop_areas)){
    polyRR_list[[x]] <- crop_areas[x, ]
  }
  names(polyRR_list) <- names_list
  #Crop and mask each raster with each polygon, saving the result to disk
  for(j in 1:nlayers(raster_stack)){
    dir.create(paste0(destination_folder, "/", name_sub_folder, "/",
                      names(raster_stack)[j]))
    for(k in 1:length(polyRR_list)){
      a <- crop(raster_stack[[j]], polyRR_list[[k]])
      a <- mask(a, polyRR_list[[k]],
                filename = paste0(destination_folder, "/", name_sub_folder, "/",
                                  names(raster_stack)[j], "/",
                                  "RR", polyRR_list[[k]]$Id, ".tif"))
    }
  }
}

The arguments for this function are:

origin_folder – Where the original rasters are saved (the path must end with a slash, e.g. "D:/THIS_FOLDER/", since it is concatenated directly with the file names).
pattern – A character string used to identify the raster files: the folder where the rasters are saved generally contains other files as well, and this argument allows selecting only the rasters (e.g. tif files).
destination_folder – Folder where the output folder will be created.
name_sub_folder – Name of the sub-folder to be created inside the destination folder. Inside it, a folder is created for each of the original rasters, where the smaller rasters for each polygon are saved.
crop_areas – Areas to be used in the raster cropping (a SpatialPolygonsDataFrame created by importing the shapefile into R).
name_crop_areas – Column of the SpatialPolygonsDataFrame with the unique names or codes for the regions (passed as a character string, e.g. "polygon_areas$Id").

An example (not run, you have to try this with your own rasters):

crop_save(origin_folder = "D:/THIS_FOLDER/"
          , pattern = ".tif
          , destination_folder = "C:/OUTPUT/" 
          , name_sub_folder = "Cut_rasters" 
          , crop_areas = polygon_areas 
          , name_crop_areas = "polygon_areas$Id" 
          ) 
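Once the function has run, a quick sanity check is to count the files it produced (the paths below match the example above and are, of course, illustrative):

```r
#List every clipped raster created by crop_save() (illustrative paths)
output_files <- list.files("C:/OUTPUT/Cut_rasters", pattern = "\\.tif$",
                           recursive = TRUE, full.names = TRUE)
length(output_files)  #with 5 rasters and 10 polygons this should be 50
```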

I hope this is useful!

New MetaLandSim version!

Small adjustments were made to the MetaLandSim (CRAN, GitHub) package, as a result of changes in a package from which MetaLandSim imports functions, spatstat (which is being split into several sub-packages).

The new version (1.0.8) is only on GitHub for now; it will later be sent to CRAN, after the spatstat packages are updated there.

Thanks to Ege Rubak for making the necessary modifications in the code, adapting it to the new spatstat structure.

Downloading food web databases and deriving basic structural metrics

This blog post presents a beta version of the package to download, process and derive metrics from food web datasets.

DISCLAIMER: This is an early release. The code will be improved over time. Some of it has been adapted from other sources (e.g. the Ecobase website). A few errors are expected when running these functions. Any suggestions are welcome! Plenty of room for improvement…

Additional information on the package is available on GitHub.

First, we need to install and load the required package:

library(devtools) #provides install_github
install_github("FMestre1/fw_package")

Create the food web list (using the mangal database as an example):

#mg1 <- create.fw.list(db = "mg", ref = TRUE, spatial = TRUE)
#(this takes a long time...)

To make it faster, let's just load the bundled dataset:

data(mg1)

Which of the matrices are adjacency matrices (0 and 1 matrices)?

is.adjacency.matrix(mg1)
## Matrix 1 is an adjacency matrix!
## Matrix 2 is an adjacency matrix!
## Matrix 3 is an adjacency matrix!
## Matrix 4 is an adjacency matrix!
## Matrix 5 is an adjacency matrix!
(...)
## Matrix 387 is an adjacency matrix!
## Matrix 388 is an adjacency matrix!
## Matrix 389 is an adjacency matrix!
## Matrix 390 is an adjacency matrix!
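In base R, this check boils down to testing whether every entry of a matrix is 0 or 1 — a minimal sketch of what is.adjacency.matrix verifies (the package function does more, e.g. looping over the whole list and reporting per matrix):

```r
#TRUE if the matrix contains only 0s and 1s (i.e. is a binary adjacency matrix)
is_binary <- function(m) all(as.matrix(m) %in% c(0, 1))

is_binary(matrix(c(0, 1, 1, 0), nrow = 2))    #TRUE
is_binary(matrix(c(0, 0.5, 1, 0), nrow = 2))  #FALSE: weighted, not binary
```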

If you need to convert to adjacency matrices (not needed here):

mg2 <- convert2adjacency(mg1)
is.adjacency.matrix(mg2)
## Matrix 1 is an adjacency matrix!
## Matrix 2 is an adjacency matrix!
## Matrix 3 is an adjacency matrix!
## Matrix 4 is an adjacency matrix!
## Matrix 5 is an adjacency matrix!
(...)

## Matrix 386 is an adjacency matrix!
## Matrix 387 is an adjacency matrix!
## Matrix 388 is an adjacency matrix!
## Matrix 389 is an adjacency matrix!
## Matrix 390 is an adjacency matrix!

Which are square matrices (having the same number of columns and rows)?

is.sq.matrix(mg1) 
## Matrix 1 already is square!
## Matrix 2 already is square!
## Matrix 3 already is square!
## Matrix 4 already is square!
## Matrix 5 already is square!
(...)

## Matrix 385 already is square!
## Matrix 386 already is square!
## Matrix 387 already is square!
## Matrix 388 already is square!
## Matrix 389 already is square!
## Matrix 390 already is square!

If you need to convert to square matrices (not needed here):

mg3 <- rect2square(mg2)
## Matrix 1 already is square!
## Matrix 2 already is square!
## Matrix 3 already is square!
## Matrix 4 already is square!
## Matrix 5 already is square!
(...)

## Matrix 388 already is square!
## Matrix 389 already is square!
## Matrix 390 already is square!

Now that the full dataset has all matrices in the same format, we can derive network metrics for each food web matrix: number of nodes, number of trophic interactions, linkage density, connectance, compartmentalization and maximum trophic level.

metrics <- fw.metrics(mg1)
## Computing network metrics for food web 1 of 390...
## Computing network metrics for food web 2 of 390...
## Computing network metrics for food web 3 of 390...
## Computing network metrics for food web 4 of 390...
## Computing network metrics for food web 5 of 390...

names(metrics)
## [1] "number_nodes"          "number_links"          "link_density"         
## [4] "connectance"           "compartmentalization"  "maximum_trophic_level"
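Some of these metrics are easy to reproduce by hand from a single adjacency matrix, which makes for a good sanity check. Below, connectance is computed as L/S² (directed links over the squared number of species), a common convention — this is a sketch, not necessarily the exact formula fw.metrics implements:

```r
#Toy 3-species food web: 1 -> 2 -> 3 (rows eaten by columns, by assumption)
m <- matrix(0, nrow = 3, ncol = 3)
m[1, 2] <- 1
m[2, 3] <- 1

S <- nrow(m)   #number of nodes
L <- sum(m)    #number of trophic links
C <- L / S^2   #connectance, directed convention
C              #2/9, about 0.22
```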

Finally, we can plot the degree distribution of all food webs in the dataset:

dd.fw(mg1, log=TRUE, cumulative=TRUE)

##       iter degree1 probability
## 1    iter1       1  0.98484848
## 2    iter1       2  0.83333333
## 3    iter1       3  0.75757576
## 4    iter1       4  0.62121212
## 5    iter1       5  0.53030303
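Under the hood, a cumulative degree distribution is simple to derive from one binary adjacency matrix; here the degree of a species is taken as its in-links plus out-links (again a sketch — dd.fw may differ in details such as the log scaling):

```r
#Toy 4-species web with links 1->2, 1->3 and 2->4
m <- matrix(0, nrow = 4, ncol = 4)
m[1, 2] <- 1; m[1, 3] <- 1; m[2, 4] <- 1

deg <- rowSums(m) + colSums(m)                  #total degree per species
k <- sort(unique(deg))                          #observed degree values
p_cum <- sapply(k, function(x) mean(deg >= x))  #P(degree >= k)
data.frame(degree = k, probability = p_cum)     #cumulative distribution
```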

From R to WordPress in two easy steps

Wanting to post with some regularity on this blog, I needed a quick, easy way to do so. Importantly, I wanted to do it for free!

I searched the internet and found RWordPress. This package does just that: it helps the user post to WordPress (also resorting to knitr).

You'll need a Markdown file with the post. Here is the code I used (which, funnily enough, I did not use for this post!):

#Loading required packages
library(knitr)
library(RWordPress)

#Defining login options
options(WordPressLogin = c(your_username = "your_password"),
        WordPressURL = "https://yourblog.wordpress.com/xmlrpc.php")

#Knit the Rmd file and upload it to WordPress
knit2wp("post_blog.Rmd",
        title = "title_here",
        publish = FALSE, #kept as a draft; you can edit it later in WordPress
        action = "newPost")
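If you later want to revise the draft from R, knit2wp also supports editing an existing post via its action and postid arguments (argument names from memory — double-check with ?knit2wp; the post id below is a placeholder):

```r
library(knitr)
library(RWordPress)

#Re-knit the same Rmd and overwrite the existing post
#(123 is hypothetical: use the id WordPress assigned to your post)
knit2wp("post_blog.Rmd",
        title = "title_here",
        action = "editPost",
        postid = 123,
        publish = TRUE)
```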

New version of gDefrag (0.3) is out!

A new version of gDefrag (0.3) is now available on CRAN, due to changes made to the package’s dependencies.

This package provides tools to manage connectivity in regions crossed by linear infrastructures. It prioritizes the different sections of linear infrastructures (e.g. roads, power-lines) according to the need to break the barriers to connectivity imposed by these structures.

It is very useful for reducing the negative impacts of roads, railways or power-lines.

New MetaLandSim version: allows importing shapefiles, creating “landscape” class objects

A small change was made to the MetaLandSim package. This was, however, long overdue and it stems from something I should have changed a long time ago.

This package allows importing shapefiles for use in the analyses and simulations it performs. However, when imported into MetaLandSim, these shapefiles belonged to the class "metapopulation" (which holds information on species presence/absence in each habitat patch). Users frequently need these objects to belong to the class "landscape" instead (without the species presence/absence information).

I’ve received, more than once, requests from users concerning this issue!

Now, this new MetaLandSim version (v. 1.0.8) allows the function import.shape to create “landscape” class objects. 

One note: if the reason the user wants "landscape" objects is to use them with the metrics.graph function (to compute landscape connectivity metrics), then it is better to use the lconnect package instead. That package computes landscape connectivity metrics from the imported shapefile while keeping the habitat patches' original shape; MetaLandSim, on the contrary, converts all habitat patches to circles centred on the patch centroid and with the same area.

For now these changes are available only on GitHub, but they should reach CRAN soon.

New paper out: “Ecological and epidemiological models are both useful for SARS-CoV-2”

In this correspondence in Nature Ecology and Evolution we argue that both traditional epidemiological models (mechanistic) and species distribution models (correlative) are useful when studying infectious disease spread. The two approaches answer different questions and have different advantages and limitations.

Here’s the abstract:

A longstanding debate exists on whether ecological phenomena should be modelled ‘top-down’ (modelling patterns that arise from mechanisms) or ‘bottom-up’ (modelling mechanisms to generate pattern). Recently, the discussion re-emerged in the context of modelling the spread of SARS-CoV-2. Simply put, the point made by Carlson et al. was that top-down correlative models are inappropriate, whereas bottom-up epidemiological models are fine. Rather than opposing families of models, we argue that all have strengths and limitations and that judicious use of all available tools is the way forward.