New paper out: “Ecological and epidemiological models are both useful for SARS-CoV-2”

In this correspondence to Nature Ecology and Evolution we argue that both traditional epidemiological models (mechanistic) and species distribution models (correlative) are useful approaches for studying infectious disease spread. The two approaches answer different questions and have different advantages and limitations.

Here’s the abstract:

A longstanding debate exists on whether ecological phenomena should be modelled ‘top-down’ (modelling patterns that arise from mechanisms) or ‘bottom-up’ (modelling mechanisms to generate pattern). Recently, the discussion re-emerged in the context of modelling the spread of SARS-CoV-2. Simply put, the point made by Carlson et al. was that top-down correlative models are inappropriate, whereas bottom-up epidemiological models are fine. Rather than opposing families of models, we argue that all have strengths and limitations and that judicious use of all available tools is the way forward.

New version of MetaLandSim (v. 1.0.7)!

A new version of MetaLandSim will be available in the next few days! Only minor changes were made, with no effect on the user.


MetaLandSim provides tools to generate random landscape graphs, evaluate species occurrence in dynamic landscapes, and simulate future landscape occupation and range expansion.

You can find it here!
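If you have never used it, here is a minimal sketch of how a random landscape graph is generated. This is only a sketch: the argument names and values are the ones I recall from the package vignette, so check ?rland.graph for the exact signature and defaults.

#Minimal sketch - argument names/values as I recall them from the vignette (check ?rland.graph)
library(MetaLandSim)

#Random landscape graph: 1000x1000 map units, 70 patches with mean area 0.5 (SD 0.2);
#dist_m and disp are distances in map units (see the help page for their exact meaning)
rl <- rland.graph(mapsize = 1000, dist_m = 60, areaM = 0.5, areaSD = 0.2,
                  Npatch = 70, disp = 100, plotG = TRUE)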

Package lconnect: patch connectivity metrics and patch prioritization

Today I’m revisiting an older blog post on our package lconnect, which is available on CRAN (here). If you want to learn about the available connectivity metrics, check this post.

It is intended to be a very simple way to derive landscape connectivity metrics, many of which come from interpreting landscapes as graphs.

Additionally, it provides a function to prioritize landscape patches based on their contribution to overall landscape connectivity. For now this function works only with the Integral Index of Connectivity (IIC) by Pascual-Hortal & Saura (2006).

Here’s a brief tutorial!

First, install and load the package:

#install the package from CRAN (if needed)
#install.packages("lconnect")
#load the package
library(lconnect)

Then, upload the landscape shapefile …

#Load data
vec_path <- system.file("extdata/vec_projected.shp", package = "lconnect")

…and create a ‘lconnect’ class object:

#upload landscape
land <- upload_land(vec_path, habitat = 1, max_dist = 500)
class(land)
## [1] "lconnect"

And now, let’s plot it:

plot(land, main="Landscape clusters")

[Figure: landscape clusters]

If we wish, we can derive patch importance (the contribution of each individual patch to overall connectivity):

land1 <- patch_imp(land, metric="IIC")
##  [1]  0.0000000  0.0000000  0.0000000  0.0000000  0.0000000  0.1039501
##  [7]  0.1039501  0.0000000  0.1039501  0.0000000  0.0000000  0.1039501
## [13]  0.3118503 21.9334719  0.0000000 15.5925156  2.5987526  0.1039501
## [19]  0.1039501  0.2079002  0.0000000  0.0000000  0.0000000  0.0000000
## [25]  0.9355509  0.0000000 14.2411642  2.9106029  0.2079002 12.9937630
## [31]  0.3118503  0.7276507  0.0000000  7.5883576  0.5197505 70.2702703
class(land1)

This produces an object of class ‘pimp’:

## [1] "pimp"

And, finally, we can also plot the relative contribution of each patch to the landscape connectivity:

plot(land1, main="Patch prioritization (%)")

[Figure: patch prioritization (%)]
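Beyond patch prioritization, the landscape-level connectivity metrics mentioned at the beginning of the post can also be computed from the same ‘lconnect’ object. A minimal sketch, assuming the con_metric() function and the metric codes used in the package help (check ?con_metric for the full list):

#Landscape-level connectivity metrics (metric codes as in the package help - check ?con_metric)
con_metric(land, metric = c("NC", "IIC"))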

And that’s it!

Which functions and packages are used in a given R script

Have you ever written an R script and then had to use it on a different computer, where all the required packages are not installed?

Or just wanted to know which functions and packages you used in a given script?

Well, this happened to me (both situations). I found a very helpful function that lists the functions and packages used in a given R script:

install.packages("NCmisc")
library(NCmisc)

#'path_to_script' is the path to your .R script
list.functions.in.file(filename = path_to_script, alphabetic = TRUE)
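If you need this for a whole project rather than a single script, a small wrapper around the same function does the trick (the folder path below is just a placeholder):

#List the functions/packages used by every .R script in a folder (path is a placeholder)
scripts <- list.files("path/to/project", pattern = "\\.R$", full.names = TRUE)
lapply(scripts, function(x) list.functions.in.file(filename = x, alphabetic = TRUE))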

And that’s it!

NOTE: It’s better practice to explicitly identify the package of each function you use (I’m not used to doing that, but I should).

For instance, if you want to use the function remove_patch from our package lconnect:

lconnect::remove_patch()

Function to download biotic interaction datasets

I work in ecology, biogeography, etc. Biotic interactions (interactions between species) and their repercussions on species distributions are my main research interest.

As such, I had, at some point, to download datasets on species interactions, and I wanted to produce a reasonably uniform R object (more or less; not as uniform as I would like) regardless of the database used. I created this function to do just that!

It can be used to download data from the database website or to load csv files into R (depending on the database). It currently supports the following databases (see the call-pattern sketch after the list):

  1. EcoBase – The database for models built with Ecopath with Ecosim. These are really good, with solid metadata and spatial information (polygons) (Christensen & Walters, 2004; Heymans et al. 2016).
  2. Web of Life – To be honest, I don’t know this database all that well. It provides information on the dataset location, but no details on the ecosystem where it occurs (Fortuna et al. 2014). For this database the user has to download the datasets to a folder beforehand (passed in the argument “folder”).
  3. Global Web – This database provides details on the ecosystem where each interaction dataset occurs, but no details on its spatial location (Thompson et al. 2012). For this database the user has to download the datasets to a folder beforehand (passed in the argument “folder”).
  4. Mangal – This database provides plenty of information on the biotic interaction datasets (Poisot et al. 2015).
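To make the argument pairing concrete, here is a sketch of the call pattern I use for each database (the folder paths are placeholders, and the function itself is listed further down in this post):

#Call patterns per database (folder paths are placeholders)
#EcoBase and Mangal are fetched from the web; GlobalWeb and Web of Life need the
#datasets previously downloaded to a local folder (passed in 'folder')
#(each database needs its own packages loaded - see the EcoBase tutorial at the end)
eb <- create.fw.list(db = "eb", ecosyst = TRUE, ref = TRUE, spatial = TRUE)
mg <- create.fw.list(db = "mg", type = c("predation", "herbivory"), ref = TRUE, spatial = TRUE)
gw <- create.fw.list(db = "gw", folder = "data/globalweb", ecosyst = TRUE, ref = TRUE)#ecosyst needs 'gw_list.csv' in the folder
wl <- create.fw.list(db = "wl", folder = "data/web_of_life", ref = TRUE, spatial = TRUE)#needs 'references.csv' in the folder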

In order to write this function I relied on work previously done by others: the code to download EcoBase interaction datasets was obtained here, and for the Mangal database I relied on the work done in the rmangal package.

At the end of this long function you can find a small example of its use.

NOTE: This is important! Be sure to check the results of the function! I’m still working on it, so this is a provisional version, but I think it works pretty well.

(Somewhere in the future it might be available in my GitHub.)

create.fw.list <- function(db, folder = NULL, type = NULL, ecosyst=FALSE, ref=FALSE, spatial=FALSE, code=FALSE)
{
  
  #### Arguments  ####
  #'db' - database - eb (EcoBase), gw (GlobalWeb), wl (Web of Life) and mg (Mangal)
  #'folder' - folder in the working directory holding the dataset files (db = gw and wl)
  #'type' - if db = mg, the type(s) of interactions to be downloaded
  #'ecosyst' - get ecosystem information, only for gw and eb
  #'ref' - get reference information (all databases)
  #'spatial' - get spatial information, only for wl, eb and mg
  #'code' - get the food web codes/file names, only for wl, gw and mg
  
  #### Data Sources ####
  #Global Web: https://www.globalwebdb.com/
  #EcoTroph Example (EcoBase): http://sirs.agrocampus-ouest.fr/EcoTroph/index.php?action=examples
  #Script for EcoBase: http://sirs.agrocampus-ouest.fr/EcoBase/#discoverytools
  #Mangal: #https://mangal-wg.github.io/rmangal/articles/rmangal.html
  #Web of Life: http://www.web-of-life.es/
  
  #### Results  #### 
  fwlist <- list()
  
  #### Conditions for data entry  ############################################################ 
  
  #db
  
  if(!db %in% c("eb","wl","gw","mg"))stop("Argument 'db' must take one of the following values:\n
                                          'wl' - Web of Life
                                          'mg' - mangal
                                          'gw' - globalweb
                                          'eb' - ecobase")
  
  #folder
  if(!db %in% c("wl","gw") & !is.null(folder)) stop("Argument 'folder' can only be used if 'db' = 'wl' or 'gw'!")
  
  #type
  if(!db %in% c("mg") & !is.null(type)) stop("Argument 'type' can only be used if 'db' = 'mg'!")
  
  #ecosyst
  if(!db %in% c("gw", "eb") & ecosyst==TRUE) stop("Argument 'ecosyst' can only be used if 'db' = 'eb' or 'gw'!")
  
  #ref - allowed for all databases
  
  #spatial
  if(!db %in% c("wl", "mg", "eb") & spatial==TRUE) stop("Argument 'spatial' can only be used if 'db' = 'eb', 'mg' or 'wl'!")
  
  #code
  if(!db %in% c("wl", "mg", "gw") & code==TRUE) stop("Argument 'code' can only be used if 'db' = 'wl', 'mg' or 'gw'!")
  
  ############################################################################################
  #Updating each dataset database
  
  #GlobalWeb
  if (db == "gw"){
    message("####################### GLOBALWEB DATABASE #######################\n\n")
    
    message("Fetching info from the provided folder!")
    files_gw <- list.files(path = folder, pattern = "WEB")
    ngw <- length(files_gw)
    message (paste0("There are ", ngw, " food web files in the folder!"))
    
    #Load files into list
    #And create vector of references
    if (ref==TRUE) reflist_gw <- c()
    
    names_gw <- c()#FW names
    
    #getting the files into R
    for(i in 1:ngw){
      
      message(paste0("Fetching food web ", i, " in ", ngw, "!"))
      
      dfgw <- read.csv(paste0(folder,"/",files_gw[i]), header = FALSE)  # read csv file 
      dfgw <- dfgw[, colSums( is.na(dfgw) ) <=1]#Remove columns with all NA
      #dfgw <- dfgw[colSums( is.na(dfgw) ) <=1,]#Remove rows with no data
      
      #Get the FW name
      names_gw[i] <- as.character(dfgw[2,1])
      
      #Get the reference to the vector
      if (ref==TRUE) reflist_gw[i] <- as.character(dfgw[1,1])
      
      #to name the columns
      names_gw_c <- c()
      n1 <- ncol(dfgw)-1
      for(j in 1:n1){
        names_gw_c[j] <- as.character(dfgw[2,j+1])
      }
      
      #to name the rows
      names_gw_r <- c()
      n2 <- nrow(dfgw)-2
      for(j in 1:n2){
        names_gw_r[j] <- as.character(dfgw[j+2, 1])
      }
      
      dfgw <- dfgw[-c(1,2),-1]
      
      #Remove columns with NA 
      dfgw[dfgw==""] <- NA
      dfgw <- na.omit(dfgw)
      
      if(i==281){names_gw_r <- names_gw_r[-c(36,37)]}#the FW on i=281 has a note at the bottom
      
      #Delete the 'empty names'
      names_gw_c <- names_gw_c[names_gw_c!=""]
      names_gw_r <- names_gw_r[names_gw_r!=""]
      
      #Same names in rows or columns?
      #if(length(unique(names_gw_r)) < length(names_gw_r)) rown[i] <- as.character(i)
      #if(length(unique(names_gw_c)) < length(names_gw_c)) coln[i] <- as.character(i)
      
      #For some strange reason some rows and columns have the same name
      names_gw_c <- paste0("sp_", as.character(1:length(names_gw_c)), "_",names_gw_c)
      names_gw_r <- paste0("sp_", as.character(1:length(names_gw_r)), "_",names_gw_r)
      
      colnames(dfgw) <- names_gw_c
      rownames(dfgw) <- names_gw_r
      
      fwlist[[i]] <- dfgw
      
    }
    
    #Name the list
    names(fwlist) <- names_gw
    
    if(ref==TRUE){
      
      references <- as.data.frame(matrix(ncol = 4))
      names(references) <- c("FW code", "first_author", "year", "full_ref" )
      
      files_gw <- list.files(folder, pattern = "WEB")
      
      message("Fetching references from the dataset files!")
      
      for(w in 1:ngw){
        
        dfgw <- read.csv(paste0(folder,"/",files_gw[w]), header = FALSE)  # read csv file 
        #message(paste0("Reading file ", files_gw[w]))
        dfgw <- dfgw[, colSums( is.na(dfgw) ) <=1]#Remove columns with all NA
        
        #Get the reference to the vector
        full_ref1 <- as.character(dfgw[1,1])
        references[w,4] <- full_ref1#full reference
        references[w,1] <- files_gw[w]#fw code
        references[w,2] <- str_sub(word(full_ref1, start = 1), 1, str_length(word(full_ref1, start = 1))-1)#first author
        references[w,3] <- regmatches(x = full_ref1,gregexpr("[0-9]+",text = full_ref1))[[1]][1]#year
        #references[w,3] <- gsub('.+\\(([0-9]+)\\).+?$', '\\1', full_ref1)#year
        
      }#end loop to add refs
      
    }#end gw refs
    
    #ECOSYSTEM
    if(ecosyst==TRUE){
      message("Searching for 'gw_list.csv' file...")
      
      if (!file.exists(paste0(folder, "/gw_list.csv"))) stop("\nThe pdf 'gw_list.pdf' has to be previously converted to a csv file...")
      
      #I had to convert the gw_list.pdf file to excel (csv), since I could not install tabulizer to extract pdf tables
      gw_eco <- read.csv(paste0(folder,"/","gw_list.csv"), header = TRUE, sep = ";")  # read csv file 
      
      filn <- paste0("WEB", as.character(gw_eco[,1]), ".csv")
      
      gw_eco2 <- gw_eco[,1:3]
      gw_eco2[,1] <- filn
      
      names(gw_eco2)[1] <- "FW"
      
      #yes... I do know the following few lines are 'ugly'...
      filn <- as.data.frame(cbind(filn, filn))
      
      names(filn) <- c("filn1","filn2")
      
      #files_gw <- list.files(path = folder, pattern = "WEB")
      
      ecosystem <- merge(x=filn, y=gw_eco2, by.x= "filn2", by.y = "FW")
      
      ecosystem <- ecosystem[,c(2, 3, 4)]
      
      names(ecosystem)[1] <- "Food web"
      
    }
    
  }#end of gw
  
  #Web of Life
  if (db == "wl"){
    
    message("####################### WEB OF LIFE DATABASE #######################\n\n")
    
    files_wl <- list.files(path = folder, pattern = "FW")
    nwl <- length(files_wl)
    message (paste0("There are ", nwl, " food web files in the folder!"))
    
    #Get refs and metrics table
    if (file.exists(paste0(folder, "/references.csv"))) {
      table_wl <- read.csv(paste0(folder, "/references.csv"), header = TRUE)  # read csv file
    } else {
      stop("There is no 'references.csv' file on the folder, as provided by the website!")
    }
    
    #FW names
    names_wl <- as.character(table_wl[,8])
    
    #Load files
    for(i in 1:nwl){
      message(paste0("Fetching food web ", i, " in ", nwl, "!"))
      dfwl <- read.csv(paste0(folder, "/",files_wl[i]), header = TRUE)  # read csv file 
      #row.names(dfwl) <- as.character(dfwl[,1])
      #dfwl <- dfwl[,-1]
      dfwl[is.na(dfwl)] <- 0
      fwlist[[i]] <- dfwl 
    }
    
    names(fwlist) <- names_wl
    
    #REFERENCES
    if(ref==TRUE){
      
      references <- as.data.frame(matrix(ncol = 4))
      names(references) <- c("FW code", "first_author", "year", "full_ref" ) 
      
      message("Fetching references from the 'references.csv' file!")
      message("Checking the presence of the 'references.csv' file...")
      if(!file.exists(paste0(folder, "/references.csv"))==TRUE)stop("Can't retrieve reference details... \n File not present!")
      
      ref_file <- read.csv(paste0(folder, "/references.csv"), header = TRUE)  # read csv file 
      
      for(w in 1:nwl){
        
        full_ref1 <- as.character(ref_file[w,7])
        references[w,4] <- full_ref1#full reference
        references[w,1] <- as.character(ref_file[w,1])#fw code
        references[w,2] <- str_sub(word(full_ref1, start = 1), 1, str_length(word(full_ref1, start = 1))-1)#first author
        references[w,3] <- regmatches(x = full_ref1,gregexpr("[0-9]+",text = full_ref1))[[1]][1]#year
        #references[w,3] <- gsub('.+\\(([0-9]+)\\).+?$', '\\1', full_ref1)#year
        
      }#end loop to add refs
      
      
    }#end wl refs
    
    #SPATIAL
    if(spatial==TRUE){
      
      message("Fetching the spatial information from the 'references.csv' file!")
      message("Checking the presence of the 'references.csv' file...")
      if(!file.exists(paste0(folder, "/references.csv"))==TRUE)stop("Can't retrieve spatial info... \n File not present!")
      
      ref_file <- read.csv(paste0(folder, "/references.csv"), header = TRUE)  # read csv file 
      spatial1 <- ref_file[,c(1,9,10)]
      
    }#end of spatial
    
  }#end of wl
  
  #EcoBase
  if(db == "eb"){
    
    message("####################### ECOBASE DATABASE #######################\n\n")
    
    message("Fetching info from the EcoBase website!")
    suppressWarnings({
      
      #To obtain the list of available models
      
      suppressMessages({
        h=basicTextGatherer()
        curlPerform(url = 'http://sirs.agrocampus-ouest.fr/EcoBase/php/webser/soap-client_3.php',writefunction=h$update)
        data1 <- xmlTreeParse(h$value(),useInternalNodes=TRUE)
        liste_mod <- ldply(xmlToList(data1),data.frame)#liste_mod contains a list and description of the available models
      })
      
      #Select only those allowing dissemination
      l2 <- subset(liste_mod, model.dissemination_allow =="true")#only those of which dissemination is allowed
      message("Sellected only those to which model dissemination is allowed!")
      
      #Select only those with whole food webs
      l3 <- subset(l2, model.whole_food_web =="true")#only those with the full food web
      message("Sellected only those to which the whole food web is available!")
      
      #Get model names
      model.name <- as.character(l3$model.model_name)
      input_list <- list()
      id <- as.numeric(as.character(l3$model.model_number))
      
      #Loop to get input list
      for(i in 1:nrow(l3)){
        
        message(paste0("Fetching information on food web ",i, " of ", nrow(l3)))
        
        suppressMessages({
          h=basicTextGatherer()
          mymodel <- id[i]
          curlPerform(url = paste('http://sirs.agrocampus-ouest.fr/EcoBase/php/webser/soap-client.php?no_model=',mymodel,sep=''),writefunction=h$update,verbose=TRUE)
          data2 <- xmlTreeParse(h$value(),useInternalNodes=TRUE)
          input1 <- xpathSApply(data2,'//group',function(x) xmlToList(x))
        })
        
        #need to name the columns
        names_input <- as.character(input1[1,])
        input1 <- as.data.frame(input1)
        colnames(input1) <- names_input
        input1 <- input1[-1,]
        input_list[[i]] <- input1
      }#end of loop to get input list
      
      mnames <- names(input_list)
      
      for (i in 1:length(input_list)){
        
        m2 <- input_list[[i]] #get the model
        
        nnodes <- length(m2)
        node_names <- names(m2)
        
        int_matrix <- as.data.frame(matrix(ncol=nnodes, nrow=nnodes))
        
        for(j in 1:length(m2)){
          
          node1 <- m2[[j]]
          node_id <- as.numeric(node1$group_seq)  
          node_name <- node_names[j]
          
          #matrix
          colnames(int_matrix)[node_id] <- node_name
          rownames(int_matrix)[node_id] <- node_name
          
          diet_node1 <- node1$diet_descr
          nr_food_items <- length(diet_node1)
          
          for(a in 1:nr_food_items){
            item1 <- diet_node1[[a]]
            id_item1 <- as.numeric(item1$prey_seq)  
            proportion_item1 <- as.numeric(item1$proportion)
            detritus_item1 <- as.numeric(item1$detritus_fate)
            #send to matrix
            int_matrix[id_item1,node_id] <- proportion_item1  
          }
          
        }
        
        int_matrix[is.na(int_matrix)] <- 0#replacing NA with 0
        
        fwlist[[i]] <- int_matrix
        
      }
      
      names(fwlist) <- model.name
      
    })#end of outer suppressWarnings
    
    #REFERENCES
    if(ref==TRUE){
      
      references <- as.data.frame(matrix(ncol = 4))
      names(references) <- c("FW code", "first_author", "year", "full_ref" ) 
      
      message("Fetching the references information!")
      
      for(w in 1:nrow(l3)){
        
        #Get the reference to the vector
        full_ref1 <- as.character(l3$model.reference)[w]
        references[w,4] <- full_ref1#full reference
        references[w,1] <- as.numeric(as.character(l3$model.model_number[w]))#fw code
        references[w,2] <- as.character(l3$model.author[w])#first author
        references[w,3] <- regmatches(x = full_ref1,gregexpr("[0-9]+",text = full_ref1))[[1]][1]#year
        #references[w,3] <- gsub('.+\\(([0-9]+)\\).+?$', '\\1', full_ref1)#year
        
      }#end loop to add refs
      
      
    }#end of eb refs
    
    #ECOSYSTEM
    if(ecosyst==TRUE){
      
      ecosystem <- data.frame(l3$model.model_number, l3$model.country, l3$model.ecosystem_type)
      
      names(ecosystem) <- c("Food web", "Location", "Ecosystem")
      
    }#end of eb ecosystem
    
    #SPATIAL
    if(spatial==TRUE){
      
      message("Fetching spatial information from the EcoBase website...")
      
      #Get actual polygons
      EcoBase_shape <- sf::st_read("http://sirs.agrocampus-ouest.fr/EcoBase/php/protect/extract_kml.php")
      
      ebd <- EcoBase_shape$Name
      
      #Getting the model numbers
      nmr <- list()
      for(i in 1:length(ebd)){
        nr <- strsplit(as.character(ebd[i]), "--::")[[1]][1]
        nr <- as.numeric(str_extract_all(nr, "\\d+")[[1]])
        nmr[[i]] <- nr
      }
      
      nmr2 <- c()#line rows for each model
      for(i in 1:length(nmr)){
        a <- nmr[[i]]  
        b <- length(a)
        c1 <- rep(i,b)
        nmr2 <- c(nmr2, c1)  
      }
      
      #In Which row in ecobase geo file is the model?
      nmr <- unlist(nmr)
      table1 <- as.data.frame(cbind(nmr2, nmr))
      colnames(table1) <- c("row_n","id")
      
      #In which row does model.model_number with a given Id occurs?
      lines_n <- c()
      for (i in 1:nrow(liste_mod)){
        id <- as.numeric(as.character(liste_mod$model.model_number[i]))  
        lines_n[i] <- as.numeric(table1[table1$id==id,][1])
      }
      
      ecobase_poly2 <- list()
      for(i in 1:length(lines_n)){
        ecobase_poly2[i] <- st_geometry(EcoBase_shape)[lines_n[i]]  
        #plot(st_geometry(EcoBase_shape)[lines_n[i]], border="green", add=TRUE)
      } 
      
      #if no polygon then bounding box
      #into here ecobase_poly2
      for(i in 1:length(ecobase_poly2)){
        if(is.na(lines_n[i])){
          #create a bounding box geographic thing
          z1 <- as.numeric(unlist(regmatches(liste_mod$model.geographic_extent[[i]],gregexpr("[[:digit:]]+\\.*[[:digit:]]*",liste_mod$model.geographic_extent[[i]]))))
          z2 <- c(z1[4], z1[1], z1[2], z1[1], z1[2], z1[3], z1[4], z1[3])
          x1 <- as.data.frame(matrix(z2, ncol=2, byrow=TRUE))
          x1 <- cbind(x1[2], x1[1])#had to change lat and long... I had this the other way around...
          p1 <- Polygon(x1)
          ps1 <- Polygons(list(p1),1)
          ecobase_poly2[[i]] <- st_as_sf(SpatialPolygons(list(ps1)))
        }
        ecobase_poly2[[i]] <- ecobase_poly2[[i]]
      }
      
      #convert all to class sf
      for(i in 1:length(ecobase_poly2)){
        if(!any(class(ecobase_poly2[[i]])=='sf')){
          t2 <- ecobase_poly2[[i]]
          t3 <- st_cast(t2, to="POLYGON")
          ecobase_poly2[[i]] <- st_as_sf(as(st_zm(st_geometry(t3)), "Spatial"))
        } 
        else message("Ok!")
      }
      
      #line.Id correspondence
      table2 <- as.data.frame(cbind(1:length(ecobase_poly2),as.numeric(as.character(liste_mod$model.model_number))))
      names(table2) <- c("row","id")
      
      #select the corresponding polygons
      id_selected <- as.numeric(as.character(l3$model.model_number))
      
      #Which rows?
      rows_selected <- c()
      for(i in 1:length(id_selected)){
        rows_selected[i]  <- as.numeric(table2[table2["id"]==id_selected[i],][1])
      }  
      
      spatial1 <- ecobase_poly2[rows_selected]
      
    }#end of eb spatial
    
  }#end of eb
  
  #MANGAL
  if(db == "mg"){
    
    message("####################### MANGAL DATABASE #######################\n\n")
    
    message("Fetching datasets from the Mangal website! \n\n Types 'predation' and 'herbivory' by default... \n but run mangal function 'avail_type' to check available types...\n\nThis operation might take a long time!")
    
    ntypes <- length(type)
    
    net_info <- list()
    
    for(i in 1:ntypes){
      
      message(paste0("\n\nFetching information from interactions of the type ","'",type[i], "'!"))
      
      fwlist1 <- search_interactions(type = type[i]) %>% get_collection()
      
      net_info <- rbind(net_info, fwlist1)
      
      fwlist2 <- as.igraph(fwlist1)
      
      fwlist <- c(fwlist, fwlist2)
      
      #class(fwlist)
      
    }
    
    #Converting igraph objects to data frame
    for(i in 1:length(fwlist)){
      fw2 <- fwlist[[i]]
      #convert each igraph to a data frame
      fw3 <- as_data_frame(fw2, what = "both")
      id_name <- fw3$vertices[,1:2]
      
      for(j in 1:nrow(id_name)){#clean the names
        
        node_name <- id_name$original_name[j]
        
        if (grepl(":", node_name, fixed=TRUE)) {
          node_name <- tail(strsplit(node_name, ": "))[[1]]
          id_name[j,2] <- node_name[2]
        } else id_name[j,2] <- node_name
        
        
      }#end clean names
      
      id_edges <- fw3$edges[,1:3]
      int_matrix <- as.data.frame(matrix(ncol = nrow(id_name), nrow = nrow(id_name)))
      colnames(int_matrix) <- id_name$original_name
      rownames(int_matrix) <- id_name$original_name
      
      #Fill the matrix
      for(a in 1:nrow(id_edges)){
        edge1 <- as.numeric(id_edges[a,1:2])
        name1 <- id_name[as.character(edge1[1]),][,2]
        name2 <- id_name[as.character(edge1[2]),][,2]
        int_matrix[name1,name2] <- 1
      }
      
      int_matrix[is.na(int_matrix)] <- 0 #convert all NA to zero
      
      fwlist[[i]] <- int_matrix
      
    }#end of loop to convert to a data frame
    
    if(ref==TRUE){
      references <- as.data.frame(matrix(ncol = 4))
      names(references) <- c("Dataset ID", "first_author", "year", "DOI" )
      
      message("Fetching references!")
      
      for(j in 1:length(net_info)){
        dataset_id <- net_info[[j]]$dataset$dataset_id
        first_author <- net_info[[j]]$reference$first_author
        year_mng <- as.numeric(net_info[[j]]$reference$year)  
        doi_mng <- net_info[[j]]$reference$doi
        references[j,1] <- dataset_id
        references[j,2] <- first_author
        references[j,3] <- year_mng
        references[j,4] <- doi_mng
        
        references <- references[order(references$`Dataset ID`),]
        rownames(references) <- 1:nrow(references)
      }
      
    }#End of mg refs
    
    if(spatial==TRUE){
      spatial1 <- as.data.frame(matrix(ncol = 4)) 
      names(spatial1) <- c("Dataset ID", "first_author", "lat", "long")
      message("Fetching coordinates!")
      
      for(z in 1: length(net_info)){
        dataset_id <- net_info[[z]]$dataset$dataset_id
        lat_mng <- net_info[[z]]$network$geom_lat
        long_mng <-  net_info[[z]]$network$geom_lon
        first_author <- net_info[[z]]$reference$first_author
        if(length(unlist(lat_mng))>1){
          
          spatial2 <- as.data.frame(matrix(ncol = 4)) 
          names(spatial2) <- c("Dataset ID", "first_author", "long", "lat" )
          
          for(b in 1:length(unlist(lat_mng))){
            spatial2[b,3] <- long_mng[[1]] [b]
            spatial2[b,4] <- lat_mng [[1]] [b]
          }
          
          spatial2[,1] <- dataset_id
          spatial2[,2] <- first_author
          
          spatial1 <- rbind(spatial1, spatial2)
          
        }
        spatial1[z,1] <- dataset_id
        spatial1[z,2] <- first_author
        if(length(unlist(lat_mng))==1) spatial1[z,3] <- lat_mng
        if(length(unlist(lat_mng))==1) spatial1[z,4] <- long_mng
      }
      
      spatial1 <- spatial1[order(spatial1$`Dataset ID`),]
      rownames(spatial1) <- 1:nrow(spatial1)
      
    }#End of mg spatial
    
    if (exists("references") & exists("spatial1")) (if(nrow(references)!=nrow(spatial1)) message("WARNING: There are more than on FW in some datasets! References and Spatial data frames have different number of rows."))
    
  }#end of mangal
  
  message(paste0("DONE! \n\nOverall the list stores ", length(fwlist), " datasets!"))
  
  master_list <- list()
  master_list[["int_matrix"]] <- fwlist
  
  if(ecosyst==TRUE) {
    master_list[["ecosystem"]] <- ecosystem
    message ("\n Additional element in the results: \n\n The vector with information on the ecosystems.")
  }
  
  if(ref==TRUE) {
    master_list[["references"]] <- references
    message ("Additional element in the results! \nA data frame with information on the references.")
  }
  
  if(spatial==TRUE) {
    master_list[["spatial_info"]] <- spatial1
    message ("\n Additional element in the results: \n\n Spatial information was added.")
  }
  
  if(code==TRUE) {
    if(db == "gw") master_list[["code"]] <- files_gw
    if(db == "wl") master_list[["code"]] <- files_wl
    if(db == "mg") master_list[["code"]] <- references[1,]
    
    message ("Added food web code information.")
  }
  
  message("####################### DONE! #######################")
  
  #Return results
  if(length(master_list)==1) return(fwlist)
  return(master_list)
  
  
}#END OF FUNCTION create.fw.list

Next, a small example of the function, using EcoBase:

#TUTORIAL (example with EcoBase data)

#Load required packages
library(RCurl)
library(XML)
library(plyr)
library(stringr)
library(NCmisc)
library(sf)

#ecosyst=TRUE - we want information on the ecosystem
#ref=TRUE - we want information on the references


eb1 <- create.fw.list(db="eb", ecosyst=TRUE, ref=TRUE)


#Getting the list of interaction matrices
eb1$int_matrix


#Getting the information on the ecosystem
eb1$ecosystem


#Getting the information on the references
eb1$references


eb1$int_matrix[[1]]
                  Phytoplankton Macroalgae Mesozooplankton Inf. zoobenthos Intertidal inv.
Phytoplankton                 0          0             0.8            0.00            0.23
Macroalgae                    0          0             0.0            0.00            0.00
Mesozooplankton               0          0             0.0            0.29            0.36
Inf. zoobenthos               0          0             0.0            0.01            0.00
Intertidal inv.               0          0             0.0            0.00            0.10
Macrozooplankton              0          0             0.0            0.00            0.00
Epi. zoobenthos               0          0             0.0            0.00            0.00
Wild salmon fry               0          0             0.0            0.00            0.00
Hatch. salmon fry             0          0             0.0            0.00            0.00
Herring                       0          0             0.0            0.00            0.00
Small pelagics                0          0             0.0            0.00            0.00
Sea otters                    0          0             0.0            0.00            0.00
Demersal fish                 0          0             0.0            0.00            0.00
Birds                         0          0             0.0            0.00            0.00
Salmon                        0          0             0.0            0.00            0.00
Trans. mammals                0          0             0.0            0.00            0.00
Res. mammals                  0          0             0.0            0.00            0.00
Pinnipeds                     0          0             0.0            0.00            0.00
Detritus                      0          0             0.2            0.70            0.31
                  Macrozooplankton Epi. zoobenthos Wild salmon fry Hatch. salmon fry
Phytoplankton                 0.25           0.000            0.00              0.00
Macroalgae                    0.00           0.199            0.00              0.00
Mesozooplankton               0.75           0.100            0.75              0.75
Inf. zoobenthos               0.00           0.500            0.00              0.00
Intertidal inv.               0.00           0.000            0.00              0.00
Macrozooplankton              0.00           0.000            0.25              0.25
Epi. zoobenthos               0.00           0.080            0.00              0.00
Wild salmon fry               0.00           0.000            0.00              0.00
Hatch. salmon fry             0.00           0.000            0.00              0.00
Herring                       0.00           0.000            0.00              0.00
Small pelagics                0.00           0.000            0.00              0.00
Sea otters                    0.00           0.000            0.00              0.00
Demersal fish                 0.00           0.000            0.00              0.00
Birds                         0.00           0.000            0.00              0.00
Salmon                        0.00           0.000            0.00              0.00
Trans. mammals                0.00           0.000            0.00              0.00
Res. mammals                  0.00           0.000            0.00              0.00
Pinnipeds                     0.00           0.000            0.00              0.00
Detritus                      0.00           0.121            0.00              0.00
                  Herring Small pelagics Sea otters Demersal fish Birds Salmon
Phytoplankton         0.0          0.000          0         0.000 0.000   0.00
Macroalgae            0.0          0.000          0         0.000 0.002   0.00
Mesozooplankton       0.6          0.600          0         0.100 0.000   0.00
Inf. zoobenthos       0.0          0.101          1         0.130 0.000   0.00
Intertidal inv.       0.0          0.000          0         0.025 0.216   0.00
Macrozooplankton      0.4          0.299          0         0.265 0.187   0.15
Epi. zoobenthos       0.0          0.000          0         0.030 0.000   0.00
Wild salmon fry       0.0          0.000          0         0.005 0.000   0.00
Hatch. salmon fry     0.0          0.000          0         0.005 0.000   0.00
Herring               0.0          0.000          0         0.080 0.000   0.00
Small pelagics        0.0          0.000          0         0.260 0.459   0.85
Sea otters            0.0          0.000          0         0.000 0.000   0.00
Demersal fish         0.0          0.000          0         0.100 0.136   0.00
Birds                 0.0          0.000          0         0.000 0.000   0.00
Salmon                0.0          0.000          0         0.000 0.000   0.00
Trans. mammals        0.0          0.000          0         0.000 0.000   0.00
Res. mammals          0.0          0.000          0         0.000 0.000   0.00
Pinnipeds             0.0          0.000          0         0.000 0.000   0.00
Detritus              0.0          0.000          0         0.000 0.000   0.00
                  Trans. mammals Res. mammals Pinnipeds Detritus
Phytoplankton              0.000        0.000     0.000        0
Macroalgae                 0.000        0.000     0.000        0
Mesozooplankton            0.137        0.000     0.000        0
Inf. zoobenthos            0.000        0.000     0.000        0
Intertidal inv.            0.000        0.000     0.070        0
Macrozooplankton           0.459        0.010     0.000        0
Epi. zoobenthos            0.000        0.000     0.000        0
Wild salmon fry            0.000        0.000     0.000        0
Hatch. salmon fry          0.000        0.000     0.000        0
Herring                    0.067        0.296     0.051        0
Small pelagics             0.138        0.487     0.362        0
Sea otters                 0.000        0.001     0.000        0
Demersal fish              0.193        0.150     0.500        0
Birds                      0.000        0.000     0.000        0
Salmon                     0.001        0.056     0.017        0
Trans. mammals             0.000        0.000     0.000        0
Res. mammals               0.001        0.000     0.000        0
Pinnipeds                  0.004        0.000     0.000        0
Detritus                   0.000        0.000     0.000        0
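

The folder-based databases work the same way. Here is a sketch for Web of Life, assuming you have already downloaded the food web files (the ‘FW…’ csv files plus the ‘references.csv’ table provided by the website) into a local folder; the path below is a placeholder:

#Web of Life example (folder path is a placeholder)
#The folder must contain the downloaded 'FW*.csv' files and the website's 'references.csv'
library(stringr)#str_sub()/word() are used when ref=TRUE

wl1 <- create.fw.list(db = "wl", folder = "data/web_of_life", ref = TRUE, spatial = TRUE)

wl1$int_matrix   #list of interaction matrices
wl1$references   #reference details per food web
wl1$spatial_info #coordinates taken from 'references.csv'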


Just accepted: “Species traits, patch turnover and successional dynamics: When does intermediate disturbance favour metapopulation occupancy?”

Well… and to finish the year, the last paper of my PhD was accepted for publication in BMC Ecology. Let’s hope that 2020 is, at least, as productive as 2019!

With this paper we are not trying to rehabilitate the Intermediate Disturbance Hypothesis (IDH), which has received some criticism. The IDH “…posits that biodiversity should be highest at intermediate time periods after a disturbance, at intermediate frequencies of disturbance, and at intermediate extents of disturbance.”

Here we want to address a distinct issue: is individual species persistence in the landscape benefited by some degree of disturbance? If yes, what types of species? And in what types of landscapes?

We resorted to ecological simulation, using 54 virtual species and determining how they reacted to different disturbance intensities in landscapes with distinct habitat availability.

Here’s the abstract:

Background: In fragmented landscapes, natural and anthropogenic disturbances coupled with successional processes result in the destruction and creation of habitat patches. Disturbances are expected to reduce metapopulation occupancy for species associated with stable habitats, but they may benefit species adapted to transitory habitats by maintaining a dynamic mosaic of successional stages. However, while early-successional species may be favoured by very frequent disturbances resetting successional dynamics, metapopulation occupancy may be highest at intermediate disturbance levels for species with mid-successional habitat preferences, though this may be conditional on species traits and patch network characteristics. Here we test this ‘intermediate disturbance hypothesis’ applied to metapopulations (MIDH), using stochastic patch occupancy simulation modelling to assess when does intermediate disturbance favour metapopulation occupancy. We focused on 54 virtual species varying in their habitat preferences, dispersal abilities and local extinction and colonization rates. Long-term metapopulation dynamics was estimated in landscapes with different habitat amounts and patch turnover rates (i.e. disturbance frequency).
Results: Equilibrium metapopulation occupancy by late-successional species strongly declined with increasing disturbance frequency, while occupancy by early-successional species increased with disturbance frequency at low disturbance levels and tended to level-off thereafter. Occupancy by mid-successional species tended to increase along with disturbance frequency at low disturbance levels and declining thereafter. Irrespective of habitat preferences, occupancy increased with the amount of habitat, and with species dispersal ability and colonisation efficiency.
Conclusions: Our study suggests that MIDH is verified only for species associated with mid-successional habitats. These species may be particularly sensitive to land use changes causing either increases or decreases in disturbance frequency. This may be the case, for instance, of species associated with traditional agricultural and pastoral mosaic landscapes, where many species disappear either through intensification or abandonment processes that change disturbance frequency.

Just out: “Structural simplification compromises the potential of common insectivorous bats to provide biocontrol services against the major olive pest Prays oleae”

Check out our new paper on biocontrol services in olive groves.

Here’s the abstract: 

Crop production intensification often leads to the structural simplification of production systems. This structural simplification is expected to have strong impacts on biodiversity and the provisioning of ecosystem services, but information about this topic is scarce. For instance, no information exists for Mediterranean olive (Olea europaea) groves, despite olive farming representing a significant share of the agricultural sector in some European countries. We investigated the impact of in-farm and landscape-level structural simplification on the potential of three common insectivorous bats (i.e., Pipistrellus kuhlii, P. pygmaeus and P. pipistrellus) to provide biocontrol services against one of the most harmful olive pests worldwide, the olive fruit moth Prays oleae. Bats and insect surveys were both carried out in olive groves representing increasing levels of structural simplification and during three sampling seasons (spring, summer and autumn). At grove-level, structural simplification was considered as resulting from reduced planting pattern variability (i.e., tree and row spacing) and tree features (diameter at breast height, height of the trunk and canopy area), while at landscape level was considered as resulting from reduced land-cover types. We found that the Kuhl’s pipistrelle was the most frequently recorded species in all types of olive groves and seasons. Moreover, the activity levels of pipistrelle bats as a whole significantly decreased with the structural simplification of olive groves. The abundance of P. oleae was highest at intermediate levels of structural simplification, irrespective of the season. Forest cover in the surrounding landscape had a significant positive influence on the activity levels of P. kuhlii, and a significant and negative influence on the abundance of P. oleae. Our study demonstrates that structural simplification differentially influences the activity patterns of both insectivorous bats and insect pests within olive groves. Moreover, it suggests that structural simplification may strongly compromise biocontrol services provided by bats on the major olive pest P. oleae.

 

Just accepted “Prioritizing road defragmentation using graph-based tools”

The paper I co-authored with Fernando Ascensão and Márcia Barbosa, entitled “Prioritizing road defragmentation using graph-based tools“, was just accepted for publication in Landscape and Urban Planning.

This has been a great collaboration, which already resulted in the publication of one R package (gDefrag) and one other paper!

Here’s the abstract:

Roads are a main cause of habitat fragmentation but mitigating the full road network is unfeasible. A key goal in the road mitigation planning process is to highlight, at the transportation network level, the most problematic roads, i.e. where mitigation measures are most required, in order to maximize the benefits for biodiversity while keeping implementation costs as low as possible. Grounded on the concepts of habitat amount and accessible habitat, we prioritized roads for mitigation based on dual spatial graphs, where the land polygons delimited by roads are the nodes and the roads themselves are the links. The rationale was to identify those links (roads) that connect the nodes with higher potential biodiversity (as a proxy for quality habitat). We applied this approach to prioritize the defragmentation of the major road network of the Iberian Peninsula, targeting all native mammalian carnivores inhabiting this region. Our goal was to identify those roads that, by dividing areas with the best habitat quality and/or are major potential barriers for connectivity, should be prioritized in the mitigation process. We used two complementary metrics: Area Weighted Metric and the Integral Index of Connectivity. Highlighted roads bisect regions of high potential biodiversity for carnivores in northern Spain and along the Portugal-Spain border. Thirty-five roads were scored as high-priority by both metrics, suggesting that they have particular impact both in the amount of quality habitat and in overall landscape functional connectivity. This approach is completely scalable, allowing a fast assessment from local to continental scales.