The following two scripts will generate a "SpatialPixelsDataFrame" object:
# FIRST
library(rgdal)
elev.grid <- readGDAL("whatever.asc")
elev.grid <- as(elev.grid, "SpatialPixelsDataFrame")
# SECOND
library(raster)
library(SDMTools)
library(adehabitat)
elev.grid <- raster("whatever.asc")
elev.grid.asc <- asc.from.raster(elev.grid)
elev.grid.SPDF <- asc2spixdf(elev.grid.asc)
However, the first one exceeds the capability of my computing resources when applied to big (15000 x 16000) grids, and the second one generates an object which I can't use for some of my further analyses. For example, when I use it for kriging
x <- krige(V3~var, points, elev.grid)
I get the following:
Error in model.frame.default(terms(formula), as(data, "data.frame"), na.action = na.fail) : invalid type (closure) for variable 'var'
I would be really thankful if somebody could tell me how to fix this, either with a trick to bypass the memory/capability issue in the first case (preferably), or by fixing the error generated by the second case.
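For what it's worth, the "invalid type (closure) for variable 'var'" message usually means that the term on the right-hand side of the kriging formula does not exist as a column in the objects passed to krige(), so R falls back to the surrounding environment and picks up the base function var instead. Note also that the second script stores the converted grid in elev.grid.SPDF, while the krige() call passes elev.grid. Below is a minimal sketch of one way to line the names up, assuming the gstat and sp packages and using a hypothetical column name elevation; it is untested against this data:
library(sp)
library(gstat)
## give the single attribute column of the grid a known name
names(elev.grid.SPDF) <- "elevation"
## attach the same covariate to the points so the formula finds it there too
points$elevation <- over(points, elev.grid.SPDF)$elevation
## the formula term now matches a column in both objects
x <- krige(V3 ~ elevation, points, elev.grid.SPDF)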
I am trying to create a phylo correlogram based on my data using phyloCorrelogram from the phylosignal package, in order to test for the presence of a phylogenetic signal. My data is in the so-called phylo4d format and is stored in an object called tree.
Now, when I run phyloCorrelogram(tree), I get the following error:
library("phylobase")
library("ape")
library("phylosignal") # contains phyloCorrelogram()
> phyloCorrelogram(tree, ci.bs = 10, n.points = 10)
Error in boot::boot(X, function(x, z) moranTest(xr = x[z], Wr = prop.table(Wi[z, :
no data in call to 'boot'
I have already searched the internet extensively for ways to solve this issue, but without success.
Since the data I am using is too large to post here, I have posted it in .rda format on Dropbox.
Does anyone know what the flaw is?
You have no data associated with your tree. Your tree is a phylo4d object which has the "tree" information but no data attached to it. You need something like this:
library(phylobase)
library(phylosignal)                   # contains phyloCorrelogram()
data(geospiza_raw)                     # example tree and trait data shipped with phylobase
g1 <- as(geospiza_raw$tree, "phylo4")  # the tree alone
geodata <- geospiza_raw$data           # the trait data
g2 <- phylo4d(g1, geodata)             # tree + data combined
pc <- phyloCorrelogram(g2)
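Applied to your own object, the same idea would look something like this (a sketch, assuming your trait values live in a data.frame called traits, a hypothetical name here, whose row names match the tip labels of tree):
library(phylobase)
library(phylosignal)
## 'traits' is a hypothetical data.frame of tip data; row names = tip labels of 'tree'
tree_with_data <- phylo4d(extractTree(tree), tip.data = traits)
pc <- phyloCorrelogram(tree_with_data, ci.bs = 10, n.points = 10)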
I'm trying to read two data frames into a comparative data object so I can plot them using pgls.
I'm not sure what the returned error means, or how to go about getting rid of it.
My code:
library(ape)
library(geiger)
library(caper)
taxatree <- read.nexus("taxonomyforzeldospecies.nex")
LWEVIYRcombodata <- read.csv("LWEVIYR.csv")
LWEVIYRcombodataPGLS <-data.frame(LWEVIYRcombodata$Sum.of.percentage,OGT=LWEVIYRcombodata$OGT, Species=LWEVIYRcombodata$Species)
comp.dat <- comparative.data(taxatree, LWEVIYRcombodataPGLS, "Species")
Returns error:
> comp.dat <- comparative.data(taxatree, LWEVIYRcombodataPGLS, 'Species')
Error in if (tabulate(phy$edge[, 1])[ntips + 1] > 2) FALSE else TRUE :
missing value where TRUE/FALSE needed
By the look of the error message, this might come from your data set and your phylogeny having some discrepancies that comparative.data struggles to handle.
You can try cleaning both the data set and the tree using dispRity::clean.data:
library(dispRity)   # clean.data()
library(ape)        # read.nexus()
library(caper)      # comparative.data()
## Reading the data
taxatree <- read.nexus("taxonomyforzeldospecies.nex")
LWEVIYRcombodata <- read.csv("LWEVIYR.csv")
LWEVIYRcombodataPGLS <- data.frame(LWEVIYRcombodata$Sum.of.percentage,OGT=LWEVIYRcombodata$OGT, Species=LWEVIYRcombodata$Species)
## Cleaning the data
cleaned_data <- clean.data(LWEVIYRcombodataPGLS, taxatree)
## Preparing the comparative data object
comp.dat <- comparative.data(cleaned_data$tree, cleaned_data$data, "Species")
However, as @MrFlick suggests, it's hard to know if that solves the problem without a reproducible example.
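If you want to see exactly what clean.data removed before refitting, the returned list also reports the mismatches (a sketch; the element names below are taken from my reading of the dispRity documentation):
## tips present in the tree but absent from the data, and rows absent from the tree
cleaned_data$dropped_tips
cleaned_data$dropped_rows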
The error here was that I was using a nexus file. Although ?comparative.data does not specify which phylo objects it accepts, newick trees seem to work fine, whereas nexus files do not.
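If you are stuck with a nexus file, one possible workaround consistent with that observation (a sketch, not a guarantee) is to re-save the tree in newick format with ape and read it back in:
library(ape)
library(caper)
taxatree_nex <- read.nexus("taxonomyforzeldospecies.nex")
write.tree(taxatree_nex, file = "taxonomyforzeldospecies.nwk")  # newick copy
taxatree <- read.tree("taxonomyforzeldospecies.nwk")
comp.dat <- comparative.data(taxatree, LWEVIYRcombodataPGLS, "Species")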
First of all, I'm new to programming, so this might be a simple question, but I can't find the solution anywhere.
I've been using this code to extract values from a set of stacked rasters:
library(raster)  # raster(), stack(), extract()
raster.files <- list.files(".", pattern = "asc")
raster.list <- list()
for (i in 1:length(raster.files)) {
  raster.list[[i]] <- raster(raster.files[i])
}
stacking <- stack(raster.list)
coord <- read.csv2("...")
extract.data <- extract(stacking, coord, method = "simple")
I already used this code several times without any problem, until now. Every time I run the extract line I get this error:
Error in .doCellFromXY(object@ncols, object@nrows, object@extent@xmin, :
  Not compatible with requested type: [type=character; target=double].
The coord file consists of a data.frame with 2 columns (X and Y, respectively).
I've managed to find a way to bypass this error. It's not technically a solution, because I still can't understand why R was treating my data as text in the first place.
Basically, I separated the X and Y columns, converted them individually, and then bound them together again in a new data.frame:
library(sp)  # coordinates()
coord_matrix_x <- as.numeric(as.matrix(coord[1]))
coord_matrix_y <- as.numeric(as.matrix(coord[2]))
coord2 <- cbind(coord_matrix_x, coord_matrix_y)
coord2 <- as.data.frame(coord2)
coordinates(coord2) <- c("coord_matrix_x", "coord_matrix_y")
It's far from the most elegant way to do it, but it works.
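A possibly simpler route (a sketch, assuming the two columns came in as character or factor because of the decimal/separator settings, and that they are named X and Y) is to coerce them in place right after reading:
coord <- read.csv2("...", stringsAsFactors = FALSE)
coord$X <- as.numeric(coord$X)  # assumed column names
coord$Y <- as.numeric(coord$Y)
extract.data <- extract(stacking, coord[, c("X", "Y")], method = "simple")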
I have worked a lot with MaxEnt in R recently (the dismo package), but only using cross-validation to validate my model of bird habitats (a single species). Now I want to use a self-created test sample file: I had to pick these validation points by hand and can't use random test points.
So my R-script looks like this:
library(raster)
library(dismo)
setwd("H:/MaxEnt")
memory.limit(size = 400000)
punkteVG <- read.csv("Validierung_FL_XY_2016.csv", header=T, sep=";", dec=",")
punkteTG <- read.csv("Training_FL_XY_2016.csv", header=T, sep=";", dec=",")
punkteVG$X <- as.numeric(punkteVG$X)
punkteVG$Y <- as.numeric(punkteVG$Y)
punkteTG$X <- as.numeric(punkteTG$X)
punkteTG$Y <- as.numeric(punkteTG$Y)
##### mask NA ######
mask <- raster("final_merge_8class+le_bb_mask.img")
dataframe_VG <- extract(mask, punkteVG)
dataframe_VG[dataframe_VG == 0] <- NA
dataframe_TG <- extract(mask, punkteTG)
dataframe_TG[dataframe_TG == 0] <- NA
punkteVG <- punkteVG*dataframe_VG
punkteTG <- punkteTG*dataframe_TG
#### add the raster dataset ####
habitat_all <- stack("blockstats_stack_8class+le+area_8bit.img")
#### MODEL FITTING #####
library(rJava)
system.file(package = "dismo")
options(java.parameters = "-Xmx1g" )
setwd("H:/MaxEnt/results_8class_LE_AREA")
### backgroundpoints ###
set.seed(0)
backgrVMmax <- randomPoints(habitat_all, 100000, tryf=30)
backgrVM <- randomPoints(habitat_all, 1000, tryf=30)
### Renner (2015) PPM modelfitting Maxent ###
maxentVMmax_Renner<-maxent(habitat_all,punkteTG,backgrVMmax, path=paste('H:/MaxEnt/Ergebnisse_8class_LE_AREA/maxVMmax_Renner',sep=""),
args=c("-P",
"noautofeature",
"nothreshold",
"noproduct",
"maximumbackground=400000",
"noaddsamplestobackground",
"noremoveduplicates",
"replicates=10",
"replicatetype=subsample",
"randomtestpoints=20",
"randomseed=true",
"testsamplesfile=H:/MaxEnt/Validierung_FL_XY_2016_swd_NA"))
After the "maxent()"-command I ran into multiple errors. First I got an error stating that he needs more than 0 (which is the default) "randomtestpoints". So I added "randomtestpoints = 20" (which hopefully doesn't stop the program from using the file). Then I got:
Error: Test samples need to be in SWD format when background data is in SWD format
Error in file(file, "rt") : cannot open the connection
The thing is, when I ran the script with the default cross-validation like this:
maxentVMmax_Renner<-maxent(habitat_all,punkteTG,backgrVMmax, path=paste('H:/MaxEnt/Ergebnisse_8class_LE_AREA/maxVMmax_Renner',sep=""),
args=c("-P",
"noautofeature",
"nothreshold",
"noproduct",
"maximumbackground=400000",
"noaddsamplestobackground",
"noremoveduplicates",
"replicates=10"))
...all works fine.
I also tried multiple things to get my CSV validation data into the correct format: two columns (labelled X and Y), three columns (labelled species, X and Y), and other variations. I would rather use the "punkteVG" object (the validation data) I created with read.csv...but it seems MaxEnt wants its own file.
I can't imagine my problem is so uncommon. Someone must have used the argument "testsamplesfile" before.
I found out what the problem was. So here it is, for others to enjoy:
The correct maxent command for a subsample file looks like this:
maxentVMmax_Renner<-maxent(habitat_all, punkteTG, backgrVMmax, path=paste('H:/MaxEnt',sep=""),
args=c("-P",
"noautofeature",
"nothreshold",
"noproduct",
"maximumbackground=400000",
"noaddsamplestobackground",
"noremoveduplicates",
"replicates=1",
"replicatetype=Subsample",
"testsamplesfile=H:/MaxEnt/swd.csv"))
Of course, there cannot be multiple replicates, since you only have one subsample.
Most importantly, the "swd.csv" subsample file has to include:
the X and Y coordinates
the values of the predictors at the respective points (e.g. from extract(habitat_all, punkteVG))
a first column consisting of the word "species" under the header "Species" (since MaxEnt uses the default "species" if you don't define one in the occurrence data)
The last point was the issue here. Basically, if you don't define the species column in the subsample file, MaxEnt does not know how to assign the data.
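Putting those three requirements together, the test samples file can be written straight from R; a sketch of how that might look (my assembly of the points above, not tested against MaxEnt):
## predictor values at the validation points
vals_VG <- extract(habitat_all, punkteVG)
## first column: the word "species" under the header "Species", then coordinates, then the values
swd <- data.frame(Species = "species", punkteVG[, c("X", "Y")], vals_VG)
write.csv(swd, "H:/MaxEnt/swd.csv", row.names = FALSE)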
I have a problem with the clogitLasso package, where I continually get the error "(list) object cannot be coerced to type 'double'".
I've done plenty of searching on this, and there are plenty of ways to pre-convert the data to solve this problem, but no matter what I do it keeps coming up.
I'm not sure what I'm doing wrong here. I can generate data structured exactly like this within R and it runs with the same syntax without any problems, but when I read the data in like this it doesn't work.
Using the data (trimmed, but gives the same error): https://pastebin.com/WfB1LJQ2
And the code:
library(clogitLasso)
#Read in data
data <- read.csv('data.txt',sep="\t")
#Data must be sorted so that the
#binary=1 option comes FIRST within the strata
datasorted <- data[order(data$groupid,-data$binary),]
#Convert from a data frame to numericals
X <- as.matrix(datasorted[,1:4])
y <- as.numeric(datasorted[,5])
group <- as.numeric(datasorted[,6])
results <- clogitLasso(X,y,group)
This gives the same error every time. Any tips would be greatly appreciated!
The object y must be of class matrix. Here is the modified code:
library(clogitLasso)
data <- read.csv('WfB1LJQ2.txt',sep="\t", header=T)
datasorted <- data[order(data$groupid,-data$binary),]
X <- as.matrix(datasorted[,1:4])
y <- as.matrix(datasorted[,5])
group <- as.numeric(datasorted[,6])
results <- clogitLasso(X,y,group)
plot(results)
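A quick way to see the only difference from the original attempt (just a check, assuming the same datasorted object):
class(as.numeric(datasorted[, 5]))  # plain numeric vector -- what the original code passed
class(as.matrix(datasorted[, 5]))   # one-column matrix -- what clogitLasso accepts here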