I am trying to optimise the k parameter using the adehabitatHR function LoCoH.k.area, and it stops running when the topology is such that it can't produce a polygon. The message is:
rgeos_PolyCreateComment: orphaned hole, cannot find containing polygon
for hole at index 12.
I have done a number of successful single runs using LoCoH.k, with only a few not running due to orphaned holes.
Is it possible to keep LoCoH.k.area looping through the k values specified in the vector even if one of them produces an orphaned hole?
Thanks, Janine
You can wrap the LoCoH.k.area function in tryCatch and call it once per k value. Called directly with the krange = 5:9 argument, the function throws:
Error in rgeos::createPolygonsComment(oobj) :
rgeos_PolyCreateComment: orphaned hole, cannot find containing polygon
for hole at index 6
Please see the code below:
library(adehabitatHR)
data(puechabonsp)
locs <- puechabonsp$relocs
## The call below throws an error
## LoCoH.k.area(locs[, 1], krange = 5:9)
pdf()  # LoCoH.k.area plots as a side effect; divert the plots to a file
y <- sapply(5:9, function(x) tryCatch(
  expr = cbind(LoCoH.k.area(locs[, 1], krange = x), k = x),
  error = function(e) NULL  # return NULL for k values that produce an orphaned hole
))
dev.off()
do.call(rbind, y)  # NULL results from failed k values are dropped here
Output:
Brock Calou Chou Jean k
1 25.21552 38.61693 83.37389 80.97771 8
2 27.37161 39.10789 86.45349 83.44156 9
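If you also want a record of which k values were skipped, a small variant of the same idea (a sketch, not from the original answer) reports each failure as it happens:
pdf()
y <- lapply(5:9, function(k) tryCatch(
  cbind(LoCoH.k.area(locs[, 1], krange = k), k = k),
  error = function(e) {
    message("k = ", k, " skipped: ", conditionMessage(e))  # report the orphaned-hole error
    NULL  # NULL entries are dropped by the rbind below
  }
))
dev.off()
do.call(rbind, y)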
I have some code that loops over a list of study IDs (ids) and turns them into separate polygons/spatial points. On the first execution of the loop it produces the following error:
Error in (function (x) : attempt to apply non-function
This is from the raster::rasterToPoints function. I've looked at the examples in the help section for this function, and passing fun=NULL seems to be an acceptable method (it filters out all NA values). All the values are equal to 1 anyway, so I tried passing a simple function like it suggests, such as function(x){x==1}. When this didn't work, I also tried to just suppress the error message, but without any luck using try() or tryCatch().
Main questions:
1. Why does this produce an error at all?
2. Why does it only display the error on the first run through the loop?
Reproducible example:
library(ggplot2)
library(raster)
library(sf)
library(dplyr)
pacific <- map_data("world2")
pac_mod <- pacific
coordinates(pac_mod) <- ~long+lat
proj4string(pac_mod) <- CRS("+init=epsg:4326")
pac_mod2 <- spTransform(pac_mod, CRS("+init=epsg:4326"))
pac_rast <- raster(pac_mod2, resolution=0.5)
values(pac_rast) <- 1
all_diet_density_samples <- data.frame(
lat_min = c(35, 35),
lat_max = c(65, 65),
lon_min = c(140, 180),
lon_max = c(180, 235),
sample_replicates = c(38, 278),
id= c(1,2)
)
ids <- all_diet_density_samples$id
for (idnum in ids){
poly1 = all_diet_density_samples[idnum,]
pol = st_sfc(st_polygon(list(cbind(
  c(poly1$lon_min, poly1$lon_min, poly1$lon_max, poly1$lon_max, poly1$lon_min),
  c(poly1$lat_min, poly1$lat_max, poly1$lat_max, poly1$lat_min, poly1$lat_min)
))))
pol_sf = st_as_sf(pol)
x <- rasterize(pol_sf, pac_rast)
df1 <- raster::rasterToPoints(x, fun=NULL, spatial=FALSE) #ERROR HERE
df2 <- as.data.frame(df1)
density_poly <- all_diet_density_samples %>% filter(id == idnum) %>% pull(sample_replicates)
df2$density <- density_poly
write.csv(df2, paste0("pol_", idnum, ".csv"))
}
Any help would be greatly appreciated!
These are error messages, but not errors in the strict sense: the script continues to run and the results are not affected. They are related to garbage collection (removal from memory of objects that are no longer in use). This is also why try() and tryCatch() cannot suppress them: the message is printed when a finalizer runs during garbage collection, not signalled as a condition by your own code, which makes it tricky to pinpoint the cause and explains why it does not always happen at the same spot.
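As a rough illustration (a hypothetical sketch, not code from the original post): forcing a garbage collection at a fixed point in the loop tends to move the message to that point, which is consistent with a finalizer being the source rather than any particular line of the loop body:
for (idnum in ids){
  # ... loop body exactly as above ...
  invisible(gc())  # force garbage collection; any finalizer message now tends to appear here
}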
Edit (Oct 2022)
These annoying messages
Error in x$.self$finalize() : attempt to apply non-function
Error in (function (x) : attempt to apply non-function
will disappear with the next release of Rcpp, which is planned for Jan 2023. You can also install the development version of Rcpp like this:
install.packages("Rcpp", repos="https://rcppcore.github.io/drat")
I am making a neural network in R so I can predict future data.
Firstly, I made a function that makes the layers:
add_layer <- function(x, in_size, out_size, act_function){
w = tf$variable(tf$random_normal(shape(in_size, out_size)))
b = tf$variable(tf$random_normal(shape(1, out_size)))
wxb = tf$matmul(x, w) + b
y = act_function(wxb)
return(y)
}
Then, I create the layers. For now, I create 2 layers:
x = tf$placeholder(tf$float32, shape(NULL,31))
ty = tf$placeholder(tf$float32, shape(NULL, 2))
#First layer
l1 = add_layer(x, 31, 10, tf$nn$relu)
#Second layer, result is 0(false) or 1(true)
l = add_layer(l1, 10, 2, tf$nn$softmax)
But then there is an error when I make layer l1 and layer l:
AttributeError: module 'tensorflow' has no attribute 'variable'
The problem is, when I remove in_size or out_size, it complains that these arguments are missing, and when I add them back it gives me the error above. After filling in all the parameters (in_size, out_size, x and the activation function), it still gives me the 'variable' error as seen above.
Any suggestions how to solve this?
Edit: I changed the capital letter V, but the result is still the same.
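For what it's worth, the TensorFlow variable class is spelled Variable with a capital V, so from R it is accessed as tf$Variable (both R and the underlying Python module are case-sensitive). A minimal corrected sketch, assuming TensorFlow 1.x via the R tensorflow package and with the softmax spelling fixed:
library(tensorflow)

add_layer <- function(x, in_size, out_size, act_function){
  w <- tf$Variable(tf$random_normal(shape(in_size, out_size)))  # capital V in Variable
  b <- tf$Variable(tf$random_normal(shape(1L, out_size)))
  wxb <- tf$matmul(x, w) + b
  act_function(wxb)
}

x  <- tf$placeholder(tf$float32, shape(NULL, 31L))
l1 <- add_layer(x, 31L, 10L, tf$nn$relu)
l  <- add_layer(l1, 10L, 2L, tf$nn$softmax)  # 'softmax', not 'sotfmax'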
I am trying to make a function of my own to subset a data.cube in R, and format the result automatically for some predefined plots I aim to build.
This is my function.
require(data.table)
require(data.cube)
secciona <- function(cubo = NULL,
fecha_valor = list(),
loc_valor = list(),
prod_valor = list(),
drop = FALSE){
cubo[fecha_valor, loc_valor, prod_valor, drop = drop]
## The line above will really be an assignment of the form y <- format(cubo[...drop])
## Rest of code which will end up plotting the subset of the function
}
The thing is I keep on getting the error: Error in eval(expr, envir, enclos) : object 'fecha_valor' not found
What is strangest to me is that everything works fine on the console, but not when run inside my subsetting function.
In console:
> dc[list(as.Date("2013/01/01"))]
> dc[list(as.Date("2013/01/01")),]
> dc[list(as.Date("2013/01/01")),,]
> dc[list(as.Date("2013/01/01")),list(),list()]
all give as result:
<data.cube>
fact:
5627 rows x 2 dimensions x 1 measures (0.32 MB)
dimensions:
localizacion : 4 entities x 3 levels (0.01 MB)
producto : 153994 entities x 3 levels (21.29 MB)
total size: 21.61 MB
But whenever I try
secciona(dc)
secciona(dc, fecha_valor = list(as.Date("2013/01/01")))
secciona(dc, fecha_valor = list())
I always get the error above mentioned.
Any ideas why this is happening? Or should I take a different approach to formatting the subset for plotting?
This is the standard issue that R users face when dealing with non-standard evaluation; it is a consequence of the computing on the language feature of R.
The [.data.cube function expects to be used in an interactive way; that extends the flexibility of the arguments passed to it, but imposes some restrictions. In that aspect it is similar to [.data.table when passing expressions from a wrapper function to the [ subset operator. I've added a dummy example below to make it reproducible.
I see you are already using the data.cube-oop branch; just to clarify for other readers, the data.cube-oop branch is 92 commits ahead of the master branch. To install it, use the following:
install.packages("data.cube", repos = paste0("https://", c(
"jangorecki.gitlab.io/data.cube",
"Rdatatable.github.io/data.table",
"cran.rstudio.com"
)))
library(data.cube)
set.seed(1)
ar = array(rnorm(8,10,5), rep(2,3),
dimnames = list(color = c("green","red"),
year = c("2014","2015"),
country = c("IN","UK"))) # sorted
dc = as.data.cube(ar)
f = function(color=list(), year=list(), country=list(), drop=FALSE){
expr = substitute(
dc[color=.color, year=.year, country=.country, drop=.drop],
list(.color=color, .year=year, .country=country, .drop=drop)
)
eval(expr)
}
f(year=list(c("2014","2015")), country="UK")
#<data.cube>
#fact:
# 4 rows x 3 dimensions x 1 measures (0.00 MB)
#dimensions:
# color : 2 entities x 1 levels (0.00 MB)
# year : 2 entities x 1 levels (0.00 MB)
# country : 1 entities x 1 levels (0.00 MB)
#total size: 0.01 MB
You can inspect the constructed expression by putting print(expr) before (or instead of) eval(expr).
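For the call above, the printed expression should look roughly like this (the exact deparsing may vary):
f(year = list(c("2014","2015")), country = "UK")
# dc[color = list(), year = list(c("2014", "2015")), country = "UK", drop = FALSE]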
Read more about non-standard evaluation:
- R Language Definition: Computing on the language
- Advanced R: Non-standard evaluation
- manual of substitute function
And some related SO questions:
- Passing on non-standard evaluation arguments to the subset function
- In R, why is [ better than subset?
Can somebody help me convert an 'ashape3d' class object to class 'mesh3d'?
In ashape3d, the triangle and tetrahedron faces are stored in different fields. As I don't think there's a function that can create a mesh3d object from triangles and tetrahedrons simultaneously, I tried the following (pseudocode):
model <- ashape3d(rtorus(1000, 0.5, 2),alpha=0.25)
vert <- model$x[model$vert[,2]==1,]
vert <- cbind(vert,rep(1,nrow(vert)))
tria <- model$triang[model$triang[,4]==1,1:3]
tetr <- model$tetra[model$tetra[,6]==1,1:4]
m3dTria <- tmesh3d(vertices = vert, indices = tria)
m3dTetr <- qmesh3d(vertices = vert, indices = tetr)
m3d <- mergeMeshes(m3dTria,m3dTetr)
plot.ashape3d(model) # works fine
plot3d(m3d) # Error in x$vb[1, x$it] : subscript out of bounds
Does anybody have a better way?
I needed to do this recently and found this unanswered question. The easiest way to figure out what is going on is to look at plot.ashape3d and read the docs for ashape3d. plot.ashape3d only plots triangles.
The rgl package has a generic as.mesh3d function. This defines a method for that generic function.
as.mesh3d.ashape3d <- function(x, ...) {
if (length(x$alpha) > 1)
stop("I don't know how to handle ashape3d objects with >1 alpha value")
iAlpha = 1
# from help for ashape3d
# for each alpha, a value (0, 1, 2 or 3) indicating, respectively, that the
# triangle is not in the alpha-shape or it is interior, regular or singular
# (columns 9 to last)
# Pick the rows for which the triangle is regular or singular
selrows = x$triang[, 8 + iAlpha] >= 2
tr <- x$triang[selrows, c("tr1", "tr2", "tr3")]
rgl::tmesh3d(
vertices = t(x$x),
indices = t(tr),
homogeneous = FALSE
)
}
You can try it out on the data above
library(alphashape3d)
model <- ashape3d(rtorus(1000, 0.5, 2), alpha = 0.25)
plot(model, edges=F, vertices=F)
library(rgl)
model2=as.mesh3d(model)
open3d()
shade3d(model2, col='red')
I want to find documents whose similarity with other documents is larger than a given value (0.1), by cutting the document collection into blocks.
library(tm)
data("crude")
sample.dtm <- DocumentTermMatrix(
crude, control=list(
weighting=function(x) weightTfIdf(x, normalize=FALSE),
stopwords=TRUE
)
)
step = 5
n = nrow(sample.dtm)
block = n %/% step                   # number of blocks of 'step' documents
start = (c(1:block) - 1) * step + 1  # first document index of each block
end = start + step - 1               # last document index of each block
j = unlist(lapply(1:(block-1), function(x) rep((x+1):block, times = 1)))
i = unlist(lapply(1:block, function(x) rep(x, times = block - x)))
ij <- cbind(i, j)                    # all pairs of distinct blocks (i < j)
library(skmeans)
getdocs <- function(k){
ci <- c(start[k[[1]]]:end[k[[1]]])
cj <- c(start[k[[2]]]:end[k[[2]]])
combi <- sample.dtm[ci]
combj <- sample.dtm[cj]
rownames(combi)<-ci
rownames(combj)<-cj
comb<-c(combi,combj)
sim<-1-skmeans_xdist(comb)
cat("Block", k[[1]], "with Block", k[[2]], "\n")
flush.console()
tri.sim<-upper.tri(sim,diag=F)
results<-tri.sim & sim>0.1
docs<-apply(results,1,function(x) length(x[x==TRUE]))
docnames<-names(docs)[docs>0]
gc()
return (docnames)
}
It works well when using apply:
system.time(rmdocs<-apply(ij,1,getdocs))
When using parRapply:
library(snow)
library(skmeans)
cl<-makeCluster(2)
clusterExport(cl,list("getdocs","sample.dtm","start","end"))
system.time(rmdocs<-parRapply(cl,ij,getdocs))
Error:
Error in checkForRemoteErrors(val) :
2 nodes produced errors; first error: attempt to set 'rownames' on an object with no dimensions
Timing stopped at: 0.01 0 0.04
It seems that sample.dtm couldn't be used in parRapply. I'm confused. Can anyone help me? Thanks!
In addition to exporting objects, you need to load the necessary packages on the cluster workers. In your case, the result of not doing so is that there isn't a dimnames method defined for "DocumentTermMatrix" objects, causing rownames<- to fail.
You can load packages on the cluster workers with the clusterEvalQ function:
clusterEvalQ(cl, { library(tm); library(skmeans) })
After doing that, rownames(combi)<-ci will work correctly.
Also, if you want to see the output from cat, you should use the makeCluster outfile argument:
cl <- makeCluster(2, outfile='')
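Putting it together, the worker setup would look something like this (a sketch assembled from the code in the question):
library(snow)
cl <- makeCluster(2, outfile = '')                   # outfile = '' shows output from cat
clusterEvalQ(cl, { library(tm); library(skmeans) })  # load the packages on each worker
clusterExport(cl, list("getdocs", "sample.dtm", "start", "end"))
system.time(rmdocs <- parRapply(cl, ij, getdocs))
stopCluster(cl)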