I would like to apply transformations to a set of points using existing nonlinear transformations from NiftyReg. How can I do this?
I created the transformations by registering a subject to a template with NiftyReg (https://www.nitrc.org/projects/niftyreg/), and I also found an example of how to do it in RNiftyReg (the R version of NiftyReg):
result <- applyTransform(transform, x, interpolation = 3L, nearest = FALSE, internal = FALSE)
https://github.com/Borda/BIRL/blob/master/scripts/Rscript/RNiftyReg_elastic.r
There are three things I can think of that could help me:
1. Find the command in the NiftyReg toolbox that does what applyTransform in the R package does.
2. Understand what applyTransform does and recreate it in Python or Bash. Unfortunately I could not understand the line in the R function that transforms the points (see below).
3. Run applyTransform in R, but there I currently have the problem that I can only run this function with an internal image, meaning the transformation has to be produced with R, whereas I would like to use the ones I already computed.
applyTransform <- function (transform, x, interpolation = 3L, nearest = FALSE, internal = FALSE)
result <- .Call(C_transformPoints, transform, points, isTRUE(nearest))
newPoints <- sapply(seq_len(nrow(points)), function(i) {if (length(result[[i]]) == nDims) return(result[[i]])
What is C_transformPoints? My attempts to Google it were not successful.
If anyone has already done this or knows whether it is possible, that would help me a lot!
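For what it's worth, here is a minimal, untested sketch of the third option. It assumes, as I understand RNiftyReg's conventions, that a nonlinear transform is just the control-point image produced by reg_f3d with the source and target image geometries attached as attributes; all three file names are placeholders.

library(RNiftyReg)   # brings in RNifti as well

# placeholder file names: the reg_f3d control-point grid and the two images
transform <- RNifti::readNifti("cpp.nii.gz")
attr(transform, "source") <- RNifti::readNifti("subject.nii.gz")
attr(transform, "target") <- RNifti::readNifti("template.nii.gz")

# one point per row, in the source image's voxel coordinates
points <- matrix(c(30, 40, 20,
                   50, 60, 25), ncol = 3, byrow = TRUE)
applyTransform(transform, points)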
Related
I have a set of one-dimensional data points (locations on a segment), and I would like to test it for complete spatial randomness. I was planning to run the Gest (nearest neighbour), Fest (empty space) and Kest (pairwise distances) functions on it.
I am not sure how I should import my data set, though. I can use ppp by setting the second dimension to 0, e.g.:
library(spatstat)

myDistTEST <- data.frame(
  col1 = sample(x = 1:100, size = 50, replace = FALSE),
  col2 = paste('Event', 1:50, sep = ''),
  stringsAsFactors = FALSE)
myDistTEST <- myDistTEST[order(myDistTEST$col1), ]
myPPPTest <- ppp(x = myDistTEST[, 1], y = replicate(n = 50, expr = 0),
                 c(1, 120), c(0, 0))
But I am not sure this is the proper way to format my data. I have also tried to use lpp, but I am not sure how to set up the linnet object. What would be the correct way to import my data?
Thank you for your kind attention.
It would be wrong to simply let y = 0 for all your points and then proceed as if you had a point pattern in two dimensions. Your suggestion of using lpp is good. Regarding how to define the linnet and lpp objects, take a look at my answer here.
I have considered making a small package to handle one-dimensional patterns more easily in spatstat, but so far I have only started the package with a single function to make the definition of the appropriate lpp easier. If you feel adventurous you can install it from the GitHub repo via the remotes package:
remotes::install_github("rubak/spatstat.1d")
The single function you can use is called lpp1. It basically just wraps up the few steps described in the linked answer.
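To make the linked answer concrete, here is a minimal sketch (assuming spatstat is installed) of building such an lpp by hand, without the lpp1 wrapper: the segment becomes a linnet with two vertices and one edge, and the 1D locations are placed on it with y = 0.

library(spatstat)

# represent the segment [0, 120] as a linear network with a single edge
verts <- ppp(x = c(0, 120), y = c(0, 0), window = owin(c(0, 120), c(-1, 1)))
L <- linnet(vertices = verts, edges = matrix(c(1, 2), ncol = 2))

# hypothetical 1D locations, placed on the network with y = 0
xs <- sort(sample(1:100, 50))
X <- lpp(data.frame(x = xs, y = 0), L)

plot(X)
linearK(X)   # network analogue of Kest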
I hope you can help me with this problem, which I can't find a way to overcome. Sorry if I make some mistakes while writing this post; my English is a bit rusty right now.
Here is the question. I have .shp data that I want to analyze in R. The .shp files can be either lines that represent the lines of traps we set to catch octopuses, or points located directly over those lines, representing where we captured one.
The question I'm trying to answer is: are the octopuses statistically grouped or not?
After a bit of investigation it seems to me that I need to use R and its linearK function to answer that question, using the libraries spatstat, maptools and sp.
Here is the code I'm using in RStudio:
Loading the libraries
library(spatstat)
library(maptools)
library(sp)
Creating a linnet object with the track
t1<- as.linnet(readShapeSpatial("./20170518/t1.shp"))
I get the following warning but it seems to work
Warning messages:
1: use rgdal::readOGR or sf::st_read
2: use rgdal::readOGR or sf::st_read
Plotting it to be sure everything is OK
plot(t1)
Creating a ppp object with the points
p1<- as.ppp(readShapeSpatial("./20170518/p1.shp"))
I get the same warning here, but the real problems start when I try to plot it:
> plot(p1)
Error in if (!is.vector(xrange) || length(xrange) != 2 || xrange[2L] < :
missing value where TRUE/FALSE needed
In addition: Warning messages:
1: Interpretation of arguments maxsize and markscale has changed (in spatstat version 1.37-0 and later). Size of a circle is now measured by its diameter.
2: In plot.ppp(x, ..., multiplot = FALSE, do.plot = FALSE) :
All mark values are NA; plotting locations only.
(warnings 3 to 7 repeat the same "All mark values are NA" message)
Now what is left is to join the objects into an lpp object and to analyze it with the linearK function:
> pt1 <- lpp(p1,t1)
> linearK(pt1)
Function value object (class ‘fv’)
for the function r -> K[L](r)
......................................
Math.label Description
r r distance argument r
est {hat(K)[L]}(r) estimated K[L](r)
......................................
Default plot formula: .~r
where “.” stands for ‘est’
Recommended range of argument r: [0, 815.64]
Available range of argument r: [0, 815.64]
This is my situation right now. What I don't know is why the plot function is not working with my ppp object, and how to interpret the return value of the linearK function; help(linearK) didn't provide any clue. Since I have a lot of tracks, each with its own set of points, my desired outcome would be some kind of summary like: x tracks analyzed, a grouped, b dispersed and c unknown.
Thank you for your time; I'll greatly appreciate it if you can help me solve this problem.
Edit: Here is a link to a zip file containing all the shp files of one day, both tracks and points, and a txt file with my code. https://drive.google.com/open?id=0B0uvwT-2l4A5ODJpOTdCekIxWUU
First, two pieces of general advice: (1) each time you create a complicated object, print it at the terminal to see if it is what you expected; (2) when you get an error, immediately type traceback() and copy the output. This will reveal exactly where the error is detected.
A ppp object must include a specification of the study region (window). In your code, the object p1 is created by converting data of class SpatialPointsDataFrame, which do not include a specification of the study region, via the function as.ppp.SpatialPointsDataFrame into an object of class ppp in which the window is guessed by taking the bounding box of the coordinates. Unfortunately, in your example, there is only one data point in p1, so the default bounding box is a rectangle of width 0 and height 0. [This would have been revealed by printing p1.] Such objects can usually be handled by spatstat, but this particular object triggers a bug in the function plot.solist, which expects windows to have non-zero size. I will fix the bug, but...
In your case, I suggest you do
Window(p1) <- Window(t1)
immediately after creating p1. This will ensure that p1 has the window that you probably intended.
If all else fails, read the spatstat vignette on shapefiles...
I have managed to find a solution. As Adrian Baddeley noticed, there was a problem with the owin object. That problem seems to be bypassed (not really solved) if I create the ppp object manually instead of converting my set of points.
I have also replaced the readShapeSpatial function with rgdal::readOGR, since the former is deprecated; that was the reason for the warnings I was getting.
This is the R script I'm using right now, commented to clarify:
#first install spatstat, maptools and sp
#load them
library(spatstat)
library(maptools)
library(sp)

#create an array of folders; will add more when everything works fine
folders <- c("20170518")

for (f in folders) {
  #read all the shp files from that folder, both points and tracks
  pointfiles <- list.files(paste("./", f, "/points", sep = ""), pattern = "*.shp$")
  trackfiles <- list.files(paste("./", f, "/tracks", sep = ""), pattern = "*.shp$")

  #for each point and track couple
  for (i in 1:length(pointfiles)) {
    #create a linnet object with the track
    t <- as.linnet(rgdal::readOGR(paste("./", f, "/tracks/", trackfiles[i], sep = "")))
    #plot(t)

    #read the current set of points
    pre_p <- rgdal::readOGR(paste("./", f, "/points/", pointfiles[i], sep = ""))
    #plot(pre_p)

    #obtain the coordinates of the current set of points
    c <- coordinates(pre_p)

    #vectors of x and y coords
    xc <- c()
    yc <- c()

    #not a very good way to fill my vectors, but it works for my study area
    for (v in c) {
      if (v > 4000000) {
        yc <- c(yc, v)
      } else if (v < 4000000 && v > 700000) {
        xc <- c(xc, v)
      }
    }
    print(xc)
    print(yc)

    #create a ppp object using the vectors of x and y coords, and a window
    #object extracted from my set of points
    p <- ppp(xc, yc, Window(as.ppp(pre_p)))

    #join them into an lpp object
    pt <- lpp(p, t)
    #plot(pt)

    #analyze it with the linearK function; nsim = 9 for testing purposes
    #envelope.lpp is the method for analyzing linear point patterns
    assign(paste("results", f, i, sep = "_"), envelope.lpp(pt, nsim = 9, fun = linearK))
  } #end for each points & track set
} #end for each day of study
So as you can see, this script tests each couple of points and track for CSR, for each day of study, and it is working fine right now. Unfortunately I have not managed to create a report or report-like output of the results yet (or even to fully understand them); I'll keep working on that. Of course I can use any advice you have, since this is my first try with R and many newbie mistakes will happen.
The script and the shp files with the updated folder structure can be found here (113 KB).
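As a starting point for that summary, here is a hedged sketch of one way to turn each envelope into a verdict. It assumes the results_* objects created by the script above and relies on the obs/lo/hi columns that simulation envelopes carry; crossing the envelope is only an informal indication, not a formal test.

# classify one envelope object: observed K above the envelope suggests
# grouping, below it dispersion, inside it we call the track "unknown"
classify_track <- function(env) {
  if (any(env$obs > env$hi, na.rm = TRUE)) "grouped"
  else if (any(env$obs < env$lo, na.rm = TRUE)) "dispersed"
  else "unknown"
}

verdicts <- sapply(ls(pattern = "^results_"), function(nm) classify_track(get(nm)))
table(verdicts)   # e.g. x tracks analyzed: a grouped, b dispersed, c unknown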
I am working with the rEMM package and want to add a function that is very similar to fade.
A little background: this function 'fades' the data stream based on the temporal structure of the data. The weights are calculated as w = 2^(-lambda*t). I would like to change the weights for this method to w = 1/(1 + (T - t/c)^lambda). In this case I have two additional parameters, T and c, that I would have to pass to the new function, compared to the original fade function.
It seems that the only difference from the current fade.R script would be lines 24-30, where I would create the new lambda_factor.
I have absolutely no idea how to do that.
Any help would be appreciated.
The author of fade seems to have exposed lambda as a parameter that you can specify. Looking at the code with showMethods(fade, includeDef=TRUE), everything ends up at rEMM:::.fade, where we see
> rEMM:::.fade
function (x, t = 1, lambda = NULL)
{
    if (is.null(lambda))
        lambda_factor <- x@tnn_d$lambda_factor
    else lambda_factor <- 2^(-lambda)
    x@tnn_d$counts <- x@tnn_d$counts * lambda_factor^t
    x@tracds_d$mm <- smc_fade(x@tracds_d$mm, lambda_factor^t)
    invisible(x)
}
<environment: namespace:rEMM>
So instead of writing a new function, why not calculate lambda the way you'd like it, transformed so that the lambda_factor used in this function is consistent with your transformation:

mylambda <- function(t, lambda, T, c)
    -log(1/(1 + (T - t/c)^lambda), 2) / t
and then invoke fade as
fade(x, t, mylambda(t, lambda, T, c))
? Wrap this further as
myfade <- function(x, t, lambda, T, c)
    fade(x, t, mylambda(t, lambda, T, c))
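A quick sanity check with hypothetical parameter values: fade's internal factor, 2^(-lambda), raised to the power t should reproduce the desired weight.

t <- 2; lambda <- 1.5; T <- 10; c <- 3                # hypothetical values
w <- 1/(1 + (T - t/c)^lambda)                         # the target weight
all.equal(2^(-mylambda(t, lambda, T, c) * t), w)      # TRUE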
If you have the code for the current fade function, which looks like:

fade <- function(...){
...
}

just copy that code, make the modifications you want (you seem to know what you want), and then run it. Your new function will appear in the global environment, and will be called each time you call fade(...).
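For instance, a hedged sketch of that route based on the .fade internals printed above (the weight formula is the only change; smc_fade is internal to the package, hence the rEMM::: prefix):

library(rEMM)

# hypothetical stand-alone variant of rEMM:::.fade with the new weight
myfade2 <- function(x, t = 1, lambda, T, c) {
  w <- 1/(1 + (T - t/c)^lambda)                    # new weight, applied directly
  x@tnn_d$counts <- x@tnn_d$counts * w
  x@tracds_d$mm <- rEMM:::smc_fade(x@tracds_d$mm, w)
  invisible(x)
}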
I'm trying to arcsine square-root transform data lying on [-1, 1]. Using transf.arcsine from the metafor package produces NaNs when it tries to square-root the negative data points. Conceptually, I want to use arcsin(sgn(x)√|x|), i.e. take the square root of the absolute value, restore the original sign, then arcsine-transform the result. The trouble is I have no idea how to begin doing this in R. Help would be appreciated.
x <- seq(-1, 1, length = 20)
asin(sign(x) * sqrt(abs(x)))
or as a function
trans.arcsine <- function(x){
asin(sign(x) * sqrt(abs(x)))
}
trans.arcsine(x)
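A quick check that the function behaves as intended: it is odd and maps [-1, 1] onto [-pi/2, pi/2].

trans.arcsine(c(-1, 0, 1))                        # -pi/2  0  pi/2
all.equal(trans.arcsine(-x), -trans.arcsine(x))   # TRUE: the transform is odd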
Help in R is just help() or help.search(). So, let's try the obvious:
> help(arcsin)
No documentation for ‘arcsin’ in specified packages and libraries:
OK, that's not good. But R must be able to do trig... let's try something even simpler.
help(sin)
There are all the trig functions. And I note there's a link to Math on the page. Clicking that seems to provide all of the functions you need. It turns out that I could have just typed...
help(Math)
also,
help.search('trigonometry')
I had a similar problem. I wanted to arcsine-transform most of the dataset logmeantd.ascvr and approached it in this manner:
First make sure the data range has been transformed to between -1 and 1 (in this case the values were expressed as percentages):
logmeantd.ascvr[1:12] <- logmeantd.ascvr[1:12] * 0.01
Next apply the square root function, sqrt():
logmeantd.ascvr[1:12] <- sqrt(logmeantd.ascvr[1:12])
Lastly apply the arcsine function, asin():
logmeantd.ascvr[1:12] <- asin(logmeantd.ascvr[1:12])
*Note: in this instance I had excluded the MEAN variable of my dataset because I wanted to apply a log function, log(), to it:
logmeantd.ascvr$MEAN <- log(logmeantd.ascvr$MEAN)
I have some experience with R as a statistics platform, but I am inexperienced in image-based maths. I have a series of photographs (TIFF format, px/µm is known) with holes and irregular curves. I'd like to measure the shortest distance between a hole and the closest curve for that particular hole, and to do this for each hole in a photograph. The holes are not regular either, so maybe I'd need to tell the program what the holes are and what the curves are (ImageJ has point and segmented-line functions).
Any ideas how to do this? Which package should I use in R? Would you recommend another program for this kind of task?
EDIT: Doing this is now possible using the sclero package. The package is currently available on GitHub and the procedure is described in detail in the tutorial. Just to illustrate, here is an example from the tutorial:
library(devtools)
install_github("MikkoVihtakari/sclero", dependencies = TRUE)
library(sclero)
path <- file.path(system.file("extdata", package = "sclero"), "shellspots.zip")
dat <- read.ijdata(path, scale = 0.7812, unit = "um")
shell <- convert.ijdata(dat)
aligned <- spot.dist(shell)
plot(aligned)
It is also possible to add sample spot sizes using the functions provided by the sclero package. Please see Section 2.5 in the tutorial.
There's a tool for edge detection written for ImageJ that might help you first find the holes and the lines, and clarify them. You can find it at
http://imagejdocu.tudor.lu/doku.php?id=plugin:filter:edge_detection:start
Playing around with the settings for the thresholding and the hysteresis can help in getting the lines and holes found. It's difficult to tell whether this has much chance of working without seeing your actual photographs, but a colleague of mine had good results using this tool on FRAP images. I programmed an ImageJ tool that can calculate recoveries in FRAP analysis based on those images. You might get some ideas for yourself by looking at the code (see: http://imagejdocu.tudor.lu/doku.php?id=plugin:analysis:frap_normalization:start )
The only way I know of to work with images in R is by using EBImage, which is contained in the Bioconductor system. The package Rimage is orphaned, so it is no longer maintained.
To find the shortest distance: once you have the coordinates of the lines and holes, you can go for the shotgun approach: calculate the distances between all points of the line and all points of the hole, and then take the minimum. An illustration of that in R:
x <- -100:100
x2 <- seq(-70, -50, length.out = length(x)/4)

a.line <- list(x = x,
               y = 4*x + 5)

a.hole <- list(
  x = c(x2, rev(x2)),
  y = c(200 + sqrt(100 - (x2 + 60)^2),
        rev(200 - sqrt(100 - (x2 + 60)^2)))
)

plot(a.line, type = 'l')
lines(a.hole, col = 'red')

calc.distance <- function(line, hole){
  mline <- matrix(unlist(line), ncol = 2)
  mhole <- matrix(unlist(hole), ncol = 2)
  id1 <- rep(1:nrow(mline), nrow(mhole))
  id2 <- rep(1:nrow(mhole), each = nrow(mline))
  min(
    sqrt(
      (mline[id1, 1] - mhole[id2, 1])^2 +
      (mline[id1, 2] - mhole[id2, 2])^2
    )
  )
}
Then:
> calc.distance(a.line,a.hole)
[1] 95.51649
You can check this mathematically by deriving the distance from the equations of the circle and the line. This approach is fast enough as long as you don't have millions of points describing thousands of lines and holes.
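For this particular example the analytic check is short: the hole is a circle of radius 10 centred at (-60, 200) and the line is y = 4x + 5, so the exact answer is the centre-to-line distance minus the radius.

# distance from the centre (-60, 200) to the line 4x - y + 5 = 0, minus the radius
abs(4*(-60) - 200 + 5) / sqrt(4^2 + 1) - 10
# approximately 95.503; the shotgun value 95.51649 is slightly larger because
# the line and the circle are sampled at discrete points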