Fitting a line to an image - r

I am trying to fit a line to an image based on the intensity (or color) of the pixels. The figure below shows a typical test image in panel 1, with a line manually drawn in panel 2. The test image (matrix) can be downloaded here: .RData from dropbox.
I would like to use a regression analysis to produce something similar to the manually drawn line in panel 2. However, I cannot use a simple linear regression because, as with all images, there are errors in both the x and y axes.
I am open to algorithm descriptions with relevant equations, links, etc., and not necessarily code that I can copy and paste.
METHODS I WANT TO AVOID
Correlating a series of synthetic binary images of pixels drawn at various slopes with the actual data image. For example, the correlation of the two images below would be quite good, but again, I want to avoid this method.
Using a skeletonization algorithm to reduce the image such that a simple linear regression can be used.

Seismologists, interestingly enough, deal with similar problems: they correct reflection data based on the distance between a seismic source and a receiver using a process known as normal moveout (NMO). I used a similar process.
The general algorithm is:
load in the image
define a series of slopes to investigate
define a window length that is < number of image columns
loop over the series of slopes and...
define index locations (x,y) over the image based on the slope and the size of the window (gray points in row one of the image below).
build a matrix from the original matrix, indexed at the (x,y) locations from above (plots in row two of the image below).
sum that matrix, then normalize the sum by dividing by the number of summed elements.
save each sum (there will be one sum for every velocity/slope you loop over).
The velocity corresponding to the max (or min) of the sum vector is the best slope/velocity of the image at the current pixel column (row three in the image below).
Perform the above steps along the columns of the image.
The algorithm is visually described in the image below.
The code to perform the above procedure on one column of the test data given in the question is:
load('test.RData')
## INPUTS ##
img=test
vel.min=1 ## minimum velocity (or slope) to test
vel.max=20 ## max velocity to test
vel.number=100 ## how many velocities to test
win=10 ## size of window to investigate
## define a time index
ti=nrow(img)/2
## set up a vector to hold the velocity correlation values
vel.corrs <- rep(NA,vel.number)
## define the set of velocities to search over
vels <- seq(vel.min,vel.max,length.out=vel.number)
## define a velocity index
vi=1
while(vi<=length(vels)) {
  ## build a binary matrix corresponding to the window and velocity
  bin.mat <- matrix(0,ncol=ncol(img),nrow=nrow(img))
  slope.line <- seq(0,ncol(bin.mat)/vels[vi],length.out=ncol(bin.mat))
  bin.mat[(ti-win/2):(ti+win/2),]=1
  ## define the offset
  offset <- rep(slope.line,each=win+1)
  ## define the indices of array points according to velocity and window
  win.vel.ind <- cbind(which(bin.mat==1,arr.ind=TRUE)[,1]+offset,which(bin.mat==1,arr.ind=TRUE)[,2])
  ## limit the points to the dimensions of the image
  if(any(floor(win.vel.ind[,1]) > nrow(img))){
    win.vel.ind[(which(floor(win.vel.ind[,1])>nrow(img))),]=NA
    ##win.vel.ind <- win.vel.ind[-(which(floor(win.vel.ind[,1])>nrow(img))),]
  }
  ## pluck the values of the image associated with those non-NA indices
  slice <- img[win.vel.ind]
  ## build a matrix of the slice vector with nrow=win+1
  slice.mat <- matrix(slice,nrow=win+1,ncol=ncol(img),byrow=FALSE)
  ## optionally apply a hamming window
  ##ham.mat <- matrix(hamming(win+1),ncol=ncol(slice.mat),nrow=nrow(slice.mat))
  ##slice.ham <- slice.mat*ham.mat
  ## sum this 'slice', normalize, and store
  vel.corrs[vi] <- sum(slice,na.rm=TRUE)/length(na.omit(slice))
  vi=vi+1
}
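The best slope for this column is then the velocity whose normalized sum is largest (or smallest, depending on whether the feature of interest is brighter or darker than the background). A minimal sketch of that last step, assuming the vels and vel.corrs vectors from the loop above:
## pick the velocity/slope with the largest normalized sum for this column;
## use which.min() instead if the feature is darker than the background
best.vel <- vels[which.max(vel.corrs)]
plot(vels, vel.corrs, type="l", xlab="candidate slope/velocity", ylab="normalized sum")
abline(v=best.vel, lty=2)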

Related

How to calculate displacement of an image with cross correlation in R?

I am comparing two satellite images (one optical and one SAR) and I want to determine the displacement between them. Both images have a spatial resolution of 30 m, so I would like to analyze the displacement in pixels (e.g., greater than half a pixel = 15 m). I found some code and adapted it to my data. The script generates a cross-correlation image, but I don't understand how to produce a plot where I can see the displacement in pixels (i.e., in meters), for example whether there are displacements greater than 15 m.
These are the images I am analyzing: first the optical image, then the SAR image.
Please, any help is appreciated! :)
## read my images with the png package
library(png)
f1 <- as.matrix(readPNG("optical.png"))
f2 <- as.matrix(readPNG("sar.png"))
## take the fft of the first frame
F1 <- fft(f1)
## add some uniform noise to the second frame
noise.b <- runif(length(f2),min(range(f2)),max(range(f2)))
f2 <- noise.b+f2
## take the conjugate fft of the (noisy) second frame
F2.c <- Conj(fft(f2))
## calculate the cross-power spectrum (as in the Wikipedia article on phase correlation)
R <- (F1*F2.c)/abs(F1*F2.c)
## take the inverse fft of R
r <- fft(R,inv=TRUE)/length(R)
## keep only the real part, because the (near-)zero imaginary components are not needed
r <- Re(r)
## show the normalized cross-correlation
image(r)
## find the max in the cross-correlation matrix, i.e. the phase shift
## between the two images
min.err <- which(r==max(r),arr.ind=TRUE)
shift <- (dim(f1)+3)/2-min.err
Here's the result of the cross-correlation.
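For reference, since each pixel covers 30 m, the recovered pixel shift converts to meters by multiplying by the pixel size. A minimal sketch, assuming the shift computed in the code above:
## convert the pixel shift to a displacement in meters,
## assuming a 30 m pixel size (shift comes from the code above)
pixel.size <- 30
shift.meters <- shift * pixel.size
shift.meters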

How can I find the pixel-wise maximum of multiple rasters?

I have hundreds of rasters with the same resolution and extent. It's a time series, and each raster represents one point in time.
I know how to find the absolute maximum value in a raster.
But how can I find the maximum value in each cell across the entire time series of rasters?
If a1, a2, ..., a1000 are rasters, I want to create a raster x where each pixel is the maximum of all corresponding pixels of a1-a1000.
If you first put the rasters in a stack, you can then simply apply min() or max() to the stack to get the summary RasterLayer you're after:
## Example rasters and stack (using the raster package)
library(raster)
r1 <- raster(matrix(1:4,ncol=4))
r2 <- -2*r1
r3 <- 2*r1
rr <- list(r1,r2,r3)
s <- stack(rr)
## Extract the pixel-wise min and max values
min(s)
max(s)
(To apply some other, more complicated function that returns a scalar for each pixel in the stack, you may want to use calc(), as demonstrated (for example) here.)
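A minimal sketch of the calc() route, assuming the stack s from the example above and using the per-pixel range as a stand-in for a more complicated function:
## apply an arbitrary per-pixel summary across the stack with calc();
## here: the range (max - min) of each pixel's time series
rng <- calc(s, fun = function(x) diff(range(x)))
rng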

Find correct 2D translation of a subset of coordinates

I have a problem I wish to solve in R with example data below. I know this must have been solved many times but I have not been able to find a solution that works for me in R.
The core of what I want to do is to find how to translate a set of 2D coordinates to best fit into an other, larger, set of 2D coordinates. Imagine for example having a Polaroid photo of a small piece of the starry sky with you out at night, and you want to hold it up in a position so they match the stars' current positions.
Here is how to generate data similar to my real problem:
# create reference points (the "starry sky")
set.seed(99)
ref_coords = data.frame(x = runif(50,0,100), y = runif(50,0,100))
# take a subset of the coordinates to serve as the points we
# are looking for ("the Polaroid")
my_coords_final = ref_coords[c(5,12,15,24,31,34,48,49),]
# add a little bit of variation as compared to the reference points
# (data should be very similar, but have a little bit of noise)
set.seed(100)
my_coords_final$x = my_coords_final$x+rnorm(8,0,.1)
set.seed(101)
my_coords_final$y = my_coords_final$y+rnorm(8,0,.1)
# create "start values" by, e.g., translating the points we are
# looking for to start at (0,0)
my_coords_start =apply(my_coords_final,2,function(x) x-min(x))
# Plot of example data, goal is to find the dotted vector that
# corresponds to the translation needed
plot(ref_coords, cex = 1.2) # "Starry sky"
points(my_coords_start,pch=20, col = "red") # start position of "Polaroid"
points(my_coords_final,pch=20, col = "blue") # corrected position of "Polaroid"
segments(my_coords_start[1,1],my_coords_start[1,2],
my_coords_final[1,1],my_coords_final[1,2],lty="dotted")
Plotting the data as above should yield a figure showing the reference points (the "starry sky"), the start position of the "Polaroid" (red), and its corrected position (blue).
The result I want is basically what the dotted line in the plot above represents, i.e. a delta in x and y that I could apply to the start coordinates to move them to their correct position in the reference grid.
Details about the real data
There should be close to no rotational or scaling difference between my points and the reference points.
My real data is around 1000 reference points and up to a few hundred points to search (could use less if more efficient)
I expect to have to search about 10 to 20 sets of reference points to find my match, as many of the reference sets will not contain my points.
Thank you for your time, I'd really appreciate any input!
EDIT: To clarify, the right plot represents the reference data. The left plot represents the points that I want to translate across the reference data in order to find a position where they best match the reference. That position, in this case, is represented by the blue dots in the previous figure.
Finally, any working strategy must not use the data in my_coords_final, but rather reproduce that set of coordinates starting from my_coords_start using ref_coords.
So, the previous approach I posted (see edit history), which used optim() to minimize the sum of distances between points, will only work in the limited circumstance where the point distribution used as reference data is in the middle of the point field. A solution that satisfies the question, and still seems workable for a few thousand points, is a brute-force delta-and-comparison algorithm: calculate the difference between each point in the field and a single point of the reference data, then determine how many of the rest of the reference data fall within a minimum distance threshold (which is needed to account for the noise in the data):
## A brute-force approach where min_dist can be used to
## ameliorate some random noise:
min_dist <- 5
win_thresh <- 0
win_thresh_old <- 0
for(i in 1:nrow(ref_coords)) {
  x2 <- my_coords_start[,1]
  y2 <- my_coords_start[,2]
  x1 <- ref_coords[,1] + (x2[1] - ref_coords[i,1])
  y1 <- ref_coords[,2] + (y2[1] - ref_coords[i,2])
  ## Calculate all pairwise distances between reference and field data:
  dists <- dist( cbind( c(x1, x2), c(y1, y2) ), "euclidean")
  ## Only take distances for the sampled data:
  dists <- as.matrix(dists)[-1*1:length(x1),]
  ## Calculate the number of distances within the minimum
  ## distance threshold minus the diagonal portion:
  win_thresh <- sum(rowSums(dists < min_dist) > 1)
  ## If we have more "matches" than our best then calculate a new
  ## dx and dy:
  if (win_thresh > win_thresh_old) {
    win_thresh_old <- win_thresh
    dx <- (x2[1] - ref_coords[i,1])
    dy <- (y2[1] - ref_coords[i,2])
  }
}
## Plot estimated correction (your delta x and delta y) calculated
## from the brute-force calculation of shifts:
points(
  x = ref_coords[,1] + dx,
  y = ref_coords[,2] + dy,
  cex = 1.5, col = "red"
)
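To recover the corrected positions themselves (the blue dots in the original figure), the estimated delta can be subtracted from the start coordinates. A minimal sketch, assuming the dx and dy found by the brute-force loop above:
## shift the start coordinates onto the reference frame using the
## estimated translation (dx, dy) from the brute-force search above
my_coords_est <- cbind(my_coords_start[,1] - dx,
                       my_coords_start[,2] - dy)
points(my_coords_est, pch = 3, col = "darkgreen")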
I'd be very interested to know if anyone can solve this in a more efficient manner for the number of points in the test data, possibly using a statistical or optimization algorithm.

How to count line segment occurrences by pixel in R?

I am trying to convey the concentration of lines in 2D space by showing the number of crossings through each pixel in a grid. I am picturing something similar to a density plot, but with more intuitive units. I was drawn to the spatstat package and its line segment class (psp) as it allows you to define line segments by their end points and incorporate the entire line in calculations. However, I'm struggling to find the right combination of functions to tally these counts and would appreciate any suggestions.
As shown in the example below with 50 lines, the density function produces values in (0, 140), the pixellate function tallies the total line length through each pixel and takes values in (0, 0.04), and as.mask produces a binary indicator of whether a line went through each pixel. I'm hoping to see something where the scale takes integer values, say 0..10.
require(spatstat)
set.seed(1234)
numLines = 50
# define line segments
L = psp(runif(numLines),runif(numLines),runif(numLines),runif(numLines), window=owin())
# image with 2-dimensional kernel density estimate
D = density.psp(L, sigma=0.03)
# image with total length of lines through each pixel
P = pixellate.psp(L)
# binary mask giving whether a line went through a pixel
B = as.mask.psp(L)
par(mfrow=c(2,2), mar=c(2,2,2,2))
plot(L, main="L")
plot(D, main="density.psp(L)")
plot(P, main="pixellate.psp(L)")
plot(B, main="as.mask.psp(L)")
The pixellate.psp function allows you to optionally specify weights to use in the calculation. I considered trying to manipulate this to normalize the pixels to take a count of one for each crossing, but the weight is applied uniquely to each line (and not specific to the line/pixel pair). I also considered calculating a binary mask for each line and adding the results, but it seems like there should be an easier way. I know that you can sample points along a line, and then do a count of the points by pixel. However, I am concerned about getting the sampling right so that there is one and only one point per line crossing of a pixel.
Is there a straightforward way to do this in R? Otherwise, would this be an appropriate suggestion for a future package enhancement? Is this more easily accomplished in another language such as Python or MATLAB?
The example above and my testing has been with spatstat 1.40-0, R 3.1.2, on x86_64-w64-mingw32.
You are absolutely right that this is something to put in as a future enhancement. It will be done in one of the next versions of spatstat. It will probably be an option in pixellate.psp to count the number of crossing lines rather than measure the total length.
For now you have to do something a bit convoluted, e.g.:
require(spatstat)
set.seed(1234)
numLines = 50
# define line segments
L <- psp(runif(numLines),runif(numLines),runif(numLines),runif(numLines), window=owin())
# split into individual lines and use as.mask.psp on each
masklist <- lapply(1:nsegments(L), function(i) as.mask.psp(L[i]))
# convert to 0-1 image for easy addition
imlist <- lapply(masklist, as.im.owin, na.replace = 0)
rslt <- Reduce("+", imlist)
# plot
plot(rslt, main = "")
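As a quick sanity check (a hypothetical follow-up, assuming the rslt image from above), the pixel values should now be integer crossing counts:
## tabulate the integer crossing counts per pixel
counts <- as.vector(as.matrix(rslt))
table(counts)
## the maximum number of lines crossing any single pixel
max(counts, na.rm = TRUE)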

Adaptive time/position series filter in R...?

I'm trying to filter a time/position data series to produce a smoothed plot. I am measuring depth vs time (mechanical system) where the velocity is changing. I calculate velocity from the measured depth/time values and can plot velocity vs. depth, but at low speeds, the noise is excessive (for various reasons). The trend at low speeds is correct, but I'd like to be able to apply a filter that will use an adaptive smoothing routine, i.e. for low speeds (where I have many data points) I need to use a larger smoothing window, and for high speeds (few data points) I need to use a smaller window.
I've looked around a bit and figured out a solution using rollapply(), but I was wondering if there are other approaches. In particular, I'm not clear on how to "vectorise" an operation like this. I'm a relatively new coder, so I'm sorry if my code is a bit amateurish. My solution is below:
library(zoo) ## for rollapply()
adapt <- function(x,wmin,wmax) {
  # adapt takes a vector of calculated velocities (x), a minimum window size (wmin),
  # and a maximum window size (wmax). It returns a vector of filtered velocities.
  x <- ifelse(is.na(x),0,x)                # replace NA values
  x <- ifelse(is.infinite(1/x),1/wmax,x)   # replace zero values (infinite 1/x)
  x <- runmed(x,11)                        # smooth raw velocities using an 11-point window
  wins <- ceiling(ifelse(is.infinite(1/x),wmin,1+wmax/(1+x)^15)) # set window widths
  wins <- ifelse(wins<=wmin,wmin,wins)     # enforce minimum window
  wins <- ifelse(wins>wmax,wmax,wins)      # enforce maximum window
  out <- rollapply(x,width=wins,median)    # apply the running median with per-point widths
  out[length(x)] <- 0                      # set last value to zero
  return(out)
}
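One alternative to the rollapply() route, shown here only as a minimal sketch under the same assumptions (same inputs and the same window-width rule, with the exponent 15 taken from the function above), is an explicit per-point running median; it avoids the zoo dependency at the cost of an R-level loop:
## hypothetical alternative: per-point adaptive median without zoo,
## reusing the window-width rule from adapt() above
adapt2 <- function(x, wmin, wmax) {
  x[is.na(x)] <- 0                          # replace NA values
  x[x == 0] <- 1/wmax                       # avoid infinite 1/x, as in adapt()
  x <- runmed(x, 11)                        # pre-smooth, as in adapt()
  wins <- pmin(pmax(ceiling(1 + wmax/(1 + x)^15), wmin), wmax)
  half <- wins %/% 2
  vapply(seq_along(x), function(i) {
    idx <- max(1, i - half[i]):min(length(x), i + half[i])
    median(x[idx])                          # window clamped at the series ends
  }, numeric(1))
}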
