Arc length between points on Archimedean spiral using MATLAB or R

I need to derive xy values and calculate the arc length between each pair of consecutive xy values, so a length value for every value in i generated by the code below (excluding the origin). The points follow an Archimedean spiral path. I don't have MATLAB and am using R, but the closest I've found that I can interpret is a MATLAB example found here, with credit to Jos. Below is a modified version of the MATLAB script to generate the xy data:
r = 938; %outer radius
a = 0; %inner radius
b = 7; %increment per rev
n = (r - a)./(b); %number of revolutions
th = 2*n*pi; %angle
i = linspace(0,n,n*1000);
x = (a+b*i).* cos(2*pi*i);
y = (a+b*i).* sin(2*pi*i);
and the R equivalent:
r <- 938 # outer radius
a <- 0 # inner radius
b <- 7 # increment per revolution
n <- (r - a)/b # number of revolutions
th <- 2*n*pi # angle
i <- seq(0, n, length.out = n*1000) # number of points per revolution
x <- (a+b*i) * cos(2*pi*i)
y <- (a+b*i) * sin(2*pi*i)
My assumption is that the easiest way to derive the arc length between every point is to coerce i, x, and y into a MATLAB table (a data frame in R). The closest I've found for calculating arc length is this formula for the total length. I'm unable to interpret math notation, so I am not sure how to implement it or how to modify it to calculate the arc length between every point. Using the example of the first spiral in the link above for calculating total length, I tried:
sqrt((5 + 0.1289155 * 47.12389)^2 + (0.1289155)^2) * 47.12389
The link above says the result should be 378.8, but my attempt returns 521.9324. So, in sum: how is the arc length between points derived in MATLAB or R?

The exact formula for the length, with your notations a (start radius), r (end radius) and b (increment per revolution), reduces to
L = (A*t*sqrt(1 + A^2*t^2) - log(sqrt(1 + A^2*t^2) - A*t)) / (2*A), evaluated from t = a to t = r, where A = 2*pi/b
(note that in order to preserve the OP's notation, the same symbol r carries two different meanings, which might be frowned upon by some).
That formula can be implemented this way:
r <- 938 # outer radius
a <- 0 # inner radius
b <- 7 # increment per revolution
A <- 2 * pi / b
fa <- sqrt(1 + A^2 * a^2)
fr <- sqrt(1 + A^2 * r^2)
int_r <- (A*r*fr - log(-(A*r)+fr))/(2*A)
int_a <- (A*a*fa - log(-(A*a)+fa))/(2*A)
spiralLen <- int_r - int_a #exact formula
394877.5
You can also use numerical (approximate) integration, with R's stats function integrate, to evaluate the integral:
integrate(function(r){sqrt(4*pi^2*r^2/b^2+1)}, a, r)
394877.3 with absolute error < 5.8
Another method gives a rather rough approximation, but is a very good verification because it doesn't rely on any theoretical considerations; it just takes the data you generated and sums the lengths of the segments between all consecutive points:
dx <- diff(x)  # differences between consecutive x values
dy <- diff(y)  # differences between consecutive y values
len_approx <- sum(sqrt(dx^2 + dy^2))  # sum of the segment lengths
394876.8
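Since the question asks for the arc length between every pair of consecutive points, the same antiderivative can be evaluated at every radius and differenced. A minimal sketch reusing A, a, b and i from the code above (arc_len is an ad-hoc helper name, not an existing function):
arc_len <- function(t) {  # antiderivative of sqrt(1 + A^2*t^2)
  ft <- sqrt(1 + A^2 * t^2)
  (A * t * ft - log(ft - A * t)) / (2 * A)
}
radii <- a + b * i               # radius at each generated point
seg_len <- diff(arc_len(radii))  # exact arc length between consecutive points
sum(seg_len)                     # total length, ~394877.5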
As for plotting in R: since you already have a set of points, a very basic application of the plot function does the job:
plot(x, y, type="l")

Related

How to make use of rollapply or filter functions in R to create the moving average of a specific size in a 2D matrix

Given a matrix padded for window size k and ready to be smoothed with a moving average, I want to know whether filter, rollapply, or other R functions I am not aware of can be used to compute the moving average of a submatrix. Looking at the R manuals, I saw they have been used for moving averages in 1D, but I would like to know whether they can be used for moving averages in 2D as well.
mat.pad <- function(X, k){
  dims <- dim(X)
  n <- dims[1]
  m <- dims[2]
  pad.X <- matrix(0, n + 2 * k, m + 2 * k)
  pad.X[(k + 1):(n + k), (k + 1):(m + k)] <- X
  return(pad.X)
}
If you are asking if a moving average can be applied to a multiple dimensional object, the answer is yes.
Example
library(zoo)
a <- 1:10
b <- 11:20
c <- cbind(a, b)
# By default rollapply works column-wise (by.column = TRUE),
# so each column gets its own width-3 moving average.
rollapply(c, FUN = mean, width = 3)
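For a true 2D window (rather than a column-wise one), one option is to slide a (2k+1) x (2k+1) window over the padded matrix from the question. A minimal sketch, assuming mat.pad() as defined above (ma2d is an ad-hoc name, not a package function):
ma2d <- function(X, k) {
  P <- mat.pad(X, k)  # zero-padded copy of X
  n <- nrow(X); m <- ncol(X)
  out <- matrix(NA_real_, n, m)
  for (i in 1:n) {
    for (j in 1:m) {
      # window centered on element (i, j) of the original matrix
      out[i, j] <- mean(P[i:(i + 2 * k), j:(j + 2 * k)])
    }
  }
  out
}
Note that the zero padding biases the averages near the borders toward zero, which may or may not be what you want.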

Standard form of ellipse

I'm getting ellipses as level curves of a fit dataset. After selecting a particular ellipse, I would like to report it as a center point, semi-major and minor axes lengths, and a rotation angle. In other words, I would like to transform (using mathematica) my ellipse equation from the form:
Ax^2 + By^2 + Cx + Dy + Exy + F = 0
to a more standard form:
((xCos[alpha] - ySin[alpha] - h)^2)/(r^2) + ((xSin[alpha] + yCos[alpha] - k)^2)/(s^2) = 1
where (h,k) is the center, alpha is the rotation angle, and r and s are the semi-axes
The actual equation I'm attempting to transform is
1.68052 x - 9.83173 x^2 + 4.89519 y - 1.19133 x y - 9.70891 y^2 + 6.09234 = 0
I know the center point is the fitted maximum, which is:
{0.0704526, 0.247775}
I posted a version of this answer on Math SE since it benefits a lot from proper mathematical typesetting. The example there is simpler as well, and there are some extra details.
The following description follows the German Wikipedia article Hauptachsentransformation. Its English counterpart, according to inter-wiki links, is principal component analysis. I find the former article a lot more geometric than the latter. The latter has a strong focus on statistical data, though, so it might be useful for you nevertheless.
Rotation
Your ellipse is described as
         [ A    E/2 ]   [x]            [x]
[x  y] * [ E/2  B   ] * [y] + [C  D] * [y] + F = 0
First you identify the rotation. You do this by identifying the eigenvalues and eigenvectors of this 2×2 matrix. These eigenvectors will form an orthogonal matrix describing your rotation: its entries are the Sin[alpha] and Cos[alpha] from your formula.
With your numbers, you get
[ A    E/2 ]   [ -0.74248   0.66987 ]   [ -10.369    0       ]   [ -0.74248  -0.66987 ]
[ E/2  B   ] = [ -0.66987  -0.74248 ] * [   0       -9.1715  ] * [  0.66987  -0.74248 ]
The first of the three factors is the matrix formed by the eigenvectors, each normalized to unit length. The central matrix has the eigenvalues on the diagonal, and the last one is the transpose of the first. If you multiply the vector (x,y) with that last matrix, then you will change the coordinate system in such a way that the mixed term vanishes, i.e. the x and y axes are parallel to the main axes of your ellipse. This is just what happens in your desired formula, so now you know that
Cos[alpha] = -0.74248 (-0.742479398678 with more accuracy)
Sin[alpha] = 0.66987 ( 0.669868899516)
Translation
If you multiply the row vector [C D] in the above formula with the first of the three matrices, then this effect will exactly cancel the multiplication of (x, y) by the third matrix. Therefore in that changed coordinate system, you use the central diagonal matrix for the quadratic term, and this product for the linear term.
                       [ -0.74248   0.66987 ]
[ 1.68052  4.89519 ] * [ -0.66987  -0.74248 ] = [ -4.5269  -2.5089 ]
Now you have to complete the square independently for x and y, and you end up with a form from which you can read the center coordinates.
-10.369x² -4.5269x = -10.369(x + 0.21829)² + 0.49408
-9.1715y² -2.5089y = -9.1715(y + 0.13677)² + 0.17157
h = -0.21829 (-0.218286476695)
k = -0.13677 (-0.136774259156)
Note that h and k describe the center in the already rotated coordinate system; to obtain the original center you'd multiply again with the first matrix:
[ -0.74248   0.66987 ]   [ -0.21829 ]   [ 0.07045 ]
[ -0.66987  -0.74248 ] * [ -0.13677 ] = [ 0.24778 ]
which fits your description.
Scaling
The completed squares above contributed some more terms to the constant factor F:
6.09234 + 0.49408 + 0.17157 = 6.7580
Now you move this to the right side of the equation, then divide the whole equation by this number so that you get the = 1 from your desired form. Then you can deduce the radii.
1/r² = -10.369 / -6.7580 = 1.5344
1/s² = -9.1715 / -6.7580 = 1.3571
r = 0.80730 (0.807304599162099)
s = 0.85840 (0.858398019487315)
Verifying the result
Now let's check that we didn't make any mistakes. With the parameters we found, you can piece together the equation
((-0.74248*x - 0.66987*y + 0.21829)^2)/(0.80730^2)
+ (( 0.66987*x - 0.74248*y + 0.13677)^2)/(0.85840^2) = 1
Move the 1 to the left side, and multiply by -6.7580, and you should end up with the original equation. Expanding that (with the extra precision versions printed in parentheses), you'll get
-9.8317300000 x^2
-1.1913300000 x y
+1.6805200000 x
-9.7089100000 y^2
+4.8951900000 y
+6.0923400000
which is a perfect match for your input.
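For readers working in R rather than Mathematica, the whole procedure above condenses to a few lines around eigen(). A minimal sketch, assuming the question's coefficients; the eigenvalue ordering and eigenvector signs chosen by eigen() may differ from the worked numbers above, but the center and radii come out the same:
# conic: A*x^2 + B*y^2 + C*x + D*y + E*x*y + F = 0
cf <- c(A = -9.83173, B = -9.70891, C = 1.68052,
        D = 4.89519, E = -1.19133, F = 6.09234)
M <- matrix(c(cf["A"], cf["E"]/2, cf["E"]/2, cf["B"]), 2, 2)
ev <- eigen(M)  # vectors = rotation, values = new quadratic coefficients
lin <- as.vector(c(cf["C"], cf["D"]) %*% ev$vectors)  # rotated linear term
hk <- -lin / (2 * ev$values)            # center (h, k) in rotated coordinates
Fc <- cf["F"] - sum(ev$values * hk^2)   # constant after completing the squares
radii <- sqrt(-Fc / ev$values)          # semi-axes
center <- as.vector(ev$vectors %*% hk)  # center in original coordinates
alpha <- atan2(ev$vectors[2, 1], ev$vectors[1, 1])  # rotation angle (sign convention may vary)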
If you have h and k, you can use Lagrange Multipliers to maximize / minimize the function (x-h)^2+(y-k)^2 subject to the constraint of being on the ellipse. The maximum distance will be the major radius, the minimum distance the minor radius, and alpha will be how much they are rotated from horizontal.

Greatest distance between set of longitude/latitude points

I have a set of lng/lat coordinates. What would be an efficient method of calculating the greatest distance between any two points in the set (the "maximum diameter" if you will)?
A naive way is to use the Haversine formula to calculate the distance between each pair of points and take the maximum, but this obviously doesn't scale well.
Edit: the points are located in a sufficiently small area: I'm measuring the area in which a person carrying a mobile device was active in the course of a single day.
Theorem #1: The ordering of any two great circle distances along the surface of the earth is the same as the ordering as the straight line distance between the points where you tunnel through the earth.
Hence turn your lat-long into x,y,z based either on a spherical earth of arbitrary radius or an ellipsoid of given shape parameters. That's a couple of sines/cosines per point (not per pair of points).
Now you have a standard 3-d problem that doesn't rely on computing Haversine distances. The distance between points is just Euclidean (Pythagoras in 3d). Needs a square-root and some squares, and you can leave out the square root if you only care about comparisons.
There may be fancy spatial tree data structures to help with this. Or algorithms such as http://www.tcs.fudan.edu.cn/rudolf/Courses/Algorithms/Alg_ss_07w/Webprojects/Qinbo_diameter/2d_alg.htm (click 'Next' for 3d methods). Or C++ code here: http://valis.cs.uiuc.edu/~sariel/papers/00/diameter/diam_prog.html
Once you've found your maximum distance pair, you can use the Haversine formula to get the distance along the surface for that pair.
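A minimal R sketch of that tunnel-distance idea, assuming lng and lat vectors in degrees and a unit sphere; dist() still computes all O(n^2) pairs, but each pair is a cheap Euclidean chord:
rad <- pi / 180
xyz <- cbind(cos(lat * rad) * cos(lng * rad),  # x
             cos(lat * rad) * sin(lng * rad),  # y
             sin(lat * rad))                   # z
D <- as.matrix(dist(xyz))             # straight-line (chord) distances
ij <- arrayInd(which.max(D), dim(D))  # indices of the farthest pair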
I think that the following could be a useful approximation, which scales linearly instead of quadratically with the number of points, and is quite easy to implement:
calculate the center of mass M of the points
find the point P0 that has the maximum distance to M
find the point P1 that has the maximum distance to P0
approximate the maximum diameter with the distance between P0 and P1
This can be generalized by repeating step 3 N times, taking the distance between P(N-1) and P(N).
Step 1 can be carried out efficiently by approximating M as the average of longitudes and latitudes, which is OK when distances are "small" and the poles are sufficiently far away. The other steps could be carried out using the exact distance formula, but they are much faster if the points' coordinates can be approximated as lying on a plane. Once the "distant pair" (hopefully the pair with the maximum distance) has been found, its distance can be re-calculated with the exact formula.
An example of approximation could be the following: if φ(M) and λ(M) are latitude and longitude of the center of mass calculated as Σφ(P)/n and Σλ(P)/n,
x(P) = (λ(P) - λ(M) + C) cos(φ(P))
y(P) = φ(P) - φ(M) [ this is only for clarity, it can also simply be y(P) = φ(P) ]
where C is usually 0, but can be ± 360° if the set of points crosses the λ=±180° line. To find the maximum distance you simply have to find
max((x(P(N)) - x(P(N-1)))² + (y(P(N)) - y(P(N-1)))²)
(you don't need the square root because it is monotonic)
The same coordinate transformation could be used to repeat step 1 (in the new coordinate system) in order to have a better starting point. I suspect that if some conditions are met, the above steps (without repeating step 3) always lead to the "true distant pair" (my terminology). If I only knew which conditions...
EDIT:
I hate building on others' solutions, but someone will have to.
Still keeping the above 4 steps, with the optional (but probably beneficial, depending on the typical distribution of points) repetition of step 3, and following Spacedman's solution, doing the calculations in 3D overcomes the limitations of closeness and distance from the poles:
x(P) = sin(φ(P))
y(P) = cos(φ(P)) sin(λ(P))
z(P) = cos(φ(P)) cos(λ(P))
(the only approximation is that this holds only for a perfect sphere)
The center of mass is given by x(M) = Σx(P)/n, etc.,
and the maximum one has to look for is
max((x(P(N)) - x(P(N-1)))² + (y(P(N)) - y(P(N-1)))² + (z(P(N)) - z(P(N-1)))²)
So: you first transform spherical to cartesian coordinates, then start from the center of mass, to find, in at least two steps (steps 2 and 3), the farthest point from the preceding point. You could repeat step 3 as long as the distance increases, perhaps with a maximum number of repetitions, but this won't take you away from a local maximum. Starting from the center of mass is not of much help, either, if the points are spread all over the Earth.
EDIT 2:
I learned enough R to write down the core of the algorithm (a nice language for data analysis!).
For the plane approximation, ignoring the problem around the λ=±180° line:
# input: lng, lat (vectors)
rad = pi / 180;
x = (lng - mean(lng)) * cos(lat * rad)
y = (lat - mean(lat))
i = which.max((x - mean(x))^2 + (y )^2)
j = which.max((x - x[i] )^2 + (y - y[i])^2)
# output: i, j (indices)
On my PC it takes less than a second to find the indices i and j for 1000000 points. The following 3D version is a bit slower, but works for any distribution of points (and does not need to be amended when the λ=±180° line is crossed):
# input: lng, lat
rad = pi / 180
x = sin(lat * rad)
f = cos(lat * rad)
y = sin(lng * rad) * f
z = cos(lng * rad) * f
i = which.max((x - mean(x))^2 + (y - mean(y))^2 + (z - mean(z))^2)
j = which.max((x - x[i] )^2 + (y - y[i] )^2 + (z - z[i] )^2)
k = which.max((x - x[j] )^2 + (y - y[j] )^2 + (z - z[j] )^2) # optional
# output: j, k (or i, j)
The calculation of k can be left out (i.e., the result could be given by i and j), depending on the data and on the requirements. On the other hand, my experiments have shown that calculating a further index is useless.
It should be remembered that, in any case, the distance between the resulting points is an estimate which is a lower bound of the "diameter" of the set, although it very often will be the diameter itself (how often depends on the data.)
EDIT 3:
Unfortunately the relative error of the plane approximation can, in extreme cases, be as much as 1 - 1/√3 ≅ 42.3%, which may be unacceptable, even if very rare. The algorithm can be modified to have an upper bound of approximately 20%, which I derived by compass and straightedge (the analytic solution is cumbersome). The modified algorithm finds a pair of points with a locally maximal distance, then repeats the same steps, but this time starting from the midpoint of the first pair, possibly finding a different pair:
# input: lng, lat
rad = pi / 180
x = (lng - mean(lng)) * cos(lat * rad)
y = (lat - mean(lat))
i.n_1 = 1 # n_1: n-1
x.n_1 = mean(x)
y.n_1 = 0 # = mean(y)
s.n_1 = 0 # s: square of distance
repeat {
  s = (x - x.n_1)^2 + (y - y.n_1)^2
  i.n = which.max(s)
  x.n = x[i.n]
  y.n = y[i.n]
  s.n = s[i.n]
  if (s.n <= s.n_1) break
  i.n_1 = i.n
  x.n_1 = x.n
  y.n_1 = y.n
  s.n_1 = s.n
}
i.m_1 = 1
x.m_1 = (x.n + x.n_1) / 2
y.m_1 = (y.n + y.n_1) / 2
s.m_1 = 0
m_ok = TRUE
repeat {
  s = (x - x.m_1)^2 + (y - y.m_1)^2
  i.m = which.max(s)
  if (i.m == i.n || i.m == i.n_1) { m_ok = FALSE; break }
  x.m = x[i.m]
  y.m = y[i.m]
  s.m = s[i.m]
  if (s.m <= s.m_1) break
  i.m_1 = i.m
  x.m_1 = x.m
  y.m_1 = y.m
  s.m_1 = s.m
}
if (m_ok && s.m > s.n) {
  i = i.m
  j = i.m_1
} else {
  i = i.n
  j = i.n_1
}
# output: i, j
The 3D algorithm can be modified in a similar way. It is possible (both in the 2D and in the 3D case) to start over once again from the midpoint of the second pair of points (if found). The upper bound in this case is "left as an exercise for the reader" :-).
Comparison of the modified algorithm with the (too) simple algorithm has shown, for normal and for square uniform distributions, a near doubling of processing time and a reduction of the average error from 0.6% to 0.03% (an order of magnitude). A further restart from the midpoint results in a just slightly better average error, but an almost equal maximum error.
EDIT 4:
I have yet to study this article, but it looks like the 20% I found by compass and straightedge is in fact 1 - 1/√(5 - 2√3) ≅ 19.3%.
Here's a naive example that doesn't scale well (as you say), but might help with building a solution in R.
## lonlat points
n <- 100
d <- cbind(runif(n, -180, 180), runif(n, -90, 90))
library(sp)
## distances on WGS84 ellipsoid
x <- spDists(d, longlat = TRUE)
## row, then column index of furthest points
ind <- c(row(x)[which.max(x)], col(x)[which.max(x)])
## maps
library(maptools)
data(wrld_simpl)
plot(as(wrld_simpl, "SpatialLines"), col = "grey")
points(d, pch = 16, cex = 0.5)
## draw the points and a line between on the page
points(d[ind, ], pch = 16)
lines(d[ind, ], lwd = 2)
## for extra credit, draw the great circle on which the furthest points lie
library(geosphere)
lines(greatCircle(d[ind[1], ], d[ind[2], ]), col = "firebrick")
The geosphere package provides more options for distance calculation if that's needed. See ?spDists in sp for the details used here.
You don't tell us whether these points are located in a sufficiently small part of the globe. For truly global sets of points, my first guess would be running a naive O(n^2) algorithm, possibly getting a performance boost with some spatial indexing (R*-trees, octal-trees, etc.). The idea is to pre-generate the list of index pairs in the triangle of the distance matrix (n*(n-1)/2 of them) and feed them in chunks to a fast distance library, to minimize I/O and process churn. Haversine is fine; you could also do it with Vincenty's method (the greatest contributor to running time is the quadratic complexity, not the (fixed number of) iterations in Vincenty's formula). As a side note, you don't actually need R for this stuff.
EDIT #2: The Barequet-Har-Peled algorithm (as pointed at by Spacedman in his reply) has O((n + 1/ε³) log(1/ε)) complexity for ε > 0 and is worth exploring.
For the quasi-planar problem, this is known as "diameter of convex hull" and has three parts:
Computing the convex hull with Graham's scan, which is O(n*log(n)) - in fact, one should first transform the points into a transverse Mercator projection (using the centroid of the points in the data set).
Finding antipodal points by Rotating Calipers algorithm - linear O(n).
Finding the largest distance among all antipodal pairs - linear search, O(n).
The link with pseudo-code and discussion: http://fredfsh.com/2013/05/03/convex-hull-and-its-diameter/
See also the discussion on a related question here: https://gis.stackexchange.com/questions/17358/how-can-i-find-the-farthest-point-from-a-set-of-existing-points
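A minimal R sketch of that pipeline, assuming planar coordinates x and y (e.g., already projected); a brute-force pass over the hull vertices stands in for rotating calipers, since the hull is usually tiny compared to n:
h <- chull(x, y)  # convex hull vertex indices, O(n log n)
D <- outer(x[h], x[h], "-")^2 + outer(y[h], y[h], "-")^2  # squared distances
ij <- h[arrayInd(which.max(D), dim(D))]  # farthest pair among hull vertices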
EDIT: Spacedman's solution pointed me to the Malandain-Boissonnat algorithm (see the paper in PDF here). However, this is worse than or equal to the brute-force naive O(n^2) algorithm.

Calculating the distance between polygon and point in R

I have a not necessarily convex polygon without self-intersections and a point outside this polygon. I'm wondering how to calculate the Euclidean distance most efficiently in a 2-dimensional space. Is there a standard method in R?
My first idea was to calculate the minimum distance to each of the polygon's lines (extended infinitely, so they are lines, not line segments) and then, using the start of each line segment and Pythagoras, compute the distance from the point to each individual line.
Do you know of a package that implements an efficient algorithm?
You could use the rgeos package and the gDistance method. This will require you to prepare your geometries, creating spgeom objects from the data you have (I assume it is a data.frame or something similar). The rgeos documentation is very detailed (see the PDF manual of the package from the CRAN page), this is one relevant example from the gDistance documentation:
pt1 = readWKT("POINT(0.5 0.5)")
pt2 = readWKT("POINT(2 2)")
p1 = readWKT("POLYGON((0 0,1 0,1 1,0 1,0 0))")
p2 = readWKT("POLYGON((2 0,3 1,4 0,2 0))")
gDistance(pt1,pt2)
gDistance(p1,pt1)
gDistance(p1,pt2)
gDistance(p1,p2)
readWKT is included in rgeos as well.
Rgeos is based on the GEOS library, one of the de facto standards in geometric computing. If you don't feel like reinventing the wheel, this is a good way to go.
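As a side note, rgeos has since been retired from CRAN; the sf package exposes the same GEOS functionality. A minimal sketch of the equivalent call, rebuilding the first point and polygon of the example with sf constructors:
library(sf)
pt1 <- st_sfc(st_point(c(0.5, 0.5)))
p1 <- st_sfc(st_polygon(list(rbind(c(0, 0), c(1, 0), c(1, 1), c(0, 1), c(0, 0)))))
st_distance(pt1, p1)  # GEOS-backed distance, like gDistance()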
I decided to return and write up a theoretical solution, just for posterity. This isn't the most concise example, but it is fully transparent for those who want to know how to go about solving a problem like this by hand.
The theoretical algorithm
First, our assumptions.
We assume the polygon's vertices are given in rotational order (clockwise or counter-clockwise) and that the polygon's edges do not intersect. This means we have a normal geometric polygon, and not some strangely-defined vector-graphic shape.
We assume this is a set of Cartesian coordinates, using 'x' and 'y' values that represent location on a 2-dimensional plane.
We assume the point must be outside the internal area of the polygon.
Finally, we assume that the distance desired is the minimum distance between the point and all of the infinite number of points on the perimeter of the polygon.
Now before coding, we should write out in basic terms what we want to do. We can assume that the shortest distance between the polygon and the point outside the polygon will always be one of two things: a vertex of the polygon or a point on a line between two vertices. With this in mind, we do the following steps:
Calculate the distances between all vertices and the target point.
Find the two vertices closest to the target point.
If either:
(a) the two closest vertices are not adjacent or
(b) the inside angle at either of the two vertices is greater than or equal to 90 degrees,
then the closest vertex is the closest point. Calculate the distance between the closest point and the target point.
Otherwise, calculate the height of the triangle formed by the target point and the two closest vertices.
We're basically just looking to see if a vertex is closest to the point or if a point on a line is closest to the point. We have to use a few trig functions to make this work.
The code
To make this work properly, we want to avoid any 'for' loops and want to only use vectorized functions when looking at the entire list of polygon vertices. Luckily, this is pretty easy in R. We accept a data frame with 'x' and 'y' columns for our polygon's vertices, and we accept a vector with one 'x' and 'y' value for the point's location.
get_Point_Dist_from_Polygon <- function(.polygon, .point){
  # Calculate all vertex distances from the target point.
  vertex_Distance <- sqrt((.point[1] - .polygon$x)^2 + (.point[2] - .polygon$y)^2)
  # Select the two closest vertices. order() keeps both indices relative to
  # the original vector; which.min(vertex_Distance[-min_1_Index]) would
  # return an index shifted by the removal.
  min_Order <- order(vertex_Distance)
  min_1_Index <- min_Order[1]
  min_2_Index <- min_Order[2]
  # Calculate lengths of triangle sides made of
  # the target point and two closest points.
  a <- vertex_Distance[min_1_Index]
  b <- vertex_Distance[min_2_Index]
  c <- sqrt(diff(.polygon$x[c(min_1_Index, min_2_Index)])^2 + diff(.polygon$y[c(min_1_Index, min_2_Index)])^2)
  # Vertices are adjacent if their indices differ by 1, or if they are the
  # first and last vertices (the polygon closes back on itself).
  index_Gap <- abs(min_1_Index - min_2_Index)
  if((index_Gap != 1 & index_Gap != nrow(.polygon) - 1) |
     acos((b^2 + c^2 - a^2)/(2*b*c)) >= pi/2 |
     acos((a^2 + c^2 - b^2)/(2*a*c)) >= pi/2
  ){
    # Step 3 of algorithm.
    return(vertex_Distance[min_1_Index])
  } else {
    # Step 4 of algorithm. Heron's formula gives the triangle's area;
    # twice the area divided by the base c is the height.
    return(sqrt((a+b-c) * (a-b+c) * (-a+b+c) * (a+b+c)) / (2 * c))
  }
}
Demo
polygon <- read.table(text="
x, y
0, 1
1, 0.8
2, 1.3
3, 1.4
2.5,0.3
1.5,0.5
0.5,0.1", header=TRUE, sep=",")
point <- c(3.2, 4.1)
get_Point_Dist_from_Polygon(polygon, point)
# 2.707397
Otherwise:
p2poly <- function(pt, poly){
  # Closing the polygon
  if(!identical(poly[1,], poly[nrow(poly),])){ poly <- rbind(poly, poly[1,]) }
  # A simple distance function
  dis <- function(x0, x1, y0, y1){ sqrt((x0-x1)^2 + (y0-y1)^2) }
  d <- numeric(nrow(poly) - 1)  # Your distance vector
  for(i in 1:(nrow(poly)-1)){
    ba <- c((pt[1]-poly[i,1]), (pt[2]-poly[i,2]))               # Vector BA
    bc <- c((poly[i+1,1]-poly[i,1]), (poly[i+1,2]-poly[i,2]))   # Vector BC
    dbc <- dis(poly[i+1,1], poly[i,1], poly[i+1,2], poly[i,2])  # Distance BC
    dp <- (ba[1]*bc[1] + ba[2]*bc[2])/dbc  # Projection of A on BC
    if(dp <= 0){          # If projection falls outside BC on B's side
      d[i] <- dis(pt[1], poly[i,1], pt[2], poly[i,2])
    } else if(dp >= dbc){ # If projection falls outside BC on C's side
      d[i] <- dis(poly[i+1,1], pt[1], poly[i+1,2], pt[2])
    } else {              # If projection falls inside BC
      d[i] <- sqrt(abs((ba[1]^2 + ba[2]^2) - dp^2))
    }
  }
  min(d)
}
Example:
pt <- c(3,2)
triangle <- matrix(c(1,3,2,3,4,2),byrow=T, nrow=3)
p2poly(pt,triangle)
[1] 0.3162278
I used the distm() function from the geosphere package to calculate the distance when the point and the polygon vertices are given as geographic coordinates. You can also easily adapt the previous answer by substituting distm() for dis <- function(x0,x1,y0,y1){sqrt((x0-x1)^2 +(y0-y1)^2)}.
library(geosphere)
algo.p2poly <- function(pt, poly){
  # Close the polygon if necessary
  if(!identical(poly[1,], poly[nrow(poly),])){ poly <- rbind(poly, poly[1,]) }
  n <- nrow(poly) - 1
  pa <- distm(pt, poly[1:n, ])      # distances from the point to each vertex A
  pb <- distm(pt, poly[2:(n+1), ])  # distances from the point to each vertex B
  ab <- diag(distm(poly[1:n, ], poly[2:(n+1), ]))  # edge lengths AB
  p <- (pa + pb + ab) / 2           # semi-perimeter for Heron's formula
  d <- 2 * sqrt(p * (p - pa) * (p - pb) * (p - ab)) / ab  # height over each edge
  cosa <- (pa^2 + ab^2 - pb^2) / (2 * pa * ab)
  cosb <- (pb^2 + ab^2 - pa^2) / (2 * pb * ab)
  d[which(cosa <= 0)] <- pa[which(cosa <= 0)]  # foot of the height beyond A
  d[which(cosb <= 0)] <- pb[which(cosb <= 0)]  # foot of the height beyond B
  return(min(d))
}
Example:
poly <- matrix(c(114.33508, 114.33616,
114.33551, 114.33824,
114.34629, 114.35053,
114.35592, 114.35951,
114.36275, 114.35340,
114.35391, 114.34715,
114.34385, 114.34349,
114.33896, 114.33917,
30.48271, 30.47791,
30.47567, 30.47356,
30.46876, 30.46851,
30.46882, 30.46770,
30.47219, 30.47356,
30.47499, 30.47673,
30.47405, 30.47723,
30.47872, 30.48320),
byrow = F, nrow = 16)
pt1 <- c(114.33508, 30.48271)
pt2 <- c(114.6351, 30.98271)
algo.p2poly(pt1, poly)
algo.p2poly(pt2, poly)
Outcome:
> algo.p2poly(pt1, poly)
[1] 0
> algo.p2poly(pt2, poly)
[1] 62399.81

How to use the sum function in a for loop in R?

We want to calculate the value of an integral under a linear plot.
For a better understanding, look at the photo. Let's say the overall area is 1. We want to find the value of a certain part. For instance, we want to know how much of the overall 100% lies within the 10th and 11th month, if everything refers to months and the maximum A is 24.
We can calculate an integral and should then be able to get the searched area as F(x) - F(x-1).
I thought about the following code:
a <- 24
tab <-matrix(0,a,1)
tab <-cbind(seq(1,a),tab)
tab<-data.frame(tab)
#initialization for first point
tab[1,2] <- (2*tab[1,1] / a - tab[1,1]^2 / a^2)
#for loop for calculation of integral of each point - integral until to the area
for(i in 2:nrow(tab))
{tab[i,2] <- (2*tab[i,1] / a - tab[i,1]^2/ a^2) - sum(tab[1,2]:tab[i-1,2])}
#plotting
plot(tab[,2], type="l")
If you see the plot - it's confusing. Any ideas how to handle this correct?
The base R function integrate() can do this for you:
f <- function(x, A) 2/A - x / A^2
integrate(function(x)f(x, 24), lower=10, upper=11)
0.06510417 with absolute error < 7.2e-16
Using the formulas directly:
a <- 24 # number of divisions
x <- seq(1, a)
y <- x*2/a - x^2/a^2 # F(x)
z <- (x*2/a - x^2/a^2) - ((x-1)*2/a - (x-1)^2/a^2) # F(x) - F(x-1)
Then do the binding afterward.
> sum(z)
[1] 1
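For reference, the probable bug in the original loop is sum(tab[1,2]:tab[i-1,2]): the : operator builds a numeric sequence between two values instead of selecting the column's rows. Assuming the intent was to subtract the sum of the already-computed areas, a minimal fix would be:
for(i in 2:nrow(tab)) {
  # subtract the sum of the previous areas, not a ':' sequence
  tab[i,2] <- (2*tab[i,1] / a - tab[i,1]^2 / a^2) - sum(tab[1:(i-1), 2])
}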
