I know the use of for-loops in R is often unnecessary, because R supports vectorization. I want to program as efficiently as possible, hence my question about the following example code.
I have a hexagonal grid, and I am calculating the cell numbers, which in my example run from 1 to 225, starting in the lower-left corner and counting to the right. Cell 16 therefore sits slightly offset to the right, just above cell 1.
see snapshot:
Therefore, given the Y coordinate, the X coordinate has to be either rounded or taken as the ceiling. In my application the user points out cells; I save these clicks and then loop over them to determine which cells were chosen, as follows, with toy input values for Xcells and Ycells:
gridsize <- 15
Xcells <-c(0.8066765, 1.8209879, 3.0526517, 0.5893240)
Ycells <-c(0.4577802, 0.4577802, 0.5302311, 1.5445425)
clicks <- length(Xcells)
cells <-vector('list', clicks)
This corresponds to cells 1, 2, 3 and 16: four clicks. Now to determine the cell numbers:
Y <- ceiling(Ycells)
X <- numeric(clicks)  # pre-allocate X before assigning X[i]
for (i in 1:clicks) {
  if (Y[i] %% 2 == 1) {
    X[i] <- round(Xcells[i])
  } else {
    X[i] <- ceiling(Xcells[i])
  }
  # determine the cell number and store it in the predefined list
  cells[[i]] <- (Y[i] - 1) * gridsize + X[i]
}
So if Y is odd, X has to be rounded, and if Y is even, it has to be the ceiling value. Is there a way to do this without the for loop, using vectorization?
You can vectorize this as follows
(Y - 1) * gridsize + ifelse(Y %% 2 == 1, round(Xcells), ceiling(Xcells))
# [1] 1 2 3 16
(I'm not sure whether pre-calculating round(Xcells) and ceiling(Xcells) would improve this a bit more; you could try.)
Another option (if you want to avoid ifelse) could be
(Y - 1) * gridsize + cbind(ceiling(Xcells), round(Xcells))[cbind(1:length(Xcells), Y %% 2 + 1)]
# [1] 1 2 3 16
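If you want to check whether pre-computing the rounded values helps, a quick comparison along these lines should settle it (a sketch assuming the microbenchmark package is installed; X_round and X_ceil are my own names):
library(microbenchmark)

X_round <- round(Xcells)
X_ceil  <- ceiling(Xcells)

microbenchmark(
  ifelse      = (Y - 1) * gridsize + ifelse(Y %% 2 == 1, round(Xcells), ceiling(Xcells)),
  precomputed = (Y - 1) * gridsize + ifelse(Y %% 2 == 1, X_round, X_ceil),
  indexing    = (Y - 1) * gridsize + cbind(X_ceil, X_round)[cbind(seq_along(Xcells), Y %% 2 + 1)]
)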
Taking the ideas from the following links:
the local minimum between the two peaks
How to explain ...
I am looking for the local minimum or minima, avoiding the use of functions already created for this purpose (local or global max/min).
Our progress:
#DATA
simulate <- function(lambda = 0.3, mu = c(0, 4), sd = c(1, 1), n.obs = 10^5) {
  x1 <- rnorm(n.obs, mu[1], sd[1])
  x2 <- rnorm(n.obs, mu[2], sd[2])
  return(ifelse(runif(n.obs) < lambda, x1, x2))
}
data <- simulate()
hist(data)
d <- density(data)
#
#https://stackoverflow.com/a/25276661/8409550
##Since the x-values are equally spaced, we can estimate dy using diff(d$y)
d$x[which.min(abs(diff(d$y)))]
#With our data we did not obtain the expected value
#
d$x[which(diff(sign(diff(d$y)))>0)+1]#pit
d$x[which(diff(sign(diff(d$y)))<0)+1]#peak
#we check
#1
optimize(approxfun(d$x,d$y),interval=c(0,4))$minimum
optimize(approxfun(d$x,d$y),interval=c(0,4),maximum = TRUE)$maximum
#2
tp <- pastecs::turnpoints(d$y)
summary(tp)
ind <- (1:length(d$y))[pastecs::extract(tp, no.tp = FALSE, peak = TRUE, pit = TRUE)]
d$x[ind[2]]
d$x[ind[1]]
d$x[ind[3]]
My questions and request for help:
Why did the following line fail:
d$x[which.min(abs(diff(d$y)))]
Is it possible to eliminate the need to add one to the index in these lines?
d$x[which(diff(sign(diff(d$y)))>0)+1]#pit
d$x[which(diff(sign(diff(d$y)))<0)+1]#peak
How can I get the optimize function to return the two expected maximum values?
Question 1
The answer to the first question is straightforward. The line d$x[which.min(abs(diff(d$y)))] asks for the x value at which there was the smallest change in y between two consecutive points. The answer is that this happened at the extreme right of the plot, where the density curve is essentially flat:
which.min(abs(diff(d$y)))
#> [1] 511
length(abs(diff(d$y)))
#> [1] 511
This is not only smaller than the difference at your local maxima/minima; it is orders of magnitude smaller. Let's zoom in on the peak value of d$y, including only the peak and the point on each side:
which.max(d$y)
#> [1] 324
plot(d$x[323:325], d$y[323:325])
We can see that the smallest difference here is around 0.00005, or 5e-5, between two consecutive points. Now look at the end of the plot, where it is flattest:
plot(d$x[510:512], d$y[510:512])
The difference here is about 1e-7, which is why this is the flattest point.
Question 2
The answer to your second question is "no, not really". You are taking a double diff, which is two elements shorter than x: if x is n elements long, the double diff corresponds to elements 2 to (n - 1) of x. You can remove the +1 from the index, but you will have an off-by-one error if you do that. If you really wanted to, you could concatenate dummy zeros at each stage of the diff, like this:
d$x[which(c(0, diff(sign(diff(c(d$y, 0))))) > 0)]
which gives the same result, but this is longer, harder to read and harder to justify, so why would you?
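To see the alignment concretely (my own check of the lengths involved):
length(d$y)                    # 512
length(diff(sign(diff(d$y))))  # 510; element i lines up with d$y[i + 1], hence the +1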
Question 3
The answer to the third question is that you could use the "pit" as the dividing point between the minimum and maximum value of d$x to find the two "peaks". If you really want a single call to get both at once, you could do it inside an sapply:
pit <- optimize(approxfun(d$x,d$y),interval=c(0,4))$minimum
peaks <- sapply(1:2, function(i) {
  optimize(approxfun(d$x, d$y),
           interval = c(min(d$x), pit, max(d$x))[i:(i + 1)],
           maximum = TRUE)$maximum
})
pit
#> [1] 1.691798
peaks
#> [1] -0.02249845 3.99552521
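As an optional visual check (my own addition), you can mark these values on the density plot:
plot(d, main = "Density with detected peaks and pit")
abline(v = peaks, col = "blue", lty = 2)  # the two local maxima
abline(v = pit, col = "red", lty = 2)     # the local minimum between them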
I'm trying to find sites to collect snails by using a semi-random selection method. I have set up a 10km2 grid around the region I want to collect snails from, which is broken into 10,000 10m2 cells. I want to randomly sample this grid in R to select 200 field sites.
Randomly sampling a matrix in R is easy enough:
dat <- matrix(1:10000, nrow = 100)
sample(dat, size = 200)
However, I want to bias the sampling to pick cells closer to a single position (representing sites closer to the research station). It's easier to explain this with an image:
The yellow cell with a cross represents the position I want to sample around. The grey shading is the probability of picking a cell in the sample function, with darker cells being more likely to be sampled.
I know I can specify sampling probabilities using the prob argument in sample, but I don't know how to create a 2D probability matrix. Any help would be appreciated; I don't want to do this by hand.
I'm going to do this for a 9 x 6 grid (54 cells), just so it's easier to see what's going on, and sample only 5 of these 54 cells. You can modify this to a 100 x 100 grid where you sample 200 from 10,000 cells.
# Number of rows and columns of the grid (modify these as required)
nx <- 9 # rows
ny <- 6 # columns
# Create coordinate matrix
x <- rep(1:nx, each=ny);x
y <- rep(1:ny, nx);y
xy <- cbind(x, y); xy
# Where is the research station?
Station <- rbind(c(x=3, y=2)) # Change as required
# Determine distance from each grid location to the station
library(SpatialTools)
D <- dist2(xy, Station)
From the help page of dist2
dist2 takes the matrices of coordinates coords1 and coords2 and
returns the inter-Euclidean distances between coordinates.
We can visualize this using the image function.
XY <- matrix(D, nrow = nx, byrow = TRUE)
image(XY) # axes are scaled to 0-1
# Create a scaling function - scales x to lie in [0-1)
scale_prop <- function(x, m = 0)
  (x - min(x)) / (m + max(x) - min(x))
# Add the coordinates to the grid
text(x=scale_prop(xy[,1]), y=scale_prop(xy[,2]), labels=paste(xy[,1],xy[,2],sep=","))
Lighter tones indicate grids closer to the station at (3,2).
# Sampling probabilities are proportional to 1 minus the scaled distance from the
# station (distances are scaled to lie in [0, 1)). Using m = 1 keeps the scaled
# distance below 1, so even the farthest cell has a non-zero probability.
prob <- 1 - scale_prop(D, m=1); range (prob)
# Sample from the grid using given probabilities
sam <- sample(1:nrow(xy), size = 5, prob=prob) # Change size as required.
xy[sam,] # These are the 5 sampled cells (yours will differ, since sampling is random)
x y
[1,] 4 4
[2,] 7 1
[3,] 3 2
[4,] 5 1
[5,] 5 3
To confirm the sample probabilities are correct, you can simulate many samples and see which coordinates were sampled the most.
snail.sam <- function(nsamples) {
  sam <- sample(1:nrow(xy), size = nsamples, prob = prob)
  apply(xy[sam, ], 1, function(x) paste(x[1], x[2], sep = ","))
}
SAMPLES <- replicate(10000, snail.sam(5))
tab <- table(SAMPLES)
cols <- colorRampPalette(c("lightblue", "darkblue"))(max(tab))
barplot(table(SAMPLES), horiz = TRUE, las = 1, cex.names = 0.5,
        col = cols[tab])
If using a 100 x 100 grid and the station is located at coordinates (60,70), then the image would look like this, with the sampled grids shown as black dots:
There is a tendency for the points to be located close to the station, although the sampling variability may make this difficult to see. If you want to give even more weight to grids near the station, you can rescale the probabilities. I think that is fine to do to save travel costs, but these weights need to be incorporated into the analysis when estimating the number of snails in the whole region. Here I've cubed the probabilities just so you can see what happens.
sam <- sample(1:nrow(xy), size = 200, prob=prob^3)
The tendency for the points to be located near the station is now more obvious.
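For completeness, here is a sketch of the full 100 x 100 case described above (station at (60, 70), 200 sampled cells, cubed weights); it simply repeats the earlier steps at the larger grid size:
library(SpatialTools)

nx <- 100
ny <- 100
xy <- cbind(x = rep(1:nx, each = ny), y = rep(1:ny, nx))

Station <- rbind(c(x = 60, y = 70))
D <- dist2(xy, Station)

scale_prop <- function(x, m = 0)
  (x - min(x)) / (m + max(x) - min(x))

prob <- 1 - scale_prop(D, m = 1)
sam <- sample(1:nrow(xy), size = 200, prob = prob^3)

image(matrix(D, nrow = nx, byrow = TRUE))  # distance surface, axes scaled to 0-1
points(scale_prop(xy[, 1])[sam], scale_prop(xy[, 2])[sam], pch = 16, cex = 0.6)  # sampled cells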
There may be a better way than this, but a quick way to do it is to randomly sample the x and y coordinates from a distribution (I used the normal, bell-shaped distribution, but you can use almost any). The trick is to set the mean of the distribution to the position of the research station. You can change the bias towards the research station by changing the standard deviation of the distribution.
Then use the randomly drawn values as the x and y indices to select cells.
dat <- matrix(1:10000, nrow = 100)
#randomly selected a position for the research station
rs <- c(80,30)
# you can change the sd to change the bias
x <- round(rnorm(400,mean = rs[1], sd = 10))
y <- round(rnorm(400, mean = rs[2], sd = 10))
position <- rep(NA, 200)
j <- 1
i <- 1
# Some of the sampled numbers can fall outside the area of interest, so I
# oversampled and then kept only the first 200 positions inside the area.
while (j <= 200) {
  if (x[i] >= 1 & x[i] <= 100 & y[i] >= 1 & y[i] <= 100) {
    position[j] <- dat[x[i], y[i]]
    j <- j + 1
  }
  i <- i + 1
}
Plot the results:
plot(x,y, pch = 19)
points(x =80,y = 30, col = "red", pch = 19) # position of the station
I have data describing the movement of an object in 3D space at regular time intervals. The data looks like this:
Time X Y Z
1 1 1 1
2 2 1 2
3 2 0 1
4 3 2 1
.....
(x, y, z) is the position of the object at time t. I want to plot a 3D graph that shows the complete movement of the object in 3D space, but with a slider or something of that sort so I can select a time range (say 500 to 750) and see the movement within that range. So here we have four dimensions: x, y, z are positions, time is the fourth dimension, and a slider controls which points are plotted. [The Mathematica example below gives a good idea of this.]
To make it clearer: we first draw the complete movement of the object in 3D space from time 1 to N. Then, by moving the slider, we draw the movement of the same object between time stamps t1 and t2. It is also important to display what time the slider is at, as I have to note down some time stamps of interest based on the movement.
I have Googled this, but no example was close enough to what I want. All of them bind the slider to one of the axis variables (say x or y, which might be time), but here it has to be bound to the fourth dimension, time. dygraphs was promising, but I had similar issues as discussed above (and I also didn't find any 3D support).
This Mathematica example is interesting, but I don't have a license for it. It just moves a point along the traced 3D path. That could solve my problem as well, but I would need to know the time-stamp value when I pause it.
A solution in R is good for me because it does not have any licensing issues. Matlab is also fine if it does not need any advanced visualization toolboxes. Or Python.
Thanks in Advance.
This is a raw example that can be customized as desired. It uses manipulate and plot3D
library(manipulate)
library(plot3D)
min_time <- 1
max_time <- 100
time_interval <- min_time:max_time
# Create data frame
DF <- data.frame(t = time_interval)
# Time parametric functions
X <- function(t) {
  return(2 * t)
}
Y <- function(t) {
  return(t ** 2)
}
Z <- function(t) {
  return(10 * cos(t / 100))
}
# Update data frame
DF$x <- sapply(DF$t, X)
DF$y <- sapply(DF$t, Y)
DF$z <- sapply(DF$t, Z)
# Use manipulate with RStudio
manipulate({
  lines3D(x = DF$x, y = DF$y, z = DF$z)
  scatter3D(
    x = DF$x[t],
    y = DF$y[t],
    z = DF$z[t],
    add = TRUE
  )
}, t = slider(min_time, max_time))
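The question also asks to display what time the slider is at. One way to do that (my own tweak; it relies on plot3D passing main through to persp) is to put t in the plot title:
manipulate({
  lines3D(x = DF$x, y = DF$y, z = DF$z, main = paste("t =", t))
  scatter3D(x = DF$x[t], y = DF$y[t], z = DF$z[t], add = TRUE)
}, t = slider(min_time, max_time))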
I'm writing a Python script to generate problems for mental arithmetic drills. The addition and multiplication ones were easy, but I'm running into trouble trying to generate unbiased problems for the subtraction ones.
I want to be able to specify a minimum and maximum value that the minuend (first number) will be -- e.g., for two-digit subtraction it should be between 20 and 99. The subtrahend should also have a range option (11-99, say). The answer needs to be positive and preferably also bounded by a minimum of, say, 10 for this situation.
So:
20 < Minuend < 99
11 < Subtrahend < 99
Answer = Minuend - Subtrahend
Answer >= 10
All the numeric values should be used as variables, of course.
I have these conditions met as follows:
from random import randint

ansMin, ansMax = 10, 99
subtrahendMin, minuendMax = 11, 99
# the other max and min did not seem to be necessary here,
# and two ranges was the way I had the program set up
answer = randint(ansMin, ansMax)
subtrahend = randint(subtrahendMin, minuendMax - answer)
minuend = answer + subtrahend  # rearranged subtraction equation
The problem here is that the minuend values wind up being nearly all over 50, because the answer and subtrahend are generated first and added together; only the cases where both were in the bottom 25% of their ranges give a result below 50. (Edit: that's not strictly true -- for instance, bottom 1% plus bottom 49% would also work, and percentages are a bad way of describing it anyway, but I think the idea is clear.)
I also considered generating the minuend and subtrahend values entirely at random and then throwing out the pair if it didn't meet the criteria (namely, that the minuend exceed the subtrahend by at least ansMin and that both be within the ranges listed above), but I figured that would result in a similar bias.
I don't care about it being perfectly even, but this is too far off. I'd like the minuend values to be fully random across the allowable range, and the subtrahend values random across the range allowed by the minuends (if I'm thinking about it right, this will be biased in favor of lower ones). I don't think I really care about the distribution of the answers (as long as it's not ridiculously biased). Is there a better way to calculate this?
There are several ways of defining what "not biased" means in this case. I assume that what you are looking for is that every possible subtraction problem from the allowed problem space is chosen with equal probability. Quick and dirty approach:
Pick random x in [x_min, x_max]
Pick random y in [y_min, y_max]
If x - y < answer_min, discard **both** x and y and start over.
Note the bold part. If you discard only y and keep x, your problems will have a uniform distribution in x, not in the entire problem space. You need to ensure that for every valid x there is at least one valid y - this is not the case for your original choice of ranges, as we'll see later.
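A minimal sketch of that quick-and-dirty approach (the function name and the y_max default are my own; adjust the ranges to taste):
import random

def random_problem(x_min=21, x_max=99, y_min=11, y_max=99, answer_min=10):
    while True:
        x = random.randint(x_min, x_max)
        y = random.randint(y_min, y_max)
        if x - y >= answer_min:   # otherwise discard both x and y and start over
            return x, y

x, y = random_problem()
print("%d - %d = ?" % (x, y))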
Now the long, proper approach. First we need to find out the actual size of the problem space.
The allowed set of subtrahends is determined by the minuend:
x in [21, 99]
y in [11, x-10]
or using symbolic constants:
x in [x_min, x_max]
y in [y_min, x - answer_min]
We can rewrite that as
x in [21, 99]
y = 11 + a
a in [0, x-21]
or again using symbolic constants
x in [x_min, x_max]
y = y_min + a
a in [0, x - (answer_min + y_min)].
From this, we see that valid problems exist only for x >= (answer_min + y_min), and for a given x there are x - (answer_min + y_min) + 1 possible subtrahends.
Now we assume that x_max does not impose any further constraints, i.e. that x_max - answer_min <= y_max (true here: 99 - 10 <= 99):
x in [21, 99], number of problems:
(99 - 21 + 1) * (1 + 78 + 1) / 2 = 3160
x in [x_min, x_max], number of problems:
(x_max - x_min + 1) * (1 + x_max - (answer_min + y_min) + 1) / 2
The above is obtained using the formula for the sum of an arithmetic sequence. Therefore, you need to pick a random number in the range [1, 3160]. To transform this number into a subtraction problem, we need to define a mapping between the problem space and the integers. An example mapping is as follows:
1 <=> x = 21, y = 11
2 <=> x = 22, y = 12
3 <=> x = 22, y = 11
4 <=> x = 23, y = 13
5 <=> x = 23, y = 12
6 <=> x = 23, y = 11
and so on. Notice that x jumps by 1 when a triangular number is exceeded. To compute x and y from the random number r, find the lowest triangular number t greater than or equal to r, preferably by searching in a precomputed table; write this number as q*(q+1)/2. Then x = x_min + q-1 and y = y_min + t - r.
Complete program:
import random

x_min, x_max = (21, 99)
y_min = 11
answer_min = 10

triangles = [(q * (q + 1) // 2, q) for q in range(1, x_max - x_min + 2)]
upper = (x_max - x_min + 1) * (1 + x_max - (answer_min + y_min) + 1) // 2

for i in range(0, 20):
    r = 1 + random.randrange(0, upper)
    (t, q) = next(a for a in triangles if a[0] >= r)
    x = x_min + q - 1
    y = y_min + t - r
    print("%d - %d = ?" % (x, y))
Note that for a majority of problems (around 75%), x will be above 60. This is correct, because for low values of the minuend there are fewer allowed values of the subtrahend.
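If you want to verify that figure empirically, a quick check along these lines (my own sketch, reusing the names from the complete program above) should print roughly 0.74:
count_high = 0
trials = 100000
for _ in range(trials):
    r = 1 + random.randrange(0, upper)
    t, q = next(a for a in triangles if a[0] >= r)
    if x_min + q - 1 > 60:
        count_high += 1
print(count_high / trials)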
I can see a couple of issues with your starting values. If you want the answer to always be greater than 10, then you need to either increase minuendMin or decrease subtrahendMin, because 20 - 11 is less than 10... Also, you have defined the answer min and max as 3 and 9, which means the answer will never be more than 10...
Apart from that I managed to get a nice even distribution of values by selecting the minuend value first, then selecting the subtrahend value based on it and the answerMin:
from random import randint

ansMin = 10
minuendMin, minuendMax = 20, 99
subtrahendMin = 9

minuend = randint(minuendMin, minuendMax)
subtrahend = randint(subtrahendMin, minuend - ansMin)
answer = minuend - subtrahend
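Wrapped up as a function (my own packaging of the same logic), you can generate a batch of problems:
def make_problem():
    minuend = randint(minuendMin, minuendMax)
    subtrahend = randint(subtrahendMin, minuend - ansMin)
    return minuend, subtrahend, minuend - subtrahend

for _ in range(5):
    m, s, a = make_problem()
    print("%d - %d = %d" % (m, s, a))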
You say you've already got addition working properly. Assuming you have similar restrictions for the addends/sum, you could rearrange the terms so that:
minuend <=> sum
subtrahend <=> first addend
answer <=> second addend
A similar mapping can be made for multiplication/division, if required.
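For example (a sketch only, assuming randint from the random module and an addition generator with similar ranges), the rearrangement could look like this:
addend1 = randint(11, 89)            # plays the role of the subtrahend
addend2 = randint(10, 99 - addend1)  # plays the role of the answer
total = addend1 + addend2            # plays the role of the minuend

print("%d - %d = ?" % (total, addend1))  # the answer is addend2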
I'm trying to calculate all the possible values of a grid size (x by y) that lead to a given total number of cells; for example, a 2x2 grid has 4 cells. I want y to be half of x, and the total to be, for example, 4000. So I guess I want R to calculate all the possible positive integer values of x and y where
function(total) {
  x * y = total
  x / y = 2
  x != total
  y != total
}
I suppose one way to get positive integers and to allow for different solutions would be to let the total be up to 10% larger than its original value (but not smaller; I need the grid to be at least as big as the total value I give). In that case the function could have two arguments, tot (e.g. 4000) and tolerance (e.g. 10%). The total (as used in the sketch function above) then has to be between tot and (tot + tolerance*tot).
I have several cell counts, so 4000 is only one example. I'm trying to build a quick function which returns positive integer values only, as a matrix of Xs and Ys.
Any ideas?
Many thanks
What about this:
possible.sizes <- function(total, tolerance) {
  min.total <- total
  max.total <- total * (1 + tolerance)
  # since x = 2*y, the number of cells is x*y = 2*y^2,
  # which must lie between min.total and max.total
  min.y <- ceiling(sqrt(min.total / 2))
  max.y <- floor(sqrt(max.total / 2))
  if (max.y < min.y)
    return(data.frame(x = numeric(0), y = numeric(0)))
  y <- seq(min.y, max.y)
  x <- 2 * y
  return(data.frame(x = x, y = y))
}
possible.sizes(4000, 0.1)
# x y
# 1 90 45
# 2 92 46
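For another total, e.g. 10,000 cells with the same 10% tolerance, the integer solutions are y = 71..74 (since 2*71^2 = 10082 and 2*74^2 = 10952 both fall within [10000, 11000]):
possible.sizes(10000, 0.1)
#     x  y
# 1 142 71
# 2 144 72
# 3 146 73
# 4 148 74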