Creating a loop with combinatorics in R

I'm trying to create a combinatoric function in R, and it depends on what number I set a variable to. This tells me I would have to use a loop. Here is an example of the equation I am trying to create, with the numbers from the example filled in:
The equation changes depending on what number is put in for b. I'm guessing I would need a sum and a loop.
comb = function(n, x) {return(factorial(n) / (factorial(x) * factorial(n-x)))}
a <- 8
b <- 4
c <- 0:b
p <- 0.05
total = function(n) {
return((comb(a, b + c) * comb((n - a), (c - b) * (-1)) / comb(n, a)) - 0.05)
}
I will then find out what n is equal to by setting the expression equal to 0.

You don't need to define comb -- this is provided by the function choose in base R. I think all you need is to combine choose and sum to compute your numerator:
total <- function(n) {
sum(choose(a, b:(2*b)) * choose(n-a, b:0)) / choose(n, a) - p
}
# Plot total(n) for various n values:
a <- 8
b <- 4
p <- 0.05
n <- 20:100
plot(n, sapply(n, total))
For these parameters, total(n) crosses 0 between n = 36 and n = 37.
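If you want the crossing point more precisely than reading it off the plot, one option (a sketch, not part of the answer above) is uniroot(); it works here because choose() accepts non-integer first arguments, so total() is defined for fractional n:
# Locate where total(n) = 0, bracketing the sign change the plot shows
uniroot(total, interval = c(36, 37))$root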

Related

While-Loop to generate a sample

Hello,
I've had problems with the following while loop in R. I'm trying to find out with what number of samples (n) I can achieve a variance less than 0.01 (dtest), and to record the resulting values of n, m, s and d:
n <- 100
x <- rnorm(n,0,1)
sd(x)
d <- sd(x)/sqrt(n)
dtest <- 0.01
while(dtest <=0.01) {
x <- rnorm(n,0,1)
n <- n+1
m <- mean(x)
s <- sd(x)
d <- s/sqrt(n)
return(output <- data.frame(n,m,s,d))
}
The first time I ran the loop it worked without problems and gave an n of approximately 27K. But now, every time I execute the loop, n just keeps accumulating.
There are a number of issues:
Your condition should compare d to dtest. Currently, it's comparing two values that aren't changed within the loop, so it will run forever.
Increment n at the start of the loop. Otherwise you’re using a different n to compute x and d.
Just create your results dataframe once, after the loop, rather than creating and discarding with each loop. And don’t use return(), which is meant for use inside functions.
Note that sd(x)/sqrt(n) is standard error, not variance. Variance would be sd(x)^2.
set.seed(13)
n <- 99
x <- rnorm(n,0,1)
d <- sd(x)/sqrt(n)
dtest <- 0.01
while(dtest <= d) {
n <- n+1
x <- rnorm(n,0,1)
s <- sd(x)
d <- s/sqrt(n)
}
output <- data.frame(n,m = mean(x),s,d)
output
n m s d
1 9700 0.01906923 0.9848469 0.009999605
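Since the original complaint was that n kept accumulating across runs, one further option (a sketch, not part of the answer above; sample_until is a made-up name) is to wrap the corrected loop in a function so that n is reset on every call:
sample_until <- function(dtest = 0.01, n0 = 99) {
  n <- n0
  x <- rnorm(n, 0, 1)
  d <- sd(x) / sqrt(n)
  while (d >= dtest) {   # keep sampling until the standard error drops below dtest
    n <- n + 1
    x <- rnorm(n, 0, 1)
    d <- sd(x) / sqrt(n)
  }
  data.frame(n, m = mean(x), s = sd(x), d)
}
set.seed(13)
sample_until()   # should reproduce the result above; rerunning always restarts from n0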

Recursion in R pseudo random numbers

I have to use recursion to produce pseudo-random numbers. For fixed values a, b and c, I need to calculate:
x_(n+1) = (a * x_n + c) modulo 2^b. Random numbers are then obtained as R_n = x_n / 2^b. I need to save these R_n values to make a histogram. How can I make a function in R that uses its previous value x_n to produce x_(n+1)? I have made a start with my code; it's listed below.
a=5
b=4
c=3
k=10000
random <- function(x) {
if(x<k){
x = (a*x+c)%%2^b
k++
}
}
Here's a thought for starters,
random <- function(a = 5, b = 4, c = 3, k = 10000, x0 = 1) {
x <- x0 # or some other sane default
function(n = 1) {
newx <- Reduce(function(oldx, ign) (a*oldx + c) %% (2^b), seq_len(n),
init = x, accumulate = TRUE)[-1]
# if (x >= k)? do something else
if (length(newx)) {
x <<- newx[length(newx)]
k <<- k + n
}
newx
}
}
The premise is that the random function is a setup function that returns a function. This inner function has its a, b, c, k, and previous x variables stored within it.
thisran <- random()
thisran()
# [1] 8
thisran(3)
# [1] 11 10 5
I haven't studied creating PRNGs in depth, but I'm inferring that x0 here is effectively your seed. I'm not certain why you had an if (x < k) conditional in your function; since k was never used otherwise, just incremented, I'm thinking it only serves as a termination indicator for your PRNG (so that it is not infinite).
If need be, the current k value (and other variables, for that matter) can be peeked-at with
get("k", environment(thisran))
# [1] 10003
BTW: the use of Reduce might seem like an unnecessary complication, but it enables the thisran(n) functionality, similar to other PRNGs in R. That is, one can do runif(7) for seven random numbers, and I thought it would be useful to do that here. Reduce is required in that case since each calculation depends on the result of the previous calculation, so a simple replicate or sapply would not work (without some contrived coding that I wanted to avoid).
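To get the R_n values the question asks to histogram, you can draw a batch of x_n values from the closure and divide by 2^b. A sketch, using b = 4 as in the question; note that with such a small modulus the generator's period is at most 2^b = 16, so the histogram will only ever show a handful of distinct values:
thisran <- random()    # a = 5, b = 4, c = 3, x0 = 1 by default
xs <- thisran(10000)   # 10,000 successive x_n values
Rn <- xs / 2^4         # R_n = x_n / 2^b
hist(Rn)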

Using two for loops to fill one empty matrix by rows

I'm trying to do a forecast analysis with some error measures. My question relates more to a technical problem.
Here is a short example:
I'm studying error measures with respect to the forecast length (h) and the k-step-ahead forecast. I want to compare forecast lengths of 12, 18 and 24 months.
h<-c(12,18,24)
And for that lengths I'm comparing the 1-12 step ahead forecasts.
k <- c(1:12)
I've written two functions:
The first one (foo) runs the whole procedure, and the second one (forecast_analysis) does my forecast analysis.
foo <- function(series, k, h){
Outfinal <- matrix(nrow = length(h)*length(k), ncol = 5)
for(i in 1:length(h)){
for(j in 1:length(k)){
Outfinal[j,] <- forecast_analysis(series,k[j],h[i])
}
}
return(Outfinal)
}
My problem is that I couldn't find a way to fill the matrix by rows like this:
h k measure 1 measure 2 measure3 measure 4 measure 5
12 1
12 2
12 3
. .
. .
. .
24 10
24 11
24 12
So, first I want to fill the matrix for a fixed value of h for all values of k, and then repeat this for all values of h. I hope you understand my problem.
I know that apply functions would be more efficient here, but I'm not yet able to use them.
You can build a table of all h x k combinations plus a result using expand.grid.
This code should get you started
dummy_forecast <- function(h, k) 42
h<-c(12,18,24)
k <- 1:12 # no need for the c function here
combinations <- expand.grid(h = h, k = k, forecast = NA)
for (row in seq_along(combinations$h)) {
combinations[row, "forecast"] <-
with(combinations[row,], dummy_forecast(h, k))
}
If you return more than one value from your function, you need to assign to more than one column in combinations[row,...], but otherwise it should work.
Update
To handle a function that returns more than one value, do something like this:
dummy_forecast <- function(h, k) rep(42, 5)
result <- matrix(nrow = length(h) * length(k), ncol = 7)
combinations <- expand.grid(h = h, k = k)
for (row in seq_along(combinations$h)) {
result[row,] <- with(combinations[row,], c(h, k, dummy_forecast(h, k)))
}
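Tying this back to the original setup, foo itself could be rewritten around expand.grid. This is only a sketch: it assumes forecast_analysis(series, k, h) returns five measures, as in the question. Putting k first in expand.grid makes k vary fastest, which gives exactly the requested row order (all k for a fixed h, then the next h).
foo <- function(series, k, h) {
  grid <- expand.grid(k = k, h = h)   # k varies fastest, h slowest
  out <- matrix(nrow = nrow(grid), ncol = 7)
  colnames(out) <- c("h", "k", paste0("measure", 1:5))
  for (row in seq_len(nrow(grid))) {
    out[row, ] <- c(grid$h[row], grid$k[row],
                    forecast_analysis(series, grid$k[row], grid$h[row]))
  }
  out
}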

Without calling polym(), how can I count the number of interactions it will return?

How can I count the number of interactions poly will return?
If I have two variables, then the number of interactions poly returns as a function of degree is given by:
degree <- 2
dim(poly(rnorm(10), rnorm(10), degree = degree))[2]
That is the same as:
(degree^2+3*degree)/2
Is there any way to count the number of interactions depending on the degree and the number of variables (in case I use more than two)?
Math result from combinations
Suppose you have p variables; the number of interactions associated with degree d is computed by:
fd <- function (p, d) {
k <- choose(p, d)
if (d > 1) k <- k + p * sum(choose(p-1, 0:(d-2)))
return(k)
}
The function poly (actually polym in this case), with p input variables and a degree = D, will construct interactions from degree = 1 up to degree = D. So the following function counts it:
fD <- function (p, D) {
if (D < 1) return(0)
component <- sapply(1:D, fd, p = p)
list(component = component, ncol = sum(component))
}
The entry component gives the number of interactions for each degree from 1 to D, and the entry ncol gives the total number of interactions.
A quick test:
a <- runif(50)
b <- runif(50)
c <- runif(50)
d <- runif(50)
X <- poly(a, b, c, d, degree = 3)
ncol(X)
# 34
fD(4, 3)
# component
# [1] 4 10 20
#
# ncol
# [1] 34
How does R do this?
The first few lines of the source code for polym explain how R addresses this problem. expand.grid is first called to get all possible interactions, then rowSums is called to compute the degree of each interaction. Finally, a filter is applied to retain only the interaction terms with degree between 1 and D.
More than three years later I had to work with degree >= 3 polynomials. Unfortunately, #李哲源's solution fails for degrees larger than 3. I could, however, build two solutions:
Expand Grid Solution
This method emulates polym's original behavior, which is not very elegant for our purposes but is a natural benchmark.
expand_grid_solution <- function(nd, degree){
z <- do.call(expand.grid, c(rep.int(list(0:degree), nd),
KEEP.OUT.ATTRS = FALSE))
s <- rowSums(z)
ind <- 0 < s & s <= degree
z <- z[ind, , drop = FALSE]
s <- s[ind]
return(length(s))
}
Combination with repetition solution
combination_with_repetition <- function(n, r){
factorial(r+n-1)/(factorial(n-1)*factorial(r))
}
poly_elements <- function(n, d) {
x <- sapply(1:d, combination_with_repetition, n = n)
return(sum(x))
}
A quick test:
mapply(expand_grid_solution, c(2,2,2,3,3,3,4), c(2,3,4,2,3,4,4))
#[1] 5 9 14 9 19 34 69
mapply(poly_elements, c(2,2,2,3,3,3,4), c(2,3,4,2,3,4,4))
#[1] 5 9 14 9 19 34 69
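As an aside, combination_with_repetition(n, r) is just the multiset coefficient C(n + r - 1, r), so it can also be written with choose(), which avoids computing large factorials; a quick sketch:
combination_with_repetition2 <- function(n, r) choose(n + r - 1, r)
poly_elements2 <- function(n, d) sum(sapply(1:d, combination_with_repetition2, n = n))
mapply(poly_elements2, c(2,2,2,3,3,3,4), c(2,3,4,2,3,4,4))
#[1] 5 9 14 9 19 34 69 (same values as above)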

Explain the quantile() function in R

I've been mystified by the R quantile function all day.
I have an intuitive notion of how quantiles work, and an M.S. in stats, but boy oh boy, the documentation for it is confusing to me.
From the docs:
Q[i](p) = (1 - gamma) x[j] + gamma x[j+1],
I'm with it so far. For a type i quantile, it's an interpolation between x[j] and x[j+1], based on some mysterious constant gamma.
where 1 <= i <= 9, (j-m)/n <= p < (j-m+1)/n, x[j] is the jth order statistic, n is the sample size, and m is a constant determined by the sample quantile type. Here gamma depends on the fractional part of g = np+m-j.
So, how do I calculate j? And m?
For the continuous sample quantile types (4 through 9), the sample quantiles can be obtained by linear interpolation between the kth order statistic and p(k):
p(k) = (k - alpha) / (n - alpha - beta + 1),
where α and β are constants determined by the type. Further, m = alpha + p(1 - alpha - beta), and gamma = g.
Now I'm really lost. p, which was a constant before, is now apparently a function.
So for Type 7 quantiles, the default...
Type 7
p(k) = (k - 1) / (n - 1). In this case, p(k) = mode[F(x[k])]. This is used by S.
Anyone want to help me out? In particular, I'm confused by the notation of p being both a function and a constant, what the heck m is, and how to calculate j for some particular p.
I hope that based on the answers here, we can submit some revised documentation that better explains what is going on here.
See the quantile.R source code, or type quantile.default at the prompt to print it.
You're understandably confused. That documentation is terrible. I had to go back to the paper it's based on (Hyndman, R.J. and Fan, Y. (November 1996), "Sample Quantiles in Statistical Packages", The American Statistician 50 (4): 361–365, doi:10.2307/2684934) to get an understanding. Let's start with the first problem.
where 1 <= i <= 9, (j-m)/n <= p < (j-m+1)/ n, x[j] is the jth order statistic, n is the sample size, and m is a constant determined by the sample quantile type. Here gamma depends on the fractional part of g = np+m-j.
The first part comes straight from the paper, but what the documentation writers omitted was that j = int(pn+m). This means Q[i](p) only depends on the two order statistics closest to being p fraction of the way through the (sorted) observations. (For those, like me, who are unfamiliar with the term, the "order statistics" of a series of observations is the sorted series.)
Also, that last sentence is just wrong. It should read
Here gamma depends on the fractional part of np+m, g = np+m-j
As for m that's straightforward. m depends on which of the 9 algorithms was chosen. So just like Q[i] is the quantile function, m should be considered m[i]. For algorithms 1 and 2, m is 0, for 3, m is -1/2, and for the others, that's in the next part.
For the continuous sample quantile types (4 through 9), the sample quantiles can be obtained by linear interpolation between the kth order statistic and p(k):
p(k) = (k - alpha) / (n - alpha - beta + 1), where α and β are constants determined by the type. Further, m = alpha + p(1 - alpha - beta), and gamma = g.
This is really confusing. What the documentation calls p(k) is not the same as the p from before. p(k) is the plotting position. In the paper, the authors write it as pk, which helps. Especially since in the expression for m, the p is the original p, and m = alpha + p * (1 - alpha - beta). Conceptually, for algorithms 4-9, the points (pk, x[k]) are interpolated to get the solution (p, Q[i](p)). Each algorithm differs only in how the pk are computed.
As for the last bit, R is just stating what S uses.
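To make the type 7 case concrete, here is a small worked sketch (mine, not from the paper): for type 7, alpha = beta = 1, so m = 1 - p and j = floor(np + m).
x <- 1:5; n <- length(x); p <- 0.6
m <- 1 - p                              # m = alpha + p*(1 - alpha - beta) with alpha = beta = 1
j <- floor(n * p + m)                   # j = 3
gamma <- n * p + m - j                  # the fractional part g
(1 - gamma) * x[j] + gamma * x[j + 1]   # 3.4
quantile(x, p, type = 7)                # also 3.4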
The original paper gives a list of 6 "desirable properties for a sample quantile" function, and states a preference for #8, which satisfies all but one. #5 satisfies all of them, but they don't like it on other grounds (it's more phenomenological than derived from principles). #2 is what non-stats geeks like myself would consider the quantiles and is what's described in Wikipedia.
BTW, in response to dreeves answer, Mathematica does things significantly differently. I think I understand the mapping. While Mathematica's is easier to understand, (a) it's easier to shoot yourself in the foot with nonsensical parameters, and (b) it can't do R's algorithm #2. (Here's Mathworld's Quantile page, which states Mathematica can't do #2, but gives a simpler generalization of all the other algorithms in terms of four parameters.)
There are various ways of computing quantiles when you give it a vector, and don't have a known CDF.
Consider the question of what to do when your observations don't fall on quantiles exactly.
The "types" are just determining how to do that. So, the methods say, "use a linear interpolation between the k-th order statistic and p(k)".
So, what's p(k)? One guy says, "well, I like to use k/n". Another guy says, "I like to use (k-1)/(n-1)", etc. Each of these methods has different properties that are better suited for one problem or another.
The \alpha's and \beta's are just ways to parameterize the functions p. In one case, they're 1 and 1. In another case, they're 3/8 and -1/4. I don't think the p's are ever a constant in the documentation. They just don't always show the dependency explicitly.
See what happens with the different types when you put in vectors like 1:5 and 1:6.
(also note that even if your observations fall exactly on the quantiles, certain types will still use linear interpolation).
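As a quick sketch of that suggestion, you can line up all nine types side by side:
# Compare the nine quantile types on small vectors; per the discussion above,
# types 4 through 9 differ only in their choice of plotting position p(k)
sapply(1:9, function(t) quantile(1:5, probs = 0.25, type = t))
sapply(1:9, function(t) quantile(1:6, probs = 0.25, type = t))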
I believe the R help documentation is clear after the revisions noted in #RobHyndman's comment, but I found it a bit overwhelming. I am posting this answer in case it helps someone move quickly through the options and their assumptions.
To get a grip on quantile(x, probs=probs), I wanted to check out the source code. This too was trickier than I anticipated in R, so I actually just grabbed it from a GitHub repo that looked recent enough to run with. I was interested in the default (type 7) behavior, so I annotated that some, but didn't do the same for each option.
You can see how the "type 7" method interpolates, step by step, both in the code and also I added a few lines to print some important values as it goes.
quantile.default <- function(x, probs = seq(0, 1, 0.25), na.rm = FALSE, names = TRUE,
type = 7, ...){
if(is.factor(x)) { #worry about non-numeric data
if(!is.ordered(x) || ! type %in% c(1L, 3L))
stop("factors are not allowed")
lx <- levels(x)
} else lx <- NULL
if (na.rm){
x <- x[!is.na(x)]
} else if (anyNA(x)){
stop("missing values and NaN's not allowed if 'na.rm' is FALSE")
}
eps <- 100*.Machine$double.eps #this is to deal with rounding things sensibly
if (any((p.ok <- !is.na(probs)) & (probs < -eps | probs > 1+eps)))
stop("'probs' outside [0,1]")
#####################################
# here is where terms really used in default type==7 situation get defined
n <- length(x) #how many observations are in sample?
if(na.p <- any(!p.ok)) { # set aside NA & NaN
o.pr <- probs
probs <- probs[p.ok]
probs <- pmax(0, pmin(1, probs)) # allow for slight overshoot
}
np <- length(probs) #how many quantiles are you computing?
if (n > 0 && np > 0) { #have positive observations and # quantiles to compute
if(type == 7) { # be completely back-compatible
index <- 1 + (n - 1) * probs #this gives the order statistic of the quantiles
lo <- floor(index) #this is the observed order statistic just below each quantile
hi <- ceiling(index) #above
x <- sort(x, partial = unique(c(lo, hi))) #the partial thing is to reduce time to sort,
#and it only guarantees that sorting is "right" at these order statistics, important for large vectors
#ties are not broken and tied elements just stay in their original order
qs <- x[lo] #the values associated with the "floor" order statistics
i <- which(index > lo) #which of the order statistics for the quantiles do not land on an order statistic for an observed value
#this is the difference between the order statistic and the available ranks, i think
h <- (index - lo)[i] # > 0 by construction
## qs[i] <- qs[i] + .minus(x[hi[i]], x[lo[i]]) * (index[i] - lo[i])
## qs[i] <- ifelse(h == 0, qs[i], (1 - h) * qs[i] + h * x[hi[i]])
qs[i] <- (1 - h) * qs[i] + h * x[hi[i]] # This is the interpolation step: assemble the estimated quantile by removing h*low and adding back in h*high.
# h is the arithmetic difference between the desired order statistic and the available ranks
#interpolation only occurs if the desired order statistic is not observed, e.g. .5 quantile is the actual observed median if n is odd.
# This means having a more extreme 99th observation doesn't matter when computing the .75 quantile
###################################
# print all of these things
cat("floor pos=", c(lo))
cat("\nceiling pos=", c(hi))
cat("\nfloor values= ", c(x[lo]))
cat( "\nwhich floors not targets? ", c(i))
cat("\ninterpolate between ", c(x[lo[i]]), ";", c(x[hi[i]]))
cat( "\nadjustment values= ", c(h))
cat("\nquantile estimates:")
}else { # all non-type-7 cases share the sorting/interpolation code at the bottom of this block
if (type <= 3){ ## Types 1, 2 and 3 are discontinuous sample qs.
nppm <- if (type == 3){ n * probs - .5 # n * probs + m; m = -0.5
} else {n * probs} # m = 0
j <- floor(nppm)
h <- switch(type,
(nppm > j), # type 1
((nppm > j) + 1)/2, # type 2
(nppm != j) | ((j %% 2L) == 1L)) # type 3
} else{
## Types 4 through 9 are continuous sample qs.
switch(type - 3,
{a <- 0; b <- 1}, # type 4
a <- b <- 0.5, # type 5
a <- b <- 0, # type 6
a <- b <- 1, # type 7 (unused here)
a <- b <- 1 / 3, # type 8
a <- b <- 3 / 8) # type 9
## need to watch for rounding errors here
fuzz <- 4 * .Machine$double.eps
nppm <- a + probs * (n + 1 - a - b) # n*probs + m
j <- floor(nppm + fuzz) # m = a + probs*(1 - a - b)
h <- nppm - j
if(any(sml <- abs(h) < fuzz)) h[sml] <- 0
}
x <- sort(x, partial =
unique(c(1, j[j>0L & j<=n], (j+1)[j>0L & j<n], n))
)
x <- c(x[1L], x[1L], x, x[n], x[n])
## h can be zero or one (types 1 to 3), and infinities matter
#### qs <- (1 - h) * x[j + 2] + h * x[j + 3]
## also h*x might be invalid ... e.g. Dates and ordered factors
qs <- x[j+2L]
qs[h == 1] <- x[j+3L][h == 1]
other <- (0 < h) & (h < 1)
if(any(other)) qs[other] <- ((1-h)*x[j+2L] + h*x[j+3L])[other]
}
} else {
qs <- rep(NA_real_, np)}
if(is.character(lx)){
qs <- factor(qs, levels = seq_along(lx), labels = lx, ordered = TRUE)}
if(names && np > 0L) {
names(qs) <- format_perc(probs)
}
if(na.p) { # do this more elegantly (?!)
o.pr[p.ok] <- qs
names(o.pr) <- rep("", length(o.pr)) # suppress <NA> names
names(o.pr)[p.ok] <- names(qs)
o.pr
} else qs
}
####################
# fake data
x<-c(1,2,2,2,3,3,3,4,4,4,4,4,5,5,5,5,5,5,5,5,5,6,6,7,99)
y<-c(1,2,2,2,3,3,3,4,4,4,4,4,5,5,5,5,5,5,5,5,5,6,6,7,9)
z<-c(1,2,2,2,3,3,3,4,4,4,4,4,5,5,5,5,5,5,5,5,5,6,6,7)
#quantiles "of interest"
probs<-c(0.5, 0.75, 0.95, 0.975)
# a tiny bit of illustrative behavior
quantile.default(x,probs=probs, names=F)
quantile.default(y,probs=probs, names=F) #only difference is .975 quantile since that is driven by highest 2 observations
quantile.default(z,probs=probs, names=F) # This shifts everything b/c now none of the quantiles fall on an observation (and of course the distribution changed...)... but
#.75 quantile is still 5.0 b/c the observations just above and below the order statistic for that quantile are still 5. However, it got there for a different reason.
#how does rescaling affect quantile estimates?
sqrt(quantile.default(x^2, probs=probs, names=F))
exp(quantile.default(log(x), probs=probs, names=F))
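For comparison with those last two rescaling checks (a small addition of mine, not part of the original answer), it helps to print the untransformed quantiles alongside them:
quantile.default(x, probs=probs, names=F)
# Because type 7 interpolates linearly on the original scale, transforming the data,
# taking quantiles, and back-transforming need not reproduce these numbers exactly,
# except where a requested quantile lands on an observed value and no interpolation occurs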
