How do I sample real numbers? - r

Is it possible to sample real numbers with the sample() function? (I.e., sample from a continuous distribution, e.g. [1, 10], so that when I sample I won't get an integer but a real number, e.g. 5.467.)
If not, is there another function to do that?

The purpose of sample is to... well sample, uniformly or not, with replacement or not, from a finite set.
If you want a sample from another kind of distribution, it depends on the distribution, and there are many functions in R to achieve that. The most common are runif (uniform distribution on a bounded interval) and rnorm (the normal distribution).
See ?sample and ?Distributions in the documentation. A more comprehensive list can be found here.
For instance:
Uniform sample with replacement, from {1, 2, 3, 4}:
> sample(1:4, 10, replace=T)
[1] 3 1 4 1 2 1 1 3 4 3
Uniform sample from [0, 1]:
> runif(10)
 [1] 0.6529710 0.1977511 0.7946746 0.6351793 0.9518663
 [6] 0.9356524 0.2945506 0.4623502 0.8198065 0.8516961
Normal sample with mean 0 and variance 1:
> rnorm(10)
 [1] -1.13195127  0.43434313 -0.44183752  0.13632993  0.75902968
 [6]  0.76079877  0.37919727  0.41350485  0.76233633  0.09400158
These functions take parameters: runif(n, a, b), for instance, yields n samples from the interval [a, b].
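So, for the interval [1, 10] from the question, a minimal sketch:
# five real numbers uniformly distributed on [1, 10]
runif(5, min = 1, max = 10)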

Related

R function to find difference in mean greater than or equal to a specific number

I have just started my basic statistics course using R, and we're studying how to use R for paired t-tests. I have come across questions where we're given two sets of data and asked whether the difference in means is equal to 0, greater than 0, and so on. The function we use for two samples x and y with an unknown variance is similar to the one below:
t.test(x, y, var.equal=TRUE, alternative="greater")
My question is, how would we do this if we wanted to test that the difference in means is greater than or equal to a specified number, against the alternative that it's less than that number, rather than 0?
For example, say we're given two datasets of before and after weights for 10 people. How do we test that the mean difference in weight is greater than or equal to, say, 3 kg against the alternative that the mean difference in weight is less than 3 kg? Is there a way to do this? I would really appreciate any guidance on this matter.
It might be worthwhile posting on https://stats.stackexchange.com/ as well if you need more theoretical backing. Is it OK to add/subtract the 3 kg from either x or y and then use the t-test to check for similarity? I think this would at least tell you which outcome is more likely, if that's the end goal. It would be good to get feedback on this.
# number of obs, and rnorm dist for simulating
N <- 10
mu <- 70
sd <- 10
set.seed(1)
x <- round(rnorm(N, mu, sd), 1)
# three outcomes
# (1) no change
y_same <- x + round(rnorm(N, 0, 5), 1)
# (2) average increase of 3
y_imp <- x + rnorm(N, 3, 5)
# (3) average decrease of 3
y_dec <- x + rnorm(N, -3, 5)
# say y_imp is true
y_act <- y_imp
# can we test whether we're closer to the output by altering
# the original data? or conversely, altering y_imp
t_inc <- t.test(x+3, y_act, var.equal=TRUE, alternative="two.sided")
t_dec <- t.test(x-3, y_act, var.equal=TRUE, alternative="two.sided")
t_inc$p.value
[1] 0.8279801
t_dec$p.value
[1] 0.0956033
# the comparison with the highest p-value has the closest distribution,
# so +3 kg is more likely than -3 kg
You can set mu=3 to change the null hypothesis from 0 to 3 assuming your x variables are in the units you describe above.
t.test(x, y, mu=3, alternative="greater", paired=TRUE)
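A usage sketch with the simulated weights from the earlier snippet (treating x as "before" and y_act as "after"); the order of the arguments determines the sign of the difference that t.test works with, and alternative names the alternative hypothesis relative to mu, so adapt both to your own data:
# Sketch: paired test of the mean weight change (after minus before) against 3 kg.
# t.test(a, b, paired = TRUE) works with the differences a - b;
# alternative = "less" tests Ha: mean difference < 3, alternative = "greater" the opposite.
t.test(y_act, x, mu = 3, alternative = "less", paired = TRUE)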
More (general) information on Stack Exchange [here](https://stats.stackexchange.com/questions/206316/can-a-paired-or-two-group-t-test-test-if-the-difference-between-two-means-is-l/206317#206317).

R: How do I aggregate losses by a Poisson observation?

I'm new to R, but I am trying to use it to aggregate losses observed from a severity distribution by an observation from a frequency distribution - essentially what rcompound does. However, I need a more granular approach, as I need to manipulate the severity distribution before 'aggregation'.
Let's take an example. Suppose you have:
rpois(10,lambda=3)
This gives you something like:
[1] 2 2 3 5 2 5 6 4 3 1
Additionally, suppose we have severity of losses determined by:
rgamma(20,shape=1,scale=10000)
So that we also have the following output:
[1] 233.0257 849.5771 7760.4402 731.5646 8982.7640 24172.2369 30824.8424 22622.8826 27646.5168 1638.2333 6770.9010 2459.3722 782.0580 16956.1417 1145.4368 5029.0473 3485.6412 4668.1921 5637.8359 18672.0568
My question is: what is an efficient way to get R to take each Poisson observation in turn and then aggregate losses from my severity distribution? For example, the first Poisson observation is 2. Therefore, adding two observations (the first two) from my Gamma distribution gives 1082.61.
I say this needs to be 'efficient' (in run time) because:
- The Poisson parameter may become significantly large, i.e. up to 1000 or so.
- The number of realisations is likely to be up to 1,000,000, i.e. up to a million Poisson and Gamma observations to sort through.
Any help would be greatly appreciated.
Thanks, Dave.
It looks like you want to split the gamma vector at positions indicated by the cumulative sums of the Poisson vector.
The following function (from here) does the splitting:
# splitAt: split x into pieces that start at the indices given in pos
splitAt <- function(x, pos) unname(split(x, cumsum(seq_along(x) %in% pos)))
pois <- c(2, 2, 3, 5, 2, 5, 6, 4, 3, 1)
gam <- c(233.0257, 849.5771, 7760.4402, 731.5646, 8982.7640, 24172.2369, 30824.8424, 22622.8826, 27646.5168, 1638.2333, 6770.9010, 2459.3722, 782.0580, 16956.1417, 1145.4368, 5029.0473, 3485.6412, 4668.1921, 5637.8359, 18672.0568)
posits <- cumsum(pois)
Then do the following:
sapply(splitAt(gam, posits + 1), sum)
[1] 1082.603 8492.005 63979.843 61137.906 17738.200 19966.153 18672.057
According to the post I linked to above, the splitAt() function slows down for large arrays, so you could (if necessary) consider the alternatives proposed in that post. For my part, I generated 1e6 Poisson and 1e6 gamma observations, and the above function ran in 0.78 sec on my machine.
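If the splitting ever becomes a bottleneck, one alternative sketch (not taken from the linked post, and it assumes one gamma draw per unit of frequency, i.e. length(gam) == sum(pois)) is to build a group index with rep() and aggregate with rowsum():
# Sketch: aggregate severities per Poisson draw without splitting the vector.
pois <- rpois(1e6, lambda = 3)
gam  <- rgamma(sum(pois), shape = 1, scale = 10000)
grp  <- rep(seq_along(pois), times = pois)  # which Poisson draw each loss belongs to
agg  <- rowsum(gam, grp)                    # total loss per draw (draws of 0 produce no row)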

Generating power-law distributed numbers from uniform distribution – found 2 approaches: which one is correct?

I am trying to generate power-law distributed numbers ranging from 0 to 1 from a uniform distribution. I found two approaches and I am not sure which one is right and which one is wrong.
1st Source: Wolfram (formula image not reproduced here).
2nd Source: Physical Review, page 2 (formula image not reproduced here).
Where: y = uniform variate, n = distribution power, x0 and x1 = range of the distribution, x = power-law distributed variate.
The second one only gives decent results for x0 = 0 and x1 = 1, when n is between 0 and 1.
If y is a uniform random variable on [0, 1], then 1 - y is too. So, letting z = 1 - y, you can rewrite your formula (1) as:
x = [x_1^{n+1} - (x_1^{n+1} - x_0^{n+1}) z]^{1/(n+1)}
which is then the same as your formula (2) except for the change n -> (-n).
So I suppose the only difference between these two formulas is the notation for how n relates to the power-law decay (unfortunately the link you gave for the Wolfram formula is invalid, so I cannot check which notation they use).
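For reference, a minimal R sketch of inverse-transform sampling in the spirit of formula (1); the convention assumed here is a density proportional to x^n on [x0, x1] with n != -1 and x0 > 0, which may differ from the sources' sign convention:
# Sketch: power-law distributed variates on [x0, x1] via the inverse CDF.
rpowerlaw <- function(N, x0, x1, n) {
  y <- runif(N)
  ((x1^(n + 1) - x0^(n + 1)) * y + x0^(n + 1))^(1 / (n + 1))
}
hist(rpowerlaw(1e5, x0 = 0.01, x1 = 1, n = -1.5), breaks = 50)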

Fastest way to sample real values using a proportional probability

Given a numeric vector with N real numbers, what's the fastest way to sample k values, such that higher values have greater probability of being selected?
mathematically
prob(X) > prob(Y) when X > Y (Linearly)
This is easy with sample() when all entries are positive, just use the prob arg:
N = 1000
k = 600
x = runif(N, 0, 10)
results = sample(x, k, replace = TRUE, prob = x)
But it doesn't work in my case, because some values might be negative. I cannot drop or ignore the negative values; that's the problem.
So, what's the fastest (in code speed) way of doing this? Obviously I know how to solve it; the issue is speed - one method should be slower than the other, I guess:
1 - Normalize the x vector (a call to `range()` would be necessary, plus a division)
2 - Add max(x) to x (a call to `max()`, then a sum)
Thanks.
A few comments. First, it's still not exactly clear what you want. Obviously, you want larger numbers to be chosen with higher probability, but there are a lot of ways of doing this. For example, either rank(x) or x-min(x) will produce a vector of non-negative weights which are monotonic in x.
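For instance (a small sketch; note that x - min(x) gives the smallest value zero weight, so it can never be drawn unless you shift further, e.g. x - min(x) + 1):
# Sketch: two monotone, non-negative weightings of a vector containing negatives.
x <- c(-2.5, 0.3, 1.7, 4.1)
sample(x, 2, replace = TRUE, prob = rank(x))     # rank-based weights
sample(x, 2, replace = TRUE, prob = x - min(x))  # shifted weights (min gets weight 0)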
Another point: you don't need to normalize the weights, because sample will do that for you, provided the weights are non-negative:
> set.seed(1)
> sample(1:10,prob=1:10)
[1] 9 8 6 2 10 3 1 5 7 4
> set.seed(1)
> sample(1:10,prob=(1:10)/sum(1:10))
[1] 9 8 6 2 10 3 1 5 7 4
On edit: The OP is now asking for a weighting function which is "linear" in the input vector. Technically this is impossible, because linear functions are of the form f(X)=cX, so if a vector x contains both positive and negative values, then any linear function of x will also contain both positive and negative values, unless c=0, in which case it still does not give a valid vector of probability weights.
I think what you mean by "linear" is simply x-min(x). This is not a linear function, but an affine function. Moreover, even if you had specified that you wanted P(X) to vary as an affine function of X, that still would not have uniquely determined the probability weights, because there are an infinite number of possible affine functions that would yield valid weights (e.g. x-min(x)+1, etc.)
In any case, assuming x-min(x) is what you want, the question now becomes, what is the fastest way to compute x-min(x) in R. And I'm pretty sure that the answer is just x-min(x).
Finally, for constants anywhere near what you have in your example, there is not much point in trying to optimize the calculation of weights, because the random sampling is going to take much longer anyway. For example:
> library(microbenchmark)
> x <- rnorm(1000)
> k <- 600
> p <- x - min(x)
> microbenchmark(x - min(x), sample(x, k, T, p))
Unit: microseconds
               expr   min      lq  median      uq    max neval
         x - min(x)  6.56  6.9105  7.0895  7.2515 13.629   100
 sample(x, k, T, p) 50.30 51.4360 51.7695 52.1970 66.196   100

Generate 3 random number that sum to 1 in R

I am hoping to create 3 (non-negative) quasi-random numbers that sum to one, and repeat over and over.
Basically I am trying to partition something into three random parts over many trials.
While I am aware of
a = runif(3,0,1)
these of course don't sum to one. I was thinking that I could use 1 - a as the max in the next runif call, but that seems messy.
Any thoughts, oh wise stackoverflow-ers?
This question involves subtler issues than might be at first apparent. After looking at the following, you may want to think carefully about the process that you are using these numbers to represent:
## My initial idea (and commenter Anders Gustafsson's):
## Sample 3 random numbers from [0,1], sum them, and normalize
jobFun <- function(n) {
  m <- matrix(runif(3*n, 0, 1), ncol = 3)
  m <- sweep(m, 1, rowSums(m), FUN = "/")
  m
}
## Andrie's solution. Sample 1 number from [0,1], then break upper
## interval in two. (aka "Broken stick" distribution).
andFun <- function(n) {
  x1 <- runif(n)
  x2 <- runif(n)*(1 - x1)
  matrix(c(x1, x2, 1 - (x1 + x2)), ncol = 3)
}
## ddzialak's solution (vectorized by me)
ddzFun <- function(n) {
  a <- runif(n, 0, 1)
  b <- runif(n, 0, 1)
  rand1 <- pmin(a, b)
  rand2 <- abs(a - b)
  rand3 <- 1 - pmax(a, b)
  cbind(rand1, rand2, rand3)
}
## Simulate 10k triplets using each of the functions above
JOB <- jobFun(10000)
AND <- andFun(10000)
DDZ <- ddzFun(10000)
## Plot the distributions of values
par(mfcol=c(2,2))
hist(JOB, main="JOB")
hist(AND, main="AND")
hist(DDZ, main="DDZ")
Just draw 2 random numbers from (0, 1); if we call them a and b, then you get:
rand1 = min(a, b)
rand2 = abs(a - b)
rand3 = 1 - max(a, b)
When you want to randomly generate numbers that add to 1 (or some other value) then you should look at the Dirichlet Distribution.
There is an rdirichlet function in the gtools package and running RSiteSearch('Dirichlet') brings up quite a few hits that could easily lead you to tools for doing this (and it is not hard to code by hand either for simple Dirichlet distributions).
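A minimal sketch of that approach (assuming gtools is installed; alpha = c(1, 1, 1) gives a flat Dirichlet, i.e. uniform over all valid triplets):
library(gtools)
x <- rdirichlet(5, alpha = c(1, 1, 1))  # 5 rows, each a triplet summing to 1
rowSums(x)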
I guess it depends on what distribution you want on the numbers, but here is one way:
diff(c(0, sort(runif(2)), 1))
Use replicate to get as many sets as you want:
> x <- replicate(5, diff(c(0, sort(runif(2)), 1)))
> x
           [,1]       [,2]      [,3]      [,4]       [,5]
[1,] 0.66855903 0.01338052 0.3722026 0.4299087 0.67537181
[2,] 0.32130979 0.69666871 0.2670380 0.3359640 0.25860581
[3,] 0.01013117 0.28995078 0.3607594 0.2341273 0.06602238
> colSums(x)
[1] 1 1 1 1 1
I would simply select 3 random numbers from the uniform distribution and then divide by their sum:
n <- 3
x <- runif(n, 0, 1)
y <- x / sum(x)
sum(y) == 1
n could be any number you like.
This problem and the different solutions proposed intrigued me. I did a little test of the three basic algorithms suggested and what average values they would yield for the numbers generated.
choose_one_and_divide_rest
means: [ 0.49999212 0.24982403 0.25018384]
standard deviations: [ 0.28849948 0.22032758 0.22049302]
time needed to fill array of size 1000000 was 26.874945879 seconds
choose_two_points_and_use_intervals
means: [ 0.33301421 0.33392816 0.33305763]
standard deviations: [ 0.23565652 0.23579615 0.23554689]
time needed to fill array of size 1000000 was 28.8600130081 seconds
choose_three_and_normalize
means: [ 0.33334531 0.33336692 0.33328777]
standard deviations: [ 0.17964206 0.17974085 0.17968462]
time needed to fill array of size 1000000 was 27.4301018715 seconds
The time measurements are to be taken with a grain of salt, as they might be more influenced by Python's memory management than by the algorithm itself. I'm too lazy to do it properly with timeit. I did this on a 1 GHz Atom, which explains why it took so long.
Anyway, choose_one_and_divide_rest is the algorithm suggested by Andrie and by the poster of the question him/herself (AND): you choose one value a in [0,1], then one in [a,1], and then you take what is left. It adds up to one, but that's about it; the first part is on average twice as large as the other two. One might have guessed as much ...
choose_two_points_and_use_intervals is the accepted answer by ddzialak (DDZ). It takes two points in the interval [0,1] and uses the size of the three sub-intervals created by these points as the three numbers. Works like a charm and the means are all 1/3.
choose_three_and_normalize is the solution by Anders Gustafsson and Josh O'Brien (JOB). It just generates three numbers in [0,1] and normalizes them back to a sum of 1. Works just as well and surprisingly a little bit faster in my Python implementation. The variance is a bit lower than for the second solution.
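To reproduce the comparison in R with the three functions defined in the earlier answer (the column means should come out near 1/2, 1/4, 1/4 for AND and near 1/3 for the other two):
# Sketch: column means of 10k triplets from each method.
sapply(list(JOB = jobFun(10000), AND = andFun(10000), DDZ = ddzFun(10000)), colMeans)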
There you have it. I have no idea which beta distribution these solutions correspond to, or which set of parameters in the paper I referred to in a comment, but maybe someone else can figure that out.
The simplest solution is the probs() function from the Wakefield package: probs(3) will yield a vector of three values with a sum of 1. Given that, you can rep(probs(3), x), where x is "over and over". No drama.
