Preferentially Sampling Based upon Value Size in R

So, this is something I think I'm overcomplicating, but it also has some of my colleagues stumped.
I've got a set of areas represented by polygons, with a column in the data frame holding their areas. The distribution of areas is heavily right-skewed. Essentially, I want to randomly sample them with sampling probabilities that are inversely related to their area. Rescaling the values to between zero and one (using the (x - min(x)) / (max(x) - min(x)) method) and subtracting them from 1 would seem to be the intuitive approach, but this would simply mean that the smallest areas are almost always the ones sampled.
I'd like a flatter (but not uniform!) right-skewed distribution of sampling probabilities across the values, but I am unsure how to do this while still taking the area values into account. I don't think stratifying them is what I'm looking for either, as that would introduce arbitrary bounds on the probability allocations.
Reproducible code is below, with the item of interest (the vector of probabilities) given by prob_vector. That is, how do I generate prob_vector given the above scenario and desired outcome?
# Data
n <- 500
df <- data.frame(ID = 1:n, AREA = replicate(n, sum(rexp(n = 8, rate = 0.1))))

# Generate the sampling probability somehow based upon the AREA values,
# with smaller areas having higher sampling probability:
prob_vector <- ??????

# Sampling:
s <- sample(df$ID, size = 1, prob = prob_vector)

There is no single best solution for this question, as a wide range of probability vectors is possible; you can add any kind of curvature and slope.
In this small script, I simulated an extremely right-skewed distribution of areas (0-100 units), and you can define and directly visualize any probability vector you want.
area.dist <- rgamma(1000, 1, 3) * 40
area.dist[area.dist > 100] <- 100
hist(area.dist, main = "Probability functions")

area <- seq(0, 100, 0.1)
prob_vector1 <- 1 - (area - min(area)) / (max(area) - min(area))           ## linear
prob_vector2 <- .8 - .6 * (area - min(area)) / (max(area) - min(area))     ## low slope
prob_vector3 <- 1 / (1 + (area - min(area)) / (max(area) - min(area)))^4   ## strong curve
prob_vector4 <- .4 / (.4 + (area - min(area)) / (max(area) - min(area)))   ## low curve

## curves are multiplied by 500 so they overlay visibly on the count axis of the histogram
lines(area, prob_vector1 * 500, col = "red")
lines(area, prob_vector2 * 500, col = "green")
lines(area, prob_vector3 * 500, col = "blue")
lines(area, prob_vector4 * 500, col = "orange")
legend("topright", c("linear", "low slope", "strong curve", "low curve"),
       col = c("red", "green", "blue", "orange"), lwd = 1)
The output (plot not shown here) is the histogram of the simulated areas with the four probability curves overlaid. The red line is your original linear rescaling; the other curves are weaker alternatives. Just change the numbers in the probability function until you get a shape that fits your expectations.
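To tie this back to the question's data frame, here is a minimal sketch (my own addition, not from the answer above) that applies the "low curve" shape to df$AREA; note that sample() normalizes the prob weights internally, so they do not need to sum to 1:
a <- (df$AREA - min(df$AREA)) / (max(df$AREA) - min(df$AREA))  # rescale AREA to [0, 1]
prob_vector <- .4 / (.4 + a)                                   # smaller areas get larger weights
s <- sample(df$ID, size = 1, prob = prob_vector)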

Related

Test multivariate normality of 2D binned data in R

I have some heatmap data and I want a notion as to whether that heat map is 'centered' around the middle of my image or skewed to one side (in R). My data is too big to give an example here, so this is some fake data of the same form (but in real life my intensity values are not uniformly distributed, I assume they are binned counts from an underlying multivariate normal distribution but I don't know how to code that as a reproducible example).
library(tibble)
set.seed(42)
dd <- tibble(   # stored as dd so it can be referenced below
  x = rep(0:7, each = 8),
  y = rep(0:7, 8),
  intensity = sample(0:10, 64, replace = TRUE)
)
The x value here is the horizontal index of a pixel, the y value is the vertical index of a pixel, and intensity is the value of that pixel according to a heatmap. I have managed to find a "centre" of the heatmap by marginalising these intensity values and finding the marginalised mean for x and y, but how would I perform a hypothesis test on whether the underlying multivariate normal distribution is centered around a certain point? In this case I would like a test statistic (more specifically a -log10 p-value) as to whether the underlying multivariate normal distribution that generated this count data is centered around the point c(3.5, 3.5).
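For reference, a small sketch of that marginalised (intensity-weighted) centre, assuming the example tibble above is stored as dd:
centre_x <- sum(dd$x * dd$intensity) / sum(dd$intensity)  # intensity-weighted mean of x
centre_y <- sum(dd$y * dd$intensity) / sum(dd$intensity)  # intensity-weighted mean of y
c(centre_x, centre_y)                                     # the "centre of mass" of the heatmap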
Furthermore, I would also like a test statistic (again, more specifically a -log10 p-value) as to whether the underlying distribution that generated the count data actually is multivariate normal.
This is all part of a larger pipeline where I would like to use dplyr and group_by to perform this test on multiple heatmaps at once so if it is possible to keep this in tidy format that would be great.
A little bit of googling finds this web page which suggests mvnormtest::mshapiro.test.
mshap <- function(z, nrow = round(sqrt(length(z)))) {
  mvnormtest::mshapiro.test(matrix(z, nrow = nrow))
}
mshap(dd$intensity)
If you want to make this more tidy-like, you could do something with map/nest/etc., for example:
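A hedged sketch of that idea; the stacked tibble all_heatmaps and its heatmap_id column are hypothetical, and mshapiro.test returns an htest object whose p.value can be pulled out directly:
library(dplyr)
library(tidyr)
library(purrr)

results <- all_heatmaps %>%   # hypothetical: one row per pixel, several heatmaps stacked
  group_by(heatmap_id) %>%    # hypothetical id column distinguishing the heatmaps
  nest() %>%                  # one row per heatmap, pixels nested in a "data" list-column
  mutate(
    test        = map(data, ~ mshap(.x$intensity)),
    p_value     = map_dbl(test, "p.value"),
    neg_log10_p = -log10(p_value)
  )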
I'm not quite sure how to test the centering hypothesis (likelihood ratio test using mnormt::dmnorm ?)

Generating groups of skewed size but whose elements add to a fixed sum

I have some fixed number of people (e.g. 1000). I would like to split these 1000 people into some random number of classes Y (e.g. 5), but not equally. I want them to be distributed unevenly, according to some probability distribution that is heavily skewed (something like a power-law distribution).
My intuition is that I need to generate a distribution of probabilities that is (1) skewed and (2) which also adds up to 1.
My ad hoc solution was to generate random numbers from a power law distribution, multiply these by some scalar that ensures these add up to something close to my target number, adjust my target number to that new number, and then split accordingly.
But it seems awfully inelegant, and y_sizes doesn't always sum to 1000, which requires looping through and trying again. What's a better approach?
require(poweRlaw)
library(magrittr)  # for the %>% pipe used below

x <- 1000
y <- 10
y_sizes <- rpldis(y, xmin = 5, alpha = 2, discrete_max = x)
y_sizes <- round(y_sizes * x / sum(y_sizes))
newx <- sum(y_sizes)  # newx is only approximately equal to x, rather than exactly x
people <- 1:x
groups <- cut(
  people,
  c(0, cumsum(y_sizes))
) %>% as.numeric
data.frame(
  people = people,
  group = groups
)
The algorithm presented by Smith and Tromble in "Sampling Uniformly from the Unit Simplex" shows a solution. I have pseudocode on this algorithm in my section "Random Integers with a Given Positive Sum".
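A hedged R sketch of the cut-point idea behind that algorithm: choose y - 1 distinct cut points in 1..(x - 1), sort them, and take differences. The resulting sizes always sum to x exactly; they are skewed, though not power-law distributed:
x <- 1000                                  # total number of people
y <- 5                                     # number of groups
cuts    <- sort(sample(1:(x - 1), y - 1))  # y - 1 distinct cut points
y_sizes <- diff(c(0, cuts, x))             # y positive integers that sum to x
sum(y_sizes)                               # always exactly 1000
groups  <- rep(seq_len(y), times = y_sizes)
data.frame(people = 1:x, group = groups)
If you specifically need a power-law shape, you can instead keep the rounded rpldis draws from the question and add the leftover remainder (x - sum(y_sizes)) to one of the groups so the total comes out to exactly x.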

R - simulate data for probability density distribution obtained from kernel density estimate

First off, I'm not entirely sure if this is the correct place to be posting this, as perhaps it should go in a more statistics-focussed forum. However, as I'm planning to implement this with R, I figured it would be best to post it here. My apologies if I'm wrong.
So, what I'm trying to do is the following. I want to simulate data for a total of 250,000 observations, assigning each a continuous (non-integer) value in line with a kernel density estimate derived from empirical (discrete) data, with the original values ranging from -5 to +5. Here's a plot of the distribution I want to use.
It's quite essential to me that I don't simulate the new data based on the discrete probabilities, but rather the continuous ones as it's really important that a value can be say 2.89 rather than 3 or 2. So new values would be assigned based on the probabilities depicted in the plot. The most frequent value in the simulated data would be somewhere around +2, whereas values around -4 and +5 would be rather rare.
I have done quite a bit of reading on simulating data in R and about how kernel density estimates work, but I'm really not moving forward at all. So my question basically entails two steps - how do I even simulate the data (1) and furthermore, how do I simulate the data using this particular probability distribution (2)?
Thanks in advance, I hope you guys can help me out with this.
With your underlying discrete data, create a kernel density estimate on as fine a grid as you wish (i.e., as "close to continuous" as needed for your application, within the limits of machine precision and computing time). Then sample from that kernel density estimate, using the density values to ensure that more probable values of your distribution are more likely to be sampled. For example:
Fake data, just to have something to work with in this example:
set.seed(4396)
dat = round(rnorm(1000,100,10))
Create kernel density estimate. Increase n if you want the density estimated on a finer grid of points:
dens = density(dat, n=2^14)
In this case, the density is estimated on a grid of 2^14 points, with a spacing of mean(diff(dens$x)) = 0.0045 between adjacent grid points.
Now, sample from the kernel density estimate: We sample the x-values of the density estimate, and set prob equal to the y-values (densities) of the density estimate, so that more probable x-values will be more likely to be sampled:
kern.samp = sample(dens$x, 250000, replace=TRUE, prob=dens$y)
Compare dens, the density estimate of our original data (black line), with the density of kern.samp (red line):
plot(dens, lwd=2)
lines(density(kern.samp), col="red",lwd=2)
With the method above, you can create a finer and finer grid for the density estimate, but you'll still be limited to the density values at the grid points used for the estimate (i.e., the values of dens$x). However, if you really need to be able to get the density at any data value, you can create an approximation function. In that case, you would still create the density estimate, at whatever bandwidth and grid size are necessary to capture the structure of the data, and then create a function that interpolates the density between the grid points. For example:
dens = density(dat, n=2^14)
dens.func = approxfun(dens)
x = c(72.4588, 86.94, 101.1058301)
dens.func(x)
[1] 0.001689885 0.017292405 0.040875436
You can use this to obtain the density distribution at any x value (rather than just at the grid points used by the density function), and then use the output of dens.func as the prob argument to sample.
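As an aside not covered in the answer above: for a Gaussian kernel, you can also draw from the density estimate itself (rather than from its grid of x-values) by resampling the original data and adding kernel noise with the estimate's bandwidth, which yields genuinely continuous values. A minimal sketch, reusing the fake dat from above:
dens <- density(dat)                    # default Gaussian kernel; dens$bw is the kernel's sd
kde.samp <- sample(dat, 250000, replace = TRUE) + rnorm(250000, mean = 0, sd = dens$bw)
plot(dens, lwd = 2)
lines(density(kde.samp), col = "blue", lwd = 2)  # should track the original estimate closely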

How to generate OUTLIER-FREE data in R?

I would like to know how I can generate OUTLIER-FREE data using R.
I'm generating data using rnorm.
Say I have a linear equation
Y = B0 + B1*X + E, where X~N(5,9) and E~N(0,1).
I'm going to use rnorm to generate X and E.
Below are the codes used:
X <- rnorm(50,5,3) #I'm generating 50 Xi's w/ mean=5 & var=9
E <- rnorm(50,0,1) #I'm generating 50 residuals w/ mean=0 & var=1
Now, I'm going to generate Y by plugging the generated data on X & E above in the linear equation.
If the data I've generated above is outlier-free (no influential observation), then no Cook's Distance of observations should exceed 4/n, which is the usual cut-off for detecting influential/outlying observations.
But I haven't been able to achieve this so far; I'm still getting outliers when I generate data following this procedure.
Can you help me out with this? Do you know a way to generate data that is OUTLIER-FREE?
Thanks a lot!
Well, one way would be to detect and delete (or regenerate) those outliers by finding the generated points that exceed your cutoff. Of course this would harm the "randomness" of your generated data, but your request for outlier-free data implies that by definition. Possibly, decreasing the variance of X could also help.
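A minimal sketch of that regenerate-until-clean idea, assuming (as in the question) that "outlier-free" means no Cook's distance above 4/n; B0 = 1 and B1 = 2 are arbitrary values chosen for illustration:
n <- 50
repeat {
  X <- rnorm(n, 5, 3)
  E <- rnorm(n, 0, 1)
  Y <- 1 + 2 * X + E                            # arbitrary B0 = 1, B1 = 2
  fit <- lm(Y ~ X)
  if (max(cooks.distance(fit)) < 4 / n) break   # accept only an "outlier-free" draw
}
max(cooks.distance(fit))                        # below 4/n by construction
Because the 4/n rule typically flags a few percent of points in any sample, the loop may need a few dozen draws before one passes.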
Is there a particular reason you need the X's to be normally distributed? The normality assumption in regression applies to the residuals (the error term). Typically the measured independent variable won't be normally distributed; in a balanced, (quasi-)experimental setup, the X's should be close to uniformly distributed. A uniform distribution for the X's (or even an evenly spaced sequence generated with seq()) would help you here, because the "outlierness" of outliers arises from being both far from the center of the sample space and comparatively few in number. With a uniform distribution, extreme values are no longer few in number, which reduces their leverage.
As a sidebar: real data has outliers. This is actually one of the ways we can detect touched-up or even faked data in science. If you're interested in simulations that correspond to something in reality, then outliers may not be a bad thing. And there is a whole world of robust methods for dealing with data containing arbitrarily bad outliers in a principled way, as opposed to using arbitrary cutoff points.

Scaling of covariance matrices

For the question "Ellipse around the data in MATLAB", in the answer given by Amro, he says the following:
"If you want the ellipse to represent
a specific level of standard
deviation, the correct way of doing is
by scaling the covariance matrix"
and the code to scale it was given as
STD = 2; %# 2 standard deviations
conf = 2*normcdf(STD)-1; %# covers around 95% of population
scale = chi2inv(conf,2); %# inverse chi-squared with dof=#dimensions
Cov = cov(X0) * scale;
[V D] = eig(Cov);
I don't understand the first three lines of the above code snippet. How is the scale calculated by chi2inv(conf,2), and what is the rationale behind multiplying it with the covariance matrix?
Additional Question:
I also found that if I scale it with 1.5 STD (i.e., ~87% coverage), the ellipse covers all of the points when they clump together, in almost all cases. On the other hand, if I scale it with 3 STD (i.e., ~99.7% coverage), the ellipse is far too big. So how can I choose an STD that just tightly covers the clumped points?
Here is an example:
The inner ellipse corresponds to 1.5 STD and the outer to 2.5 STD. Why does 1.5 STD tightly cover the clumped white points? Is there an approach or reason for choosing it?
The objective of displaying an ellipse around the data points is to show the confidence region, or in other words, "how much of the data is within a certain number of standard deviations away from the mean".
In the above code, he has chosen to display an ellipse that covers ~95% of the data points. For a normal distribution, ~68% of the data lies within 1 s.d. of the mean, ~95% within 2 s.d., and ~99.7% within 3 s.d. (you can easily verify this by calculating the area under the curve). Hence the value STD = 2; you'll find that conf is approximately 0.95.
The distance of the data points from the centroid of the data goes something like (x_i^2 + y_i^2)^0.5, ignoring coefficients. Sums of squares of standard normal random variables follow a chi-square distribution, and hence to get the corresponding 95th percentile he uses the inverse chi-square function with 2 d.o.f., since there are two variables.
Lastly, the rationale behind multiplying by the scaling constant follows from the fact that for a square matrix A with eigenvalues a1, ..., an, the eigenvalues of kA, where k is a scalar, are simply ka1, ..., kan. The eigenvalues give the (squared) lengths of the semi-major/semi-minor axes of the ellipse, so scaling the ellipse, or equivalently its eigenvalues, to the 95th percentile is equivalent to multiplying the covariance matrix by the scaling factor.
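For an R version of the same computation (MASS::mvrnorm is used here only to create some illustrative bivariate data):
STD   <- 2
conf  <- 2 * pnorm(STD) - 1       # ~0.9545, the mass within 2 s.d. of a 1-D Gaussian
scale <- qchisq(conf, df = 2)     # ~6.18, inverse chi-squared with d.o.f. = 2
X0    <- MASS::mvrnorm(500, mu = c(0, 0), Sigma = matrix(c(2, 1, 1, 2), 2))
S     <- cov(X0) * scale          # scaled covariance matrix
e     <- eigen(S)
sqrt(e$values)                    # semi-axis lengths of the 2-s.d. ellipse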
EDIT
Cheng, although you might already know this, I suggest that you also read this answer to a question on randomness. Consider a Gaussian random variable with zero mean and unit variance; the PDF of a collection of such random variables is the familiar bell curve (figure not reproduced here). Now, if I were to take two such collections of random variables, square them separately, and add them to form a single collection of a new random variable, its distribution (again not reproduced here) is the chi-square distribution with 2 degrees of freedom (since we added two collections).
The equation of the ellipse in the above code can be written as x^2/a^2 +y^2/b^2=k, where x,y are the two random variables, a and b are the major/minor axes, and k is some scaling constant that we need to figure out. As you can see, the above can be interpreted as squaring and adding two collections of Gaussian random variables, and we just saw above what its distribution looks like. So, we can say that k is a random variable that is chi-square distributed with 2 degrees of freedom.
Now all that needs to be done is to find a value for k such that ~95% of the data falls within it. Just like the 1 s.d., 2 s.d., and 3 s.d. coverages we're familiar with for Gaussians, the 2-s.d. coverage (~95.45%) for a chi-square distribution with 2 degrees of freedom corresponds to k ≈ 6.18. This is what Amro obtains from the chi2inv function. He could just as well have written scale = chi2inv(0.95,2), which gives the slightly smaller value 5.99 and a practically identical ellipse; it's just that talking in terms of n s.d. away from the mean is more intuitive.
Just to illustrate: in a plot of the chi-square PDF above, with ~95% of the area to the left of some value x shaded in red, that x is ~6.18 (plot not reproduced here).
Hope this helped.
