multivariate skew normal in R

I'm trying to generate random numbers with a multivariate skew normal distribution using the rmsn command from the sn package in R. Ideally, I'd like to get three columns of numbers with specified variances and covariances, while having one column strongly skewed. But I'm struggling to achieve both goals simultaneously.
The post at skew normal distribution was related and useful (and the source of some of the code below), but hasn't completely clarified the issue for me.
I've been trying:
a <- c(5, 0, 0) # set shape parameter
s <- diag(3) # create variance-covariance matrix
w <- sqrt(1/(1-((2*(a^2)/(1 + a^2))/pi))) # determine scale parameter to get sd of 1
xi <- w*a/sqrt(1 + a^2)*sqrt(2/pi) # determine location parameter to get mean of 0
apply(rmsn(n=1000, xi=c(xi), Omega=s, alpha=a), 2, sd)
colMeans(rmsn(n=1000, xi=c(xi), Omega=s, alpha=a))
The column means and SDs are correct for the second and third columns (which have no skew) but not the first (which does). Can anyone clarify where my code above, or my thinking, has gone wrong? I may be misunderstanding how to use rmsn, or its output. Any assistance would be appreciated.

The location is not the mean (except when there is no skew). From the documentation:
Notice that the location vector ‘xi’ does not represent the mean vector of the distribution (which in fact may not even exist if ‘df <= 1’), and similarly ‘Omega’ is not the covariance matrix of the distribution.
And you may want to replace Omega=s with an Omega built from w.
And since Omega is supposed to be a variance matrix, there should be no square root: use diag(w^2) rather than diag(w).
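Putting this together, here is a minimal sketch of one way to get mean 0 and sd 1 in every column while keeping the skew in the first. It assumes the direct parameterization used by rmsn; note the minus sign on xi, since the mean of a skew normal is xi + omega*delta*sqrt(2/pi):
library(sn)
a     <- c(5, 0, 0)                        # shape: skew only in column 1
delta <- a / sqrt(1 + sum(a^2))            # delta per column (identity correlation matrix)
omega <- sqrt(1 / (1 - 2 * delta^2 / pi))  # per-column scale giving sd = 1
xi    <- -omega * delta * sqrt(2 / pi)     # location giving mean = 0
Omega <- diag(omega^2)                     # variance matrix: squared scales, no square root
x <- rmsn(n = 1e5, xi = xi, Omega = Omega, alpha = a)
colMeans(x)      # all approximately 0
apply(x, 2, sd)  # all approximately 1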

Centering/standardizing variables in R [duplicate]

I'm trying to understand the definition of scale that R provides. I have data (mydata) that I want to make a heat map with, and there is a VERY strong positive skew. I've created a heatmap with a dendrogram for both scale(mydata) and log(mydata), and the dendrograms are different for both. Why? What does it mean to scale my data, versus log transform my data? And which would be more appropriate if I want to look at the dendrogram illustrating the relationship between the columns of my data?
Thank you for any help! I've read the definitions but they go right over my head.
log simply takes the logarithm (base e, by default) of each element of the vector.
scale, with default settings, will calculate the mean and standard deviation of the entire vector, then "scale" each element by those values by subtracting the mean and dividing by the sd. (If you use scale(x, scale=FALSE), it will only subtract the mean but not divide by the std deviation.)
Note that these two approaches will give you the same values:
set.seed(1)
x <- runif(7)
# Manually scaling
(x - mean(x)) / sd(x)
scale(x)
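For completeness, a quick check that scale(x, scale = FALSE) only subtracts the mean (continuing with the x defined above):
x - mean(x)              # centering by hand
scale(x, scale = FALSE)  # same values, returned as a one-column matrix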
Put differently, scale provides nothing more than a standardization of the data. The values it creates are known under several different names, one of them being z-scores ("Z" because the standard normal distribution is also known as the "Z distribution").
More can be found here:
http://en.wikipedia.org/wiki/Standard_score
This is a late addition, but I was looking for information on the scale function myself and thought it might help somebody else as well.
To modify the response from Ricardo Saporta a little bit: when centering is turned off, scaling is not done using the standard deviation but with the root mean square (at least as of R version 3.6.1). I base this on "Becker, R. (2018). The new S language. CRC Press." and my own experimentation.
X <- rnorm(100)  # any numeric vector, for illustration
X.man.scaled <- X / sqrt(sum(X^2) / (length(X) - 1))  # divide by the root mean square
X.aut.scaled <- scale(X, center = FALSE)
The results of these two lines are exactly the same; I show it without centering for simplicity.
I would respond in a comment but did not have enough reputation.
I thought I would contribute by providing a concrete example of the practical use of the scale function. Say you have 3 test scores (Math, Science, and English) that you want to compare. You may even want to generate a composite score based on each of the 3 tests for each observation. Your data could look like this:
student_id <- seq(1,10)
math <- c(502,600,412,358,495,512,410,625,573,522)
science <- c(95,99,80,82,75,85,80,95,89,86)
english <- c(25,22,18,15,20,28,15,30,27,18)
df <- data.frame(student_id,math,science,english)
Obviously it would not make sense to compare the means of these 3 scores, since the scales of the scores are vastly different. By scaling them, however, you get more comparable scoring units:
z <- scale(df[,2:4],center=TRUE,scale=TRUE)
You could then use these scaled results to create a composite score. For instance, average the values and assign a grade based on the percentiles of this average, as in the sketch below. Hope this helped!
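A minimal sketch of that idea, continuing with the z matrix above (the grading cutoffs are illustrative, not from the book):
score <- rowMeans(z)                  # average the three z-scores per student
perc  <- rank(score) / length(score)  # percentile of each composite score
grade <- cut(perc, c(0, .25, .5, .75, 1), labels = c("D", "C", "B", "A"))
data.frame(student_id, score, grade)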
Note: I borrowed this example from the book "R In Action". It's a great book! Would definitely recommend.

R: How to generate several random variables at once

I want to generate 1000 random variables coming from different normal distributions. I use the function rmvnorm for that, and in a small setting it is easily done, but I have no idea how to automate it, especially for the sigma matrix (I want no correlation between the Xs). I don't really care about their means or their standard deviations. I was thinking of using a loop (e.g. increase the mean by A and the variance by B), but I want something more random and have no idea how to do that. Again, writing down a 1000 x 1000 matrix by hand is not smart (with the condition that the off-diagonal elements are 0).
I have searched online but I am probably not using the rights words so I apologize if it was already asked and answered.
Thanks!
You can pass equal-length vectors for the parameters of rnorm. The first value returned will be a random draw from a normal distribution with a mean equal to the first value in the mean vector and sd equal to the first value in the sd vector:
rnorm(1e3, 1:1e3, 1:1e3)
Not sure what is meant by "I want something more random", but you can use random values for the mean and sd vectors:
rnorm(1e3, runif(1e3)*1e3, 1/rgamma(1e3, 10, 20))
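If you would rather stay with a multivariate call, a sketch with a diagonal covariance matrix does the same thing (this assumes the mvtnorm package; the means and variances here are arbitrary placeholders):
library(mvtnorm)
mu    <- runif(1e3) * 1e3                      # arbitrary random means
sigma <- diag(runif(1e3))                      # diagonal covariance: zero correlation
x     <- rmvnorm(1, mean = mu, sigma = sigma)  # one draw of 1000 uncorrelated variables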

How to estimate a parameter in R - sample with replacement

I have a txt file with numbers that looks like this(but with 100 numbers) -
[1] 7.1652348 5.6665965 4.4757553 4.8497086 15.2276296 -0.5730937
[7] 4.9798067 2.7396933 5.1468304 10.1221489 9.0165661 65.7118194
[13] 5.5205704 6.3067488 8.6777177 5.2528503 3.5039562 4.2477401
[19] 11.4137624 -48.1722034 -0.3764006 5.7647536 -27.3533138 4.0968204
I need to estimate the MLE of the parameter theta for this distribution:
(image of the density f(x|theta) from the original post, not reproduced here)
and I need to estimate theta from a sample of 1000 observations drawn with replacement, save the sample, and draw a histogram.
How can I estimate theta from my sample? I have no information about a normal distribution.
I wrote something like this -
data <- read.table(file.choose(), header = TRUE, sep = "")[, 1]  # read the numbers in as a plain vector
B <- 1000
sample.means <- numeric(B)  # preallocate the result vectors
sample.sd <- numeric(B)
for (i in 1:B) {
  MySample <- sample(data, length(data), replace = TRUE)  # resample with replacement
  sample.means[i] <- mean(MySample)
  sample.sd[i] <- sd(MySample)
}
sd(sample.sd)
but it doesn't work..
This question incorporates multiple different ones, so let's tackle each step by step.
First, you will need to draw a random sample from your population (with replacement). Assuming your 100 population observations sit in a vector named pop:
rs <- sample(pop, 1000, replace = TRUE)
gives you your vector of random samples. If you want to save it, you can write it to disk in multiple formats, so I'll just point to a related question (How to Export/Import Vectors in R?).
In a second step, you can use the mle()-function of the stats4-package (https://stat.ethz.ch/R-manual/R-devel/library/stats4/html/mle.html) and specify the objective function explicitly.
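A rough sketch of that second step (the dnorm call below is only a placeholder; substitute the negative log-likelihood of the f(x|theta) shown in your image):
library(stats4)
# placeholder negative log-likelihood; replace dnorm with your density f(x|theta)
nll <- function(theta) -sum(dnorm(rs, mean = theta, sd = 1, log = TRUE))
fit <- mle(nll, start = list(theta = mean(rs)))
summary(fit)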
However, the second part of your question is more of a statistical/conceptual question than R related, IMO.
Try to understand what MLE actually does. You do not need normally distributed variables. The idea behind MLE is to choose theta in such a way that, under the resulting distribution, the observed random sample is the most probable. Check https://en.wikipedia.org/wiki/Maximum_likelihood_estimation for more details, or some YouTube videos if you'd like a more intuitive approach.
I assume, in the description of your task, it is stated that f(x|theta) is the conditional joint density function and that the observations x are iid?
What you want to do in this case is to select theta such that the squared difference between the observations x and the parameter theta is minimized.
For your statistical understanding, in such cases, it makes sense to perform log-linearization on the equation, instead of dealing with a non-linear function.
Minimizing the squared difference is equivalent to maximizing the log-transformed function, since the sum is negative (<=> the product was in the denominator) and the log, as well as the +1, are monotone transformations.
This leaves you with a maximization problem and its first-order condition (both equations appeared as images in the original answer and are not reproduced here).
Obviously, you would also have to check that you are actually dealing with a maximum via the second-order condition but I'll omit that at this stage for simplicity.
The algorithm in R does nothing other than solve this maximization problem.
Hope this helps for your understanding. Maybe some smarter people can give some additional input.

Finding highest values in a Poisson distribution in R

I do have a script in R that provides me predictions for football matches.
It uses a Poisson distribution formula to find the results that are most likely to happen in a match, and working from them, you can find who wins or loses (if you sum the probabilities for 1-0, 2-0, 2-1, etc., you find the chance of winning for team1, and so on...)
What I need is to identify the 2 highest values inside the Poisson distribution table and their relative "fathers".
I mean, as you can see in the pic (not reproduced here), I should identify 0.08652817 and 0.07346077 and their relative "fathers" (3-2 and 4-2).
So the script should provide something like
1°: 0,0865 (3-2)
2°: 0,073 (4-2)
I tried using
max(match, na.rm=T)
but obviously it just shows 0,0865 and not its "father" (3-2).
I do need the same also for the second highest value.
What should I do?
using the packages dplyr and purrr:
m <- matrix(ncol=7, nrow=7, runif(49)) #Fake data that look like your probabilities
d <- expand.grid(home=1:7, away=1:7) #data.frame with all possible outcomes
d$prob <- purrr::pmap_dbl(d, function(home, away){m[home,away]})
dplyr::top_n(d, 3, prob)
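In base R, a similar lookup for just the top two values could look like this (a sketch, assuming the 7x7 matrix m above; subtract 1 from the indices if your rows and columns correspond to 0-6 goals):
ord <- order(m, decreasing = TRUE)[1:2]  # positions of the two largest probabilities
idx <- arrayInd(ord, dim(m))             # convert linear positions to (row, column) pairs
data.frame(home = idx[, 1], away = idx[, 2], prob = m[ord])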

Is it possible to specify a range for numbers randomly generated by mvrnorm( ) in R?

I am trying to generate a random set of numbers that exactly mirrors a data set that I have (in order to test it). The dataset consists of 5 variables that are all correlated, with different means and standard deviations as well as ranges (they are Likert scales added together to form one variable). I have been able to get mvrnorm from the MASS package to create a dataset that replicates the correlation matrix with the observed number of observations (after 500,000+ iterations), and I can easily reassign means and standard deviations through z-score transformation, but I still have specific values within each variable vector that are far above or below the possible range of the scale whose scores I wish to replicate.
Any suggestions how to fix the range appropriately?
Thank you for sharing your knowledge!
To generate a sample that does "exactly mirror" the original dataset, you need to make sure that the marginal distributions and the dependence structure of the sample matches those of the original dataset.
A simple way to achieve this is with resampling:
my.data <- matrix(runif(1000, -1, 2), nrow = 200, ncol = 5) # Some dummy data
my.ind <- sample(1:nrow(my.data), nrow(my.data), replace = TRUE)
my.sample <- my.data[my.ind, ]
This will ensure that the margins and the dependence structure of the sample (closely) match those of the original data.
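A quick sanity check on the dummy data above (not a formal test):
round(cor(my.data) - cor(my.sample), 2)  # dependence structure is (closely) preserved
range(my.data); range(my.sample)         # the resample never leaves the original range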
An alternative is to use a parametric model for the margins and/or the dependence structure (copula). But as stated by @dickoa, this will require serious modeling effort.
Note that by using a multivariate normal distribution, you are (implicitly) assuming that the dependence structure of the original data is the Gaussian copula. This is a strong assumption, and it would need to be validated beforehand.
