Use of rmvnorm and dmvnorm for two variables in R

I have been trying to apply these functions but I am having some problems.
For one variable (x) I have:
mean <- rnorm(K, mean = mean(x), sd = sd(x))
sigma2 <- rep(sd(x), K)
for (k in 1:K)
{
  f[,k] <- dnorm(x, mu[k], sigma2[k]) ## pdf ##
}
I want to do the same, but now I have a matrix (T) with two variables, x and y.
Could somebody help me, please? I am new to R. Thanks

The mu object is not defined (and neither are K or x), so I'm going to assume your brain skipped a beat and that you really wanted that mu to be what you called mean, which you had defined one line earlier. I'm further going to name it mu in both places, since naming objects after the function that creates them is a bad idea. Your for-loop is entirely unnecessary since dnorm is vectorized:
K <- 100; x <- rnorm(10)
mu <- rnorm(K, mean = mean(x), sd = sd(x))
sigma2 <- sd(x)
f <- dnorm(x, mean = mu, sd = sigma2) ## pdf ##
str(f)
# num [1:100] 0.39342 0.42177 0.00906 0.38493 0.29362 ...
So now you know how to work with dnorm. To make it work with a matrix, column by column, you can do this:
apply(T, 2, dnorm, mean=mu, sd=sigma2)
Your question title said dmvnorm but your code said dnorm, so if you want to use a multivariate density then you need to specify which package you are using and provide quite a bit more detail about your goals.
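For what it's worth, here is a minimal sketch of how the bivariate case might look with the mvtnorm package (an assumption on my part, since the question never says which package is intended; also note that T is a poor object name in R because it aliases TRUE, so Tmat is used instead):

library(mvtnorm)
Tmat <- cbind(x = rnorm(10), y = rnorm(10))    # 10 observations of the two variables
mu2 <- colMeans(Tmat)                          # bivariate mean vector
Sigma <- cov(Tmat)                             # 2x2 covariance matrix
f2 <- dmvnorm(Tmat, mean = mu2, sigma = Sigma) # one joint density value per row of Tmat
str(f2)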

Related

Why, when I evaluate my function at nleqslv(fun)$x, are the results not near 0?

I want to determine the value of z = (z1, ..., zn) such that
z1 = g1(z1,..,zn)
...
zn = gn(z1,..,zn)
for some nonlinear gi(z1,...,zn), 1 <= i <= n. I declared the function:
vecfun <- function(z){
  fun1 <- z[1] - g1(z[1],...,z[n])
  ...
  funn <- z[n] - gn(z[1],...,z[n])
  return(c(fun1, ..., funn))
}
Then I call:
sol <- nleqslv(cond0, vecfun)
where cond0 is my vector of starting values. I get some output different from cond0 (I have also had runs where the output was allegedly the same as cond0, but in reality it wasn't). However, when I do:
vecfun(sol$x)
The results aren't zero, nor anything relatively close. In principle, nleqslv should return the z's that solve the system, i.e. values at which vecfun evaluates to the n-dimensional zero vector (right?). So why doesn't that hold when I try to confirm it?
Any help is highly appreciated, as well as alternative methods for finding roots of vector functions.
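One generic sanity check, sketched here with a made-up two-equation system since the real gi aren't shown: nleqslv reports whether it actually converged in $termcd and $message, so inspect those before trusting $x.

library(nleqslv)
# toy system: z1 = cos(z2), z2 = sin(z1)
vecfun <- function(z) {
  c(z[1] - cos(z[2]),
    z[2] - sin(z[1]))
}
sol <- nleqslv(c(0.5, 0.5), vecfun)  # first argument is the vector of starting values
sol$termcd    # 1 means convergence of the function values; anything else signals a problem
sol$message   # human-readable termination report
vecfun(sol$x) # only near zero when termcd == 1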

Issue with the dimension of matrix being optimised in R

I am attempting to calculate some weights in order to perform an indirect treatment comparison using R. I have altered some code slightly to reflect that I am only centring the mean; however, this code will not run.
I believe this is due to a dimension mismatch between a1 (two elements) and the matrix (one column), but I really can't work out how to change this. I have tried adding a column of zeros and ones to the matrix, but I'm not sure this would give a correct result.
Of course, this may not be the issue at all, but I fail to see what else could be causing it. I have included the code below, and any advice would be appreciated.
# Objective function
objfn <- function(a1, X){
  sum(exp(X %*% a1))
}
# Gradient function
gradfn <- function(a1, X){
  colSums(sweep(X, 1, exp(X %*% a1), "*"))
}
X.EM.0 <- data$A - age.mean
# Estimate weights
print(opt1 <- optim(par = c(0,0), fn = objfn, gr = gradfn, X = X.EM.0, method = "BFGS"))
a1 <- opt1$par
Such a simple solution, I'm slightly embarrassed to have posted this.
par = c(0,0) should be altered to match the number of columns in X. Here, with a single column, it should be a single starting value, e.g. par = 0.
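To illustrate, here is a sketch with made-up stand-ins for data$A and age.mean (the real data isn't shown); with a one-column X, a single starting value suffices:

# hypothetical stand-in data for illustration only
set.seed(1)
data <- data.frame(A = rnorm(20, mean = 50, sd = 5))
age.mean <- 45
X.EM.0 <- as.matrix(data$A - age.mean)  # a one-column matrix
# par now has one element, matching the single column of X.EM.0
print(opt1 <- optim(par = 0, fn = objfn, gr = gradfn, X = X.EM.0, method = "BFGS"))
a1 <- opt1$par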

Find out which percentile a number has [duplicate]

Using R, it is trivial to calculate the quantiles for given probabilities in a sampled distribution:
x <- rnorm(1000, mean=4, sd=2)
quantile(x, .9) # results in 6.705755
However, I can't find an easy way to do the inverse: calculate the probability for a given quantile in the sample x. The closest I've come is to use pnorm() with the same mean and standard deviation I used when creating the sample:
pnorm(5, mean=4, sd=2) # results in 0.6914625
However, because this is calculating the probability from the full normal distribution, and not the sample x, it's not entirely accurate.
Is there a function that does the inverse of quantile()? Something that lets me do the same thing as pnorm(), but with a sample? Something like this:
backwards_quantile(x, 5)
I've found the ecdf() function, but can't figure out a way to make it result in a single probability instead of a full equation object.
ecdf returns a function: you need to apply it.
f <- ecdf(x)
f(quantile(x, .91))
# Equivalently:
ecdf(x)(quantile(x, .91))
Just for convenience, this function helps:
quantInv <- function(distr, value) ecdf(distr)(value)
set.seed(1)
x <- rnorm(1000, mean=4, sd=2)
quantInv(x, c(4, 5, 6.705755))
[1] 0.518 0.685 0.904
You more or less have the answer yourself. When you want to write
backwards_quantile(x, 5)
just write
ecdf(x)(5)
This corresponds to the inverse of quantile() with type=1. However, if you want other types (I favour the NIST standard, corresponding to Excel's Percentile.exc, which is type=6), you have more work to do.
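A quick round-trip check makes the type=1 correspondence concrete:

set.seed(1)
x <- rnorm(1000, mean = 4, sd = 2)
p <- ecdf(x)(5)           # empirical probability of a value <= 5
quantile(x, p, type = 1)  # recovers the largest sample value at or below 5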
For those other types, consider what use you are going to put it to. If all you want is to plot it, for instance, then consider
yVals <- seq(0, 1, 0.01)
plot(quantile(x, yVals, type = 6))
But if you want the inverse for a single value, like 5, then you need to write a solving function to find the P that makes
quantile(x,P,type=6) = 5
For instance this, which uses binary search between the extreme values of x:
inverse_quantile <- function(x, y, d = 0.01, type = 1) {
  A <- min(x)
  B <- max(x)
  # number of halving steps needed to pin the value down to precision d
  k <- ceiling(log((B - A) / d) / log(2)) + 1
  P <- 0.5
  for (i in 1:k) {
    # move P up or down by a step that halves at each iteration
    P <- P + ifelse(quantile(x, P, type = type) < y, 2^(-i - 1), -2^(-i - 1))
  }
  P
}
So if you wanted the type 4 inverse quantile of your set x for the number 5, with precision 0.00001, then you would call
inverse_quantile(x, 5, d = 0.00001, type = 4)
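Feeding the recovered probability back into quantile() is a quick sanity check:

p <- inverse_quantile(x, 5, d = 0.00001, type = 4)
quantile(x, p, type = 4)  # should be approximately 5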

Error in Gradient Descent Calculation

I tried to write a function to calculate gradient descent for a linear regression model. However, the answers I get do not match the answers from the normal equation method.
My sample data is:
df <- data.frame(c(1,5,6),c(3,5,6),c(4,6,8))
with c(4,6,8) being the y values.
lm_gradient_descent <- function(df, learning_rate, y_col = length(df), scale = TRUE){
  n_features <- length(df) # n_features is the number of features in the data set
  # using mean normalization to scale features
  if(scale == TRUE){
    for (i in 1:n_features){
      df[,i] <- (df[,i] - mean(df[,i])) / sd(df[,i])
    }
  }
  y_data <- df[,y_col]
  df[,y_col] <- NULL
  par <- rep(1, n_features)
  df <- merge(1, df)
  data_mat <- data.matrix(df)
  # we need a temp_arr to store each iteration of parameter values so that we can do a
  # simultaneous update
  temp_arr <- rep(0, n_features)
  diff <- 1
  while(diff > 0.0000001){
    for (i in 1:n_features){
      temp_arr[i] <- par[i] - learning_rate * sum((data_mat %*% par - y_data) * df[,i]) / length(y_data)
    }
    diff <- par[1] - temp_arr[1]
    print(diff)
    par <- temp_arr
  }
  return(par)
}
Running this function,
lm_gradient_descent(df,0.0001,,0)
the results I got were
c(0.9165891,0.6115482,0.5652970)
When I use the normal equation method, I get
c(2,1,0).
Hope someone can shed some light on where I went wrong in this function.
You used the stopping criterion
old parameters - new parameters <= 0.0000001
First of all I think there's an abs() missing if you want to use this criterion (though my ignorance of R may be at fault).
But even if you use
abs(old parameters - new parameters) <= 0.0000001
this is not a good stopping criterion: it only tells you that progress has slowed down, not that the result is already sufficiently accurate. Instead, try simply iterating for a fixed number of iterations, as sketched below. Unfortunately, it's not easy to give a good, generally applicable stopping criterion for gradient descent here.
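A minimal sketch of that fixed-iteration variant, written as a drop-in replacement for the while loop inside your function (n_iter is an illustrative value, not tuned):

n_iter <- 10000
for (iter in 1:n_iter) {
  for (i in 1:n_features) {
    temp_arr[i] <- par[i] - learning_rate * sum((data_mat %*% par - y_data) * df[,i]) / length(y_data)
  }
  par <- temp_arr  # simultaneous update after all components are computed
}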
It seems that you have not implemented a bias term. In a linear model like this, you always want to have an additional additive constant, i.e., your model should be like
w_0 + w_1*x_1 + ... + w_n*x_n.
Without the w_0 term, you usually won't get a good fit.
I know this is a couple of weeks old at this point, but I'm going to take a stab at it for several reasons, namely:
1. I'm relatively new to R, so deciphering your code and rewriting it is good practice for me
2. I'm working on a different gradient descent problem, so this is all fresh to me
3. I need the Stack Overflow points, and
4. as far as I can tell you never got a working answer.
First, regarding your data structures: you start with a data frame, rename a column, strip out a vector, then strip out a matrix. It would be a lot easier to just start with an X matrix (capitalized since its component 'features' are referred to as x_i) and a y solution vector.
X <- cbind(c(1,5,6),c(3,5,6))
y <- c(4,6,8)
We can easily see what the desired solutions are, with and without scaling, by fitting a linear model. (NOTE: we only scale X/features, not y/solutions.)
> lm(y~X)
Call:
lm(formula = y ~ X)
Coefficients:
(Intercept) X1 X2
-4 -1 3
> lm(y~scale(X))
Call:
lm(formula = y ~ scale(X))
Coefficients:
(Intercept) scale(X)1 scale(X)2
6.000 -2.646 4.583
With regard to your code, one of the beauties of R is that it can perform matrix multiplication, which is significantly faster than using loops.
lm_gradient_descent <- function(X, y, learning_rate, scale = TRUE){
  if(scale == TRUE){ X <- scale(X) }
  X <- cbind(1, X)          # add the bias/intercept column
  theta <- rep(0, ncol(X))  # your old temp_arr
  diff <- 1
  old.error <- sum((X %*% theta - y)^2) / (2 * length(y))
  while(diff > 0.000000001){
    # vectorized simultaneous update of all parameters
    theta <- theta - learning_rate * t(X) %*% (X %*% theta - y) / length(y)
    new.error <- sum((X %*% theta - y)^2) / (2 * length(y))
    diff <- abs(old.error - new.error)
    old.error <- new.error
  }
  return(theta)
}
And to show it works...
> lm_gradient_descent(X, y, .01, 0)
[,1]
[1,] -3.9360685
[2,] -0.9851775
[3,] 2.9736566
vs expected of (-4, -1, 3)
For what it's worth, while I agree with @cfh that I would prefer a loop with a defined number of iterations, I'm actually not sure you need the abs() function: if diff < 0, your function is not converging.
Finally, rather than using something like old.error and new.error, I'd suggest using a vector that records all errors. You can then plot that vector to see how quickly your function converges.
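Here is a sketch combining both suggestions, a fixed iteration count plus an error trace (n_iter and learning_rate are illustrative values, not tuned):

lm_gd_trace <- function(X, y, learning_rate = 0.01, n_iter = 5000) {
  X <- cbind(1, scale(X))
  theta <- rep(0, ncol(X))
  errors <- numeric(n_iter)  # cost recorded at the start of every iteration
  for (i in 1:n_iter) {
    residuals <- X %*% theta - y
    errors[i] <- sum(residuals^2) / (2 * length(y))
    theta <- theta - learning_rate * t(X) %*% residuals / length(y)
  }
  list(theta = theta, errors = errors)
}
fit <- lm_gd_trace(X, y)
plot(fit$errors, type = "l", xlab = "iteration", ylab = "cost")  # convergence curve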

Does Cattell's profile similarity coefficient (Rp) exist as a function in R?

I'm comparing different measures of distance and similarity for vector profiles (subtest results) in R. Most of them are easy to compute and/or exist in dist().
Unfortunately, one that might be interesting is too difficult for me to calculate myself: Cattell's Rp. I cannot find it in R.
Does anybody know if this exists already?
Or can you help me write a function?
The formula (Cattell 1994) for Rp is this:
(2k - d^2) / (2k + d^2)
where:
k is the median of chi square for a sample of size n;
d is the sum of the (weighted by m) differences between the two profiles,
something like: sum(m*(x_i - y_i));
One thing I don't know is how to get the chi-square median in there.
Thank you
What I get without defining k is:
Rp.Cattell <- function(x, y){ z <- (2*k - (sum(x-y))^2) / (2*k + (sum(x-y))^2); return(z) }
Example vectors are:
x <- c(-1.2357,-1.1999,-1.4727,-0.3915,-0.2547,-0.4758)
y <- c(0.7785,0.9357,0.7165,-0.6067,-0.4668,-0.5925)
They are measured by the same device, but relate to different body parts. They don't need to be standardized or weighted, I would say.
This page gives a general formula for k, and then a more thorough method using SAS/IML which gives pretty much the same results. So I used the general formula and added the calculation of degrees of freedom, which leads to this:
Rp.Cattell <- function(x, y) {
  dof <- (2-1) * (length(y)-1)
  k <- (1 - 2/(9*dof))^3
  z <- (2*k - sum(sum(x-y))^2) / (2*k + sum(sum(x-y))^2)
  return(z)
}
x <- c(-1.2357,-1.1999,-1.4727,-0.3915,-0.2547,-0.4758)
y <- c(0.7785,0.9357,0.7165,-0.6067,-0.4668,-0.5925)
Rp.Cattell(x, y)
# [1] -0.9012083
Does this figure appear to make sense?
Trying to verify the function, I have now found out that the median of chi-square is the chi-square value at 50% probability, i.e. what you would expect at random. So the function should be:
Rp.Cattell <- function(x, y){
  dof <- (2-1) * (length(y)-1)
  k <- qchisq(.50, df = dof)
  z <- (2*k - (sum(x-y))^2) / (2*k + (sum(x-y))^2)
  return(z)
}
It is necessary, though, to standardize the values beforehand so that the results are distributed correctly.
So:
library ("stringr")
# they are centered already
x <- as.vector(scale(c(-1.2357,-1.1999,-1.4727,-0.3915,-0.2547,-0.4758),center=F, scale=T))
y <- as.vector(scale(c(0.7785,0.9357,0.7165,-0.6067,-0.4668,-0.5925),center=F, scale=T))
Rp.Cattell(x, y)
# [1] -0.584423
This sounds reasonable now - or not?
I think the calculation of z is incorrect.
You need to calculate the sum of the squared differences, not the square of the sum of differences. Besides, the multiplication operator is missing in 2k.
It should be
z <- (2*k-sum((x-y)^2))/(2*k+sum((x-y)^2))
Do you agree?
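For reference, here is a sketch that folds the fixes from this thread together (qchisq-based k, sum of squared differences, explicit multiplication); this is my reading of the discussion, not a canonical implementation:

Rp.Cattell <- function(x, y) {
  dof <- (2 - 1) * (length(y) - 1)  # degrees of freedom, as above
  k <- qchisq(0.50, df = dof)       # median of the chi-square distribution
  d2 <- sum((x - y)^2)              # sum of squared differences
  (2 * k - d2) / (2 * k + d2)
}
x <- as.vector(scale(c(-1.2357,-1.1999,-1.4727,-0.3915,-0.2547,-0.4758), center = FALSE))
y <- as.vector(scale(c(0.7785,0.9357,0.7165,-0.6067,-0.4668,-0.5925), center = FALSE))
Rp.Cattell(x, y)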
