Mahalanobis distance in R

I have found the mahalanobis.dist function in the StatMatch package (http://cran.r-project.org/web/packages/StatMatch/StatMatch.pdf), but it isn't doing exactly what I want: it seems to calculate the Mahalanobis distance from each observation in data.y to each observation in data.x.
I would like to calculate the Mahalanobis distance of one observation in data.y to all observations in data.x, i.e. the distance of a single point to a "cloud" of points. This gets at the idea of the probability that an observation belongs to another group of observations.
This person (http://people.revoledu.com/kardi/tutorial/Similarity/MahalanobisDistance.html) seems to be doing exactly that, and I've tried to replicate his process in R, but it fails when I get to the bottom part of the equation:
mahaldist = sqrt((inversepooledcov %*% t(meandiffmatrix)) %*% meandiffmatrix)
All the code I am working with is here:
a = rbind(c(2,2), c(2,5), c(6,5),c(7,3))
colnames(a) = c('x', 'y')
b = rbind(c(6,5),c(3,4))
colnames(b) = c('x', 'y')
acov = cov(a)
bcov = cov(b)
meandiff1 = mean(a[,1]) - mean(b[,1])
meandiff2 = mean(a[,2]) - mean(b[,2])
meandiffmatrix = rbind(c(meandiff1,meandiff2))
totaldata = dim(a)[1] + dim(b)[1]
pooledcov = (dim(a)[1]/totaldata * acov) + (dim(b)[1]/totaldata * bcov)
inversepooledcov = solve(pooledcov)
mahaldist = sqrt((inversepooledcov %*% t(meandiffmatrix)) %*% meandiffmatrix)

How about using the mahalanobis function in the stats package:
mahalanobis(x, center, cov, inverted = FALSE, ...)
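For the "one point to a cloud" case you can pass the cloud's mean and covariance directly; note that mahalanobis() returns squared distances. A minimal sketch using the asker's a and b:
# squared Mahalanobis distance of each row of b to the "cloud" a,
# measured against a's mean and covariance
a <- rbind(c(2, 2), c(2, 5), c(6, 5), c(7, 3))
b <- rbind(c(6, 5), c(3, 4))
d2 <- mahalanobis(b, center = colMeans(a), cov = cov(a))
sqrt(d2)  # Mahalanobis distances of the two points in b to the cloud a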

I've been trying this out from the same website that you looked at and then stumbled upon this question. I managed to get the script to work, but I get a different result.
# WORKING EXAMPLE
# MAHALANOBIS DISTANCE OF TWO MATRICES
# define matrices
mat1 <- matrix(data = c(2,2,6,7,4,6,5,4,2,1,2,5,5,3,7,4,3,6,5,3), nrow = 10)
mat2 <- matrix(data = c(6,7,8,5,5,5,4,7,6,4), nrow = 5)
# center data
mat1.1 <- scale(mat1, center = TRUE, scale = FALSE)
mat2.1 <- scale(mat2, center = TRUE, scale = FALSE)
# covariance matrices
mat1.2 <- cov(mat1.1, method = "pearson")
mat2.2 <- cov(mat2.1, method = "pearson")
n1 <- nrow(mat1)
n2 <- nrow(mat2)
n3 <- n1 + n2
# pooled covariance matrix
mat3 <- ((n1/n3) * mat1.2) + ((n2/n3) * mat2.2)
# inverse of the pooled covariance matrix
mat4 <- solve(mat3)
# mean difference vector
mat5 <- as.matrix(colMeans(mat1) - colMeans(mat2))
# multiply: t(mean diff) %*% inverse pooled covariance
mat6 <- t(mat5) %*% mat4
# multiply by the mean difference and take the square root
sqrt(mat6 %*% mat5)
I think the function mahalanobis() computes Mahalanobis distances between individuals (rows) within one matrix. The function pairwise.mahalanobis() from the HDMD package can compare two or more matrices and gives the Mahalanobis distances between them.
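A hedged sketch of that usage (the argument names are assumed from the HDMD documentation, so check ?pairwise.mahalanobis): stack the two matrices and label the rows with a grouping vector.
# assumed usage: pairwise Mahalanobis distances between group centroids
library(HDMD)
xy  <- rbind(a, b)                          # a and b from the question above
grp <- c(rep(1, nrow(a)), rep(2, nrow(b)))  # group labels for the stacked rows
pairwise.mahalanobis(xy, grouping = grp)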

You can wrap the function stats::mahalanobis as below to output a Mahalanobis distance matrix (pairwise Mahalanobis distances):
# x  - data frame
# cx - covariance matrix; if not provided,
#      it will be estimated from the data
mah <- function(x, cx = NULL) {
  if (is.null(cx)) cx <- cov(x)
  out <- lapply(1:nrow(x), function(i) {
    mahalanobis(x = x,
                center = do.call("c", x[i, ]),
                cov = cx)
  })
  return(as.dist(do.call("rbind", out)))
}
Then, you can cluster your data and plot it, for example:
# Dummy data
x <- data.frame(X = c(rnorm(10, 0), rnorm(10, 5)),
                Y = c(rnorm(10, 0), rnorm(10, 7)),
                Z = c(rnorm(10, 0), rnorm(10, 12)))
rownames(x) <- LETTERS[1:20]
plot(x, pch = LETTERS[1:20])
# Compute the Mahalanobis distance matrix
d <- mah(x)
d
# Cluster and plot
hc <- hclust(d)
plot(hc)

Your output before taking the square root is:
inversepooledcov %*% t(meandiffmatrix) %*% meandiffmatrix
[,1] [,2]
x -0.004349227 -0.01304768
y 0.114529639 0.34358892
I think you can't take the square root of negative numbers, so you get NaN for the negative elements:
sqrt(inversepooledcov %*% t(meandiffmatrix) %*% meandiffmatrix)
[,1] [,2]
x NaN NaN
y 0.3384223 0.5861646
Warning message:
In sqrt(inversepooledcov %*% t(meandiffmatrix) %*% meandiffmatrix) :
NaNs produced
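The likely intended computation is the scalar quadratic form, with the row vector of mean differences on the left and its transpose on the right; a hedged sketch using the objects defined in the question:
# quadratic form (1x2) %*% (2x2) %*% (2x1) gives a single non-negative number
mahaldist <- sqrt(meandiffmatrix %*% inversepooledcov %*% t(meandiffmatrix))
mahaldist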

Mahalanobis distance is equivalent to Euclidean distance when the covariance matrix is the identity (and squared Mahalanobis to squared Euclidean). If there is covariance between your variables, you can make the squared Mahalanobis and squared Euclidean distances equal by whitening the matrix first to remove the covariance. I.e., do:
#X is your matrix
if (!require("whitening")) install.packages("whitening")
X <- whitening::whiten(X) # default is ZCA (Mahalanobis) whitening
X_dist <- dist(X, diag = T, method = "euclidean")^2
You can confirm that this gives you the same distance matrix as the code that Davit provided in one of the previous answers.

There is a very easy way to do it using the R package biotools. In this case you will get a squared Mahalanobis distance matrix.
# Manly (2004, p. 65-66)
x1 <- c(131.37, 132.37, 134.47, 135.50, 136.17)
x2 <- c(133.60, 132.70, 133.80, 132.30, 130.33)
x3 <- c(99.17, 99.07, 96.03, 94.53, 93.50)
x4 <- c(50.53, 50.23, 50.57, 51.97, 51.37)
# means, size (n x p)
x <- cbind(x1, x2, x3, x4)
# variances and covariances, size (p x p)
Cov <- matrix(c(21.112, 0.038, 0.078, 2.01,
                0.038, 23.486, 5.2, 2.844,
                0.078, 5.2, 24.18, 1.134,
                2.01, 2.844, 1.134, 10.154), 4, 4)
library(biotools)
Mahalanobis_Distance <- D2.dist(x, Cov)
print(Mahalanobis_Distance)

You can now calculate Mahalanobis distance through the metan package; see the functions mahala() and mahala_design() in the package documentation.

Related

Generating 3D data with cube as a decision surface

I am new to using R. My task is to use R to create a function that simulates a standard normal distribution containing 500 observations and three variables, x, y, and z.
I am to use a cube as a decision surface to categorize observations based on whether they fall within or outside the cube.
Below is my code. I am able to plot the 3D data, but I am not sure how to categorize the datasets into two classes.
library(scatterplot3d)
set.seed(1234)
nObs <- 500
x <- matrix(rnorm(1.25 * nObs), ncol = 2)
y <- matrix(rnorm(1.25 * nObs), ncol = 2)
z <- matrix(rnorm(1.25 * nObs), ncol = 2)
mSample <- function(nObs, x, y, z) {
  x1 <- rnorm(1, x)
  x1[y == 1, ] <- x[y == 1, ] + 1
  mSample <- as_tibble(rbind(mvnfast::rmvn(x, y = y1, z = z1), mvnfast::rmvn(x, y = y1, z = z1)))
  mSample[1:x1, 1.25] <- 0
  mSample[(x1 + 1):(x1 + 1), 1.25] <- 1
  mSample <- mSample[sample(nrow(mSample)), ]
  colnames(mSample <- c("x", "y", "class"))
  mSample
}
spl <- scatterplot3d(x, y, z)
spl <- scatterplot3d(x, y, z, pch = 16, highlight.3d = TRUE)
I had a similar question to this recently. Basically, to know if a given point is inside or outside of a cube, first you need to know the length of the cube.
Then, simply iterate over all the points (nObs) and apply an if statement like this:
if (x > -cubeLength && x < cubeLength &&
    y > -cubeLength && y < cubeLength &&
    z > -cubeLength && z < cubeLength) {
  # classify positive
} else {
  # classify negative
}
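A hedged, vectorised version of the same test, assuming the coordinates of the nObs points are stored in numeric vectors x, y and z and that cubeLength is half the side of a cube centred at the origin (as in the condition above):
# TRUE where a point falls inside the cube, FALSE otherwise
inside <- abs(x) < cubeLength & abs(y) < cubeLength & abs(z) < cubeLength
class  <- ifelse(inside, "positive", "negative")
table(class)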

How to make out-of-sample forecasts with Dynamic Linear Model of MARSS package?

I'm trying to understand how to use Dynamic Linear Modeling for forecasting. I found an example of the DLM functionality of the MARSS package in R being used for forecasting. Below is all the code in the example, starting with loading the data and ending with creating the in-sample forecasts.
What I don't understand is how I would make an out-of-sample forecast? The code below generates "in-sample" forecasts, where it uses already-known information to generate predictions about already-existing data.
Say I want to forecast the Salmon Survival tomorrow rather than throughout the last several weeks. How would I do that?
Any help would be appreciated.
# load the data
data(SalmonSurvCUI, package = "MARSS")
# get time indices
years <- SalmonSurvCUI[, 1]
# number of years of data
TT <- length(years)
# get response variable: logit(survival)
dat <- matrix(SalmonSurvCUI[, 2], nrow = 1)
# get predictor variable
CUI <- SalmonSurvCUI[, 3]
## z-score the CUI
CUI.z <- matrix((CUI - mean(CUI))/sqrt(var(CUI)), nrow = 1)
# number of regr params (slope + intercept)
m <- dim(CUI.z)[1] + 1
# for process eqn
B <- diag(m) ## 2x2; Identity
U <- matrix(0, nrow = m, ncol = 1) ## 2x1; both elements = 0
Q <- matrix(list(0), m, m) ## 2x2; all 0 for now
diag(Q) <- c("q.alpha", "q.beta") ## 2x2; diag = (q1,q2)
# for observation eqn
Z <- array(NA, c(1, m, TT)) ## NxMxT; empty for now
Z[1, 1, ] <- rep(1, TT) ## Nx1; 1's for intercept
Z[1, 2, ] <- CUI.z ## Nx1; predictor variable
A <- matrix(0) ## 1x1; scalar = 0
R <- matrix("r") ## 1x1; scalar = r
# only need starting values for regr parameters
inits.list <- list(x0 = matrix(c(0, 0), nrow = m))
# list of model matrices & vectors
mod.list <- list(B = B, U = U, Q = Q, Z = Z, A = A, R = R)
# fit univariate DLM
dlm1 <- MARSS(dat, inits = inits.list, model = mod.list)
# get list of Kalman filter output
kf.out <- MARSSkfss(dlm1)
## forecasts of regr parameters; 2xT matrix
eta <- kf.out$xtt1
## ts of E(forecasts)
fore.mean <- vector()
for (t in 1:TT) {
  fore.mean[t] <- Z[, , t] %*% eta[, t, drop = FALSE]
}
# variance of regr parameters; 1x2xT array
Phi <- kf.out$Vtt1
## obs variance; 1x1 matrix
R.est <- coef(dlm1, type = "matrix")$R
## ts of Var(forecasts)
fore.var <- vector()
for (t in 1:TT) {
  tZ <- matrix(Z[, , t], m, 1) ## transpose of Z
  fore.var[t] <- Z[, , t] %*% Phi[, , t] %*% tZ + R.est
}
The model for beta and alpha is a random walk without drift, so the prediction of beta(TT+k) and alpha(TT+k) will just be beta(TT) and alpha(TT), where TT is the last time step in the data (in this case, the last time step of CUI.z).
So your prediction is
logit.survival(TT+k) = alpha(TT) + beta(TT)*CUI.z(TT+k)
alpha(TT) and beta(TT) would be output via kf.out$xtT[,TT], i.e. last state estimate. You will need to provide a CUI.z at t=TT+k.
MARSS version 3.11.0 will have a predict function that outputs these predictions along with prediction intervals, but the release date is sometime in late summer 2020. The functionality is on the GitHub development site (under the resids_update branch), but final testing is still being done.
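A minimal sketch of that prediction using the objects from the code above; CUI.z.new is a hypothetical placeholder for the z-scored CUI value at t = TT + 1:
# out-of-sample forecast at t = TT + 1 from the last state estimates
kf.out    <- MARSSkfss(dlm1)
alpha.TT  <- kf.out$xtT[1, TT]  # last estimate of the intercept
beta.TT   <- kf.out$xtT[2, TT]  # last estimate of the slope
CUI.z.new <- 0                  # replace with the real z-scored CUI at TT + 1
logit.surv.fore <- alpha.TT + beta.TT * CUI.z.new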

How to generate a probability density function and expectation in r?

The task:
Eric the fly has a friend, Ernie. Assume that the two flies sit at independent locations, uniformly distributed on the globe’s surface. Let D denote the Euclidean distance between Eric and Ernie (i.e., on a straight line through the interior of the globe).
Make a conjecture about the probability density function of D and give an
estimate of its expected value, E(D).
So far I have made a function to generate two points on the globe's surface, but I am unsure what to do next:
sample3d <- function(n)
{
  df <- data.frame()
  while (n > 0) {
    x <- runif(1, -1, 1)
    y <- runif(1, -1, 1)
    z <- runif(1, -1, 1)
    r <- x^2 + y^2 + z^2
    if (r < 1) {
      u <- sqrt(x^2 + y^2 + z^2)
      vector <- data.frame(x = x/u, y = y/u, z = z/u)
      df <- rbind(vector, df)
      n <- n - 1
    }
  }
  df
}
E <- sample3d(2)
This is an interesting problem. I'll outline a computational approach; I'll leave the math up to you.
First we fix a random seed for reproducibility.
set.seed(2018);
We sample 10^4 points from the unit sphere surface.
sample3d <- function(n = 100) {
  df <- data.frame()
  while (n > 0) {
    x <- runif(1, -1, 1)
    y <- runif(1, -1, 1)
    z <- runif(1, -1, 1)
    r <- x^2 + y^2 + z^2
    if (r < 1) {
      u <- sqrt(x^2 + y^2 + z^2)
      vector <- data.frame(x = x/u, y = y/u, z = z/u)
      df <- rbind(vector, df)
      n <- n - 1
    }
  }
  df
}
df <- sample3d(10^4);
Note that sample3d is not very efficient, but that's a different issue.
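For what it's worth, here is a hedged, rejection-free alternative (not part of the original answer): normalising i.i.d. standard normal draws also gives points uniformly distributed on the sphere.
# vectorised sketch: uniform points on the unit sphere without rejection
sample3d_fast <- function(n) {
  m <- matrix(rnorm(3 * n), ncol = 3)
  setNames(as.data.frame(m / sqrt(rowSums(m^2))), c("x", "y", "z"))
}
df <- sample3d_fast(10^4)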
We now randomly sample 2 points from df, calculate the Euclidean distance between those two points (using dist), and repeat this procedure N = 10^4 times.
# Sample 2 points randomly from df, repeat N times
N <- 10^4;
dist <- replicate(N, dist(df[sample(1:nrow(df), 2), ]));
As pointed out by #JosephWood, the number N = 10^4 is somewhat arbitrary. We are using a bootstrap to derive the empirical distribution. For N -> infinity one can show that the empirical bootstrap distribution is the same as the (unknown) population distribution (Bootstrap theorem). The error term between empirical and population distribution is of the order 1/sqrt(N), so N = 10^4 should lead to an error around 1%.
We can plot the resulting probability distribution as a histogram:
# Let's plot the distribution
library(ggplot2);
ggplot(data.frame(x = dist), aes(x)) + geom_histogram(bins = 50);
Finally, we can get empirical estimates for the mean and median.
# Mean
mean(dist);
#[1] 1.333021
# Median
median(dist);
#[1] 1.41602
These values are close to the theoretical values:
mean.th = 4/3
median.th = sqrt(2)
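As a further hedged check (not part of the original answer): one candidate density consistent with those values is f(d) = d/2 on [0, 2], and overlaying it on a density-scaled histogram gives a quick visual comparison.
# overlay the conjectured density f(d) = d/2 on a density histogram of dist
ggplot(data.frame(x = dist), aes(x)) +
  geom_histogram(aes(y = after_stat(density)), bins = 50) +
  stat_function(fun = function(d) d / 2, colour = "red", xlim = c(0, 2))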

Example for dimension reduction (SVD vs Random Projection) in R

I am learning about dimension reduction techniques in R. I take one image as input and reduce its dimension using SVD with this code:
library(raster)
img <- raster("C:/Users/***/Pictures/pansy.jpg")
img_flip <- flip(img, direction = "y")
img <- t(as.matrix(img_flip))
dim(img)
image(img,col=grey(seq(0,1,length=256))) # Actual
img_svd <- svd(img)
u <- img_svd$u
d <- diag(img_svd$d)
v <- img_svd$v
u1 <- as.matrix(u[, 1:50])
d1 <- as.matrix(d[1:50, 1:50])
v1 <- as.matrix(v[, 1:50])
photo1 <- u1 %*% d1 %*% t(v1)
image(photo1, col = grey(seq(0, 1, length = 256))) #Reduced
After that, I got output like this.
Then I learned about random projection, to compare it with SVD, and I followed the steps below for the same image:
Convert the image into a matrix [465 x 600]
Create a random matrix of size [600 x 300] filled with +1 and -1 with equal probability
Multiply the two matrices to get a matrix of size [465 x 300]
Code:
rp_img <- raster("C:/Users/***/Pictures/pansy.jpg")
img_flip <- flip(rp_img, direction = "y")
rp_img <- t(as.matrix(img_flip))
rm <- form_sparse_matrix(600,n_cols = ncol(rp_img),TRUE,0.5,method = "probability") # To create a random matrix with +1 and -1 of equal probability
photo2 <- rp_img %*% rm
image(photo2, col = grey(seq(0, 1, length = 256)))
After that, I view the image, and it shows like below.
Are my steps for performing random projection correct or wrong? Where have I made a mistake?
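As an aside, the ±1 matrix described in step 2 can be built in base R without an extra package; a hedged sketch (form_sparse_matrix() above comes from whichever random-projection package the asker is using):
# step 2: a 600 x 300 matrix of +1 and -1 with equal probability
set.seed(1)
rm_pm1 <- matrix(sample(c(-1, 1), 600 * 300, replace = TRUE), nrow = 600, ncol = 300)
# step 3 (sketch): photo2 <- rp_img %*% rm_pm1   # 465 x 600 times 600 x 300 -> 465 x 300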

Adding two random variables via convolution in R

I would like to compute the convolution of two probability distributions in R and I need some help. For the sake of simplicity, let's say I have a variable x that is normally distributed with mean = 1.0 and stdev = 0.5, and y that is log-normally distributed with mean = 1.5 and stdev = 0.75. I want to determine z = x + y. I understand that the distribution of z is not known a priori.
As an aside, the real-world example I am working with requires adding two random variables that are distributed according to a number of different distributions.
Does anyone know how to add two random variables by convolving the probability density functions of x and y?
I have tried generating n normally distributed random values (with above parameters) and adding them to n log-normally distributed random values. However, I wish to know if I can use the convolution method instead. Any help would be greatly appreciated.
EDIT
Thank you for these answers. I define a pdf and try to do the convolution integral, but R complains at the integration step. My pdfs are Log Pearson Type III and are as follows:
dlp3 <- function(x, a, b, g) {
  p1 <- 1 / (x * abs(b) * gamma(a))
  p2 <- ((log(x) - g) / b)^(a - 1)
  p3 <- exp(-1 * (log(x) - g) / b)
  d <- p1 * p2 * p3
  return(d)
}
f.m <- function(x) dlp3(x,3.2594,-0.18218,0.53441)
f.s <- function(x) dlp3(x,9.5645,-0.07676,1.184)
f.t <- function(z) integrate(function(x,z) f.s(z-x)*f.m(x),-Inf,Inf,z)$value
f.t <- Vectorize(f.t)
integrate(f.t, lower = 0, upper = 3.6)
R complains at the last step since the f.t function is bounded and my integration limits are probably not correct. Any ideas on how to solve this?
Here is one way.
f.X <- function(x) dnorm(x,1,0.5) # normal (mu=1, sigma=0.5)
f.Y <- function(y) dlnorm(y,1.5, 0.75) # log-normal (mu=1.5, sigma=0.75)
# convolution integral
f.Z <- function(z) integrate(function(x,z) f.Y(z-x)*f.X(x),-Inf,Inf,z)$value
f.Z <- Vectorize(f.Z) # need to vectorize the resulting fn.
set.seed(1) # for reproducible example
X <- rnorm(1000,1,0.5)
Y <- rlnorm(1000,1.5,0.75)
Z <- X + Y
# compare the methods
hist(Z,freq=F,breaks=50, xlim=c(0,30))
z <- seq(0,50,0.01)
lines(z,f.Z(z),lty=2,col="red")
Same thing using package distr.
library(distr)
N <- Norm(mean=1, sd=0.5) # N is signature for normal dist
L <- Lnorm(meanlog=1.5,sdlog=0.75) # same for log-normal
conv <- convpow(L+N,1) # object of class AbscontDistribution
f.Z <- d(conv) # density function of the convolution
hist(Z,freq=F,breaks=50, xlim=c(0,30))
z <- seq(0,50,0.01)
lines(z,f.Z(z),lty=2,col="red")
I was having trouble getting integrate() to work for different density parameters, so I came up with an alternative to #jlhoward's using Riemann approximation:
set.seed(1)
# densities to be convolved; could also put these in the function below
d1 <- function(x) dnorm(x, 1, 0.5)
d2 <- function(y) dlnorm(y, 1.5, 0.75)
# Riemann approximation of convolution
# a to b needs to cover the range of the densities above;
# d needs to be small for an accurate approximation
conv <- function(t, a, b, d) {
  z <- NA
  x <- seq(a, b, d)
  for (i in 1:length(t)) {
    print(i)
    z[i] <- sum(d1(x) * d2(t[i] - x) * d)
  }
  return(z)
}
#check against sampled convolution
X <- rnorm(1000, 1, 0.5)
Y <- rlnorm(1000, 1.5, 0.75)
Z <- X + Y
t <- seq(0, 50, 0.05) #range to evaluate t, smaller increment -> smoother curve
hist(Z, breaks = 50, freq = F, xlim = c(0,30))
lines(t, conv(t, -100, 100, 0.1), type = "s", col = "red")
