Merging covariance from two sets to create new covariance - math

Is there any way to combine the covariances of two data sets instead of calculating the new covariance by merging the data? Suppose I have already calculated the covariance of 1 million data points, and I then get another 2 million data points whose covariance is also already calculated. Can I combine the two pre-computed covariances to produce the covariance of the full data set? I am mostly interested in reducing the computation required to calculate the covariance of the combined 3 million points.
This can easily be done for the mean:
new_mean = (data_size_1 * mean_1 + data_size_2 * mean_2) / (data_size_1 + data_size_2)
Is there a similar way to calculate the covariance so that I can take advantage of the pre-computed values? I can also store some extra information while calculating the covariance of each set, if that helps to find the merged covariance more easily.

The complete derivation is given in this PDF: http://prod.sandia.gov/techlib/access-control.cgi/2008/086212.pdf

I found a formula for combining the variances of two sets here:
https://www.emathzone.com/tutorials/basic-statistics/combined-variance.html
Replacing (X1 - Xc)^2 with (X1 - Xc)(Y1 - Yc), and (X2 - Xc)^2 with (X2 - Xc)(Y2 - Yc), where X1, X2 and Xc are the means of X in set 1, set 2 and the combined set (and likewise for Y), turns it into a combined-covariance formula:
cov_c = (n1 * (cov_1 + (X1 - Xc)(Y1 - Yc)) + n2 * (cov_2 + (X2 - Xc)(Y2 - Yc))) / (n1 + n2)
Here cov_1, cov_2 and cov_c are population covariances (computed with divisor n). This gives exact results for covariances, unlike the formula from the first answer, which is only approximately correct.
Here is a code fragment that merges the covariance statistics of two sets, a and b, into the statistics r of the combined set:
r.n = a.n + b.n;
r.mean_x = (a.n * a.mean_x + b.n * b.mean_x) / r.n;
r.mean_y = (a.n * a.mean_y + b.n * b.mean_y) / r.n;
r.sum = a.sum + a.n * (a.mean_x - r.mean_x) * (a.mean_y - r.mean_y)
+ b.sum + b.n * (b.mean_x - r.mean_x) * (b.mean_y - r.mean_y);
a, b and r are structs that contain:
n – number of elements,
mean_x – mean of X,
mean_y – mean of Y,
sum – the covariance multiplied by n.
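As a sanity check, here is a small R sketch (the variable names are mine) that applies the same merge to the two halves of a data set and compares the result with a direct computation:

set.seed(1)
x <- rnorm(30); y <- rnorm(30)
stats <- function(x, y)
  list(n = length(x), mean_x = mean(x), mean_y = mean(y),
       sum = sum((x - mean(x)) * (y - mean(y))))
a <- stats(x[1:10], y[1:10])
b <- stats(x[11:30], y[11:30])
r <- list(n = a$n + b$n)
r$mean_x <- (a$n * a$mean_x + b$n * b$mean_x) / r$n
r$mean_y <- (a$n * a$mean_y + b$n * b$mean_y) / r$n
r$sum <- a$sum + a$n * (a$mean_x - r$mean_x) * (a$mean_y - r$mean_y) +
  b$sum + b$n * (b$mean_x - r$mean_x) * (b$mean_y - r$mean_y)
r$sum / r$n                                     # merged (population) covariance
sum((x - mean(x)) * (y - mean(y))) / length(x)  # direct computation: identical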

Related

How to solve equation with rotation and translation matrices?

I'm working on a computer vision task and have this equation:
R0*c + t0 = R1*c + t1 = ... = Ri*c + ti = ... = Rn*c + tn
where n is about 20 (but can be more if needed), each pair Ri, ti (a rotation matrix and translation vector in 3D) is the result of the i-th measurement and is known, and the vector c is what I want to find.
I've got a result with the Ceres solver. It's good that it can handle outliers, but I think it's overkill for this task.
So what methods should I use for two situations:
With outliers
Without outliers
To handle outliers you can use RANSAC:
* In each iteration, randomly pick a pair i, j (a "sample") and solve for c from
Ri*c + ti = Rj*c + tj
then set Y = Ri*c + ti.
* Apply the candidate to the larger population:
- Select the consensus set S = {k} for which ||Rk*c + tk - Y|| < e, where e is roughly 3 times the RMS of the errors without outliers.
- Find the optimal c over all equations k in S by least squares (as in the sketch below).
- Give the candidate a "grade": the size of S.
* After a few iterations, use the optimal c found for the maximum "grade".
* Number of iterations: log(1 - p) / log(1 - w^2), where p is the required certainty of the result (e.g. p = 0.999) and w is the assumed fraction of inliers (non-outliers / n). See https://en.wikipedia.org/wiki/Random_sample_consensus
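For the outlier-free case (and for the inner least-squares step above), each pair of measurements gives a linear system (Ri - Rj)*c = tj - ti, so stacking all equations against one reference measurement and solving by least squares is enough. A minimal R sketch (the function and variable names are mine):

# Rs: list of 3x3 rotation matrices, ts: list of length-3 translation vectors.
# Measurement i gives (Ri - R1) %*% c = t1 - ti; stack all rows and solve via QR.
solve_c <- function(Rs, ts) {
  A <- do.call(rbind, lapply(Rs[-1], function(Ri) Ri - Rs[[1]]))
  b <- do.call(c, lapply(ts[-1], function(ti) ts[[1]] - ti))
  qr.solve(A, b)   # least-squares solution minimizing ||A %*% c - b||^2
}

# Usage with synthetic measurements: ti = Y - Ri %*% c_true, here with Y = 0.
set.seed(1)
c_true <- c(1, 2, 3)
Rs <- lapply(1:5, function(i) qr.Q(qr(matrix(rnorm(9), 3))))  # random orthogonal
ts <- lapply(Rs, function(R) as.vector(-R %*% c_true))
solve_c(Rs, ts)   # recovers c_true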

How to simulate a dataset with a binary target in proportions determined 'a-priori'?

Can someone tell me the best way to simulate a dataset with a binary target?
I understand how a dataset can be simulated, but what I'm looking for is to determine 'a priori' the proportion of each class. What I thought of was changing the intercept to achieve it, but I couldn't make it work and I don't know why. I guess the average is playing a trick on me.
set.seed(666)
x1 = rnorm(1000)
x2 = rnorm(1000)
p=0.25 # <<< I'm looking for a 25%/75%
mean_z=log(p/(1-p))
b0 = mean( mean_z - (4*x1 + 3*x2)) # = mean_z - mean( 4*x1 + 3*x2)
z = b0 + 4*x1 + 3*x2 # = mean_z - (4*x1 + 3*x2) + (4*x1 + 3*x2) = rep(mean_z,1000)
mean( b0 + 4*x1 + 3*x2 ) == mean_z # TRUE!!
pr = 1/(1+exp(-z))
y = rbinom(1000,1,pr)
mean(pr) # ~ 40% << not achieved
table(y)/1000
What I'm looking for is to simulate the typical "logistic" problem, in which the binary target can be modeled as a linear combination of features.
These 'logistic' models assume that the log-odds of the binary variable behave linearly. That means:
log(p / (1-p)) = z = b0 + b1 * x1 + b2 * x2, where p = prob(y = 1)
Going back to my sample code, we could set, for example, z = 1.3 + 4 * x1 + 2 * x2, but then the probability of the class would be a result, not an input. Or instead we could choose the coefficient b0 such that the probability is (statistically) similar to the one sought:
log(0.25 / 0.75) = b0 + 4 * x1 + 2 * x2
This is my approach, but there may be better ones.
I gather that you are considering a logistic regression model, right? If so, one way to generate a data set is to create two Gaussian bumps and say that one is class 1 and the other is class 0. Then generate 25 items from class 1 and 75 items from class 0; each generated item plus its label is one record, or datum, or whatever you want to call it.
Obviously you can choose any proportions of 1's and 0's. It is also interesting to make the problem "easy" by moving the Gaussian bumps farther apart (i.e. variances small in comparison to the difference of means) or "hard" by making the bumps overlap (i.e. variances large compared to the difference of means).
EDIT: In order to make sample data which corresponds exactly to a logistic regression model, just make the variances of the two Gaussian bumps the same (by variance I mean specifically the covariance matrix). When the covariance matrices are the same, the surfaces of equal posterior class probability are planes; when they are different, the surfaces of equal probability are quadrics. This is a standard result which appears in many textbooks. I also have some notes online about this, which I can locate if it will help.
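Here is a minimal R sketch of this approach (the means and the identity covariance are my choices, not from the answer), fixing the class proportions at exactly 25%/75%:

set.seed(666)
n1 <- 250; n0 <- 750                                   # 25% / 75%, fixed a priori
X1 <- cbind(rnorm(n1, mean = 2), rnorm(n1, mean = 2))  # class 1 bump
X0 <- cbind(rnorm(n0, mean = 0), rnorm(n0, mean = 0))  # class 0 bump
X <- rbind(X1, X0)             # equal covariance in both classes -> linear boundary
y <- c(rep(1, n1), rep(0, n0))
table(y) / length(y)           # exactly 0.25 / 0.75

Moving the means closer together or inflating the variances makes the problem harder, as described above.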
Aside from generating the two classes separately and then merging the results into one set, you can also sample from a single distribution over x, plug x into a logistic regression model with some weights (which you choose by any means you wish), and then use the resulting output as a probability for a coin toss. This method isn't guaranteed to output proportions that correspond exactly to prior class probabilities.
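If you want this second method to hit a target marginal proportion anyway, one option (my suggestion, not part of the answer above) is to calibrate the intercept numerically so that the average of the logistic probabilities equals the target. Note that mean(1/(1+exp(-z))) is not 1/(1+exp(-mean(z))), which is why fixing mean(z) = log(p/(1-p)) as in the question produces ~40% instead of 25%:

set.seed(666)
x1 <- rnorm(1000); x2 <- rnorm(1000)
eta <- 4 * x1 + 3 * x2
# Find b0 such that the average success probability is exactly 0.25:
b0 <- uniroot(function(b) mean(plogis(b + eta)) - 0.25, c(-50, 50))$root
pr <- plogis(b0 + eta)
y <- rbinom(1000, 1, pr)
mean(pr)          # 0.25 by construction
table(y) / 1000   # about 25% / 75%, up to binomial noise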

AR(1) simulation with non-zero mean

I can't seem to find the correct way to simulate an AR(1) time series with a mean that is not zero.
I need 53 data points, rho = .8, mean = 300.
However, arima.sim(list(order=c(1,0,0), ar=.8), n=53, mean=300, sd=21)
gives me values in the 1500s. For example:
1480.099 1480.518 1501.794 1509.464 1499.965 1489.545 1482.367 1505.103 (and so on)
I have also tried arima.sim(n=52, model=list(ar=c(.8)), start.innov=300, n.start=1)
but then it just counts down like this:
238.81775870 190.19203239 151.91292491 122.09682547 96.27074057 77.17105923 63.15148491 50.04211711 39.68465916 32.46837830 24.78357345 21.27437183 15.93486092 13.40199333 10.99762449 8.70208879 5.62264196 3.15086491 2.13809323 1.30009732
and I have tried arima.sim(list(order=c(1,0,0), ar=.8), n=53,sd=21) + 300 which seems to give a correct answer. For example:
280.6420 247.3219 292.4309 289.8923 261.5347 279.6198 290.6622 295.0501
264.4233 273.8532 261.9590 278.0217 300.6825 291.4469 291.5964 293.5710
285.0330 274.5732 285.2396 298.0211 319.9195 324.0424 342.2192 353.8149
and so on..
However, I am in doubt whether this is doing the correct thing. Is the series still autocorrelated with the correct coefficient?
Your last option is correct for getting the desired mean, "mu". It generates data from the model:
(y[t] - mu) = phi * (y[t-1] - mu) + epsilon[t], epsilon[t] ~ N(0, sd = 21), t = 1, 2, ..., n.
Your first approach sets an intercept, "alpha", rather than a mean:
y[t] = alpha + phi * y[t-1] + epsilon[t].
Your second option sets the starting value y[0] equal to 300. As long as |phi| < 1, the influence of this initial value vanishes after a few periods and has no effect on the level of the series.
Edit
The value of the standard deviation that you observe in the simulated data is correct. Be aware that the variance of the AR(1) process, y[t], is not equal to the variance of the innovations, epsilon[t]. The variance of the AR(1) process, sigma^2_y, can be obtained as follows:
Var(y[t]) = Var(alpha) + phi^2 * Var(y[t-1]) + Var(epsilon[t])
As the process is stationary, Var(y[t]) = Var(y[t-1]), which we call sigma^2_y. Thus, we get:
sigma^2_y = 0 + phi^2 * sigma^2_y + sigma^2_epsilon
sigma^2_y = sigma^2_epsilon / (1 - phi^2)
For the parameter values that you are using:
sigma_y = sqrt(21^2 / (1 - 0.8^2)) = 35.
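A quick empirical check in R (my snippet; a long series is used so the sample moments settle):

set.seed(1)
y <- arima.sim(list(order = c(1, 0, 0), ar = 0.8), n = 1e5, sd = 21) + 300
mean(y)   # close to 300
sd(y)     # close to 35, not 21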
Use the rGARMA function in the ts.extend package
You can generate random vectors from any stationary Gaussian ARMA model using the ts.extend package. This package generates random vectors directly from the multivariate normal distribution, using the computed autocorrelation matrix for the random vector, so it gives random vectors from the exact distribution and does not require "burn-in" iterations. Here is an example of generating multiple independent time-series vectors, all from an AR(1) model.
#Load the package
library(ts.extend)
#Set parameters
MEAN <- 300
ERRORVAR <- 21^2
AR <- 0.8
m <- 53
#Generate n = 16 random vectors from this model
set.seed(1)
SERIES <- rGARMA(n = 16, m = m, mean = MEAN, ar = AR, errorvar = ERRORVAR)
#Plot the series using ggplot2 graphics
library(ggplot2)
plot(SERIES)
As you can see, the generated time-series vectors in this plot use the appropriate mean and error variance that were specified in the inputs.

How to calculate log(sum of terms) from its component log-terms

(1) The simple version of the problem:
How can I calculate log(P1+P2+...+Pn), given log(P1), log(P2), ..., log(Pn), without taking the exp of any terms to recover the original Pi? I don't want to recover the original Pi because they are extremely small and may cause numerical underflow.
(2) The long version of the problem:
I am using Bayes' Theorem to calculate a conditional probability P(Y|E).
P(Y|E) = P(E|Y)*P(Y) / P(E)
I have a thousand probabilities multiplied together:
P(E|Y) = P(E1|Y) * P(E2|Y) * ... * P(E1000|Y)
To avoid numerical underflow, I work with log(p) and calculate the sum of 1000 log(p) terms instead of the product of 1000 p terms.
log(P(E|Y)) = log(P(E1|Y)) + log(P(E2|Y)) + ... + log(P(E1000|Y))
However, I also need to calculate P(E), which is
P(E) = sum of P(E|Y)*P(Y)
log(P(E)) is not equal to the sum of the log(P(E|Y)*P(Y)) terms. How can I get log(P(E)) without computing the P(E|Y)*P(Y) values (which are extremely small numbers) and adding them?
You can use
log(P1+P2+...+Pn) = log(P1 * [1 + P2/P1 + ... + Pn/P1])
                  = log(P1) + log(1 + P2/P1 + ... + Pn/P1)
which works for any Pi. Factoring out maxP = max_i Pi instead gives
log(P1+P2+...+Pn) = log(maxP) + log(P1/maxP + ... + Pn/maxP)
where every ratio is at most 1 (the maximal term contributes exactly 1). Each ratio can be computed safely from the logs alone, as Pi/maxP = exp(log(Pi) - log(maxP)), so the exponentials never overflow, and any that underflow contribute negligibly anyway.
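This is the standard "log-sum-exp" trick. A small R implementation (the function name is mine):

logsumexp <- function(logp) {
  m <- max(logp)
  m + log(sum(exp(logp - m)))   # exp() only ever sees values <= 0
}
logp <- c(-1000, -1001, -1002)  # exp(logp) would underflow to 0
logsumexp(logp)                 # about -999.59, computed without underflow

In your case, logp would hold the log(P(E|Y)) + log(P(Y)) terms, and logsumexp(logp) gives log(P(E)).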

interpreting princomp results

I am currently trying to do PCA in R. This is my first project in data mining.
I have around 200 features and around 3000 rows of data. The data is not normalized, and I need to do dimensionality reduction, so I am using PCA. This is what I have done so far:
x <- princomp(data, scores=TRUE, cor=TRUE)
I suppose that to do dimension reduction I should look at the score values, so I printed the top few:
head(x$scores)
This was the output:
Comp.1 Comp.2 Comp.3 Comp.4 ...
[1,] 6.831452 -4.4316218 -1.9226226 -0.8344245
[2,] -1.808007 -4.2743390 1.0173944 0.4527465
[3,] -7.750329 -4.9523056 -1.6750438 1.6247354
...
Now I am not sure how to interpret this matrix and pick the best attributes (and do the dimension reduction). It would be great if someone could help me out with this.
P.S. I searched a lot but did not find an answer to this.
scores is just one piece of the puzzle. The general formula is:
original_data =~ approximation = (scores * loadings) * scale + center
where:
1. scores are the coordinates in your new orthogonal basis
2. loadings are the directions of the new axes in the old basis
3. scale is the scaling applied to the dimensions
4. center is the coordinates of the new basis origin in the old basis
Using the R objects, the formula above is:
data =~ t(t(x$scores %*% t(x$loadings)) * x$scale + x$center)
To reduce dimensions, take only the first i components:
data =~ t(t(x$scores[, 1:i] %*% t(x$loadings[, 1:i])) * x$scale + x$center)
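A common way to choose i (my sketch, reusing the princomp object x from the question) is the cumulative proportion of variance explained by the components:

v <- cumsum(x$sdev^2) / sum(x$sdev^2)   # cumulative variance explained
i <- which(v >= 0.90)[1]                # smallest i explaining >= 90%
reduced <- x$scores[, 1:i]              # the lower-dimensional representation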
