Here is the setup; the values I am using are arbitrary, with no assumptions behind them.
n=2; % dimension of vectors x and (square) matrix P
r=2; % number of x vectors and P matrices
x1 = [3;5]
x2 = [9;6]
x = cat(2,x1,x2)
P1 = [6,11;15,-1]
P2 = [2,21;-2,3]
P(:,1)=P1(:)
P(:,2)=P2(:)
modePr = [-.4;16]
TransPr=[5.9,0.1;20.2,-4.8]
pred_modePr = TransPr'*modePr
MixPr = TransPr.*(modePr*(pred_modePr.^(-1))')
x0 = x*MixPr
Then it was time to apply the following formula to get myP:

P0_j = \sum_{i=1}^{r} \mu_{ij} \left( P_i + (x_i - x0_j)(x_i - x0_j)^T \right)

where \mu_{ij} is MixPr(i,j), x_i is x(:,i), and x0_j is x0(:,j). I used this code to get it:
myP = zeros(n*n, r);
Ptables(:,:,1) = P1;
Ptables(:,:,2) = P2;
for j = 1:r
    for i = 1:r
        temp = MixPr(i,j)*(Ptables(:,:,i) + ...
            (x(:,i)-x0(:,j))*(x(:,i)-x0(:,j))');
        myP(:,j) = myP(:,j) + temp(:);
    end
end
Some brilliant guy proposed the following as another way to produce myP:
for j = 1:r
    xk1 = x(:,j);  PP = xk1*xk1';  PP0(:,j) = PP(:);
    xk1 = x0(:,j); PP = xk1*xk1';  PP1(:,j) = PP(:);
end
myP = (P + PP0)*MixPr - PP1
I tried to formulate the equality between the two methods, and it seems to be this one (to make things easier, I dropped the summation of the P_i terms, which appears identically in both methods):

\sum_i \mu_{ij} (x_i - x0_j)(x_i - x0_j)^T = \sum_i \mu_{ij} x_i x_i^T - x0_j x0_j^T

where the left-hand side comes from the formula that I used, and the right-hand side comes from his code snippet. Do you think this is an obvious equality? If yes, ignore all the above and just try to explain why. I could only start from the LHS, and after some algebra I think I proved it equals the RHS. However, I can't see how he (or she) thought of it in the first place.
Using E for expectation, the one-dimensional version of your formula is the familiar:
Variance(X) = E((X-E(X))^2) = E(X^2) - E(X)^2
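The multidimensional version works out the same way, columnwise. Each column of MixPr sums to one by construction (MixPr(i,j) = TransPr(i,j)*modePr(i)/pred_modePr(j) with pred_modePr = TransPr'*modePr), and x0(:,j) = sum_i MixPr(i,j)*x_i, so the mu_ij act like probabilities and x0_j like an expectation. A sketch of the expansion:

\sum_i \mu_{ij} (x_i - x0_j)(x_i - x0_j)^T
  = \sum_i \mu_{ij} x_i x_i^T - x0_j \Big(\sum_i \mu_{ij} x_i\Big)^T - \Big(\sum_i \mu_{ij} x_i\Big) x0_j^T + \Big(\sum_i \mu_{ij}\Big) x0_j x0_j^T
  = \sum_i \mu_{ij} x_i x_i^T - x0_j x0_j^T

which is exactly the (P+PP0)*MixPr - PP1 form in the second snippet.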
While the second form might make for easier programming, I'd worry about ending up with a negative (or, in the multidimensional case, non-positive-definite) answer when using it, due to rounding error.
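To make the rounding concern concrete, here is a hypothetical numeric illustration in Python (the data and magnitudes are made up):

import numpy as np

x = 1e8 + np.random.randn(10_000)   # large mean, unit variance

mu = x.mean()
v1 = np.mean((x - mu)**2)           # E((X-E(X))^2): numerically stable
v2 = np.mean(x**2) - mu**2          # E(X^2) - E(X)^2: subtracts two huge,
                                    # nearly equal numbers

print(v1)   # close to 1.0
print(v2)   # can be wildly off, even negative, due to cancellation

With values around 1e8, x**2 is around 1e16, so double precision has essentially no digits left for a variance of order 1 after the subtraction.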
I'm trying to simulate a problem in physics for which I require a unitary operator on a Hilbert space with the inner product defined as transpose(x*)x. Given two orthogonal column vectors, I want to generate more orthonormal vectors. Here is the way I tried approaching this problem: I randomly generate complex vectors and subtract their projections onto the already available orthogonal vectors, similar to this. I then check the norm with respect to the inner product (InProd). Here is an attempt to implement this using Python.
import numpy as np

# InProd is assumed to be the inner product transpose(x*)x described above.

def stinemod():
    comped = [[1,0,0,0,0],[0,1,0,0,0]]
    d = len(comped[0])
    r = len(comped)
    while r < d:
        randr = np.random.rand(d)
        randc = np.random.rand(d)
        vr = randr + 1j*randc
        vo = vr
        for v in comped:
            vo = vo - (InProd(vr,v)*np.array(v)/InProd(v,v))
        k = 1e-10
        if InProd(vo,vo) < k*InProd(vr,vr):
            pass
        else:
            r = r+1
            comped.append(np.array(vr)/InProd(vr,vr))
    return np.transpose(comped)
But on running this code and checking unitarity using,
A = stinemod()
print(abs(np.matmul(np.transpose(np.conj(A)),A)))
Output:
[[1. 0. 0.28003392 0.24068132 0.1977418 ]
[0. 1. 0.53992755 0.24199218 0.06786818]
[0.28003392 0.53992755 0.58108559 0.29561698 0.23971144]
[0.24068132 0.24199218 0.29561698 0.21599542 0.18374313]
[0.1977418 0.06786818 0.23971144 0.18374313 0.21586778]]
I get an output suggesting that it is not unitary, which means the columns are not orthonormal. I can't seem to figure out what the mistake here is.
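For what it's worth, the slip appears to be in the append line: it stores vr (the raw random vector) instead of the orthogonalized vo, and it scales by InProd(vr,vr) instead of dividing by the square root of the vector's norm-squared. A corrected sketch under those assumptions (inprod and stinemod_fixed are names I made up; I assume InProd is the conjugate-transpose inner product described above):

import numpy as np

def inprod(u, v):
    # assumed convention: transpose(conj(u)) * v
    return np.vdot(u, v)

def stinemod_fixed(d=5):
    # start from two standard basis vectors, as in the question
    comped = [np.eye(d, dtype=complex)[0], np.eye(d, dtype=complex)[1]]
    while len(comped) < d:
        vr = np.random.rand(d) + 1j*np.random.rand(d)
        vo = vr.copy()
        for v in comped:
            vo = vo - inprod(v, vr)/inprod(v, v)*v
        # keep vo only if it is not numerically in the span of comped
        if inprod(vo, vo).real > 1e-10*inprod(vr, vr).real:
            # append the *orthogonalized* vector, normalized by the
            # square root of its norm-squared
            comped.append(vo/np.sqrt(inprod(vo, vo).real))
    return np.transpose(comped)

A = stinemod_fixed()
print(np.round(abs(np.conj(A).T @ A), 12))   # identity => columns orthonormal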
I'm trying to interpolate a Brownian motion. The function does not return an error, but it seems that Julia does not put the values into the vector BM. Here is the code:
function interpolation(i, j, N, BM)
    if j - i > 1
        k = sqrt((j-i)/(2^N)/4)
        d = (i+j)/2
        BM[d] = (BM[i] + BM[j])/2 + k*randn(1)
        BM = interpolation(i, d, N, BM)
        BM = interpolation(d, j, N, BM)
    end
end
plot(BM)
Thanks a lot!
I think that your code could be simplified by using array views. That eliminates all of the extra parameters from your code and makes it easier to see what it is doing. The normalization, so that changes are smaller for interior steps, could be simplified as well.
So here is a stab at this simplification:
function fractal(x)
    if length(x) > 2
        n = length(x)
        mid = (n+1) ÷ 2
        x[mid] = (x[1] + x[n])/2 + randn() * sqrt(n)
        fractal(@view x[1:mid])
        fractal(@view x[mid:n])
    end
end
And here is a result of this code running:
a = zeros(1024)
fractal(a)
plot(a, legend=false)
The point of the simplification is to highlight the idea that the algorithm involves:
Interpolating the middle value from the end points
Doing the same to the left and right halves of the array
Returning immediately if the array is too small to subdivide
This approach avoids complicating the picture with all of the housekeeping and it worked first try, largely because, I think, I didn't have to keep all that stuff straight.
I'm a new user to R, and I am trying to create a function that will simulate a random walk. The issue for me is integrating some initial values smoothly. Say I have this basic recurrence:
y(t) = y(t-2) + eps(t)
Epsilon (or eps(t)) will be the randomness factor. I want to define y(-1)=0, and y(0)=0.
Here is my code:
ran.walk = function(n){          # 'n' steps will be the input
  eps = rnorm(n)                 # creates a vector taking random values from N(0,1)
  y = c(eps[1], eps[2])          # this will set up my initial vector
  for (i in 3:n){
    ytemp = y[i-2] + eps[i]      ## !!! problem is here. Details below !!!
    y = c(y, ytemp)
  }
  return(y)
}
I'm trying to get this to start adding y3, y4, y5, and so on, but I think there is a flaw in this design. I'm not sure if I should just set up two separate lines, with an if statement testing whether the index is even or odd, perhaps with:
if (i %% 2 == 1)  # using modulus
Since,
y1= eps1,
y2= eps2,
y3= y1 + eps3,
y4= y2 + eps4,
y5= y3 + eps5 and so on...
Currently, I see the error in my code: I have y1 and y2 concatenated, but I don't think the loop knows how to incorporate y[1]. Can I somehow define y[-1] = 0 and y[0] = 0 beforehand? I tried this as well and got an error.
Thank you kindly in advance for any assistance. This is my first time attempting a for loop with recursion.
-N (sorry for any formatting issues, I had a lot of problems getting this question to go through)
I found that your odd and even series are independent of each other. Assuming that is the case, I just split the problem into two series and use cumsum to get the random walk. The final data frame includes the random numbers and the random walk, so you can check that it is working properly.
Hope it helps.
ran.walk = function(n) {
  eps <- rnorm(ceiling(n/2) * 2)
  dim(eps) <- c(2, ceiling(n/2))
  # since each series is independent, we can tally each one on its own
  eps2 <- apply(eps, 1, cumsum)
  # and just reorganize it
  eps2 <- as.numeric(t(eps2))
  rndwlk <- data.frame(rnd = as.numeric(eps), walk = eps2)
  # remove the extra value if needed
  rndwlk <- rndwlk[1:n, ]
  return(rndwlk)
}
ran.walk(13)
After taking a break with my piano, it came to me. It's funny how simple the answer becomes once you discover it... almost trivial.
Setting the initial value to be a vector, that is,
y(1) = y(-1) + eps(1), y(2) = y(0) + eps(2),
everything works out. It is still true that the evens and odds don't interact, but there is no reason to spell any of that out.
The method of splitting the iterations with the modulus and then concatenating back into the main vector would also work, but it is unnecessary and more complicated. Shorter is better for users and computers. As Einstein said, make it as simple as possible, but no simpler.
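For completeness, the same lag-2 walk is easy to express in any language; here is a hypothetical Python transcription (ran_walk and the seed argument are my additions):

import numpy as np

def ran_walk(n, seed=None):
    # y[t] = y[t-2] + eps[t], with y(-1) = y(0) = 0 folded into the
    # first two entries: y[1] = eps[1], y[2] = eps[2]
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    y = eps.copy()
    for i in range(2, n):
        y[i] = y[i - 2] + eps[i]
    return y

print(ran_walk(13, seed=0))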
[Image of the problem omitted.]
In my code I have four points: Q, R, S, T.
I know the following:
the coordinates of R, T, and S;
the distance RQ (called d below), with the segment lengths satisfying RT < RQ < RS.
I need to figure out the coordinates of Q.
I already know point Q can be found on the line segment TS. However, I need the coordinates of Q, and the calculation needs to be relatively efficient.
I have several solutions to this problem, but they are all so convoluted and long that I know I must be doing something wrong. I feel certain there must be a simple, elegant way to solve this. The best solution would be one that minimizes the number of intensive calculations without being ridiculously long.
Q is the intersection point between a circle of radius d around R and the line TS, which leads to a quadratic equation with a number of parameters in the coefficients. I don't know if the following is “the best” solution (it may even be better to use a numerical solver in between), but it is completely worked out. Because I think it's more readable, I've changed your coordinate names to put T at (T1, T2), S at (S1, S2) and, to keep the formulas shorter, R at (0, 0) – just adjust S and T and the returned values accordingly.
tmp1 = S1^2 - S2*T2 - S1*T1 + S2^2;
tmp2 = sqrt(- S1^2*T2^2 + S1^2*d^2 + 2*S1*S2*T1*T2 - 2*S1*T1*d^2 -
S2^2*T1^2 + S2^2*d^2 - 2*S2*T2*d^2 + T1^2*d^2 + T2^2*d^2);
tmp3 = S1^2 - 2*S1*T1 + S2^2 - 2*S2*T2 + T1^2 + T2^2;
t = (tmp1 + tmp2)/tmp3;
if (0 > t || t > 1) {
// pick the other solution instead
t = (tmp1 - tmp2)/tmp3;
}
Q1 = S1+t*(T1-S1);
Q2 = S2+t*(T2-S2);
Obviously, I take no warranties that I made no typos etc. :-)
EDIT: Alternatively, you could also get a good approximation by some iterative method (say, Newton) to find a zero of dist(S+t*(T-S), R)-d, as a function of t in [0,1]. That would take seven multiplications and one division per Newton step, if I count correctly. Re-using the names from above, it would look something like this:
t = 0.5;
d2 = d^2;
S1T1 = S1 - T1;
S2T2 = S2 - T2;
do {
    tS1T1 = S1 - t*S1T1;
    tS2T2 = S2 - t*S2T2;
    f = tS1T1*tS1T1 + tS2T2*tS2T2 - d2;
    fp = 2*(S1T1*tS1T1 + S2T2*tS2T2);
    t = t + f/fp;
} while (f > eps);
Set eps to control your required accuracy, but do not set it too low – computing f does involve a subtraction that will have serious cancellation problems near the solution.
Since there are two solutions Q on the (TS) line (with only one solution between T and S), any solution probably involves some choice of sign, or arccos(), etc.
Therefore, a good solution is probably to put Q on the (TS) line like so (with vectors implied):
(1) TQ(t) = t * TS
Requiring that Q be at distance d from R gives a second-degree equation in t, which is easy to solve (again, vectors are implied):
d^2 = |RQ(t)|^2 = |RT + TQ(t)|^2
The coordinates of Q can then be obtained by putting a solution t0 into equation (1), via OQ(t0) = OT + TQ(t0), where O is some origin. The solution with 0 <= t0 <= 1 must be chosen, so that Q lies between T and S.
Now, it may happen that the final formula has some simple interpretation in terms of trigonometric functions… Maybe you can tell us what value of t and what coordinates you find with this method and we can look for a simpler formula?
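Following that recipe, a quick sketch in Python (find_q is a made-up name; it solves |RT + t*TS|^2 = d^2 for t and keeps the root in [0, 1]):

import numpy as np

def find_q(R, S, T, d):
    R, S, T = (np.asarray(p, dtype=float) for p in (R, S, T))
    RT = T - R                      # vector from R to T
    TS = S - T                      # vector from T to S
    # |RT + t*TS|^2 = d^2 expands to a*t^2 + b*t + c = 0
    a = TS @ TS
    b = 2.0*(RT @ TS)
    c = RT @ RT - d*d
    disc = b*b - 4.0*a*c
    if disc < 0:
        raise ValueError("circle of radius d around R misses line TS")
    for t in ((-b - np.sqrt(disc))/(2*a), (-b + np.sqrt(disc))/(2*a)):
        if 0.0 <= t <= 1.0:         # Q must lie between T and S
            return T + t*TS
    raise ValueError("no intersection between T and S")

# example with R at the origin and RT < d < RS, as in the question
Q = find_q(R=(0, 0), T=(0, 3), S=(4, 0), d=3.5)
assert np.isclose(np.linalg.norm(Q), 3.5)   # |RQ| = d with R at the origin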
Does anyone know how to minimize a function containing an integral in MATLAB? The function looks like this:
L = Int(t=0,t=T)[(AR - x) dt], where A is a system parameter, and R and x are related through:
dR/dt = axRY - bR, where a and b are constants.
dY/dt = -xRY
I read somewhere that I can use fminbnd and quad in combination but I am not able to make it work. Any suggestions?
Perhaps you could give more details of your integral, e.g. where is the missing bracket in [AR-x)dt]? Is there any dependence of x on t, or can we integrate dR/dt = axR - bR to give R=C*exp((a*x-b)*t)? In any case, to answer your question on fminbnd and quad, you could set A,C,T,a,b,xmin and xmax (the last two are the range you want to look for the min over) and use:
[x fval] = fminbnd(@(x) quad(@(t) A*C*exp((a*x-b)*t) - x, 0, T), xmin, xmax)
This finds x that minimizes the integral.
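The same bounded-minimization-over-quadrature idea carries over if R has to come from integrating the ODE system numerically; here is a rough Python/SciPy sketch treating x as a constant control, with made-up values for A, a, b, T, the bounds, and the initial conditions:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

A, a, b, T = 1.0, 0.5, 0.2, 10.0    # placeholder parameter values
R0, Y0 = 1.0, 1.0                   # placeholder initial conditions

def L(x):
    # integrate dR/dt = a*x*R*Y - b*R, dY/dt = -x*R*Y together with the
    # running cost dJ/dt = A*R - x, so that J(T) = Int_0^T (A*R - x) dt
    def rhs(t, s):
        R, Y = s[0], s[1]           # s[2] is the accumulated cost J
        return [a*x*R*Y - b*R, -x*R*Y, A*R - x]
    sol = solve_ivp(rhs, (0.0, T), [R0, Y0, 0.0], rtol=1e-8)
    return sol.y[2, -1]

res = minimize_scalar(L, bounds=(0.0, 1.0), method="bounded")
print(res.x, res.fun)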
If I didn't get it wrong, you are trying to minimize with respect to t:
\int_0^t{(AR-x) dt}
then, since dL/dt equals the integrand by the fundamental theorem of calculus, you just need to find the zeros of:
AR - x
This is just math, not MATLAB ;)
Here's some manipulation of your equations that might help.
Combining the second and third equations you gave yields
dR/dt = -a*(dY/dt)-bR
Now if we solve for R on the righthand side and plug it into the first equation you gave we get
L = Int(t=0,t=T)[(-A/b*(dR/dt + a*dY/dt) - x)dt]
Now we can integrate the first term to get:
L = -A/b*[R(T) - R(0) + a*(Y(T) - Y(0))] - Int(t=0,t=T)[(x)dt]
So now all that matters with regard to R and Y are the endpoints. In fact, you may as well define a new function Z, which equals R + a*Y. Then you get
L = -A/b*[Z(T) - Z(0)] - Int(t=0,t=T)[(x)dt]
This next part I'm not as confident in. The integral of x with respect to t will give some function which is evaluated at t = 0 and t = T. This function we will call X to give:
L = -A/b*[Z(T) - Z(0)] - X(T) + X(0)
This equation holds true for all T, so we can set T to t if we want to.
L = -A/b*[Z(t) - Z(0)] - X(t) + X(0)
Also, we can group a lot of the constants together and call them C to give
X(t) = -A/b*Z(t) + C
where
C = A/b*Z(0) + X(0) - L
So I'm not sure what else to do with this, but I've shown that the integral of x(t) is linearly related to Z(t) = R(t) + a*Y(t). It seems to me that there are many functions x(t) that satisfy this. Anyone else see where to go from here? Any problems with my math?