Wave function plots changing on increasing the number of basis states - Scilab

N=15;//number of basis states
b=%pi^2/4;
e2=30^2;//square of the stiffness constant
a=2;//width of the infinite potential well
h=zeros(N,N);//Hamiltonian matrix in the particle-in-a-box basis
//build the matrix elements h(n,m)
for n=1:N
    h(n,n)=e2*((1-6/(%pi*%pi*n*n))*(%pi^2/48))+n^2;
    for m=n+1:N
        q=1/((m-n)^2);
        r=1/((m+n)^2);
        t=((-1)^(m+n)+1)/4;
        h(m,n)=e2*t*(q-r);
        h(n,m)=h(m,n);
    end
end
disp(h);
//find the eigenvalues and eigenvectors
s=eye(N,N);//identity (overlap) matrix for the generalized eigenproblem
disp(s);
[al,be,R]=spec(h,s);
e1=al./be;
c=R;
disp(e1);//eigenvalues
disp(c);
[E,k]=gsort(real(e1),"g","d");//sort the eigenvalues in descending order
disp(E);
d=c;
d(:,1:N)=c(:,k);//reorder the eigenvectors to match the sorted eigenvalues
p=1000;//number of grid points
x=linspace(0,a,p);
psi=zeros(N,p);//wave functions expanded in the box basis
for m2=1:N
    for n2=1:N
        psi(m2,:)=psi(m2,:)+d(n2,m2)*sqrt(2/a)*sin(n2*%pi*x/a);
    end
    if m2<=15 then
        subplot(5,5,m2);
        plot(x,psi(m2,:));
    end
end
My question: this is a harmonic oscillator problem where I have chosen the basis of an infinite square well and plotted the resulting wave functions. My problem is that as I increase the basis size to N=20, the wave function plots simply get inverted, which is not supposed to happen. So I want to know whether there is a problem in my code, as the physics cannot be wrong.

Related

Calculate Normals from Heightmap

I am trying to convert a heightmap into a matrix of normals using central differencing, which will later correspond to the steepness of a given point.
I found several links with correct results but without explaining the math behind them.
T, L, R and B below are the height samples above, to the left of, to the right of, and below the centre point O:
    T
L   O   R
    B
From this link I realised I can just do:
Vec3 normal = Vec3(2*(R-L), 2*(B-T), -4).Normalize();
The thing is that I don't know where the 2* and -4 come from.
In this explanation of central differencing I see that we should divide that value by 2, but I still don't know how to connect all of this.
What I really want to know is the linear algebra definition behind this.
I have a heightmap, I want to measure the central differences, and I want to obtain the normal vector to use later to measure the steepness.
PS: the Z-axis is the height.
From vector calculus, the normal of a surface f(x, y, z) = 0 is given by the gradient operator:
n = ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)
A height map h(x, y) is a special form of the function f:
f(x, y, z) = h(x, y) - z = 0, so n = (∂h/∂x, ∂h/∂y, -1)
For a discretized height map, assuming that the grid size is 1, the first-order approximations to the two derivative terms above are given by:
∂h/∂x ≈ (R - L) / 2
∂h/∂y ≈ (B - T) / 2
since the x step from L to R is 2, and the same for y. So n ≈ ((R - L)/2, (B - T)/2, -1), which is exactly the formula you had, divided through by 4. When this vector is normalized, the factor of 4 is canceled.
(No linear algebra was harmed in the writing of this answer)
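For concreteness, here is a minimal sketch in R (my own illustration, not code from the question) of the same central-difference normal at an interior grid point, assuming unit grid spacing and z as the height axis:
# Hypothetical helper, following n = (dh/dx, dh/dy, -1) with central differences.
heightmap_normal <- function(h, i, j) {
  hL <- h[i, j - 1]; hR <- h[i, j + 1]   # left / right neighbours (x direction)
  hT <- h[i - 1, j]; hB <- h[i + 1, j]   # top / bottom neighbours (y direction)
  n <- c((hR - hL) / 2, (hB - hT) / 2, -1)
  n / sqrt(sum(n^2))                     # unit normal
}
hm <- outer(1:8, 1:8, function(x, y) 0.1 * x^2 + 0.05 * y)  # toy heightmap
heightmap_normal(hm, 4, 4)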

apply fourier shift theorem to complex signal

I'm trying to apply the Fourier phase shift theorem to a complex signal in R. However, only the magnitude of my signal shifts as I expect. I think it should be possible to apply this theorem to complex signals, so I am probably making an error somewhere. My guess is that there is an error in the frequency axis I calculate.
How do I correctly apply the Fourier shift theorem to a complex signal (using R)?
i = complex(0,0,1)
t.in = (1+i)*matrix(c(1,0,0,0,0,0,0,0,0,0))
n.shift = 5
#the output of fft() has the mean / 0 frequency at the first element
#it then increases to the highest frequency, flips to negative frequencies
#and then increases again to the negative frequency closest to 0
N = length(t.in)
if (N%%2) { #odd
  kmin = -(N-1)/2
  kmax = (N-1)/2
} else { #even
  kmin = -N/2
  kmax = N/2 - 1
  #center frequency negative, is that correct?
}
#create frequency axis for fft() output, no sampling frequency or sample duration needed
k = (kmin:kmax)
kflip = floor(N/2)
k = k[c((kflip+1):N,1:kflip)]
f = 2*pi*k/N
shiftterm = exp( -i*n.shift*f )
T.in = fft(t.in)
T.out = T.in*shiftterm
t.out = fft(T.out, inverse=T)/N
par(mfrow=c(2,2))
plot(Mod(t.in),col="green");
plot(Mod(t.out), col="red");
plot(Arg(t.in),col="green");
plot(Arg(t.out),col="red");
As you can see, the magnitude of the signal is nicely shifted, but the phase is scrambled. I think the negative frequencies are where my error is, but I can't see it.
What am I doing wrong?
The questions about the Fourier phase shift theorem that I could find:
real 2d signal in python
real 2d signal in matlab
real 1d signal in python
math question about what fourier shift does
But these were not about complex signals.
Answer
As Steve suggested in the comments, I checked the phase on the 6th element.
> Arg(t.out)[6]
[1] 0.7853982
> Arg(t.in)[1]
[1] 0.7853982
So the only element that has a magnitude (at least one order of magnitude higher than the EPS) does have the phase that I expected.
TL;DR: The result from the original approach in the question was already correct; what we see is the Gibbs phenomenon sliding by.
Just discard low magnitude elements?
If the phase of elements that should be zero ever becomes a problem, I can run t.out[Mod(t.out)<epsfactor*.Machine$double.eps] = 0, where in this case epsfactor has to be 10 to get rid of the '0'-magnitude elements.
Adding that line before plotting gives the following result, which is what I expected to get beforehand. However, the 'scrambled' phase might actually be accurate in most cases as I'll explain below.
The original result really was correct
However, just setting low-magnitude elements to 0 does not make the phase of the shifted signal more intuitive. This is a plot where I apply a 4.5-sample shift; the phase is still 'scrambled'.
Applying a Fourier shift is equivalent to downsampling a shifted Fourier interpolation
It occurred to me that applying a phase shift of a non-integer number of elements is equivalent to Fourier-interpolating the signal and then downsampling the interpolated signal at points between the original elements. Since the vector I used as input is an impulse function, the Fourier-interpolated signal is simply not well behaved. The signal after applying the Fourier phase shift theorem can then be expected to have exactly the phase of the Fourier-interpolated signal, as seen below.
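Here is a minimal sketch of that equivalence in R (the helper names fourier_shift and fourier_interp are mine, not standard functions): shifting by a fractional amount via the phase shift theorem gives the same values as evaluating the trigonometric (Fourier) interpolant of the signal at the shifted sample positions.
# Assumed helpers, not built-ins: fractional shift via the phase shift theorem,
# and direct evaluation of the Fourier interpolant at arbitrary positions.
fourier_shift = function(x, shift) {
  N = length(x)
  k = c(0:floor((N - 1) / 2), -(ceiling((N - 1) / 2):1))   # fft() frequency order
  fft(fft(x) * exp(-1i * 2 * pi * k * shift / N), inverse = TRUE) / N
}
fourier_interp = function(x, t) {
  N = length(x); X = fft(x)
  k = c(0:floor((N - 1) / 2), -(ceiling((N - 1) / 2):1))
  sapply(t, function(tt) sum(X * exp(1i * 2 * pi * k * tt / N)) / N)
}
x = (1 + 1i) * c(1, rep(0, 9))
all.equal(fourier_shift(x, 4.5), fourier_interp(x, (0:9) - 4.5))   # TRUE up to rounding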
Gibbs Ringing
It's just at the discontinuities that the phase is not well behaved, and where small rounding errors might cause large errors in the reconstructed phase. So it is not really related to low magnitude, but to the poorly behaved Fourier transform of the input vector. This is called Gibbs ringing; I could use low-pass filtering with a Gaussian filter to decrease it.
Questions related to fourier interpolation and phase shift
symbolic approach in R to estimate fourier transform error
non integer signal shift by use of linear interpolation
downsampling complex signal
fourier interpolation application
estimating sub-sample shift between two signals using fourier transforms
estimating sub-sample shift between two signals without interpolation

rotational matrix in R

I want to implement an algorithm in R. I cannot start with the code because I am having problems figuring out the problem clearly. The problem is related to a rotation matrix, which is actually pretty challenging.
The problem is as follow:
The historical data of monthly flows X is transformed into Y by the transformation matrix R where,
Y = RX (3)
The procedure for obtaining the transformation matrix is described in detail in the appendix of Tarboton et al. (1998); here we summarize their description. The transformation matrix is developed from a standard basis (basis vectors aligned with the coordinate axes), which is orthonormal but does not have a basis vector perpendicular to the conditioning plane defined by (3). One of the standard basis vectors is replaced by a vector perpendicular to the conditioning plane. Operationally this amounts to starting with an identity matrix and replacing the last column with that perpendicular vector. Clearly the basis set is no longer orthonormal. The Gram-Schmidt orthonormalization procedure is applied to the remaining standard basis vectors to obtain an orthonormal basis that now includes a vector perpendicular to the conditioning plane.
The last column of the matrix Y is the conditioned one, and the R matrix has the property R^T = R^(-1). The first components of the vector can be treated separately, as the last component is the one fixed by the conditioning. Hence, the simulation involves re-sampling the first components from the conditional PDF.
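Here is a minimal R sketch (my own construction, not the code from Tarboton et al., 1998) of the step described above: take a vector u perpendicular to the conditioning plane, keep it as one basis vector, and Gram-Schmidt the standard basis against it to obtain an orthonormal R with R^T = R^(-1).
# A minimal sketch, not the authors' code.
basis_with_normal <- function(u) {
  d <- length(u)
  I_d <- diag(d)
  Q <- matrix(u / sqrt(sum(u^2)), d, 1)            # start from the unit perpendicular vector
  for (j in 1:d) {                                 # sweep the standard basis vectors
    v <- I_d[, j] - Q %*% crossprod(Q, I_d[, j])   # remove components along accepted vectors
    if (ncol(Q) < d && sqrt(sum(v^2)) > 1e-10)
      Q <- cbind(Q, v / sqrt(sum(v^2)))            # keep it if it is not (nearly) dependent
  }
  Q <- Q[, c(2:d, 1)]                              # move the perpendicular vector to the end
  t(Q)                                             # rows orthonormal, so t(R) equals solve(R)
}
u <- c(1, 1, 1) / sqrt(3)        # example: normal to the plane x1 + x2 + x3 = const
R <- basis_with_normal(u)
round(R %*% t(R), 10)            # identity, confirming R^T = R^(-1)
With this R, Y = R %*% X puts the conditioned direction into the last component of Y, as in (3).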

Phase/Amplitude Formula in R for Fourier Transformation

So I am trying to find three things about a given function in the x domain once it is transformed into the spectral domain:
The amplitude
The frequency
The phase
In R (statistical software) I have coded the following function:
y=7*cos(2*pi*(seq(-50,50,by=.01)*(1/9))+32)
fty=fft(y,inverse=F)
angle=atan2(Im(fty), Re(fty))
x=which(abs(fty)[1:(length(fty)/2)]==max(abs(fty)[1:(length(fty)/2)]))
par(mfcol=c(2,1))
plot(seq(-50,50,by=.01),y,type="l",ylab = "Cosine Function")
plot(abs(fty),xlim=c(x-30,x+30),type="l",ylab="Spectral Density in hz")
I know I can compute the frequency manually by taking the bin value and dividing it by the total time of the domain. Since the bins start at 1 when they should start at zero, the frequency is frequency=(BinValue-1)/MaxTime, which does give me the 1/9 I have in the function above.
I have two quick questions:
First) I am having trouble computing the phase. Is there a prebuilt R function that can give me the phase? From a manual calculation, the density function peaks at 12 (see bottom graph), so shouldn't the value of the phase be 2*pi+angle[12]? But I am getting a value of
> angle[12]
[1] -2.558724
which puts the phase at 2*pi+angle[12]=3.724462. But that's wrong; the phase should be 32 radians. What am I doing wrong?
Second) Is there a function that can automatically convert abs(fty)[12]=34351.41 to the amplitude number I have in front of the cosine, which is 7?
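There is no dedicated built-in, but the usual normalizations are easy to code. Below is a minimal sketch (my own, with an assumed phase of 0.5 rad rather than your 32): for a*cos(2*pi*f*t + phi) sampled at n points, the FFT peak has magnitude about a*n/2, so the amplitude is 2*abs(fty)[peak]/n, and Arg() at the peak (the same as your atan2 of Im and Re) gives the phase as a principal value in (-pi, pi]. The sketch uses a time axis starting at 0 and a frequency that lands exactly on a bin; with a time axis starting at -50 and 1/9 not on a bin, the peak phase also picks up an offset from the time origin plus leakage, and 32 rad can only ever appear modulo 2*pi (about 0.584).
# Assumed illustration, not a built-in R helper.
n   = 900                                  # samples
dt  = 0.01                                 # sample spacing, record length n*dt = 9
t   = (0:(n - 1)) * dt                     # time axis starting at 0
y   = 7 * cos(2 * pi * t / 9 + 0.5)        # amplitude 7, frequency 1/9, phase 0.5 rad
fty = fft(y)
peak = which.max(abs(fty)[1:(n %/% 2)])    # dominant positive-frequency bin
(peak - 1) / (n * dt)                      # frequency: (bin - 1) / record length = 1/9
2 * abs(fty)[peak] / n                     # amplitude: 7
Arg(fty)[peak]                             # phase: 0.5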

What is SVD (singular value decomposition)?

How does it actually reduce noise? Can you suggest some nice tutorials?
SVD can be understood from a geometric sense for square matrices as a transformation on a vector.
Consider a square n x n matrix M multiplying a vector v to produce an output vector w:
w = M*v
The singular value decomposition of M is the product of three matrices M=U*S*V, so w=U*S*V*v. U and V are orthonormal matrices. From a geometric transformation point of view (acting upon a vector by multiplying it), they are combinations of rotations and reflections that do not change the length of the vector they are multiplying. S is a diagonal matrix which represents scaling or squashing with different scaling factors (the diagonal terms) along each of the n axes.
So the effect of left-multiplying a vector v by a matrix M is to rotate/reflect v by M's orthonormal factor V, then scale/squash the result by a diagonal factor S, then rotate/reflect the result by M's orthonormal factor U.
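As a quick numerical illustration of this (my own example; note that R's svd() returns u, d and v such that M = u %*% diag(d) %*% t(v), so the V in this answer's notation corresponds to t(v) in R):
set.seed(4)
M <- matrix(rnorm(9), 3, 3)               # a random square matrix
v <- rnorm(3)
s <- svd(M)
w1 <- as.vector(M %*% v)                  # direct multiplication
w2 <- as.vector(s$u %*% diag(s$d) %*% (t(s$v) %*% v))   # rotate, scale, rotate
all.equal(w1, w2)                         # TRUE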
One reason SVD is desirable from a numerical standpoint is that multiplication by orthonormal matrices is an invertible and extremely stable operation (condition number is 1). SVD captures any ill-conditioned-ness in the diagonal scaling matrix S.
One way to use SVD to reduce noise is to do the decomposition, set components that are near zero to be exactly zero, then re-compose.
Here's an online tutorial on SVD.
You might want to take a look at Numerical Recipes.
Singular value decomposition is a method for taking an n×m matrix M and "decomposing" it into three matrices such that M=USV. S is a square diagonal matrix (the only nonzero entries are on the diagonal from top-left to bottom-right) containing the "singular values" of M. U and V are orthogonal, which leads to the geometric understanding of SVD, but that isn't necessary for noise reduction.
With M=USV, we still have the original matrix M with all its noise intact. However, if we only keep the k largest singular values (which is easy, since many SVD algorithms compute a decomposition where the entries of S are sorted in nonincreasing order), then we have an approximation of the original matrix. This works because we assume that the small values are the noise, and that the more significant patterns in the data will be expressed through the vectors associated with larger singular values.
In fact, the resulting approximation is the most accurate rank-k approximation of the original matrix (has the least squared error).
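As a small illustration (an assumed example, not from the answer), rank-k truncation in R looks like this; svd() already returns the singular values sorted in nonincreasing order:
set.seed(1)
M_clean <- outer(1:20, 1:15)                      # a rank-1 "signal"
M_noisy <- M_clean + matrix(rnorm(300), 20, 15)   # signal plus noise
s <- svd(M_noisy)
k <- 1                                            # keep the k largest singular values
M_approx <- s$u[, 1:k, drop = FALSE] %*% diag(s$d[1:k], k, k) %*% t(s$v[, 1:k, drop = FALSE])
norm(M_noisy - M_clean, "F")                      # error before truncation
norm(M_approx - M_clean, "F")                     # smaller error after truncation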
To answer the title question: SVD is a generalization of eigenvalues/eigenvectors to non-square matrices.
Say $X \in \mathbb{R}^{N \times p}$; then the SVD of X yields X=UDV^T, where D is diagonal and U and V are orthogonal matrices.
Now X^TX is a square matrix, and its decomposition is X^TX=VD^2V^T, where V is equivalent to the eigenvectors of X^TX and D^2 contains the eigenvalues of X^TX.
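A quick numerical check of this correspondence in R (an assumed example):
set.seed(3)
X <- matrix(rnorm(20), 5, 4)
s <- svd(X)
e <- eigen(crossprod(X))                  # crossprod(X) is X^T X
all.equal(s$d^2, e$values)                # squared singular values = eigenvalues of X^T X
all.equal(abs(s$v), abs(e$vectors))       # right singular vectors = eigenvectors, up to sign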
SVD can also be used to greatly ease global (i.e. to all observations simultaneously) fitting of an arbitrary model (expressed in a formula) to data (with respect to two variables and expressed in a matrix).
For example, take a data matrix A = D * M^T, where D represents the possible states of a system and M represents its evolution with respect to some variable (e.g. time).
By SVD, A(x,y) = U(x) * S * V^T(y), and therefore D * M^T = U * S * V^T,
so D = U * S * V^T * (M^T)^+, where the "+" indicates a pseudoinverse.
One can then take a mathematical model for the evolution and fit it to the columns of V, each of which is a linear combination of the components of the model (this is easy, as each column is a 1D curve). This yields model parameters which generate M̂ (the hat indicates it is based on fitting).
M̂ * M̂^+ * V = V̂, which allows the residuals R * S^2 = V - V̂ to be minimized, thus determining D and M.
Pretty cool, eh?
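To make the recipe concrete, here is a hedged sketch in R (the data, the two-exponential model, and all names are my own assumptions, not part of the answer): build A = D * M^T, take the SVD, fit decay rates to the columns of V, then recover D with a pseudoinverse.
set.seed(2)
x  <- seq(0, 1, length.out = 40)                   # "state" axis (e.g. wavelength)
tt <- seq(0, 5, length.out = 60)                   # evolution axis (e.g. time)
D_true <- cbind(sin(pi * x), x^2)                  # two component states
M_true <- cbind(exp(-1.0 * tt), exp(-0.3 * tt))    # their evolutions (assumed rates 1.0, 0.3)
A <- D_true %*% t(M_true) + matrix(rnorm(40 * 60, sd = 0.01), 40, 60)
s <- svd(A)
r <- 2                                             # number of significant singular values
V_r <- s$v[, 1:r]                                  # each column is a mix of the model curves
# Fit the decay rates by least squares on the columns of V (crude grid search;
# a real analysis would use nls() or an optimizer).
grid <- expand.grid(k1 = seq(0.1, 2, 0.05), k2 = seq(0.1, 2, 0.05))
rss  <- apply(grid, 1, function(k) {
  Mk <- cbind(exp(-k[1] * tt), exp(-k[2] * tt))
  sum(lm.fit(Mk, V_r)$residuals^2)                 # project the V columns onto the model space
})
k_hat <- as.numeric(grid[which.min(rss), ])        # fitted rates, close to (1.0, 0.3) up to ordering
M_hat <- cbind(exp(-k_hat[1] * tt), exp(-k_hat[2] * tt))
# Recover the states: D = A * (M^T)^+, here via a least-squares solve.
D_hat <- t(qr.solve(M_hat, t(A)))                  # 40 x 2, spans the same space as D_true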
The columns of U and V can also be inspected to glean information about the data; for example each inflection point in the columns of V typically indicates a different component of the model.
Finally, and actually addressing your question, it is important to note that although each successive singular value (element of the diagonal matrix S), with its attendant vectors U and V, has a lower signal-to-noise ratio, the separation of the components of the model in these "less important" vectors is actually more pronounced. In other words, if the data is described by a bunch of state changes that follow a sum of exponentials or whatever, the relative weights of each exponential get closer together in the smaller singular values. In other other words, the later singular values have vectors which are less smooth (noisier) but in which the change represented by each component is more distinct.
