How to plot Daubechies psi and phi wavelet functions in R?

The analysis with wavelets seems to be carried out as a discrete transform via matrix multiplication. So it is not surprising, I guess, that when plotting, for example, D4, the R package wmtsa returns the plot:
require(wmtsa)
filters <- wavDaubechies("d4")
plot(filters)
The question is how to go from this discretized plot to the smooth curves shown in the Wikipedia entry.
Please note that I'm not interested in generating these curves precisely with wmtsa. Any other package will do - I don't have Matlab or Mathematica. But I wonder if the way to go is to start with translating this Mathematica chunk of code in this paper into R, rather than using built-in functions:
WaveletTransform.m
c[k_] := c[k] = Daubechies[4][[k+1]];
phi[1] = (1+Sqrt[3])/2 // N;
phi[2] = (1-Sqrt[3])/2 // N;
phi[x_ /; x <= 0 || x >= 3] := 0
phi[x_?NumberQ] := phi[x] =
  N[Sqrt[2]] Sum[c[k] phi[2x-k], {k,0,3}];

In order to plot the wavelet and scaling function all you need are the four numbers shown in the first two plots. I'll focus on plotting the scaling function.
Integer shifts of the scaling function, 𝜑, form an orthonormal basis of the subspace V0 of the multiresolution analysis. We also have that V-1 ⊆ V0 and that 𝜑(x/2) ∈ V-1. Using this gives us the identity
𝜑(x/2) = 2 ∑_{k ∈ ℤ} h_k 𝜑(x − k)
(the factor of 2 appears because the h_k below are normalized to sum to 1).
Now we just need the values of h_k. For the Daubechies D4 wavelet these are, up to a constant normalization factor, the values shown in the discrete plot you gave (and zero for every other value of k). For exact values of the h_k, first let 𝜇 = (1+sqrt(3))/2. Then we have that
h_0 = 𝜇/4
h_1 = (1+𝜇)/4
h_2 = (2-𝜇)/4
h_3 = (1-𝜇)/4
and h_k = 0 otherwise.
Using these two things we are able to plot the function using what is known as the cascade algorithm. First notice that 𝜑(0) = 𝜑(0/2) = 2(h_0𝜑(0) + h_1𝜑(-1) + h_2𝜑(-2) + h_3𝜑(-3)). The only way this equation (together with the analogous equations at the other non-positive integers) can hold is if 𝜑(0) = 𝜑(-1) = 𝜑(-2) = 𝜑(-3) = 0. Extending this shows that 𝜑(x) = 0 for x ≤ 0. Furthermore, a similar argument shows that 𝜑(x) = 0 for x ≥ 3.
Thus, we only need to worry about x = 1 and x = 2 to find non-zero values of 𝜑 at integer values of x. If we put x = 2 into the identity for 𝜑(x/2) we get 𝜑(1) = 2(h_0𝜑(2) + h_1𝜑(1)). Putting x = 4 into the identity gives 𝜑(2) = 2(h_2𝜑(2) + h_3𝜑(1)).
We can rewrite the above two equations as a matrix multiplied by a vector equals a vector. In fact, it will be in the form v = Av (v is the same vector on both sides). This means that v is an eigenvector of the matrix A with eigenvalue 1. But v = (𝜑(1), 𝜑(2)) and so by finding this eigenvector using the standard methods we will be able to find the values of 𝜑(1) and 𝜑(2).
In fact, this gives us that 𝜑(1) = (1+sqrt(3))/2 and 𝜑(2) = (1-sqrt(3))/2 (this is where those values in the Mathematica code sample come from). Also note that we need to choose the specific normalization 𝜑(1) + 𝜑(2) = 1 for this algorithm to work (the integer samples of a scaling function must sum to 1), so you must use exactly those values for 𝜑(1) and 𝜑(2) even though any rescaling of them is also an eigenvector.
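If you prefer to check this numerically rather than by hand, here is a minimal sketch in plain NumPy (not tied to any wavelet package) that builds the 2x2 matrix from the h_k above, picks the eigenvector for eigenvalue 1, and rescales it so its components sum to 1:
import numpy as np

mu = (1 + np.sqrt(3)) / 2
h = np.array([mu/4, (1 + mu)/4, (2 - mu)/4, (1 - mu)/4])

# phi(1) = 2*(h1*phi(1) + h0*phi(2)),  phi(2) = 2*(h3*phi(1) + h2*phi(2))
A = 2 * np.array([[h[1], h[0]],
                  [h[3], h[2]]])

vals, vecs = np.linalg.eig(A)
v = vecs[:, np.argmin(np.abs(vals - 1))]  # eigenvector for eigenvalue 1
v = v / v.sum()                           # rescale so that phi(1) + phi(2) = 1
print(v)                                  # approximately [ 1.3660254 -0.3660254]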
Now we can find the values of 𝜑(1/2), 𝜑(3/2), and 𝜑(5/2). For example, 𝜑(1/2) = 2h_0𝜑(1) and 𝜑(3/2) = 2(h_1𝜑(2) + h_2𝜑(1)).
With these values, you can then find the values of 𝜑(1/4), 𝜑(3/4), and so on. Continuing this process will give you the value of 𝜑 at all dyadic rationals (rational numbers of the form k/2^j).
The same process can be used to find the wavelet function. You only need to use the four different values shown in the first plot rather than the four shown in the second plot.
I recently implemented this in Python; an R implementation would be fairly similar.
import numpy as np
import matplotlib.pyplot as plt

def cascade_algorithm(j: int):
    # Compute the D4 scaling function on the dyadic grid k/2**j over [0, 3].
    mu = (1 + np.sqrt(3))/2
    h_k = np.array([mu/4, (1+mu)/4, (2-mu)/4, (1-mu)/4])
    # Array to store the grid points (row 0) and the values of phi (row 1).
    phi_vals = np.zeros((2, 3*2**j+1), dtype=np.float64)
    for i in range(3*2**j+1):
        phi_vals[0][i] = i/(2**j)
    calced_vals = np.zeros((3*2**j+1), dtype=bool)
    # Input values for 1 and 2 (the eigenvector computed above).
    phi_vals[1][1*2**j] = (1+np.sqrt(3))/2
    phi_vals[1][2*2**j] = (1-np.sqrt(3))/2
    # We now know the values for 0, 1, 2, and 3.
    calced_vals[0] = True
    calced_vals[1*2**j] = True
    calced_vals[2*2**j] = True
    calced_vals[3*2**j] = True
    # Now calculate for all the dyadic rationals, one resolution level at a time.
    for k in range(1, j+1):
        for l in range(1, 3*2**k):
            x = l/(2**k)
            if not calced_vals[int(x*2**j)]:
                calced_vals[int(x*2**j)] = True
                two_x = 2*x
                # Only terms with 0 < 2x - k < 3 contribute (phi vanishes elsewhere).
                which_k = np.array([0, 1, 2, 3], dtype=int)
                which_k = ((two_x - which_k > 0) & (two_x - which_k < 3))
                phi = 0
                for n, _ in enumerate(which_k):
                    if which_k[n]:
                        phi += h_k[n]*phi_vals[1][int((two_x-n)*2**j)]
                # Refinement relation: phi(x) = 2 * sum_k h_k * phi(2x - k).
                phi_vals[1][int(x*2**j)] = 2*phi
    return phi_vals

phi_vals = cascade_algorithm(10)
plt.plot(phi_vals[0], phi_vals[1])
plt.show()
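Once 𝜑 is tabulated, the wavelet ψ can be built from the same samples. The sketch below reuses phi_vals from the code above and assumes the common quadrature-mirror convention g_k = (-1)^k h_{3-k} (sign and ordering conventions vary between references), evaluating ψ(x) = 2 ∑_k g_k 𝜑(2x − k) on the same dyadic grid:
def wavelet_from_phi(phi_vals, j):
    # Wavelet filter, assuming the convention g_k = (-1)^k * h_{3-k}.
    mu = (1 + np.sqrt(3)) / 2
    h_k = np.array([mu/4, (1 + mu)/4, (2 - mu)/4, (1 - mu)/4])
    g_k = np.array([h_k[3], -h_k[2], h_k[1], -h_k[0]])
    n_pts = 3 * 2**j + 1
    psi_vals = np.zeros((2, n_pts), dtype=np.float64)
    psi_vals[0] = phi_vals[0]                 # same dyadic grid as phi
    for m in range(n_pts):
        for k in range(4):
            idx = 2*m - k * 2**j              # grid index of phi(2x - k)
            if 0 <= idx < n_pts:
                psi_vals[1][m] += 2 * g_k[k] * phi_vals[1][idx]
    return psi_vals

psi_vals = wavelet_from_phi(phi_vals, 10)
plt.plot(psi_vals[0], psi_vals[1])
plt.show()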

If you just want to plot the graphs, then you can use the package "wavethresh" to plot for example the D4 with the following commands:
draw.default(filter.number=4, family="DaubExPhase", enhance=FALSE, main="D4 Mother", scaling.function = F) # mother wavelet
draw.default(filter.number=4, family="DaubExPhase", enhance=FALSE, main="D4 Father", scaling.function = T) # father wavelet
Notice that either the mother or the father wavelet is plotted depending on the argument "scaling.function": if TRUE, the father (scaling) wavelet is drawn, otherwise the mother wavelet.
If you want to generate it yourself, without packages, I'd suggest following the Daubechies-Lagarias algorithm described in this paper. It is not hard to implement.


FFTW.jl for 2D array: Diffusion only happening in 1D

From what I have read, using FFTW.jl / AbstractFFTs.jl's fft(A) when A is a 2D array should perform the FFT in 2D, not column-wise. Any idea why I am seeing only column-wise diffusion when (I think) I'm adding a scaled second spatial derivative to u(t,x), as if using an explicit solver in time?
Thank you! I am quite new to this.
Code below (heatmap screenshot omitted):
using Random
using FFTW
using Plots
gr()

N = (100,100)
# initialize with gaussian noise
u = randn(Float16, (N[1], N[2])).*0.4.+0.4
# include square of high concentration to observe diffusion clearly
u[40:50,40:50] .= 3

N = size(u)
L = 100
k1 = fftfreq(51)
k2 = fftfreq(51)
lap_mat = -(k1.^2 + k2.^2)

function lap_fft(x)
    lapF = rfft(x)
    lap = irfft(lap_mat.*lapF, 100)
    return lap
end

# ode stepper or Implicit-Explicit solver
for i in 1:100000
    u += lap_fft(u)*0.0001
end

# plot state
heatmap(u)
Just because you are performing a real FFT doesn't mean that you can simply apply a real inverse FFT to the result: rfft goes from R -> C. What you can do, however, is the following:
function lap_fft(x)
    lapF = complex(zeros(100,100));          # only upper half filled
    lapF[1:51,1:100] = rfft(x) .* lap_mat;   # R -> C
    return abs.(ifft(lapF));                 # C -> R
end
Real FFT to complex frequency domain (only the upper half is filled because of data redundancy), multiply by your filter in the frequency domain, inverse FFT back into the complex image domain, and take the magnitude abs.(), real part real.(), etc.
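The same redundancy is easy to see in NumPy, shown here purely as an illustration of the R -> C mapping (FFTW.jl's rfft behaves analogously, except that the reduced axis is the first one, which is why lapF above is 51 x 100):
import numpy as np

a = np.random.rand(100, 100)         # real input
A = np.fft.rfft2(a)                  # complex output, shape (100, 51): half the spectrum
b = np.fft.irfft2(A, s=(100, 100))   # back to a real 100x100 array
print(A.shape, np.allclose(a, b))    # (100, 51) True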
But honestly, why the hassle with the real fft?
using Random
using FFTW
using Plots
gr()

N = (100,100)
# initialize with gaussian noise
u = randn(Float16, (N[1], N[2])).*0.4.+0.4;
# include square of high concentration to observe diffusion clearly
u[40:50,40:50] .= 3;

N = size(u);
L = 100;
k1 = fftfreq(100);
k2 = fftfreq(100);
tmp = -(k1.^2 + k2.^2);
lap_mat = sqrt.(tmp.*reshape(tmp,1,100));

function lap_fft(x)
    return abs.(ifftshift(ifft(fftshift(ifftshift(fft(fftshift(x))).*lap_mat))));
end

# ode stepper or Implicit-Explicit solver
for i in 1:100000
    u += lap_fft(u)*0.001;
end

# plot state
heatmap(u)
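For comparison, here is a minimal NumPy sketch of the same explicit diffusion step (grid size, time step, and diffusivity are arbitrary choices for illustration). The detail it highlights is the one the original code ran into: the Fourier multiplier for the 2-D Laplacian, -(kx^2 + ky^2), has to be a full 2-D array built from both wavenumber vectors, not a 1-D vector:
import numpy as np

N, dt, D = 100, 0.01, 1.0
u = 0.4 + 0.4 * np.random.randn(N, N)
u[40:50, 40:50] = 3.0

k = 2 * np.pi * np.fft.fftfreq(N)          # wavenumbers for a unit-spaced grid
lap = -(k[:, None]**2 + k[None, :]**2)     # full 2-D multiplier, shape (N, N)

for _ in range(2000):
    u = u + dt * D * np.real(np.fft.ifft2(lap * np.fft.fft2(u)))
# u has now diffused isotropically in both directions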

Plotting multiple functions in Octave - I already looked for an answer but something is not working

Let f be a continuous real function defined on the interval [a,b]. I want to approximate this function by a piecewise quadratic polynomial. I already created a matrix that summarizes these polynomials. Let's say that I'm considering a uniform partition of the interval into N pieces (therefore N+1 points).
I have a matrix A of size N times 3, where the k-th row represents the quadratic polynomial associated with the k-th interval of the partition in the natural form (the row [a b c] represents the polynomial a+bx+cx^2). I already created a method to find this matrix (it depends on the choice of interpolation points inside each interval, but that doesn't matter for this question).
I'm trying to plot the corresponding function but I'm having some problems. I used the same idea given in a similar question. This is what I wrote:
x=zeros(N+1,1);
%this is the set of points defining the uniform partition
for i=1:N+1
x(i)=a+(i-1)*((b-a)/(N));
end
%this is the length of my linspace for plotting the functions
l=100
And now I plot the functions:
figure;
hold on;
%first the original function
u=linspace(a,b,l*N);
v=arrayfun( f , u);
plot(u,v,'b')
% this is for plotting the other functions
for k=1:N
  x0=linspace(x(k),x(k+1));
  y0=arrayfun(@(t) [1,t,t^2]*A(k,:)',x0);
  plot(x0, y0, 'r');
end
The problem is that the for loop is plotting the same function f again and I don't know why. I tried with multiple different functions. I'm pretty sure that my matrix A is correct.
Please write a minimal working example that can be run as standalone code or copy/pasted by people here to check where you might have a bug -- often, in the process of reducing your code to its bare principles like this, you end up figuring out the problem yourself in the first place. But, in any case, I have written one myself and cannot replicate the problem.
figure;
hold on;

# arbitrary values for Minimal Working Example
N = 10;
x = [10:10:110];    # the N+1 partition points
A = randn( N, 3 );  # (N, 3) coefficient matrix
a = 100; b = 200; l = 3;
f = @(t) t.^2 .* sin(t);

% first the original function
u = linspace(a,b,l*N);
v = arrayfun( f , u);
plot(u,v,'b')

for k = 1 : N
  x0 = linspace( x(k), x(k+1) );
  y0 = arrayfun( @(t) ([1, t, t.^2]) * (A(k, :).'), x0 );
  x0, y0    # display the values for inspection
  plot(x0, y0, 'r');
endfor
hold off;
Output: (plot omitted)
Are you doing something different?
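For what it's worth, here is a rough NumPy/matplotlib version of the same MWE idea (all names and values are made up for illustration): each row [a, b, c] of A is evaluated as a + b*x + c*x^2 on its own subinterval.
import numpy as np
import matplotlib.pyplot as plt

N = 10
a, b = 100.0, 200.0
x = np.linspace(a, b, N + 1)   # partition points
A = np.random.randn(N, 3)      # one row [a, b, c] per subinterval

def f(t):
    return t**2 * np.sin(t)

u = np.linspace(a, b, 100 * N)
plt.plot(u, f(u), 'b')         # the original function

for k in range(N):
    x0 = np.linspace(x[k], x[k + 1], 100)
    y0 = A[k, 0] + A[k, 1] * x0 + A[k, 2] * x0**2   # evaluate the k-th quadratic
    plt.plot(x0, y0, 'r')
plt.show()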

Differentiating a scalar with respect to matrix

I have a scalar function which is obtained by iterative calculations. I wish to differentiate (find the directional derivative of) the values with respect to a matrix, elementwise. How should I employ the finite difference approximation in this case? Does diff or gradient help here? Note that I only want numerical derivatives.
The typical code that I would work on is:
n=4;
for i=1:n
for x(i)=-2:0.04:4;
for y(i)=-2:0.04:4;
A(:,:,i)=[sin(x(i)), cos(y(i));2sin(x(i)),sin(x(i)+y(i)).^2];
B(:,:,i)=[sin(x(i)), cos(x(i));3sin(y(i)),cos(x(i))];
R(:,:,i)=horzcat(A(:,:,i),B(:,:,i));
L(i)=det(B(:,:,i)'*A(:,:,i)B)(:,:,i));
%how to find gradient of L with respect to x(i), y(i)
grad_L=tr((diff(L)/diff(R)')*(gradient(R))
endfor;
endfor;
endfor;
I know that the last part for grad_L throws a syntax error saying the dimensions don't match. How do I proceed? Note that the gradient (directional derivative) of a scalar function f of a matrix variable X is given by ∇f = trace((∂f/∂x_{ij}) * Ẋ), where x_{ij} denotes the elements of the matrix and Ẋ denotes the rate of change of the matrix X.
Both your code and explanation are very confusing. You're using an iteration of n = 4, but you don't do anything with your inputs or outputs, and you overwrite everything. So I will ignore the n aspect for now since you don't seem to be making any use of it. Furthermore you have many syntactical mistakes which look more like maths or pseudocode, rather than any attempt to write valid Matlab / Octave.
But, essentially, you seem to be asking: "I have a function which, for each (x,y) coordinate on a 2D grid, calculates a scalar output L(x,y)", where the calculation leading to L involves multiplying two matrices and then taking the determinant. Here's how to produce such an array L:
X = -2 : 0.04 : 4;
Y = -2 : 0.04 : 4;
X_indices = 1 : length(X);
Y_indices = 1 : length(Y);

for Ind_x = X_indices
  for Ind_y = Y_indices
    x = X(Ind_x);   y = Y(Ind_y);
    A = [sin(x), cos(y); 2 * sin(x), sin(x+y)^2];
    B = [sin(x), cos(x); 3 * sin(y), cos(x)    ];
    L(Ind_x, Ind_y) = det (B.' * A * B);
  end
end
You then want to obtain the gradient of L, which is of course a vector-valued output. Ignoring the maths you mentioned for a second, if you are basically asking how to use the gradient function correctly, then you just apply it directly to L, pass the grid vectors X and Y so it knows the spacing between the elements of L, and collect the output as two arrays, so that you capture both the x and y components of the gradient:
[gLx, gLy] = gradient(L, X, Y);
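If you ever port this to Python, a sketch of the NumPy analogue looks essentially the same; np.gradient accepts the coordinate vectors and returns one array per axis (assuming L is built exactly as in the loop above):
import numpy as np

X = np.linspace(-2, 4, 151)    # same grid as -2:0.04:4
Y = np.linspace(-2, 4, 151)
L = np.empty((len(X), len(Y)))

for i, x in enumerate(X):
    for j, y in enumerate(Y):
        A = np.array([[np.sin(x), np.cos(y)], [2 * np.sin(x), np.sin(x + y)**2]])
        B = np.array([[np.sin(x), np.cos(x)], [3 * np.sin(y), np.cos(x)]])
        L[i, j] = np.linalg.det(B.T @ A @ B)

gLx, gLy = np.gradient(L, X, Y)   # dL/dx along axis 0, dL/dy along axis 1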

Interpolating height for a point inside a grid based on a discrete height function

I have been wracking my brain to come up with a solution to this problem.
I have a lookup table that returns height values for various points (x,z) on the grid. For instance I can calculate the height at A, B, C and D in Figure 1. However, I am looking for a way to interpolate the height at P (which has a known (x,z)). The lookup table only has values at the grid intervals, and P lies between these intervals. I am trying to calculate values s and t such that:
A'(s) = A + s(C-A)
B'(t) = B + t(P-B)
I would then use these two equations to find the intersection point of B'(t) with A'(s), giving a point X on the line A-C. With this I can calculate the height at X and, from that, the height at point P.
My issue lies in calculating the values for s and t.
Any help would be greatly appreciated.
Try also bilinear interpolation or bicubic interpolation.
Depending on whether you want to interpolate between A, B, C or between all four points A, B, C, D, the algorithm will change.
To interpolate between A, B, C (which I assume is what you want to do since you drew the diagonal) you will need to find the barycentric coordinates of P relative to A, B, C using the x and y positions, then apply those barycentric coordinates to the height component (z is assumed here) of the triangle's vertices.
What about going this way: find u and v so that
P = B + u(A-B) + v(C-B)
If you write this out, you'll see that this is a 2x2 linear system with unknowns u and v, so I guess you know how to go on from there.
Oh, and once you have u and v you use the same exact formula as above for the height, only this time A,B,C,P will be the heights at these points.
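A quick sketch of that solve in NumPy terms (the coordinates and the heights h_a, h_b, h_c are placeholder values standing in for whatever your lookup table returns):
import numpy as np

A, B, C = np.array([0.0, 1.0]), np.array([0.0, 0.0]), np.array([1.0, 0.0])
P = np.array([0.3, 0.4])
h_a, h_b, h_c = 2.0, 1.0, 4.0              # heights at A, B, C from the lookup table

# Solve P = B + u*(A - B) + v*(C - B) for (u, v): a 2x2 linear system.
M = np.column_stack((A - B, C - B))
u, v = np.linalg.solve(M, P - B)

# Apply the same weights to the heights.
h_p = h_b + u * (h_a - h_b) + v * (h_c - h_b)
print(u, v, h_p)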
Considering that point values are available at the four corners of a square of unit length, the interpolated value at any point (x,y) inside the square is given by:
f(x,y) = [ (1-y)f(0,0) + yf(0,1) ](1-x) + [ (1-y)f(1,0)+y(f(1,1)) ]x
If the square has a side length other than 1, say L, then f(x,y) is given by:
f(x,y) = [ (L-y)f(0,0) + yf(0,L) ](L-x)/L^2 + [ (L-y)f(L,0)+y(f(L,L)) ]x/L^2
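As a sanity check, the formula above is only a few lines of code (a sketch; f00, f0L, fL0, fLL are the corner values and L is the side length):
def bilinear(x, y, f00, f0L, fL0, fLL, L=1.0):
    # f(x,y) = [(L-y)f(0,0) + y f(0,L)](L-x)/L^2 + [(L-y)f(L,0) + y f(L,L)]x/L^2
    return (((L - y) * f00 + y * f0L) * (L - x) + ((L - y) * fL0 + y * fLL) * x) / L**2

print(bilinear(0.0, 0.0, 1.0, 2.0, 3.0, 4.0))   # 1.0, the corner value f(0,0)
print(bilinear(0.5, 0.5, 1.0, 2.0, 3.0, 4.0))   # 2.5, the average of the four corners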
Here's an explicit example based on shape functions.
Consider the functions:
u1(x,z) = (x-x_b)/(x_c-x_b)
One has u1(x_b,z_b) = u1(x_a,z_a) = 0 (because x_a = x_b) and u1(x_c,z_c) = u1(x_d,z_d) = 1
u2(x,z) = 1 - u1(x,z)
Now we have u2(x_b,z_b) = u2(x_a,z_a) = 1 and u2(x_c,z_c) = u2(x_d,z_d) = 0
v1(x,z) = (z-z_b)/(z_a-z_b)
This function satisfies v1(x_a,z_a) = v1(x_d,z_d) = 1 and v1(x_b,z_b) = v1(x_c,z_c) = 0
v2(x,z) = 1 - v1(x,z)
We have v2(x_a,z_a) = v2(x_d,z_d) = 0 and v2(x_b,z_b) = v2(x_c,z_c) = 1
Now let's build new functions as follows:
S_D(x,z) = u1(x,z) * v1(x,z)
We get S_D(x_d, z_d) = 1 and S_D(x_a,z_a) = S_D(x_b,z_b) = S_D(x_c,z_c) = 0
S_C(x,z) = u1(x,z) * v2(x,z)
We get S_C(x_c, z_c) = 1 and S_C(x_a,z_a) = S_C(x_b,z_b) = S_C(x_d,z_d) = 0
S_A(x,z) = u2(x,z) * v1(x,z)
We get S_A(x_a, z_a) = 1 and S_A(x_b,z_b) = S_A(x_c,z_c) = S_A(x_d,z_d) = 0
S_B(x,z) = u2(x,z) * v2(x,z)
We get S_B(x_b, z_b) = 1 and S_B(x_a,z_a) = S_B(x_c,z_c) = S_B(x_d,z_d) = 0
Now define your interpolating function as
H(x,z) = h_a * S_A(x,z) + h_b * S_B(x,z) + h_c * S_C(x,z) + h_d * S_D(x,z),
where h_a is the heigh at point A, h_b is the height at point B, and so on.
You can easily verify that H is indeed an interpolating function:
H(x_a,z_a) = h_a, H(x_b,z_b) = h_b, H(x_c,z_c) = h_c and H(x_d,z_d) = h_d.
Now, in order to approximate the height at P, all you need to do is evaluate H at this point:
h_p = H(x_p, z_p)
The functions S are normally referred to as "shape functions". There's one such function for each node you want your interpolated value to depend on, and in this case they all satisfy Kronecker's delta property (they take the value one at one node and zero at all other nodes).
There are many ways to build shape functions for a given set of nodes. If I remember correctly, the construction of 2D shape functions by multiplication of 1D shape functions (as we've done in this case) is called "tensor product of functions" (easy in this case because the grid is rectangular). We have ended up with four functions (one per node), all of them linear combinations of {1, x, z, xz}.
If you want to use only three points for your interpolation, then you should be able to easily build three shape functions as linear combinations of {1, x, z} only, but you will lose 25% of the height information provided by the grid and your interpolant will not be smooth inside the rectangle when h_b != h_d.
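Here is a small sketch of the whole construction in Python (coordinates and heights are made-up placeholders; the corner labels follow the derivation above, with A above B on the left edge and D above C on the right edge):
def interp_height(x, z, xa=0.0, za=1.0, xc=1.0, zb=0.0,
                  h_a=2.0, h_b=1.0, h_c=3.0, h_d=4.0):
    # Corners: A = (xa, za), B = (xa, zb), C = (xc, zb), D = (xc, za).
    u1 = (x - xa) / (xc - xa)   # 0 at A and B, 1 at C and D
    u2 = 1.0 - u1
    v1 = (z - zb) / (za - zb)   # 1 at A and D, 0 at B and C
    v2 = 1.0 - v1
    S_A, S_B, S_C, S_D = u2 * v1, u2 * v2, u1 * v2, u1 * v1
    return h_a * S_A + h_b * S_B + h_c * S_C + h_d * S_D

print(interp_height(0.0, 1.0))   # 2.0 = h_a (Kronecker delta property at node A)
print(interp_height(0.5, 0.5))   # 2.5, a blend of all four corner heights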

Finding intersect in triangle from a vector originating from a particular side

I know the coordinates of A, B and C. I also know of a vector V originating from C.
I know that the vector intersects the line AB; I just don't know how to find the intersection point i.
Can anyone explain the steps involved in solving this problem?
Thanks a lot.
http://img34.imageshack.us/img34/941/triangleprob.png
If you know A and B, you know the equation for the line AB, and you said you know V, so you can form the equation for the line through C along V. Then i is the only point that satisfies both equations.
Equation for Line AB:
(bx-ax)(Y-ay) = (by-ay)(X-ax)
If you know the direction (or slope m) of the vector, and any point that lies on the vector, then the equation of the line for vector V is
Y = mX + b
where m is the slope or direction of the line, and b is the y coordinate where it crosses the vertical y-axis (where X = 0).
If you know a point on the line (e.g., C = (s, t)), then you solve for b by:
t = ms + b ==> b = t - ms,
so the equation becomes
Y = mX + t - ms
i = C+kV
Let's call N the normal to the line AB, so N = [-(B-A).y, (B-A).x]
Also, for any point on the line:
(P-A)*N = 0 -- substitute from line 1 above:
(C+kV-A)*N = 0
(kV+C-A)*N = 0
kV*N + (C-A)*N = 0
kV*N = (A-C)*N
k = [(A-C)*N]/(V*N)
Now that we have k, plug it into line 1 above to get i.
Here I'm using * to represent dot product so expanding to regular multiplication:
k = ((A.x-C.x)*-(B.y-A.y) + (A.y-C.y)*(B.x-A.x)) / (V.x*-(B.y-A.y) + V.y*(B.x-A.x))
I.x = C.x + k*V.x
I.y = C.y + k*V.y
Unless I screwed something up....
Simple algebra. The hard part is often just writing down the basic equations, but once written down, the rest is easy.
Can you define a line that emanates from the point C = [c_x,c_y], and points along the vector V = [v_x,v_y]? A nice way to represent such a line is to use a parametric representation. Thus,
V(t) = C + t*V
In terms of the vector elements, we have it as
V(t) = [c_x + t*v_x, c_y + t*v_y]
Look at how this works. When t = 0, we get the point C back, but for any other value of t, we get some other point on the line.
How about the line segment that passes through A and B? One way to solve this problem would be to define a second line parametrically in the same fashion. Then solve for a system of two equations in two unknowns to find the intersection.
An easier approach is to look at the normal vector to the line segment AB. That vector is given as
N = [b_y - a_y , a_x - b_x]/sqrt((b_x - a_x)^2 + (b_y - a_y)^2)
Note that N is defined here to have a unit norm.
So now, how do we know when a point happens to lie on the line that connects A and B? That is easy now: it happens exactly when the dot product defined below is zero.
dot(N,V(t) - A) = 0
Expand this, and solve for the parameter t. We can write it down using dot products.
t = dot(N,A-C)/dot(N,V)
Or, if you prefer,
t = (N_x*(a_x - c_x) + N_y*(a_y - c_y)) / (N_x*v_x + N_y*v_y)
And once we have t, substitute it into the expression above for V(t). Let's see all of this work in practice. I'll pick some points A, B, C and a vector V.
A = [7, 3]
B = [2, 5]
C = [1, 0]
V = [1, 1]
Our normal vector N, after normalization, will look something like
N = [0.371390676354104, 0.928476690885259]
The line parameter, t, is then
t = 3.85714285714286
And we find the point of intersection as
C + t*V = [4.85714285714286, 3.85714285714286]
If you plot the points on a piece of paper it should all fit together, and all in only a few simple expressions.
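Here is a short numerical check of that worked example (plain NumPy, just reproducing the numbers above):
import numpy as np

A = np.array([7.0, 3.0])
B = np.array([2.0, 5.0])
C = np.array([1.0, 0.0])
V = np.array([1.0, 1.0])

N = np.array([B[1] - A[1], A[0] - B[0]])
N = N / np.linalg.norm(N)              # [0.37139068, 0.92847669]

t = np.dot(N, A - C) / np.dot(N, V)    # 3.857142857...
i = C + t * V                          # [4.85714286, 3.85714286]
print(N, t, i)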
