From what I have read, using FFTW.jl / AbstractFFTs.jl's fft(A) on a 2D array A should perform the FFT in 2D, not column-wise. Any idea why I am seeing only column-wise diffusion when (I think) I am adding a scaled second spatial derivative to u(t,x), as if using an explicit solver in time?
Thank you! I am quite new to this.
code and heatmap screenshot
using Random
using FFTW
using Plots
gr()
N = (100,100)
# initialize with gaussian noise
u = randn(Float16, (N[1], N[2])).*0.4.+0.4
# include square of high concentration to observe diffusion clearly
u[40:50,40:50] .= 3
N = size(u)
L = 100
k1 = fftfreq(51)
k2 = fftfreq(51)
lap_mat = -(k1.^2 + k2.^2)
function lap_fft(x)
    lapF = rfft(x)
    lap = irfft(lap_mat.*lapF, 100)
    return lap
end
# ode stepper or Implicit-Explicit solver
for i in 1:100000
    u += lap_fft(u)*0.0001
end
# plot state
heatmap(u)
Just because you are performing a real FFT doesn't mean that you can real-inverse-FFT the result: rfft goes from R -> C. What you can do, however, is the following:
function lap_fft(x)
    lapF = complex(zeros(100,100));          # only upper half filled
    lapF[1:51,1:100] = rfft(x) .* lap_mat;   # R -> C
    return abs.(ifft(lapF));                 # C -> R
end
Real FFT to the complex frequency domain (only the upper half is filled because of the data redundancy), multiply by your filter in the frequency domain, inverse FFT back to the complex image domain, and take the magnitude with abs.(), the real part with real.(), etc.
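As an aside, here is a minimal sketch (not the code above) of how you could also stay with rfft/irfft: the Fourier-space multiplier then has to be a 2-D array with the same 51×100 shape as rfft's output, so that both dimensions get scaled, and irfft needs the original length of the transformed dimension. rfftfreq comes from AbstractFFTs and is available via FFTW.
using FFTW
k1 = rfftfreq(100)                    # 51 non-redundant frequencies along dim 1
k2 = fftfreq(100)                     # 100 frequencies along dim 2
lap2d = -(k1.^2 .+ (k2').^2)          # 51×100 multiplier, matches rfft's output shape
lap_fft2(x) = irfft(lap2d .* rfft(x), 100)   # real 100×100 result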
But honestly, why the hassle with the real fft?
using Random
using FFTW
using Plots
gr()
N = (100,100)
# initialize with gaussian noise
u = randn(Float16, (N[1], N[2])).*0.4.+0.4;
# include square of high concentration to observe diffusion clearly
u[40:50,40:50] .= 3;
N = size(u);
L = 100;
k1 = fftfreq(100);
k2 = fftfreq(100);
lap_mat = -(k1.^2 .+ reshape(k2,1,100).^2);  # full 100×100 Laplacian symbol, in fft's natural frequency ordering
function lap_fft(x)
    return real.(ifft(fft(x).*lap_mat));
end
# ode stepper or Implicit-Explicit solver
for i in 1:100000
    u += lap_fft(u)*0.001;
end
# plot state
heatmap(u)
I am very sorry for asking a question that is probably very easy if you know how to solve it, and of which many versions have been asked before. However, I am creating a new post since I have not found an answer to this specific question.
Basically, I have a 200cm x 200cm square that I am recording with a camera above it. However, the camera distorts the square slightly, see example here. I am wondering how to go from the x,y coordinates in the camera to real-life x,y coordinates (e.g., between 0-200 cm for each side). I understand that I probably need to apply some kind of transformation matrix, but I do not know which one, nor how to determine it. I haven't done any serious linear algebra in a long time, so I appreciate any pointers on what to read up on, or how to get it done. I am working in Python, so if there is some ready-made code for doing the transformation, that would also be useful to know.
Thanks a lot!
I will show this using Python and NumPy.
import numpy as np
First, you have to understand the projection model: a point p1 = (x, y, 1) on the world plane is mapped to homogeneous image coordinates p2 ∝ H p1, and dividing by the third coordinate gives the observed image coordinates.
def apply_homography(H, p1):
    p = H @ p1.T
    return (p[:2] / p[2]).T
With some algebraic manipulation you can determine the points on the plane z=1 that produced the given image points.
def revert_homography(H, p2):
    Hb = np.linalg.inv(H)
    # 1. figure out which z coordinate should be added to p2
    #    in order to get z=1 for p1
    z = 1/(Hb[2,2] + (Hb[2,0] * p2[:,0] + Hb[2,1]*p2[:,1]))
    p2 = np.hstack([p2[:,:2] * z[:,None], z[:, None]])
    return p2 @ Hb.T
The projection is not invertible in general, but under the coplanarity assumption it can be inverted successfully.
Now, let's see how to determine the H matrix from the given points (assuming they are coplanar).
If you have the four corners in order, you can simply specify the (x,y) coordinates of each corner, and then you can use the projection equations to determine the homography matrix, like here, or here.
Naively this would require at least 5 points, since there are 9 coefficients and each point correspondence gives two equations, but the matrix is only defined up to scale: fixing one element (H[2,2] = 1) leaves 8 unknowns and turns the system into an inhomogeneous one, so 4 correspondences are enough.
def find_homography(p1, p2):
    A = np.zeros((8, 2*len(p1)))
    # x2'*(H[2,0]*x1 + H[2,1]*y1)
    A[6,0::2] = p1[:,0] * p2[:,0]
    A[7,0::2] = p1[:,1] * p2[:,0]
    # - (H[0,0]*x1 + H[0,1]*y1 + H[0,2])
    A[0,0::2] = -p1[:,0]
    A[1,0::2] = -p1[:,1]
    A[2,0::2] = -1
    # y2'*(H[2,0]*x1 + H[2,1]*y1)
    A[6,1::2] = p1[:,0] * p2[:,1]
    A[7,1::2] = p1[:,1] * p2[:,1]
    # - (H[1,0]*x1 + H[1,1]*y1 + H[1,2])
    A[3,1::2] = -p1[:,0]
    A[4,1::2] = -p1[:,1]
    A[5,1::2] = -1
    # assuming H[2,2] = 1 we can move its coefficient
    # to the independent term, making the equation
    # inhomogeneous
    b = np.zeros(2*len(p2))
    b[0::2] = -p2[:,0]
    b[1::2] = -p2[:,1]
    h = np.ones(9)
    h[:8] = np.linalg.lstsq(A.T, b, rcond=None)[0]
    return h.reshape(3,3)
Here is a complete usage example. I pick a random H and transform four random points; this is what you have. I show how to find the transformation matrix H_. Next I create a test set of points, and I show how to find the world coordinates from the image coordinates.
# Pick a random Homography
H = np.random.rand(3,3)
H[2,2] = 1
# Pick a set of random points
p1 = np.random.randn(4, 3);
p1[:,2] = 1;
# The coordinates of the points in the image
p2 = apply_homography(H, p1)
# testing
# Create a set of random points
p_test = np.random.randn(20, 3)
p_test[:,2] = 1;
p_test2 = apply_homography(H, p_test)
# Now using only the corners find the homography
# Find a homography transform
H_ = find_homography(p1, p2)
assert np.allclose(H, H_)
# Predict the plane points for the test points
p_test_predicted = revert_homography(H_, p_test2)
assert np.allclose(p_test_predicted, p_test)
I have a list of positive and negative values and a single temperature. I am trying to plot the Maxwell-Boltzmann Distribution using the equation for particles moving in only one direction.
m_e = 9.11E-28 # electron mass [g]
k = 1.38E-16 # boltzmann constant [erg*K^-1]
v = range(1e10, -1e10, step=-1e8) # velocity [cm/s]
T_M = 1e6 # temperature of Maxwellian [K]
function Maxwellian(v_Max, T_Max)
    normal = (m_e/(2*pi*k*T_Max))^1.5
    exp_term = exp(-((m_e).*v_Max.*v_Max)/(3*k*T_Max))
    return normal*exp_term
end
# Initially comparing chosen distribution f_s to Maxwellian F_s
plot(v, Maxwellian.(v, T_M), label= L"F_s" * " (Maxwellian)")
xlabel!("velocity (cm/s)")
ylabel!("probability density")
However, when plotting this, my whole function is 0:
I tested whether I wrote my function correctly by replacing return normal*exp_term with return exp_term (i.e. ignoring any normalization constants), and this does produce the distinct shape of the bell curve:
Yet, without the normalization constant, this will not preserve the area under the curve. I was wondering what I may be doing incorrectly in setting up my Maxwellian function and the constant in front of the exponential.
If you print the normalization term on its own:
julia> (m_e/(2*pi*k*T_M))^1.5
1.0769341115495682e-27
you can see that it is 10 orders of magnitude smaller than the Y-axis scale used for the plot. You can set the Y-axis limits during the plot call with the ylims argument, or after the plot with:
julia> ylims!(-1e-28, 2e-27)
which changes the plot to:
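For reference, the same limits can also be passed as a keyword at plot time; a small sketch, reusing the names from the question:
plot(v, Maxwellian.(v, T_M), ylims=(-1e-28, 2e-27), label="F_s (Maxwellian)")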
I would like to code a function (let's call it update!) that updates the position and velocity of particles inside a box.
I already have a function that describes the collision between two particles.
mutable struct Particle
pos :: Vector{Float64}
vel :: Vector{Float64}
end
p1 = Particle( rand(2) , rand(2) )
p2 = Particle( rand(2) , rand(2) )
function collision!(p1::Particle, p2::Particle)
    # Find collision vector
    n = p1.pos - p2.pos
    # Normalize it, since you want an orthonormal basis
    n ./= sqrt(n[1]^2 + n[2]^2)
    # Construct M
    M = [n[1] n[2]; -n[2] n[1]]
    # Find transformed velocity vectors
    v1ₙ = M*p1.vel
    v2ₙ = M*p2.vel
    # Swap first component
    v1ₙ[1], v2ₙ[1] = v2ₙ[1], v1ₙ[1]
    # Calculate and store new velocity vectors
    p1.vel .= M'*v1ₙ
    p2.vel .= M'*v2ₙ
    return nothing
end
I know that the function update! has to:
find out if the particles are not too close (they can't be closer than 2*Radius, as shown in the image above).
Only consider the relevant particles.
For a particle p1 we can go through all the possible collisions with other particles by using collision!.
See if the particle is inside the box; if not, it "bounces" off the walls of the box. (I know that this can be done using xlims, ylims.)
function update!(particles, xlims, ylims, dt)
for p1 in particles
# loop through pairs in order to find collisions
for p2 in # ...
# ... #
# walls
# ... #
end
# positions update
# ... #
end
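For concreteness, this is roughly the structure I have in mind (only a sketch, with an assumed particle radius R; the details of the checks are what I am unsure about):
function update!(particles, xlims, ylims, dt; R = 0.05)   # R: assumed particle radius
    n = length(particles)
    # collisions: check every pair once and resolve pairs that are too close
    for i in 1:n-1, j in i+1:n
        p1, p2 = particles[i], particles[j]
        d = p1.pos - p2.pos
        if sqrt(d[1]^2 + d[2]^2) < 2R
            collision!(p1, p2)
        end
    end
    # walls: flip the velocity component that points out of the box
    for p in particles
        if p.pos[1] - R < xlims[1] || p.pos[1] + R > xlims[2]
            p.vel[1] = -p.vel[1]
        end
        if p.pos[2] - R < ylims[1] || p.pos[2] + R > ylims[2]
            p.vel[2] = -p.vel[2]
        end
    end
    # positions update
    for p in particles
        p.pos .+= dt .* p.vel
    end
    return nothing
end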
Finally, there should be another function that defines the number of particles, the box size, the time interval dt, and the amount of time that the simulation runs.
function particles_in_box!(particles, xlims, ylims, T, dt=0.01)
# ... #
end
I think I got the theory right, but I am not sure how to implement it. Any help would be appreciated.
Let f be a continuous real function defined on the interval [a,b]. I want to approximate this function by a piecewise quadratic polynomial. I have already created a matrix that summarizes these polynomials. Let's say that I'm considering a uniform partition of the interval into N pieces (therefore N+1 points).
I have a matrix A of size N times 3, where the k-th row represents the quadratic polynomial associated with the k-th interval of this partition in the natural form (the row [a b c] represents the polynomial a+bx+cx^2). I have already created a method to find this matrix (obviously it depends on the choice of my interpolation points inside each interval, but that doesn't matter for this question).
I'm trying to plot the corresponding function, but I'm having some problems. I used the same idea given in this similar question. This is what I wrote:
x=zeros(N+1,1);
%this is the set of points defining the uniform partition
for i=1:N+1
    x(i)=a+(i-1)*((b-a)/(N));
end
%this is the length of my linspace for plotting the functions
l=100;
And now I plot the functions:
figure;
hold on;
%first the original function
u=linspace(a,b,l*N);
v=arrayfun( f , u);
plot(u,v,'b')
% this is for plotting the other functions
for k=1:N
    x0=linspace(x(k),x(k+1));
    y0=arrayfun(@(t) [1,t,t^2]*A(k,:)',x0);
    plot(x0, y0, 'r');
end
The problem is that the for loop keeps plotting the same function f, and I don't know why. I tried with multiple different functions. I'm pretty sure that my matrix A is correct.
Please write a minimal working example that can be run as standalone code or copy/pasted by people here to check where you might have a bug -- often, in the process of reducing your code to its bare principles in this manner, you end up figuring out the problem yourself in the first place. But, in any case, I have written one myself and cannot replicate the problem.
figure;
hold on;
# arbitrary values for Minimal Working Example
N = 10;
x = [10:10:110]; # (N+1, 1)
A = randn( N, 3 ); # (N, 3)
a = 100; b = 200; l = 3;
f = @(t) t.^2 .* sin(t);
%first the original function
u = linspace(a,b,l*N);
v = arrayfun( f , u);
plot(u,v,'b')
for k = 1 : N
    x0 = linspace( x(k), x(k+1) )
    y0 = arrayfun( @(t) ([1, t, t.^2]) * (A(k, :).'), x0 )
    x0, y0
    plot(x0, y0, 'r');
endfor
hold off;
Output:
Are you doing something different?
The analysis with wavelets seems to be carried out as a discrete transform via matrix multiplication. So it is not surprising, I guess, that when plotting, for example, D4, the R package wmtsa returns the plot:
require(wmtsa)
filters <- wavDaubechies("d4")
plot(filters)
The question is how to go from this discretized plot to the plot in the Wikipedia entry:
Please note that I'm not interested in generating these curves precisely with wmtsa. Any other package will do - I don't have Matlab or Mathematica. But I wonder if the way to go is to start by translating this Mathematica chunk of code from this paper into R, rather than using built-in functions:
WaveletTransform.m
c[k_] := c[k] = Daubechies[4][[k+1]];
phi[1] = (1+Sqrt[3])/2 // N;
phi[2] = (1-Sqrt[3])/2 // N;
phi[x_ /; x<=0 || x>=3] := 0
phi[x_?NumberQ] := phi[x] =
    N[Sqrt[2]] Sum[c[k] phi[2x-k], {k,0,3}];
In order to plot the wavelet and scaling function all you need are the four numbers shown in the first two plots. I'll focus on plotting the scaling function.
Integer shifts of the scaling function, 𝜑, form an orthonormal basis of the subspace V0 of the multiresolution analysis. We also have that V-1 ⊆ V0 and that 𝜑(x/2) ∈ V-1. Using this gives us the identity
𝜑(x/2) = 2 ∑k ∈ ℤ hk𝜑(x-k)
Now we just need the values of hk (with the normalization ∑k hk = 1 used here, which is where the factor of 2 in the identity comes from). For the Daubechies wavelet these are the values shown in the discrete plot you gave (and zero for every other value of k). For the exact values of the hk, first let 𝜇 = (1+sqrt(3))/2. Then we have that
h0 = 𝜇/4
h1 = (1+𝜇)/4
h2 = (2-𝜇)/4
h3 = (1-𝜇)/4
and hk = 0 otherwise.
Using these two things we are able to plot the function using what is known as the cascade algorithm. First notice that 𝜑(0) = 𝜑(0/2) = 2(h0𝜑(0) + h1𝜑(0-1) + h2𝜑(0-2) + h3𝜑(0-3)). The only way this equation can hold is if 𝜑(0) = 𝜑(-1) = 𝜑(-2) = 𝜑(-3) = 0. Extending this will show that for x ≦ 0 we have that 𝜑(x) = 0. Furthermore, a similar argument can show that 𝜑(x) = 0 for x ≥ 3.
Thus, we only need to worry about x = 1 and x = 2 to find non-zero values of 𝜑 for integer values of x. If we put x = 2 into the identity for 𝜑(x/2) we get that 𝜑(1) = 2(h0𝜑(2) + h1𝜑(1)). Putting x = 4 into the identity gives us that 𝜑(2) = 2(h2𝜑(2) + h3𝜑(1)).
We can rewrite the above two equations as a matrix multiplied by a vector equals a vector. In fact, it will be in the form v = Av (v is the same vector on both sides). This means that v is an eigenvector of the matrix A with eigenvalue 1. But v = (𝜑(1), 𝜑(2)) and so by finding this eigenvector using the standard methods we will be able to find the values of 𝜑(1) and 𝜑(2).
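To make this concrete, a small worked check using the hk above: A = 2·[h1 h0; h3 h2] = [(3+√3)/4 (1+√3)/4; (1-√3)/4 (3-√3)/4], and multiplying out confirms that v = ((1+√3)/2, (1-√3)/2) indeed satisfies Av = v.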
In fact, this gives us that 𝜑(1) = (1+sqrt(3))/2 and 𝜑(2) = (1-sqrt(3))/2 (this is where those values in the Mathematica code sample come from). Also note that the eigenvector is only determined up to scale, so we need to specifically choose the one normalized so that 𝜑(1) + 𝜑(2) = 1; that is why you must use exactly those values for 𝜑(1) and 𝜑(2) rather than some rescaling of them.
Now we can find the values of 𝜑(1/2), 𝜑(3/2), and 𝜑(5/2). For example, 𝜑(1/2) = 2h0𝜑(1) and 𝜑(3/2) = 2(h1𝜑(2) + h2𝜑(1)).
With these values, you can then find the values of 𝜑(1/4), 𝜑(3/4), and so on. Continuing this process will give you the value of 𝜑 at all dyadic rationals (rational numbers of the form k/2^j).
The same process can be used to find the wavelet function. You only need to use the four different values shown in the first plot rather than the four shown in the second plot.
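(A side note, not from the answer above: with one common sign/index convention the wavelet filter is g0 = h3, g1 = -h2, g2 = h1, g3 = -h0, the "alternating flip" of the hk, and the mother wavelet then satisfies 𝜓(x/2) = 2 ∑k gk𝜑(x-k). So once 𝜑 has been tabulated on the dyadic grid, 𝜓 follows from one pass of the same kind of sum; other conventions differ only by a sign or an index shift.)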
I recently implemented this in Python. An R implementation would be fairly similar.
import numpy as np
import matplotlib.pyplot as plt
def cascade_algorithm(j: int):
    mu = (1 + np.sqrt(3))/2
    h_k = np.array([mu/4, (1+mu)/4, (2-mu)/4, (1-mu)/4])
    # Array to store all the values of phi.
    phi_vals = np.zeros((2, 3*2**j+1), dtype=np.float64)
    for i in range(3*2**j+1):
        phi_vals[0][i] = i/(2**j)
    calced_vals = np.zeros((3*2**j+1), dtype=bool)
    # Input values for 1 and 2.
    phi_vals[1][1*2**j] = (1+np.sqrt(3))/2
    phi_vals[1][2*2**j] = (1-np.sqrt(3))/2
    # We now know the values for 0, 1, 2, and 3.
    calced_vals[0] = True
    calced_vals[1*2**j] = True
    calced_vals[2*2**j] = True
    calced_vals[3*2**j] = True
    # Now calculate for all the dyadic rationals.
    for k in range(1, j+1):
        for l in range(1, 3*2**k):
            x = l/(2**k)
            if calced_vals[int(x*2**j)] != True:
                calced_vals[int(x*2**j)] = True
                two_x = 2*x
                which_k = np.array([0, 1, 2, 3], dtype=int)
                which_k = ((two_x - which_k > 0) & (two_x - which_k < 3))
                phi = 0
                for n, _ in enumerate(which_k):
                    if which_k[n] == True:
                        phi += h_k[n]*phi_vals[1][int((two_x-n)*2**j)]
                phi_vals[1][int(x*2**j)] = 2*phi
    return phi_vals
phi_vals = cascade_algorithm(10)
plt.plot(phi_vals[0], phi_vals[1])
plt.show()
If you just want to plot the graphs, then you can use the package "wavethresh" to plot for example the D4 with the following commands:
draw.default(filter.number=4, family="DaubExPhase", enhance=FALSE, main="D4 Mother", scaling.function = F) # mother wavelet
draw.default(filter.number=4, family="DaubExPhase", enhance=FALSE, main="D4 Father", scaling.function = T) # father wavelet
Notice that either the mother wavelet or the father wavelet will be plotted, depending on the variable "scaling.function". If it is TRUE, the father (scaling) wavelet is plotted; otherwise the mother wavelet is plotted.
If you want to generate it yourself, without packages, I'd suggest you follow the Daubechies-Lagarias algorithm, described in this paper. It is not hard to implement.