I would like to code a function (let's call it update!) that updates the positions and velocities of particles inside a box.
I already have a function that describes the collision between two particles.
mutable struct Particle
    pos :: Vector{Float64}
    vel :: Vector{Float64}
end

p1 = Particle( rand(2) , rand(2) )
p2 = Particle( rand(2) , rand(2) )

function collision!(p1::Particle, p2::Particle)
    # Find collision vector
    n = p1.pos - p2.pos
    # Normalize it, since you want an orthonormal basis
    n ./= sqrt(n[1]^2 + n[2]^2)
    # Construct M
    M = [n[1] n[2]; -n[2] n[1]]
    # Find transformed velocity vectors
    v1ₙ = M*p1.vel
    v2ₙ = M*p2.vel
    # Swap first component
    v1ₙ[1], v2ₙ[1] = v2ₙ[1], v1ₙ[1]
    # Calculate and store new velocity vectors
    p1.vel .= M'*v1ₙ
    p2.vel .= M'*v2ₙ
    return nothing
end
I know that the function update! has to:
find out whether two particles are too close (they can't be closer than 2*Radius, as shown in the image above).
Only consider the relevant particles.
For a particle p1 we can go through all the possible collisions with other particles by using collision!.
See if the particle is inside the box; if not, it "bounces" off the walls of the box (I know that this can be done using xlims, ylims).
function update!(particles, xlims, ylims, dt)
    for p1 in particles
        # loop through pairs in order to find collisions
        for p2 in # ...
            # ... #
        end
        # walls
        # ... #
    end
    # positions update
    # ... #
end
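One way this skeleton might be filled in (just a sketch, assuming particle radius 1 so that two particles collide when their centers are closer than 2, a naive all-pairs check, and specular reflection at the walls):

function update!(particles, xlims, ylims, dt)
    # check every unordered pair once; radius-1 particles overlap when
    # their centers are closer than 2
    for i in 1:length(particles), j in i+1:length(particles)
        p1, p2 = particles[i], particles[j]
        if sqrt(sum(abs2, p1.pos .- p2.pos)) < 2
            collision!(p1, p2)
        end
    end
    for p in particles
        # bounce off a wall by flipping the velocity component that points out of the box
        if p.pos[1] < xlims[1] || p.pos[1] > xlims[2]
            p.vel[1] = -p.vel[1]
        end
        if p.pos[2] < ylims[1] || p.pos[2] > ylims[2]
            p.vel[2] = -p.vel[2]
        end
        # advance the position with an explicit Euler step
        p.pos .+= p.vel .* dt
    end
    return nothing
end

Note that with this naive check a pair that stays overlapped for several steps will trigger collision! repeatedly; a common refinement is to only resolve the collision when the two particles are also moving towards each other.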
Finally, there should be another function that defines the number of particles, the box size, the time step dt, and the total amount of time that the simulation runs.
function particles_in_box!(particles, xlims, ylims, T, dt=0.01)
    # ... #
end
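And a sketch of the driver, assuming the update! sketched above; it simply steps the system from 0 to T (the setup values below are made up for illustration):

function particles_in_box!(particles, xlims, ylims, T, dt=0.01)
    for t in 0:dt:T
        update!(particles, xlims, ylims, dt)
    end
    return particles
end

# hypothetical setup: 20 particles in a 20x20 box, simulated for 10 time units
particles = [Particle(20 .* rand(2), randn(2)) for _ in 1:20]
particles_in_box!(particles, (0.0, 20.0), (0.0, 20.0), 10.0)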
I think I got the theory right, but I am not sure how to implement it. Any help would be appreciated.
Related
I am very sorry for asking a question that is probably very easy if you know how to solve it, and of which many versions have been asked before. However, I am creating a new post since I have not found an answer to this specific question.
Basically, I have a 200cm x 200cm square that I am recording with a camera above it. However, the camera distorts the square slightly, see example here. I am wondering how to transform the x,y coordinates in the camera image to real-life x,y coordinates (e.g., between 0-200 cm for each side). I understand that I probably need to apply some kind of transformation matrix, but I do not know which one, nor how to determine it. I haven't done any serious linear algebra in a long time, so I appreciate any pointers on what to read up on, or how to get it done. I am working in Python, so if there is some ready-made code for doing the transformation, that would also be useful to know.
Thanks a lot!
I will show this using Python and NumPy.
import numpy as np
First, you have to understand the projection model: a (homogeneous) point p1 = (x, y, 1) on the world plane is mapped to its image point by applying H and then dividing by the resulting third coordinate.
def apply_homography(H, p1):
    p = H @ p1.T
    return (p[:2] / p[2]).T
With some algebraic manipulation you can determine the points at the plane z=1 that produced the given points.
def revert_homography(H, p2):
    Hb = np.linalg.inv(H)
    # 1. figure out which z coordinate should be appended to p2 in
    #    order to get z=1 for p1
    z = 1/(Hb[2,2] + (Hb[2,0] * p2[:,0] + Hb[2,1]*p2[:,1]))
    p2 = np.hstack([p2[:,:2] * z[:,None], z[:, None]])
    return p2 @ Hb.T
The projection is not invertible in general, but under the coplanarity assumption it can be inverted successfully.
Now, let's see how to determine the H matrix from the given points (assuming they are coplanar).
If you have the four corners in order, you can simply specify the (x,y) coordinates of the corners, and then use the projection equations to determine the homography matrix, like here, or here.
Naively this would require at least 5 points, since there are 9 coefficients and each point gives 2 equations; but a homography is only defined up to scale, so we can fix one element of the matrix (here H[2,2] = 1) and turn it into an inhomogeneous system with 8 unknowns, which the 4 point correspondences determine.
def find_homography(p1, p2):
    A = np.zeros((8, 2*len(p1)))
    # x2'*(H[2,0]*x1 + H[2,1]*y1)
    A[6,0::2] = p1[:,0] * p2[:,0]
    A[7,0::2] = p1[:,1] * p2[:,0]
    # - (H[0,0]*x1 + H[0,1]*y1 + H[0,2])
    A[0,0::2] = -p1[:,0]
    A[1,0::2] = -p1[:,1]
    A[2,0::2] = -1
    # y2'*(H[2,0]*x1 + H[2,1]*y1)
    A[6,1::2] = p1[:,0] * p2[:,1]
    A[7,1::2] = p1[:,1] * p2[:,1]
    # - (H[1,0]*x1 + H[1,1]*y1 + H[1,2])
    A[3,1::2] = -p1[:,0]
    A[4,1::2] = -p1[:,1]
    A[5,1::2] = -1
    # assuming H[2,2] = 1 we can pass its coefficient
    # to the independent term, making an inhomogeneous
    # equation
    b = np.zeros(2*len(p2))
    b[0::2] = -p2[:,0]
    b[1::2] = -p2[:,1]
    h = np.ones(9)
    h[:8] = np.linalg.lstsq(A.T, b, rcond=None)[0]
    return h.reshape(3,3)
Here is a complete usage example. I pick a random H and transform four random points; that is what you have. I then show how to find the transformation matrix H_. Next I create a test set of points, and I show how to find the world coordinates from the image coordinates.
# Pick a random Homography
H = np.random.rand(3,3)
H[2,2] = 1
# Pick a set of random points
p1 = np.random.randn(4, 3);
p1[:,2] = 1;
# The coordinates of the points in the image
p2 = apply_homography(H, p1)
# testing
# Create a set of random points
p_test = np.random.randn(20, 3)
p_test[:,2] = 1;
p_test2 = apply_homography(H, p_test)
# Now using only the corners find the homography
# Find a homography transform
H_ = find_homography(p1, p2)
assert np.allclose(H, H_)
# Predict the plane points for the test points
p_test_predicted = revert_homography(H_, p_test2)
assert np.allclose(p_test_predicted, p_test)
I would like to simulate the collision of particles inside a box.
To be more specific, I want to create a function (let's call it collision!) that updates the particles' velocities after each interaction, as shown in the image.
I defined the particles (with radius equal to 1) as follows:
mutable struct Particle
    pos :: Vector{Float64}
    vel :: Vector{Float64}
end

p = Particle( rand(2) , rand(2) )

# example for the position
p.pos
> 2-element Vector{Float64}:
   0.49339012018408135
   0.11441734325871078
And for the collision
function collision!(p1::Particle, p2::Particle)
    # ... #
    return nothing
end
The main idea is that when two particles collide, they "exchange" the component of their velocity vectors that is parallel to the line connecting the particles' centers (the vector n hat).
In order to do that, one would need to transform the velocity vectors into the orthonormal basis defined by the collision normal (n hat).
Then exchange the parallel components and rotate back into the original basis.
I think I got the math right, but I am not sure how to implement it in the code.
With the caveat that I have not checked the math at all, one implementation for the 2D case you describe might be along the lines of:
struct Particle
    pos :: Vector{Float64}
    vel :: Vector{Float64}
end

p1 = Particle( rand(2) , rand(2) )
p2 = Particle( rand(2) , rand(2) )

function collision!(p1::Particle, p2::Particle)
    # Find collision vector
    n = p1.pos - p2.pos
    # Normalize it, since you want an orthonormal basis
    n ./= sqrt(n[1]^2 + n[2]^2)
    # Construct M
    M = [n[1] n[2]; -n[2] n[1]]
    # Find transformed velocity vectors
    v1ₙ = M*p1.vel
    v2ₙ = M*p2.vel
    # Swap first component (or should it be second? Depends on how M was constructed)
    v1ₙ[1], v2ₙ[1] = v2ₙ[1], v1ₙ[1]
    # Calculate and store new velocity vectors
    p1.vel .= M'*v1ₙ
    p2.vel .= M'*v2ₙ
    return nothing
end
A few points:
You don't need a mutable struct; just a plain struct will work fine, since the Vector fields themselves are mutable.
This implementation has a lot of excess allocations that you could avoid by working either in-place or, perhaps more feasibly, on the stack (for example, using StaticArrays of some sort instead of base Arrays as the basis for your position and velocity vectors; see the sketch after this list). In-place actually might not be too hard either if you just make another struct (say "CollisionEvent") which holds preallocated buffers for M, n, v1n and v2n, and pass that to the collision! function as well.
While I have not dug into it, one might be able to find useful reference implementations for this type of collision in a molecular dynamics package like https://github.com/JuliaMolSim/Molly.jl
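To illustrate the StaticArrays route from the list above, here is a sketch (my own, with made-up names ParticleS and collide; it performs the equal-mass exchange of the normal velocity component directly instead of going through the matrix M):

using StaticArrays, LinearAlgebra

struct ParticleS                       # hypothetical immutable variant
    pos :: SVector{2,Float64}
    vel :: SVector{2,Float64}
end

# return two updated particles instead of mutating, so nothing is heap-allocated
function collide(p1::ParticleS, p2::ParticleS)
    n  = normalize(p1.pos - p2.pos)    # unit collision normal
    dv = dot(p2.vel - p1.vel, n)       # relative velocity along the normal
    return ParticleS(p1.pos, p1.vel + dv*n), ParticleS(p2.pos, p2.vel - dv*n)
end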
From what I have read, using FFTW.jl / AbstractFFTs.jl's fft(A) when A is a 2D array should perform the FFT in 2D, not column-wise. Any idea why I am seeing only column-wise diffusion when (I think) I'm adding a scaled second spatial derivative to u(t,x), as if using an explicit solver in time?
Thank you! I am quite new to this.
code and heatmap screenshot
using Random
using FFTW
using Plots
gr()
N = (100,100)
# initialize with gaussian noise
u = randn(Float16, (N[1], N[2])).*0.4.+0.4
# include square of high concentration to observe diffusion clearly
u[40:50,40:50] .= 3
N = size(u)
L = 100
k1 = fftfreq(51)
k2 = fftfreq(51)
lap_mat = -(k1.^2 + k2.^2)
function lap_fft(x)
    lapF = rfft(x)
    lap = irfft(lap_mat.*lapF, 100)
    return lap
end

# ode stepper or Implicit-Explicit solver
for i in 1:100000
    u += lap_fft(u)*0.0001
end
# plot state
heatmap(u)
Just because you are performing a real FFT doesn't mean that you can apply a real inverse FFT to the result: rfft goes from R -> C. What you can do, however, is the following:
function lap_fft(x)
    lapF = complex(zeros(100,100));         # only upper half filled
    lapF[1:51,1:100] = rfft(x) .* lap_mat;  # R -> C
    return abs.(ifft(lapF));                # C -> R
end
Real FFT into the complex frequency domain (only the upper half is filled because of data redundancy), multiply your filter in the frequency domain, inverse FFT back into the complex image domain, and take the magnitude with abs.(), the real part with real.(), etc.
But honestly, why the hassle with the real fft?
using Random
using FFTW
using Plots
gr()
N = (100,100)
# initialize with gaussian noise
u = randn(Float16, (N[1], N[2])).*0.4.+0.4;
# include square of high concentration to observe diffusion clearly
u[40:50,40:50] .= 3;
N = size(u);
L = 100;
k1 = fftfreq(100);
k2 = fftfreq(100);
tmp = -(k1.^2 + k2.^2);
lap_mat = sqrt.(tmp.*reshape(tmp,1,100));
function lap_fft(x)
    return abs.(ifftshift(ifft(fftshift(ifftshift(fft(fftshift(x))).*lap_mat))));
end

# ode stepper or Implicit-Explicit solver
for i in 1:100000
    u += lap_fft(u)*0.001;
end
# plot state
heatmap(u)
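As a side note (my own sketch, not part of the answer above; the names lap_mat2d and lap_fft2d are made up): the column-wise behaviour in the original code comes from lap_mat being a length-51 vector, so the same 1D multiplier is broadcast down every column. A genuinely 2D Laplacian multiplier needs wavenumbers along both dimensions, e.g. for the rfft layout:

using FFTW

N = (100, 100)
kx = rfftfreq(N[1])   # 51 frequencies along the first (real-transformed) dimension
ky = fftfreq(N[2])    # 100 frequencies along the second dimension
lap_mat2d = [-(kx[i]^2 + ky[j]^2) for i in 1:length(kx), j in 1:length(ky)]   # 51x100
# 2D spectral Laplacian, using the same (unit-free) scaling convention as the code above
lap_fft2d(x) = irfft(rfft(x) .* lap_mat2d, N[1])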
Let f be a continuous real function defined on the interval [a,b]. I want to approximate this function by a piecewise quadratic polynomial. I already created a matrix that summarizes these polynomials. Let's say that I'm considering a uniform partition of the interval into N pieces (therefore N+1 points).
I have a matrix A of size N times 3, where the k-th row represents the quadratic polynomial associated with the k-th interval of this partition in the natural form (the row [a b c] represents the polynomial a+bx+cx^2). I already created a method to find this matrix (obviously it depends on the choice of my interpolation points inside each interval, but that doesn't matter for this question).
I'm trying to plot the corresponding function, but I'm having some problems. I used the same idea given in this similar question. This is what I wrote:
x = zeros(N+1,1);
% this is the set of points defining the uniform partition
for i = 1:N+1
    x(i) = a + (i-1)*((b-a)/N);
end
% this is the length of my linspace for plotting the functions
l = 100;
And now I plot the functions:
figure;
hold on;
% first the original function
u = linspace(a,b,l*N);
v = arrayfun( f , u);
plot(u,v,'b')
% this is for plotting the other functions
for k = 1:N
    x0 = linspace(x(k),x(k+1));
    y0 = arrayfun(@(t) [1,t,t^2]*A(k,:)', x0);
    plot(x0, y0, 'r');
end
The problem is that the for loop is plotting the same function f, and I don't know why. I tried with multiple different functions. I'm pretty sure that my matrix A is correct.
Please write a minimal working example that can be run as standalone code or copy/pasted by people here to check where you might have a bug -- often, in the process of reducing your code to its bare essentials in this manner, you end up figuring out what the problem is yourself in the first place. But, in any case, I have written one myself and cannot replicate the problem.
figure;
hold on;

# arbitrary values for Minimal Working Example
N = 10;
x = [10:10:110];    # N+1 points
A = randn( N, 3 );  # (N, 3)
a = 100; b = 200; l = 3;
f = @(t) t.^2 .* sin(t);

% first the original function
u = linspace(a,b,l*N);
v = arrayfun( f , u);
plot(u,v,'b')

for k = 1 : N
    x0 = linspace( x(k), x(k+1) )
    y0 = arrayfun( @(t) ([1, t, t.^2]) * (A(k, :).'), x0 )
    x0, y0
    plot(x0, y0, 'r');
endfor
hold off;
Output:
Are you doing something different?
I'd like to implement image morphing, for which I need to be able to deform an image with a given set of points and their destination positions (where they will be "dragged"). I am looking for a simple and easy solution that gets the job done; it doesn't have to look great or be extremely fast.
This is an example of what I need:
Let's say I have an image and a set of only one deforming point [0.5,0.5], which will have its destination at [0.6,0.5] (or we can say its movement vector is [0.1,0.0]). This means I want to move the very center pixel of the image by 0.1 to the right. Neighboring pixels within some given radius r of course need to be "dragged along" a little with this pixel.
My idea was to do it like this:
I'll make a function mapping the source image positions to destination positions depending on the deformation point set provided.
I will then have to find the inverse function of this function, because I have to perform the transformation by going through destination pixels and seeing "where the point had to come from to come to this position".
My function from step 1 looked like this:
p2 = p1 + ( 1 / ( (distance(p1,p0) / r)^2 + 1 ) ) * s
where
p0 ([x,y] vector) is the deformation point position.
p1 ([x,y] vector) is any given point in the source image.
p2 ([x,y] vector) is the position, to where p1 will be moved.
s ([x,y] vector) is the movement vector of the deformation point and says in which direction and how far p0 will be dragged.
r (scalar) is the radius, just some number.
I have a problem with step number 2. The calculation of the inverse function seems a little too complex to me, and so I wonder:
If there is an easy solution for finding the inverse function, or
if there is a better function for which finding the inverse function is simple, or
if there is an entirely different way of doing all this that is simple?
Here's the solution in Python - I did what Yves Daoust recommended and simply tried to use the forward function as the inverse function (switching the source and destination). I also altered the function slightly; changing the exponents and other values produces different results. Here's the code:
from PIL import Image
import math

def vector_length(vector):
    return math.sqrt(vector[0] ** 2 + vector[1] ** 2)

def points_distance(point1, point2):
    return vector_length((point1[0] - point2[0],point1[1] - point2[1]))

def clamp(value, minimum, maximum):
    return max(min(value,maximum),minimum)

## Warps an image according to given points and shift vectors.
#
# @param image input image
# @param points list of (x, y, dx, dy) tuples
# @return warped image
def warp(image, points):
    result = Image.new("RGB",image.size,"black")
    image_pixels = image.load()
    result_pixels = result.load()
    for y in range(image.size[1]):
        for x in range(image.size[0]):
            offset = [0,0]
            for point in points:
                point_position = (point[0] + point[2],point[1] + point[3])
                shift_vector = (point[2],point[3])
                helper = 1.0 / (3 * (points_distance((x,y),point_position) / vector_length(shift_vector)) ** 4 + 1)
                offset[0] -= helper * shift_vector[0]
                offset[1] -= helper * shift_vector[1]
            coords = (clamp(x + int(offset[0]),0,image.size[0] - 1),clamp(y + int(offset[1]),0,image.size[1] - 1))
            result_pixels[x,y] = image_pixels[coords[0],coords[1]]
    return result

image = Image.open("test.png")
image = warp(image,[(210,296,100,0), (101,97,-30,-10), (77,473,50,-100)])
image.save("output.png","PNG")
You don't need to construct the direct function and invert it. Directly compute the inverse function by swapping the roles of the source and destination points.
You need some form of bivariate interpolation; have a look at radial basis function interpolation. It requires solving a linear system of equations.
Inverse distance weighting (similar to your proposal) is the easiest to implement but I am afraid it will give disappointing results.
https://en.wikipedia.org/wiki/Multivariate_interpolation#Irregular_grid_.28scattered_data.29