So I have an initial velocity iv, a final velocity fv (which is always 0), a time t, and an acceleration variable a.
I use these variables to calculate the final distance fd.
Note: language used here is Kotlin
Note: The formulas used for calculating fd and a are not something I came up with
var iv = 10.0 // initial velocity
var fv = 0.0 // final velocity
var t = 8.0 // time
var a = ((fv - iv)/t) // acceleration
var fd: Double = ((iv*t) + (a/2.0*Math.pow(t,2.0)))
I get the result that fd = 40.0.
However, when I try to model this the way I would apply it in code:
var d = 0.0 // current distance traveled
var v = iv  // current velocity
var i = 0   // current time elapsed (seconds)
while (i < t) {
    d += v
    v += a
    i++
}
I end up with the result d = 45.0, when d should equal fd at the end.
What am I doing wrong in applying acceleration to velocity and velocity to distance, so that my result differs from what the mathematical formula says it should be?
Don't worry about "formulas" - think about the physics.
If you have ever studied calculus and physics you know that:
a = dv/dt // a == acceleration; v == velocity; t == time
v = ds/dt // v == velocity; s == distance; t == time
If you know calculus well enough you can integrate the equation for acceleration twice to get the distance traveled as a function of time:
a(t) = dv/dt = a0
v(t) = ds/dt = a0*t + v0
s(t) = (a0/2)*t^2 + v0*t + s0
You can calculate the constants:
a0 = -1.25 m/sec^2
v0 = 10 m/s
s0 = 0 m
Substituting:
a(t) = -1.25
v(t) = 10 - 1.25*t
s(t) = -0.625*t^2 + 10*t = (10 - 0.625*t)*t
You can also calculate the answer numerically. That's what you're doing with Kotlin.
If you know the initial conditions
a(0), v(0), and s(0)
you can calculate the value at the end of a time increment dt like this:
a(t+dt) = f(t+dt)
v(t+dt) = v(t) + a(t)*dt
s(t+dt) = s(t) + v(t)*dt
Looks like you are assuming that acceleration is constant throughout the time you're interested in.
You don't say what units you're using. I'll assume metric units: length in meters and time in seconds.
You decelerate from an initial velocity of 10 m/sec to a final velocity of 0 m/sec over 8 seconds. That means a constant acceleration of -1.25 m/sec^2.
You should be able to substitute values into these equations and get the answers you need.
Do the calculations by hand before you try to code them.
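To see the gap numerically: the loop in the question adds the velocity at the start of each whole second, which is forward Euler integration with dt = 1, and while decelerating that overestimates the distance (45 instead of 40). Shrinking dt makes the result converge to the analytic 40. Here is a minimal sketch of that idea, written in Julia for brevity (euler_distance is just an illustrative helper; the same logic ports directly to the Kotlin loop):

# Forward Euler integration of s' = v, v' = a with constant acceleration.
# dt = 1.0 reproduces the loop in the question (d = 45.0); a smaller dt
# approaches the analytic answer s(8) = 40.0.
function euler_distance(iv, a, t_end, dt)
    v, d, t = iv, 0.0, 0.0
    while t < t_end
        d += v * dt   # distance advances using the velocity at the start of the step
        v += a * dt   # then the velocity is updated
        t += dt
    end
    return d
end

euler_distance(10.0, -1.25, 8.0, 1.0)    # ≈ 45.0, same as the question's loop
euler_distance(10.0, -1.25, 8.0, 0.001)  # ≈ 40.0, close to the exact result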
I wrote a main function which uses a stochastic optimization algorithm (Particle Swarm Optimization) to find the optimal solution for an ODE system. I run it 50 times to make sure the optimum can be found. At first it runs normally, but then I found that the calculation time increases as the iteration count increases.
The first ten calculations take less than 300 s each, but by the final calculation this grows to about 500 s. Each calculation seems to take 3 to 5 seconds longer than the previous one. I have followed the high-performance tips to optimize my code, but it doesn't help.
I'm sorry, I didn't know how to upload my code before; here is the code I wrote, below. In this code the experimental data is not loaded, so I may need to find a way to upload the data. In the main function, as i increases, the time cost of each calculation increases.
Oh, by the way, I found another interesting phenomenon. I changed the number of calculations and the calculation time changed again. For the first 20 calculations in the main loop, each calculation took about 300 s and the memory usage fluctuated significantly. But then something happened that I don't understand, and it sped up: each calculation took roughly a quarter of the time, about 80 s, and the memory usage became essentially a flat line.
I know that Julia warms up (compiles) on the first run and then speeds up, but this situation seems different. It looks as if Julia runs slowly for the first 20 calculations, then finds a good way to optimize the memory usage and speeds up, and after that the program just runs at full speed.
using CSV, DataFrames
using BenchmarkTools
using DifferentialEquations
using Statistics
using Dates
using Base.Threads
using Suppressor
function uniform(dim::Int, lb::Array{Float64, 1}, ub::Array{Float64, 1})
arr = rand(Float64, dim)
@inbounds for i in 1:dim; arr[i] = arr[i] * (ub[i] - lb[i]) + lb[i] end
return arr
end
mutable struct Problem
cost_func
dim::Int
lb::Array{Float64,1}
ub::Array{Float64,1}
end
mutable struct Particle
position::Array{Float64,1}
velocity::Array{Float64,1}
cost::Float64
best_position::Array{Float64,1}
best_cost::Float64
end
mutable struct Gbest
position::Array{Float64,1}
cost::Float64
end
function PSO(problem, data_dict; max_iter=100,population=100,c1=1.4962,c2=1.4962,w=0.7298,wdamp=1.0)
dim = problem.dim
lb = problem.lb
ub = problem.ub
cost_func = problem.cost_func
gbest, particles = initialize_particles(problem, population, data_dict)
# main loop
for iter in 1:max_iter
@threads for i in 1:population
particles[i].velocity .= w .* particles[i].velocity .+
c1 .* rand(dim) .* (particles[i].best_position .- particles[i].position) .+
c2 .* rand(dim) .* (gbest.position .- particles[i].position)
particles[i].position .= particles[i].position .+ particles[i].velocity
particles[i].position .= max.(particles[i].position, lb)
particles[i].position .= min.(particles[i].position, ub)
particles[i].cost = cost_func(particles[i].position,data_dict)
if particles[i].cost < particles[i].best_cost
particles[i].best_position = copy(particles[i].position)
particles[i].best_cost = copy(particles[i].cost)
if particles[i].best_cost < gbest.cost
gbest.position = copy(particles[i].best_position)
gbest.cost = copy(particles[i].best_cost)
end
end
end
w = w * wdamp
if iter % 50 == 1
println("Iteration " * string(iter) * ": Best Cost = " * string(gbest.cost))
println("Best Position = " * string(gbest.position))
println()
end
end
gbest, particles
end
function initialize_particles(problem, population,data_dict)
dim = problem.dim
lb = problem.lb
ub = problem.ub
cost_func = problem.cost_func
gbest_position = uniform(dim, lb, ub)
gbest = Gbest(gbest_position, cost_func(gbest_position,data_dict))
particles = []
for i in 1:population
position = uniform(dim, lb, ub)
velocity = zeros(dim)
cost = cost_func(position,data_dict)
best_position = copy(position)
best_cost = copy(cost)
push!(particles, Particle(position, velocity, cost, best_position, best_cost))
if best_cost < gbest.cost
gbest.position = copy(best_position)
gbest.cost = copy(best_cost)
end
end
return gbest, particles
end
function get_dict_label(beta::Int)
beta_str = lpad(beta,2,"0")
T_label = "Temperature_" * beta_str
M_label = "Mass_" * beta_str
MLR_label = "MLR_" * beta_str
return T_label, M_label, MLR_label
end
function get_error(x::Vector{Float64}, y::Vector{Float64})
numerator = sum((x.-y).^2)
denominator = var(x) * length(x)
numerator/denominator
end
function central_diff(x::AbstractArray{Float64}, y::AbstractArray{Float64})
# Central difference quotient
dydx = Vector{Float64}(undef, length(x))
dydx[2:end] .= diff(y) ./ diff(x)
@views dydx[2:end-1] .= (dydx[2:end-1] .+ dydx[3:end])./2
# Forward and Backward difference
dydx[1] = (y[2]-y[1])/(x[2]-x[1])
dydx[end] = (y[end]-y[end-1])/(x[end]-x[end-1])
return dydx
end
function decomposition!(dm,m,p,T)
# A -> residue + volatile
# B-> residue + volatile
beta,A1,E1,n1,k1,A2,E2,n2,k2,m1,m2 = p
R = 8.314
rxn1 = -m1 * exp(A1-E1/R/T) * max(m[1]/m1,0)^n1 / beta
rxn2 = -m2 * exp(A2-E2/R/T) * max(m[2]/m2,0)^n2 / beta
dm[1] = rxn1
dm[2] = rxn2
dm[3] = -k1 * rxn1 - k2 * rxn2
dm[4] = dm[1] + dm[2] + dm[3]
end
function read_file(file_path)
df = CSV.read(file_path, DataFrame)
data_dict = Dict{String, Vector{Float64}}()
for beta in 5:5:21
T_label, M_label, MLR_label = get_dict_label(beta)
T_data = collect(skipmissing(df[:, T_label]))
M_data = collect(skipmissing(df[:, M_label]))
T = T_data[T_data .< 780]
M = M_data[T_data .< 780]
data_dict[T_label] = T
data_dict[M_label] = M
data_dict[MLR_label] = central_diff(T, M)
end
return data_dict
end
function initial_condition(beta::Int64, ode_parameters::Array{Float64,1})
m_FR_initial = ode_parameters[end]
m_PVC_initial = 1 - m_FR_initial
T_span = (300.0, 800.0) # temperature range
p = [beta; ode_parameters; m_PVC_initial]
m0 = [p[end-1], p[end], 0.0, 1.0] # initial mass
return m0, T_span, p
end
function cost_func(ode_parameters, data_dict)
total_error = 0.0
for beta in 5:5:21
T_label, M_label, MLR_label= get_dict_label(beta)
T = data_dict[T_label]::Vector{Float64}
M = data_dict[M_label]::Vector{Float64}
MLR = data_dict[MLR_label]::Vector{Float64}
m0, T_span, p = initial_condition(beta,ode_parameters)
prob = ODEProblem(decomposition!,m0,T_span,p)
sol = solve(prob, AutoVern9(Rodas5(autodiff=false)),saveat=T,abstol=1e-8,reltol=1e-8,maxiters=1e4)
if sol.retcode != :Success
# println(1)
return Inf
else
M_sol = @view sol[end, :]
MLR_sol = central_diff(T, M_sol)::Array{Float64,1}
error1 = get_error(MLR, MLR_sol)::Float64
error2 = get_error(M, M_sol)::Float64
total_error += error1 + error2
end
end
total_error
end
function main()
flush(stdout)
total_time = 0
best_costs = []
file_path = raw"F:\17-Fabric\17-Fabric (Smoothed) TG.csv"
data_dict = read_file(file_path)
dimension = 9
lb = [5, 47450, 0.0, 0.0, 24.36, 148010, 0.0, 0.0, 1e-5]
ub = [25.79, 167700, 5, 1, 58.95, 293890, 5, 1, 0.25]
problem = Problem(cost_func,dimension,lb,ub)
global_best_cost = Inf
println("-"^100)
println("Running PSO ...")
population = 50
max_iter = 1001
println("The population is: ", population)
println("Max iteration is:", max_iter)
for i in 1:50 # The number of calculation
start_time = Dates.now()
println("Current iteration is: ", string(i))
gbest, particles = PSO(problem, data_dict, max_iter=max_iter, population=population)
if gbest.cost < global_best_cost
global_best_cost = gbest.cost
global_best_position = gbest.position
end
end_time = Dates.now()
time_duration = round(end_time-start_time, Second)
total_time += time_duration.value
push!(best_costs, gbest.cost)
println()
println("The Best is:")
println(gbest.cost)
println(gbest.position)
println("The calculation time is: " * string(time_duration))
println()
println("-"^50)
end
println('-'^100)
println("Global best cost is: ", global_best_cost)
println("Global best position is: ", global_best_position)
println(total_time)
best_costs
end
@suppress_err begin
@time global best_costs = main()
end
So, what is the possible mechanism for this? Is there a way to avoid this problem? If I increase the population and the maximum number of iterations of the particles, the extra time would be extremely large and therefore unacceptable.
And what is the possible mechanism behind the speed-up I mentioned above? How can I trigger it?
As the parameters of an ODE are optimized, the equation can completely change its characteristics. Your equation could be getting more stiff and require different ODE solvers. There are many other related effects, but you can see how changing parameters could cause such a performance issue. It's best to use methods like AutoTsit5(Rodas5()) and the like in such estimation cases, because it's hard to know or guess what the performance will be like, and thus adaptiveness in the method choice can be crucial.
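As a rough sketch of what that could look like in the cost_func above, keeping the question's tolerances, saveat, and maxiters and only swapping the solver argument (AutoTsit5 is just one possible auto-switching pairing):

# Hypothetical variant of the solve call inside cost_func: an auto-switching
# non-stiff/stiff pair instead of the fixed AutoVern9(Rodas5()) combination.
prob = ODEProblem(decomposition!, m0, T_span, p)
sol = solve(prob, AutoTsit5(Rodas5(autodiff=false)),
            saveat=T, abstol=1e-8, reltol=1e-8, maxiters=1e4)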
I recently started with Julia and wanted to implement one of my usual problems - time-dependent events.
For now I have:
# Packages
using Plots
using DifferentialEquations
# Parameters
k21 = 0.14*24
k12 = 0.06*24
ke = 1.14*24
α = 0.5
β = 0.05
η = 0.477
μ = 0.218
k1 = 0.5
V1 = 6
# Time
maxtime = 10
tspan = (0.0, maxtime)
# Dose
stim = 100
# Initial conditions
x0 = [0 0 2e11 8e11]
# Model equations
function system(dy, y, p, t)
dy[1] = k21*y[2] - (k12 + ke)*y[1]
dy[2] = k12*y[1] - k21*y[2]
dy[3] = (α - μ - η)*y[3] + β*y[4] - k1/V1*y[1]*y[3]
dy[4] = μ*y[3] - β*y[4]
end
# Events
eventtimes = [2, 5]
function condition(y, t, integrator)
t - eventtimes
end
function affect!(integrator)
x0[1] = stim
end
cb = ContinuousCallback(condition, affect!)
# Solve
prob = ODEProblem(system, x0, tspan)
sol = solve(prob, Rodas4(), callback = cb)
# Plotting
plot(sol, layout = (2, 2))
But the output it gives is not correct. More specifically, the events are not taken into account, and the initial condition for y1 doesn't seem to be 0 but stim.
Any help would be greatly appreciated.
t - eventtimes doesn't work because one is a scalar and the other is a vector. But for this case, it's much easier to just use a DiscreteCallback. When you make it a DiscreteCallback you should pre-set the stop times so that it hits 2 and 5 for the callback. Here's an example:
# Packages
using Plots
using DifferentialEquations
# Parameters
k21 = 0.14*24
k12 = 0.06*24
ke = 1.14*24
α = 0.5
β = 0.05
η = 0.477
μ = 0.218
k1 = 0.5
V1 = 6
# Time
maxtime = 10
tspan = (0.0, maxtime)
# Dose
stim = 100
# Initial conditions
x0 = [0 0 2e11 8e11]
# Model equations
function system(dy, y, p, t)
dy[1] = k21*y[2] - (k12 + ke)*y[1]
dy[2] = k12*y[1] - k21*y[2]
dy[3] = (α - μ - η)*y[3] + β*y[4] - k1/V1*y[1]*y[3]
dy[4] = μ*y[3] - β*y[4]
end
# Events
eventtimes = [2.0, 5.0]
function condition(y, t, integrator)
t ∈ eventtimes
end
function affect!(integrator)
integrator.u[1] = stim
end
cb = DiscreteCallback(condition, affect!)
# Solve
prob = ODEProblem(system, x0, tspan)
sol = solve(prob, Rodas4(), callback = cb, tstops = eventtimes)
# Plotting
plot(sol, layout = (2, 2))
This avoids rootfinding altogether, so it should be a much nicer solution than hacking time choices into a rootfinding system.
Either way, notice that the affect was changed to
function affect!(integrator)
integrator.u[1] = stim
end
It needs to modify the integrator's current u value; otherwise it won't do anything.
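If you prefer, the condition and the tstops can be bundled into a single PresetTimeCallback (from DiffEqCallbacks, which DifferentialEquations re-exports). A sketch, assuming the same prob, stim, and eventtimes as above:

# Fires the affect at exactly t = 2 and t = 5 and registers those stop times automatically.
cb = PresetTimeCallback(eventtimes, integrator -> integrator.u[1] = stim)
sol = solve(prob, Rodas4(), callback = cb)
plot(sol, layout = (2, 2))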
I have to simulate some data from a linear fractional stable motion. I have found an article where they simulate such data using Matlab. The code is from the article "Simulation methods for linear fractional stable motion and FARIMA using the Fast Fourier Transform" by Stilian Stoev and Murad S. Taqqu. The following is the Matlab code:
% Written by Stilian Stoev 05.06.2002, sstoev#math.bu.edu
%
% Usage:
% y = fftlfsn(H,alpha,m,M,C,N,n)
%
mh = 1/m;
d = H-1/alpha;
t0 = [mh:mh:1];
t1 = [1+mh:mh:M];
A = mh^(1/alpha)*[t0.^d, t1.^d-(t1-1).^d];
C = C*(sum(abs(A).^alpha)^(-1/alpha));
A = C*A;
Na = m*(M+N);
A = fft(A,Na);
y = [];
for i=1:n,
if alpha<2,
Z = rstab(alpha,0,Na)';
elseif alpha==2,
Z = randn(1,Na);
end;
Z = fft(Z,Na);
w = real(ifft(Z.*A,Na));
y = [y; w(1:m:N*m)];
end;
Example:
The commands
H = 0.2; alpha =1.5; m = 256; M = 6000; N = 2^14 - M;
y = fftlfsn(H,alpha,m,M,1,N,1);
x = cumsum(y);
generate a simulated path y of length N of linear
fractional stable noise and a path x of LFSM.
In the following I have tried to translate it, but I have some questions, which I have added as comments in the code.
fftlfsn <- function(H,alpha,m,M,C,N,n){
mh = 1/m;
d = H-1/alpha;
t0 = seq(mh,mh, by =1);
t1 = seq(1+mh,mh, by=M);
# Is the following the right way to translate the matlab code into R?
A = mh^(1/alpha)*matrix(c(t0^d, t1^d-(t1-1)^d), ncol = length(t0), nrow = length(t1));
C = C*(sum(abs(A)^alpha)^(-1/alpha));
A = C*A;
Na = m*(M+N);
# I don't know if it is right to use the function "fft" here.
# Does this correspond directly to the function "fft" in Matlab?
A = fft(A,Na);
# How can I do something similar in R?
# I think they create an empty matrix? Could I just write y = 0?
y = [];
for (i in 1:n)
{
if(alpha<2){
# The function "rstab" generates symmetric alpha-stable variables. Is there a similar function in R, or do you know how to write one?
Z = t(rstab(alpha,0,Na))
}
else if(alpha==2){
Z = matrix (rnorm(Na, mean = 0, sd = 1), nrow = 1, ncol = Na)
}
# Again, can I just use the R-function "fft" directly?
Z = fft(Z,Na);
w = Re(fft(Z*A,Na, inverse= TRUE));
#I have trouble understanding the following and therefore I can't translate it.
y = [y; w(1:m:N*m)];
}
}
Any help appreciated!
I have a Scilab program for averaging a 3D matrix and it works OK. However, instead of having the average just be a set value, I want it to be a certain sum of mass (sum(n*n*n)).
K = 100
N = 5
A = 1
mid = floor(N/2)
volume = rand(K, K, K)
cubeCount = floor( K / N )
for x=0:cubeCount-1
for y=0:cubeCount-1
for z=0:cubeCount-1
// Get a cube of NxNxN size
cube = 20;
//Calculate the average value of the voxels in the cube
avg = sum( cube ) / (N * N * N);
// Assign it to the center voxel
volume( N*x+mid+1, N*y+mid+1, N*z+mid+1 ) = avg
end
end
end
disp( volume )
If anyone has a simple solution to this, please tell me.
You seem to have just about said it yourself. All you would need to do is change how cube is built up, for example by growing A until the desired sum of mass is reached:

while sum(A * A * A) < 10
    A = A + 1;
end

This will give you the correct sum of mass of the voxels.