We are supposed to write a program to solve the following initial value problem numerically using 4th-order Runge-Kutta. The algorithm itself isn't a problem, and I can post my solution when I finish.
The problem is separating the equation out cleanly into something I can feed into Runge-Kutta.
exp(-x') = x' - x + exp(-t^3)
x(t=0) = 1
Any ideas what type of ODE this is called, or methods to solve it? I feel more confident with CS skills and programming numerical methods than I do with math, so any insight into this problem would be helpful.
Update: If anyone is interested in the solution the code is below. I thought it was an interesting problem.
import numpy as np
import matplotlib.pyplot as plt
def Newton(fn, dfn, xp_guess, x, t, tolerance):
    """Solve fn(t, x, xp) = 0 for xp by Newton's method, starting from xp_guess."""
    iterations = 0
    max_iter = 100
    xp = xp_guess
    value = fn(t, x, xp)
    while abs(value) > tolerance and iterations < max_iter:
        xp = xp - value / dfn(t, x, xp)
        value = fn(t, x, xp)
        iterations += 1
    return xp
tolerance = 0.00001
x_init = 1.
tmin = 0.0
tmax = 4.0
t = tmin
n = 1
y = 0.0
xp_init = 0.5
def fn(t, x, xp):
    '''
    Implicit relation: 0 = x' - x + exp(-t^3) - exp(-x')
    '''
    return xp - x + np.exp(-t**3) - np.exp(-xp)

def dfn(t, x, xp):
    # derivative of fn with respect to xp
    return 1 + np.exp(-xp)
i = 0
h = 0.0001
tarr = np.arange(tmin, tmax, h)
y = np.zeros((len(tarr)))
x = x_init
xp = xp_init
for t in tarr:
    # RK4 step; each stage slope x' is obtained from Newton's method on the
    # implicit equation, seeded with the previous stage's slope.
    y[i] = x
    f1 = Newton(fn, dfn, xp, x, t, tolerance)
    K1 = h * f1
    f2 = Newton(fn, dfn, f1, x + 0.5*K1, t + 0.5*h, tolerance)
    K2 = h * f2
    f3 = Newton(fn, dfn, f2, x + 0.5*K2, t + 0.5*h, tolerance)
    K3 = h * f3
    f4 = Newton(fn, dfn, f3, x + K3, t + h, tolerance)
    K4 = h * f4
    x = x + (K1 + 2.*K2 + 2.*K3 + K4) / 6.
    xp = f4
    i += 1
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(tarr, y)
plt.show()
For Runge-Kutta you only need a numerical solution, not an analytical one.
That is, you need to be able to write a piece of code that takes (x, t) and gives back y such that exp(-y) == y - x + exp(-t**3) to within round-off error. That code can do some sort of iterative approximation algorithm, and Runge-Kutta will be perfectly happy.
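For instance, here is a minimal sketch of such a solver, assuming SciPy is available (scipy.optimize.brentq is just one choice of root finder, and the bracket expansion below is ad hoc):

import numpy as np
from scipy.optimize import brentq

def xprime(t, x):
    """Return y solving exp(-y) == y - x + exp(-t**3)."""
    g = lambda y: np.exp(-y) - (y - x + np.exp(-t**3))
    # g is strictly decreasing in y, so widen a bracket until the signs differ
    lo, hi = -1.0, 1.0
    while g(lo) < 0:
        lo -= 1.0
    while g(hi) > 0:
        hi += 1.0
    return brentq(g, lo, hi)

print(xprime(0.0, 1.0))  # slope at the initial condition x(0) = 1, ~0.5671

Because g is strictly decreasing (its derivative is -exp(-y) - 1 < 0), the root is unique, so any bracketing or Newton-type iteration will find it.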
Does that help?
Wolfram Alpha says the solution will look like this.
I find that it helps to have an idea of what the answer is before I start.
It also helps to know that a resource like Wolfram Alpha is available to you at all times.
PS - What does it mean to be a student or professor in a time of Internet, Wolfram Alpha, Google, Wikipedia, etc.?
Writing K for x - exp(-t^3), we want to solve exp(-y) = y - K; I get y = K + W(exp(-K)), where W is Lambert's W function (e.g., here).
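If SciPy is available, that closed form is easy to evaluate directly with scipy.special.lambertw; a small sketch (it should give the same slopes the Newton iteration above converges to):

import numpy as np
from scipy.special import lambertw

def xprime(t, x):
    K = x - np.exp(-t**3)
    return K + lambertw(np.exp(-K)).real  # principal branch; real-valued here

print(xprime(0.0, 1.0))  # ~0.5671 at the initial condition x(0) = 1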
I need to integrate the following function, which has a differentiation term inside. Unfortunately, that term is not easily differentiable.
Is it possible to do something like numerical integration to evaluate this in R?
You can assume 30, 50, 0.5, 1, 50, 30 for l, tau, a, b, F and P respectively.
UPDATE: What I tried
InnerFunc4 <- function(t,x){digamma(gamma(a*t*(LF-LP)*b)/gamma(a*t))*(x-t)}
InnerIntegral4 <- Vectorize(function(x) { integrate(InnerFunc4, 1, x, x = x)$value})
integrate(InnerIntegral4, 30, 80)$value
It shows the following error:
Error in integrate(InnerFunc4, 1, x, x = x) : non-finite function value
UPDATE2:
InnerFunc4 <- function(t,L){digamma(gamma(a*t*(LF-LP)*b)/gamma(a*t))*(L-t)}
t_lower_bound = 0
t_upper_bound = 30
L_lower_bound = 30
L_upper_bound = 80
step_size = 0.5
integral = 0
t <- t_lower_bound + 0.5*step_size
while (t < t_upper_bound){
  L = L_lower_bound + 0.5*step_size
  while (L < L_upper_bound){
    volume = InnerFunc4(t,L)*step_size**2
    integral = integral + volume
    L = L + step_size
  }
  t = t + step_size
}
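As a sanity check on that double sum, here is the same midpoint-rule pattern in Python with a placeholder integrand f(t, L) = L - t standing in for InnerFunc4 (the names f and midpoint_double are made up for illustration; the real integrand with digamma/gamma and the constants above would replace f):

import numpy as np

def f(t, L):
    return L - t  # placeholder integrand, for illustration only

def midpoint_double(f, t0, t1, L0, L1, h):
    ts = np.arange(t0 + 0.5*h, t1, h)  # midpoints in t
    Ls = np.arange(L0 + 0.5*h, L1, h)  # midpoints in L
    T, LL = np.meshgrid(ts, Ls, indexing="ij")
    return np.sum(f(T, LL)) * h * h    # each cell contributes f * h^2

print(midpoint_double(f, 0, 30, 30, 80, 0.5))  # 60000 for this linear placeholder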
Since it seems that your problem is only the derivative, you can get rid of it by means of integration by parts:
Edit
This solution is not applicable for a lower integration bound of 0.
I have a system of nonlinear differential equations for a 3 degree of freedom vibratory system.
system of differential equations
First I want to plot y, y_L and y_R against time (for a given value of Omega), and then I want to plot the maxima of y, y_L and y_R against various values of Omega.
Unfortunately, I am not good at Octave. I have written the following code in Octave (based on a sample given by another user), but it fails with this error: "anonymous function bodies must be single expressions".
I would be grateful if anyone can help me.
Here is the code:
Me = 4000;
me = 20;
c = 2000;
c1 = 700;
c2 = 700;
k = 20000;
k1 = 250000;
k2 = 20000;
a0 = 0.01;
om = 25;
mu1 = (c+2*c2)/(Me);
mu2 = (c2)/(Me);
mu3 = (c1+c2)/(me);
mu4 = (c2)/(me);
w12 = (2*k2)/(Me);
w22 = (k1+k2)/(me);
a1 = (k2)/(me);
a2 = (k)/(Me);
F0 = (k1*a0)/(Me);
couplode = @(t,y) [y(2); mu4*y(4) - mu3*y(2) - w22*y(1) + a1*y(3) + F0*cos(om*t); y(4); mu2*(y(2)+y(6)) - mu1*y(4) - w12*y(3) + 0.5*w12*(y(1)+y(5)) + a2((y(3)).^3; y(6); mu4*y(4) - mu3*y(6) - w22*y(5) + a1*y(3) + F0*cos(om*t)];
[t,y] = ode45(couplode, [0 0.49*pi], [1;1;1;1;1;1]*1E-8);
figure(1)
plot(t, y)
grid
str = {'$$ \dot{y_L} $$', '$$ y_L $$', '$$ \dot{y} $$', '$$ y $$', '$$ \dot{y_R} $$', '$$ y_R $$'};
legend(str, 'Interpreter','latex', 'Location','NW')
You have a strange term near the end of the vector definition:
... + a2((y(3)).^3
You certainly meant
... + a2*y(3).^3
You get better visibility and easier debugging by breaking that into separate lines
couplode = @(t,y) [ y(2);
                    mu4*y(4)-mu3*y(2)-w22*y(1)+a1*y(3)+F0*cos(om*t);
                    y(4);
                    mu2*(y(2)+y(6)) - mu1*y(4) - w12*y(3) + 0.5*w12*(y(1)+y(5)) + a2*y(3).^3;
                    y(6);
                    mu4*y(4)-mu3*y(6)-w22*y(5)+a1*y(3)+F0*cos(om*t)];
At least in this form, spaces or no spaces make no difference. In general, in Matlab/Octave [a +b -c] is the same as [a, +b, -c], so one has to be careful that the expression is not interpreted as a matrix row; spaces on both sides of the operator switch back to the single-expression interpretation.
I've tried to reproduce the model from a PyMC3 and Stan comparison, but it seems to run slowly, and when I look at @code_warntype there are some things -- K and N, I think -- which the compiler seemingly calls Any.
I've tried adding types -- though I can't add types to turing_model's arguments, and things are complicated within turing_model because it's using autodiff variables and not the usual ones. I put all the code into the function do_it to avoid globals, because they say that globals can slow things down. (It actually seems slower, though.)
Any suggestions as to what's causing the problem? The turing_model code is what's iterating, so that should make the most difference.
using Turing, StatsPlots, Random
using Statistics  # for median on Julia >= 0.7 (harmless if it is already in scope)
sigmoid(x) = 1.0 / (1.0 + exp(-x))
function scale(w0::Float64, w1::Array{Float64,1})
    scale = √(w0^2 + sum(w1 .^ 2))
    return w0 / scale, w1 ./ scale
end
function do_it(iterations::Int64)::Chains
    K = 10          # predictor dimension
    N = 1000        # number of data samples
    X = rand(N, K)  # predictors (1000, 10)
    w1 = rand(K)    # weights (10,)
    w0 = -median(X * w1)    # 50% of elements for each class (number)
    w0, w1 = scale(w0, w1)  # unit length (euclidean)
    w_true = [w0, w1...]
    y = (w0 .+ (X * w1)) .> 0.0  # labels
    y = [Float64(x) for x in y]
    σ = 5.0
    σm = [x == y ? σ : 0.0 for x in 1:K, y in 1:K]

    @model turing_model(X, y, σ, σm) = begin
        w0_pred ~ Normal(0.0, σ)
        w1_pred ~ MvNormal(σm)
        p = sigmoid.(w0_pred .+ (X * w1_pred))
        @inbounds for n in 1:length(y)
            y[n] ~ Bernoulli(p[n])
        end
    end

    @time chain = sample(turing_model(X, y, σ, σm), NUTS(iterations, 200, 0.65));
    # ϵ = 0.5
    # τ = 10
    # @time chain = sample(turing_model(X, y, σ), HMC(iterations, ϵ, τ));
    return (w_true=w_true, chains=chain::Chains)
end
chain = do_it(1000)
I am trying to implement the gradient descent algorithm from scratch to find the slope and intercept for my linear fit line.
Using a package to calculate the slope and intercept, I get slope = 0.04 and intercept = 7.2, but when I use my gradient descent algorithm for the same problem, both values come out as -Inf.
Here is my code
x= [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
y=[2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]
function GradientDescent()
    m = 0
    c = 0
    for i = 1:10000
        for k = 1:length(x)
            Yp = m*x[k] + c
            E = y[k] - Yp       # error in predicted value
            dm = 2*E*(-x[k])    # partial derivative of cost function w.r.t. slope (m)
            dc = 2*E*(-1)       # partial derivative of cost function w.r.t. intercept (c)
            m = m + (dm * 0.001)
            c = c + (dc * 0.001)
        end
    end
    return m, c
end
Values = GradientDescent() # after running values = (-inf,-inf)
I have not done the math, but instead wrote the tests. It seems you got a sign error when assigning m and c.
Also, writing the tests really helps, and Julia makes it simple :)
function GradientDescent(x, y)
    m = 0.0
    c = 0.0
    for i = 1:10000
        for k = 1:length(x)
            Yp = m*x[k] + c
            E = y[k] - Yp
            dm = 2*E*(-x[k])
            dc = 2*E*(-1)
            m = m - (dm * 0.001)
            c = c - (dc * 0.001)
        end
    end
    return m, c
end
using Base.Test

@testset "gradient descent" begin
    @testset "slope $slope" for slope in [0, 1, 2]
        @testset "intercept for $intercept" for intercept in [0, 1, 2]
            x = 1:20
            y = broadcast(x -> slope * x + intercept, x)
            computed_slope, computed_intercept = GradientDescent(x, y)
            @test slope ≈ computed_slope atol=1e-8
            @test intercept ≈ computed_intercept atol=1e-8
        end
    end
end
I can't get your exact numbers, but this is close. Perhaps it helps?
# 141 ?
datax = [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
datay = [2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]
function gradientdescent()
    m = 0
    b = 0
    learning_rate = 0.00001
    for n in 1:10000
        for i in 1:length(datay)
            x = datax[i]
            y = datay[i]
            guess = m * x + b
            error = y - guess
            dm = 2error * x
            dc = 2error
            m += dm * learning_rate
            b += dc * learning_rate
        end
    end
    return m, b
end
gradientdescent()
(-0.04, 17.35)
It seems that adjusting the learning rate is critical...
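To illustrate that sensitivity, here is a small Python sketch of the same per-sample update (with the corrected sign); the data y = x + 1 is made up purely for illustration:

def gradient_descent(xs, ys, lr, epochs=10000):
    m, c = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            error = y - (m * x + c)
            m += 2 * error * x * lr  # equivalent to m -= lr * dCost/dm
            c += 2 * error * lr      # equivalent to c -= lr * dCost/dc
    return m, c

xs = list(range(1, 21))
ys = [x + 1 for x in xs]                # clean line with slope 1, intercept 1
print(gradient_descent(xs, ys, 0.001))  # converges close to (1.0, 1.0)
# With a noticeably larger step (say 0.01) the same loop blows up to inf/nan,
# which is why the learning rate matters so much here.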
My main objective is to obtain the same results in SAS and in R. Sometimes, depending on the case, this is very easy; other times it is difficult, especially when we want to compute something more complicated than usual.
So, to explain my case, I have the following system of differential equations:
y' = z
z' = b*y' + c*y
with b = -2, c = -4, y(0) = 0 and z(0) = 1.
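For reference, eliminating z (since y' = z) reduces the system to a single linear second-order equation, which is easy to solve by hand and gives a closed form both programs can be checked against:

$$y'' + 2y' + 4y = 0, \qquad y(0) = 0,\quad y'(0) = z(0) = 1.$$

The characteristic roots are $-1 \pm i\sqrt{3}$, so

$$y(t) = \frac{e^{-t}}{\sqrt{3}}\,\sin(\sqrt{3}\,t), \qquad z(t) = y'(t) = e^{-t}\Big(\cos(\sqrt{3}\,t) - \tfrac{1}{\sqrt{3}}\sin(\sqrt{3}\,t)\Big).$$

Note that the exact solution decays like e^(-t), so by t = 40 it is of order 1e-18; absolute differences between the two solvers at large t are therefore likely dominated by the integration tolerances rather than by the methods themselves.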
To solve this system in SAS, we use PROC MODEL:
data t;
do time=0 to 40;
output;
end;
run;
proc model data=t ;
dependent y 0 z 1;
parm b -2 c -4;
dert.y = z;
dert.z = b * dert.y + c * y;
solve y z / dynamic solveprint out=out1;
run;
In R, we could write the following solution using the lsoda function of the deSolve package:
library(deSolve)
b <- -2;
c <- -4;
rigidode <- function(t, y, parms) {
  with(as.list(y), {
    dert.y <- z
    dert.z <- b * dert.y + c * y
    list(c(dert.y, dert.z))
  })
}
yini <- c(y = 0, z = 1)
times <- seq(from=0,to=40,by=1)
out_ode <- ode (times = times, y = yini, func = rigidode, parms = NULL)
out_lsoda <- lsoda (times = times, y = yini, func = rigidode, parms = NULL)
Here are the results :
SAS
R
For t = 0,...,10, we obtain similar results, but for t = 10,...,40 differences start to appear, and for me these differences are important.
To correct these differences, I set the error tolerance in R to 1E-9 instead of 1E-6. I also checked whether the numerical integration methods and the default assumptions are the same.
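This is neither SAS nor R, but as one more cross-check, here is a sketch of the same system in Python/SciPy with tight tolerances, compared against the closed form above (only an illustration, not part of the SAS/R comparison):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u, b=-2.0, c=-4.0):
    y, z = u
    return [z, b*z + c*y]  # y' = z, z' = b*y' + c*y = b*z + c*y

sol = solve_ivp(rhs, (0, 40), [0.0, 1.0], rtol=1e-9, atol=1e-12, dense_output=True)
t = np.arange(0.0, 41.0)
exact = np.exp(-t) * np.sin(np.sqrt(3)*t) / np.sqrt(3)
print(np.max(np.abs(sol.sol(t)[0] - exact)))  # agreement down to the requested tolerances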
Do you have any idea how to deal with this problem?
Sincerely yours,
Mily