Julia & Functions - MethodError

I don't understand why the following snippet of code throws a MethodError in Julia:
using Calculus
nx = 101
nt = 101
dx = 2*pi / (nx - 1)
nu = 0.07
dt = dx*nu

function init(x, nu, t)
    phi = exp( -x^2 / 4.0*nu ) + exp( -(x - 2.0*pi)^2 / 4.0*nu )
    dphi_dx = derivative(phi)
    u = ( 2.0*nu / phi )*dphi_dx + 4.0
    return u
end

x = range(0.0, stop=2*pi, length=nx)
t = 0.0
u = [init(x0, nu, t) for x0 in x]
My aim here is to populate the elements of an array named u with the values calculated by my function init. The u array should have nx elements, with u evaluated at every x value in the range between 0.0 and 2*pi.

Next time, please also post the error message and take a detailed look at it first, so you can try to spot the mistake yourself.
I don't really know the Calculus package, but it seems you are using it wrong: your phi is a number, not a function, and you can't take a derivative of a single number. Change it to
phi = x -> exp( -x^2 / 4.0*nu ) + exp( -(x - 2.0*pi)^2 / 4.0*nu )
and then call phi and the derivative at the argument x, i.e. phi(x) and derivative(phi, x) (or dphi_dx(x)). As I don't know much about the Calculus package, you should take a look at its documentation to verify that the derivative command does exactly what you want.
Little extra: Julia also has element-wise (broadcast) operations, similar to Matlab for example, that apply a function to a whole array. Instead of [init(x0,nu,t) for x0 in x], you can also write init.(x,nu,t).
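Putting the pieces together, here is a minimal sketch of the corrected function (keeping the expression exactly as written; note that -x^2 / 4.0*nu parses as (-x^2 / 4.0) * nu, so double-check whether you actually meant /(4.0*nu)):
using Calculus

function init(x, nu, t)
    # phi is now a function of x, so Calculus.derivative can act on it
    phi = x -> exp( -x^2 / 4.0*nu ) + exp( -(x - 2.0*pi)^2 / 4.0*nu )
    dphi_dx = derivative(phi)      # derivative(f) returns a new function
    return ( 2.0*nu / phi(x) )*dphi_dx(x) + 4.0
end

u = init.(x, nu, t)                # broadcast over the whole range x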

Related

How to reduce the runtime of Julia code that finds roots using the "IntervalRootFinding.jl" package?

I am trying to find roots using the "IntervalArithmetic.jl" and "IntervalRootFinding.jl" packages in Julia. The code seems to run forever: it neither returns roots nor terminates. The code is shown below:
using IntervalArithmetic, Plots, StaticArrays, IntervalRootFinding
α_x = α_y = 100 * (2..3)
γ_x = γ_y = (2..3)
K_y = 10 * (2/3..3/2)
u = 1
g( (x, y) ) = SVector(α_x * u - γ_x * x, α_y * u / ( 1 + x / K_y ) - γ_y * y)
X = IntervalBox(80..120, 5..20)
rts = roots(g, X)
@show rts
Could you please help me solve the issue?
Thank you
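One thing worth checking (a suggestion, not a verified fix): the coefficients α_x, γ_x, K_y are themselves intervals, so g is a set-valued map. Rerunning the search with point-valued parameters, e.g. the interval midpoints, shows whether the interval-valued coefficients are what keeps roots from converging:
using IntervalArithmetic, StaticArrays, IntervalRootFinding

# Hypothetical point-valued stand-ins: the midpoints of the original intervals
α = 100 * mid(2..3)
γ = mid(2..3)
K = 10 * mid(2/3..3/2)
u = 1
g( (x, y) ) = SVector(α * u - γ * x, α * u / ( 1 + x / K ) - γ * y)
rts = roots(g, IntervalBox(80..120, 5..20))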

Julia: Why is ridge regression not working (Optim)?

I am trying to implement ridge-regression from scratch in Julia but something is going wrong.
# Imports
using CSV
using DataFrames
using LinearAlgebra: norm, I
using Optim: optimize, LBFGS, minimizer

# Read data
out = CSV.read(download("https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"), DataFrame, header=0)

# Separate features and response
y = Vector(out[:, end])
X = Matrix(out[:, 1:(end-1)])
λ = 0.1

# Functions
loss(beta) = norm(y - X * beta)^2 + λ*norm(beta)^2

function grad!(G, beta)
    G = -2*transpose(X) * (y - X * beta) + 2*λ*beta
end

function hessian!(H, beta)
    H = X'X + λ*I
end

# Optimization
start = randn(13)
out = optimize(loss, grad!, hessian!, start, LBFGS())
However, the result of this is terrible and we essentially get back start, since the optimizer is not moving. Of course, I know I could simply use (X'X + λ*I) \ X'y or IterativeSolvers.lsmr(X, y), but I would like to implement this myself.
The problem is with the implementation of the grad! and hessian! functions: you should use dot (broadcast) assignment to change the contents of the G and H arrays:
G .= -2*transpose(X) * (y - X * beta) + 2*λ*beta
H .= X'X + λ*I
Without the dot you merely rebind the local name to a new matrix; the array that was passed into the function (and which the optimizer then reads) remains unchanged, presumably still all zeros, which is why you got back the start vector.
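A minimal sketch of the corrected setup, together with the closed-form check that the question itself mentions:
function grad!(G, beta)
    G .= -2*transpose(X) * (y - X * beta) + 2*λ*beta   # in-place: mutates G
end

function hessian!(H, beta)
    H .= X'X + λ*I                                     # in-place: mutates H
end

out = optimize(loss, grad!, hessian!, start, LBFGS())
beta_hat = minimizer(out)

# Compare against the closed-form ridge solution from the question
beta_closed = (X'X + λ*I) \ (X'y)
norm(beta_hat - beta_closed)   # should now be close to zero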

How to halt a loop in Julia and print the error message at the same time, without using any macros?

I am writing a simple Newton's method,
x_{n+1} = x_n - f(x_n) / f'(x_n),
to find the roots (which can be real or complex numbers) of a quadratic function:
f(x) = a*x*x + b*x + c
(a, b, c are given constants, all real numbers). I know Newton's method will fail if the starting point or some iterate in the loop has a zero derivative. I want to use an if statement inside my for/while loop to avoid this situation. Does Julia have something like the stop 0 syntax in Fortran?
The generic Newton's Method root-finding code:
function newton_root_finding(f, f_diff, x0, rtol=1e-8, atol=1e-8)
    f_x0 = f(x0)
    f_diff_x0 = f_diff(x0)
    x1 = x0 - f_x0 / f_diff_x0
    f_diff_x1 = f_diff(x1)
    @assert abs(f_diff_x0) > atol + rtol * abs(f_diff_x0) "Zero derivative. No solution found."
    while abs(f_x0) > atol + rtol * (abs(f_x0))
        x0 = x1
        f_x0 = f(x0)
        f_diff_x0 = f_diff(x0)
        x1 = x0 - f_x0 / f_diff_x0
    end
    return x1
end
function quadratic_func(x)
    a = 1.0
    b = 0.0
    c = 2.0
    return a*x*x + b*x + c
end

function quadratic_func_diff(x)
    a = 1.0
    b = 0.0
    c = 2.0
    return 2.0*a*x + 1.0*b + 0.0*c
end
newton_root_finding(quadratic_func, quadratic_func_diff, 1.0 + 0.5im)
In the above code I used an @assert macro to make that happen, but I don't want to use any macro; I want to use an if statement inside my while loop to halt it. Another thing I've noticed: if I change it to @assert abs(f_diff_x0) != 0, the test is ignored. Is that because of round-off errors, so that the "zero derivative" never exactly equals 0?
The way to exit from the inside of a loop in general is a break statement; a return fulfills the same purpose, because it just exits the whole function.
For the comparisons you can use Base.isapprox(x, y; atol=atol, rtol=rtol). Its documentation starts with:
Inexact equality comparison: true if norm(x-y) <= max(atol, rtol*max(norm(x), norm(y))).
norm falls back to abs for numbers. And I think you might have a bug in both comparisons, always comparing the value at x0 to itself.
As for breaking on zero derivatives, an @assert is, I think, appropriate here: if you get a zero derivative, you don't stop the iteration and return the result, but throw an error to signal an infeasible condition. I'd thus write your function as follows:
function newton_root_finding(f, ∂f, x0, rtol=1e-8, atol=1e-8)
    x_old = x0
    y_old = f(x0)
    while true
        df_old = ∂f(x_old)
        @assert !isapprox(df_old, 0, rtol=rtol, atol=atol) "Zero derivative. No solution found."
        x_new = x_old - y_old / df_old
        y_new = f(x_new)
        isapprox(y_old, y_new, rtol=rtol, atol=atol) && return x_new
        x_old, y_old = x_new, y_new
    end
end
This returns 3.357392012620626e-26 + 1.4142135623730951im on your test case, approximately sqrt(2)im.
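If you want to avoid macros entirely, as the question asks, the @assert line can be replaced by a plain if together with error (or a break/return), for example:
if isapprox(df_old, 0, rtol=rtol, atol=atol)
    error("Zero derivative. No solution found.")   # throws just like @assert, no macro needed
end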
To address your first question, you can use break to exit the while loop, like
function test()
    i = 0
    while true
        i += 1
        if i > 10
            break
        end
    end
    return i
end
As to your second question, when comparing floating-point numbers it is often better to use isapprox (providing an atol if you compare against zero) instead of == or !=.
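For instance, a quick sketch of why an atol matters when comparing against zero:
x = 1e-12                      # tiny round-off residue
x == 0.0                       # false: exact comparison fails
isapprox(x, 0.0)               # also false: the default rtol scales with the magnitudes, which are ~0
isapprox(x, 0.0, atol=1e-8)    # true: an absolute tolerance handles comparisons with zero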

How do I display a math function in Julia?

I'm new to Julia and I'm trying to learn how to do calculus in it. If I calculate the gradient of a function with ForwardDiff, as in the code below, how can I then see the resulting function?
I know that if I input some values it gives me the gradient's value at that point, but I just want to see the function itself (the gradient of f1).
julia> gradf1(x1, x2) = ForwardDiff.gradient(z -> f1(z[1], z[2]), [x1, x2])
gradf1 (generic function with 1 method)
To elaborate on Felipe Lema's comment, here are some examples using SymPy.jl for various tasks:
using SymPy
using LinearAlgebra: diag

@vars x y z
f(x,y,z) = x^2 * y * z
VF(x,y,z) = [x*y, y*z, z*x]

diff(f(x,y,z), x)                  # ∂f/∂x
diff.(f(x,y,z), [x,y,z])           # ∇f, gradient
diff.(VF(x,y,z), [x,y,z]) |> sum   # ∇⋅VF, divergence

J = VF(x,y,z).jacobian([x,y,z])
sum(diag(J))                       # ∇⋅VF, divergence

Mx, Nx, Px, My, Ny, Py, Mz, Nz, Pz = J
[Py-Nz, Mz-Px, Nx-My]              # ∇×VF, curl
The divergence and gradient are also part of SymPy, but not exposed. Their use is more general, but cumbersome for this task. For example, this finds the curl:
import PyCall
PyCall.pyimport_conda("sympy.physics.vector", "sympy")
RF = sympy.physics.vector.ReferenceFrame("R")
v1 = get(RF,0)*get(RF,1)*RF.x + get(RF,1)*get(RF,2)*RF.y + get(RF,2)*get(RF,0)*RF.z
sympy.physics.vector.curl(v1, RF)
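Coming back to the question: SymPy's symbolic variables flow through generic Julia code, so you can often see a gradient symbolically by evaluating your function on symbols (a sketch with a hypothetical f1, since the original f1 isn't shown):
using SymPy

@vars x1 x2
f1(a, b) = a^2 + 3b                       # hypothetical stand-in for the question's f1
gradf1_sym = diff.(f1(x1, x2), [x1, x2])  # symbolic gradient: [2*x1, 3]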

Converting matlab code to R code

I was wondering how I can convert this code from Matlab to R. It seems to be the code for the midpoint method. Any help would be highly appreciated.
% Usage: [y t] = midpoint(f,a,b,ya,n) or y = midpoint(f,a,b,ya,n)
% Midpoint method for initial value problems
%
% Input:
% f - Matlab inline function f(t,y)
% a,b - interval
% ya - initial condition
% n - number of subintervals (panels)
%
% Output:
% y - computed solution
% t - time steps
%
% Examples:
% [y t]=midpoint(@myfunc,0,1,1,10); here 'myfunc' is a user-defined function in an M-file
% y=midpoint(inline('sin(y*t)','t','y'),0,1,1,10);
% f=inline('sin(y(1))-cos(y(2))','t','y');
% y=midpoint(f,0,1,1,10);
function [y t] = midpoint(f,a,b,ya,n)
h = (b - a) / n;
halfh = h / 2;
y(1,:) = ya;
t(1) = a;
for i = 1 : n
    t(i+1) = t(i) + h;
    z = y(i,:) + halfh * f(t(i),y(i,:));
    y(i+1,:) = y(i,:) + h * f(t(i)+halfh,z);
end;
I have the R code for Euler method which is
euler <- function(f, h = 1e-7, x0, y0, xfinal) {
N = (xfinal - x0) / h
x = y = numeric(N + 1)
x[1] = x0; y[1] = y0
i = 1
while (i <= N) {
x[i + 1] = x[i] + h
y[i + 1] = y[i] + h * f(x[i], y[i])
i = i + 1
}
return (data.frame(X = x, Y = y))
}
So, based on the Matlab code, do I just need to change h in the Euler method (R code) to (b - a) / n to turn the Euler code into the midpoint method?
Note
Broadly speaking, I agree with the expressed comments (now deleted); however, I decided to vote up this question. This is due to the existence of matconv, which facilitates this process.
Answer
Given your code, we could use matconv in the following manner:
pacman::p_load(matconv)
out <- mat2r(inMat = "input.m")
The created out object will contain an attempted translation of the Matlab code into R; however, the job is far from finished. If you inspect the out object you will see that it requires further work. Simple statements are usually translated correctly, with Matlab comments % replaced by # and so forth, but more complex statements may require a more detailed investigation. You can then inspect the respective lines and attempt to evaluate them to see where further work is required, for example:
eval(parse(text=out$rCode[1]))
NULL
(the first line is a comment, so the output is NULL)
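As for the concrete sub-question: no, changing h alone is not enough. The midpoint method keeps h = (b - a) / n but replaces the Euler update with a half-step predictor, as in this hand-translated sketch of the Matlab routine (scalar case):
midpoint <- function(f, a, b, ya, n) {
  h <- (b - a) / n
  t <- numeric(n + 1); y <- numeric(n + 1)
  t[1] <- a; y[1] <- ya
  for (i in 1:n) {
    t[i + 1] <- t[i] + h
    z <- y[i] + (h / 2) * f(t[i], y[i])         # half step (Euler predictor)
    y[i + 1] <- y[i] + h * f(t[i] + h / 2, z)   # full step using the midpoint slope
  }
  data.frame(T = t, Y = y)
}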
