Solving a non-linear system of equations in Scilab

I am trying to solve the following system of equations in SciLab:
x^2 + y^2 = 0
x^4 + y^4 - 10 = 0
I defined the following function in SciLab:
function y=f3(x,y)
    y = [x^2+y^2, x^4+y^4-10]
endfunction
That appeared to work. I found that f3(1,1) is: 2. -8.
So I then ran the following:
fsolve([0,0], f3)
and I got:
fsolve: exception caught in 'fct' subroutine.
at line 2 of function f3
in builtin fsolve
Undefined variable: y
I then defined the function fct as follows:
function y=fct(x,y)
    y = [2*x+2*y, 4*x^3+4*y^3]
endfunction
I then ran the command:
fsolve([0,0], f3, fct)
and that produced the following message:
fsolve: exception caught in 'jac' subroutine.
at line 2 of function f3
in builtin fsolve
Undefined variable: y
Any additional comments? What am I doing wrong?

Checking help fsolve, you'll see that fsolve works on functions of a single argument. That means your f3 should receive a vector v instead of x and y, with x = v(1) and y = v(2). So your function should be:
function y = f3(v)
    y = [v(1)^2 + v(2)^2, ...
         v(1)^4 + v(2)^4 - 10]
endfunction
This will fix the error and let fsolve run. However, a more serious problem is that your system has no real solution: the first equation, x^2 + y^2 = 0, forces x = y = 0, and that point does not satisfy x^4 + y^4 - 10 = 0. Therefore, fsolve will not be able to find any solution at all:
--> [y,val,info]=fsolve([0,0],f3)
info =
4.
val =
0. -10.
y =
0. 0.
The help page says that for info == 4, "iteration is not making good progress."
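As an aside, if you do want to supply an analytic Jacobian, it must also take a single vector argument and return the 2x2 matrix of partial derivatives (one row per equation, one column per variable), not the row vector of summed derivatives from the question. A minimal sketch:
function J = jac3(v)
    // rows: equations; columns: derivatives w.r.t. v(1) and v(2)
    J = [2*v(1),   2*v(2); ...
         4*v(1)^3, 4*v(2)^3]
endfunction
// then: fsolve([0,0], f3, jac3)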

Related

Julia: why is ridge regression not working (Optim)?

I am trying to implement ridge-regression from scratch in Julia but something is going wrong.
# Imports
using CSV, DataFrames
using LinearAlgebra: norm, I
using Optim: optimize, LBFGS, minimizer
# Read Data
out = CSV.read(download("https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv"), DataFrame, header=0)
# Separate features and response
y = Vector(out[:, end])
X = Matrix(out[:, 1:(end-1)])
λ = 0.1
# Functions
loss(beta) = norm(y - X * beta)^2 + λ*norm(beta)^2
function grad!(G, beta)
    G = -2*transpose(X) * (y - X * beta) + 2*λ*beta
end
function hessian!(H, beta)
    H = X'X + λ*I
end
# Optimization
start = randn(13)
out = optimize(loss, grad!, hessian!, start, LBFGS())
However, the result of this is terrible: we essentially get back start, since the optimizer never moves. Of course, I know I could simply use (X'X + λ*I) \ X'y or IterativeSolvers.lsmr(X, y), but I would like to implement this myself.
The problem is with the implementation of the grad! and hessian! functions: you should use dot assignment to change the content of the G and H matrices:
G .= -2*transpose(X) * (y - X * beta) + 2*λ*beta
H .= X'X + λ*I
Without the dot you rebind the local name to a brand-new matrix, while the array that was passed into the function (and that the optimizer will read afterwards) remains unchanged (presumably a zero matrix, which is why you got back the start vector).
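Putting it together, a minimal corrected sketch (same names as in the question; the CSV loading is replaced by random stand-in data, and LBFGS only needs the gradient, so the Hessian is omitted here):
using LinearAlgebra: norm, I
using Optim: optimize, LBFGS, minimizer

X = randn(100, 13)                       # stand-in for the housing features
y = X * randn(13) .+ 0.1 .* randn(100)   # stand-in response
λ = 0.1

loss(beta) = norm(y - X * beta)^2 + λ * norm(beta)^2

function grad!(G, beta)
    G .= -2 * transpose(X) * (y - X * beta) + 2 * λ * beta   # in-place update
end

start = randn(13)
res = optimize(loss, grad!, start, LBFGS())
beta_hat = minimizer(res)

# Sanity check against the closed-form ridge solution:
norm(beta_hat - (X'X + λ*I) \ X'y)   # should be close to zero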

Is it possible to use the input command to enter functions in order to use numderivative?

I've been working on numerical methods to solve polynomial and non-polynomial equations. I wanted to use numderivative to compute the derivative of a user-entered function at a given point, with the following simple code:
clc
clear
x0 = input('Enter the x value: ') // x0 = 4
function y = f(x)
    y = input('Enter your function: ') // y = sqrt(x)
endfunction
dd = numderivative(f,x0)
printf('The definite derivative value of f(x) in x = %d is %.6f',x0,dd)
The output is the following:
Enter the x value: 4
Enter your function: sqrt(x)
Enter your function: sqrt(x)
The definite derivative value of f(x) in x = 4 is 0.250000
This code asks for the function twice. I would like to know how to solve that problem. Thanks in advance.
No, it is not possible to enter a function directly, but you can enter the body of the function as a string and build the function with deff. (Your original code prompts twice because numderivative evaluates f at more than one point, and every evaluation runs input again.)
x0 = input('Enter the x value: ') // x0 = 4
instr = input('Enter the expression to differentiate as a function of x: ', "s") // sqrt(x)
deff("y=f(x)","y="+instr)
dd = numderivative(f,x0)
printf('The definite derivative value of f(x) in x = %d is %.6f',x0,dd)
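With the same inputs as before, the session should now prompt only once per value and look something like this (the derivative of sqrt(x) at x = 4 is 0.25):
Enter the x value: 4
Enter the expression to differentiate as a function of x: sqrt(x)
The definite derivative value of f(x) in x = 4 is 0.250000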

With Julia Symbolics, can I solve for a variable in an equation?

I would like to solve for a in y = √((a^2 + b^2))
What I tried:
julia> using Symbolics
julia> @variables a b
(a, b)
julia> y = √((a^2 + b^2))
sqrt(a^2 + b^2)
julia> eq = y = √((a^2 + b^2))
sqrt(a^2 + b^2)
julia> eq
sqrt(a^2 + b^2)
And then to solve, I tried:
julia> Symbolics.solve_for(eq,[a])
julia> Symbolics.solve_for(eq,a)
julia> Symbolics.solve_for(y,[a])
julia> Symbolics.solve_for(y,a)
which all resulted in the error:
ERROR: type Num has no field rhs
There are two problems in your code. The first is that an equation needs two sides, a left-hand side (lhs) and a right-hand side (rhs). Your error message points at exactly this: sqrt(a^2 + b^2) is a Num, not an equation, because a and b are Num variables (they are supposed to eventually evaluate to numbers). In Symbolics.jl, the way to declare an equation is to use ~, so the right way to express your equation is
@variables a b y
eq = y ~ √((a^2 + b^2))
Unfortunately, Symbolics.jl cannot solve this equation for you yet, because solve_for only handles systems of linear equations, just as the documentation says:
Currently only works if all equations are linear. check if the expr is linear w.r.t vars.
So it will throw an AssertionError: islinear(ex, vars). You can, however, try the function on a simple linear equation such as y ~ a + b:
julia> eq = y ~ a + b
y ~ a + b

julia> Symbolics.solve_for([eq], [a])
1-element Vector{Num}:
 y - b
By the way, you can turn off the linearity check with the check=false parameter, but then Symbolics.jl is almost guaranteed to give you a wrong result. For example, with check=false it reports the solution of y ~ √(a^2 + b^2) for a as a + y*sqrt(a^2 + b^2)*(a^-1) - ((a^-1)*(sqrt(a^2 + b^2)^2)), which is incorrect.
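For this particular equation you don't need a solver anyway: squaring both sides gives a^2 = y^2 - b^2, so a = ±√(y^2 - b^2). A minimal sketch checking the positive root numerically:
using Symbolics
@variables b y
candidate = sqrt(y^2 - b^2)    # a = +sqrt(y^2 - b^2), from squaring both sides
# Spot check at b = 3, y = 5: expect a = 4, since sqrt(4^2 + 3^2) == 5
substitute(candidate, Dict(b => 3, y => 5))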

Reassign function and avoid recursive definition in Julia

I need to operate on a sequence of functions
h_k(x) = (I + f_k(·))^k g(x)
for each k=1,...,N.
A basic example (N=2, f_k=f) is the following:
f(x) = x^2
g(x) = x
h1(x) = g(x) + f(g(x))
h2(x) = g(x) + f(g(x)) + f(g(x) + f(g(x)))
println(h1(1)) # returns 2
println(h2(1)) # returns 6
I need to write this in a loop and it would be best to redefine g(x) at each iteration. Unfortunately, I do not know how to do this in Julia without conflicting with the syntax for a recursive definition of g(x). Indeed,
f(x) = x^2
g(x) = x
for i = 1:2
    global g(x) = g(x) + f(g(x))
    println(g(1))
end
results in a StackOverflowError.
In Julia, what is the proper way to redefine g(x), using its previous definition?
P.S. For those who would suggest that this problem could be solved with recursion: I want to use a for loop because of how the functions f_k(x) (in the above, each f_k = f) are computed in the real problem that this derives from.
I am not sure if it is the best way, but a natural approach is to use anonymous functions, like this:
let
    f(x) = x^2
    g = x -> x
    for i = 1:2
        l = g
        g = x -> l(x) + f(l(x))
        println(g(1))
    end
end
or like this
f(x) = x^2
g = x -> x
for i = 1:4
    l = g
    global g = x -> l(x) + f(l(x))
    println(g(1))
end
(I prefer the former option using let as it avoids using global variables)
The trick is that l is a loop-local variable that gets a fresh binding at each iteration, while g lives outside the loop; each new g therefore captures the previous function through l rather than referring to itself, which is what caused the StackOverflowError.
You might also check out this section of the Julia manual.
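Another way to avoid the self-reference (a sketch of my own, reusing the f and values from the example) is to store the successive functions in a vector, so no name is ever rebound to a definition that mentions itself:
f(x) = x^2
hs = Vector{Function}(undef, 3)
hs[1] = x -> x                        # g
for k in 2:3
    prev = hs[k-1]                    # capture the previous function
    hs[k] = x -> prev(x) + f(prev(x))
end
println(hs[2](1))   # 2
println(hs[3](1))   # 6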

How do I display a math function in Julia?

I'm new to Julia and I'm trying to learn how to do calculus with it. If I compute the gradient of a function with ForwardDiff, as in the code below, how can I see the resulting function?
I know that if I plug in some values it gives me the value of the gradient at that point, but I just want to see the function itself (the gradient of f1).
julia> gradf1(x1, x2) = ForwardDiff.gradient(z -> f1(z[1], z[2]), [x1, x2])
gradf1 (generic function with 1 method)
To elaborate on Felipe Lema's comment, here are some examples using SymPy.jl for various tasks:
@vars x y z
f(x,y,z) = x^2 * y * z
VF(x,y,z) = [x*y, y*z, z*x]
diff(f(x,y,z), x) # ∂f/∂x
diff.(f(x,y,z), [x,y,z]) # ∇f, gradient
diff.(VF(x,y,z), [x,y,z]) |> sum # ∇⋅VF, divergence
J = VF(x,y,z).jacobian([x,y,z])
sum(diag(J)) # ∇⋅VF, divergence
Mx,Nx, Px, My,Ny,Py, Mz, Nz, Pz = J
[Py-Nz, Mz-Px, Nx-My] # ∇×VF
The divergence and gradient are also part of SymPy, but not exposed. Their use is more general, but cumbersome for this task. For example, this finds the curl:
import PyCall
PyCall.pyimport_conda("sympy.physics.vector", "sympy")
RF = sympy.physics.vector.ReferenceFrame("R")
v1 = get(RF,0)*get(RF,1)*RF.x + get(RF,1)*get(RF,2)*RF.y + get(RF,2)*get(RF,0)*RF.z
sympy.physics.vector.curl(v1, RF)
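If you would rather stay with pure Julia packages, Symbolics.jl can also display a gradient symbolically; a minimal sketch with a made-up f1 (the question does not show its definition):
using Symbolics
@variables x1 x2
f1(x1, x2) = x1^2 + 3x2^2   # hypothetical example; f1 is not given in the question
Symbolics.gradient(f1(x1, x2), [x1, x2])
# 2-element Vector{Num}:
#  2x1
#  6x2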