Is it possible to use the input command to enter functions in order to use numderivative? - scilab

I've been working on numerical methods to solve polynomial and non-polynomial equations. I wanted to use numderivative to calculate the derivative, at a given point, of a function entered by the user, with the following simple code:
clc
clear
x0 =input('Enter the x value: ') // x0 = 4
function y = f(x)
y = input('Enter your function: ') // y = sqrt(x)
endfunction
dd = numderivative(f,x0)
printf('The definite derivative value of f(x) in x = %d is %.6f',x0,dd)
The output is the following:
Enter the x value: 4
Enter your function: sqrt(x)
Enter your function: sqrt(x)
The definite derivative value of f(x) in x = 4 is 0.250000
This code asks for the function twice. I would like to know how to solve that problem. Thanks in advance.

No, it is not possible to enter a function directly, but you can enter the function's expression as a string and build the function from it with deff. (Your original code prompts twice because numderivative evaluates f at several sample points, and each evaluation runs input() again.)
x0 = input('Enter the x value: ') // x0 = 4
instr = input('Enter the expression to differentiate as a function of x: ',"s") // sqrt(x)
deff("y=f(x)","y="+instr)
dd = numderivative(f,x0)
printf('The definite derivative value of f(x) in x = %d is %.6f',x0,dd)
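For intuition, numderivative approximates the derivative with finite differences. A minimal central-difference sketch in Python (illustrative only, not Scilab's actual implementation; the name numderiv is my own):

```python
import math

def numderiv(f, x0, h=1e-6):
    """Central-difference approximation of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# derivative of sqrt(x) at x = 4; the exact value is 1/(2*sqrt(4)) = 0.25
dd = numderiv(math.sqrt, 4.0)
```

This also makes it clear why the Scilab version must receive a real function object: the solver has to evaluate it at perturbed points, which is exactly why deff is needed above.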

Related

Taking a forward derivative with ForwardDiff

I am trying to use ForwardDiff to compute the gradient of a function. My code is very simple:
buffer1 = 1.
buffer2 = 1.0-buffer1
buffers = [buffer1; buffer2];
K0 = [[100., 200.] [0.01, 0.001]];
q_sat = [100, 150];
K = exp.(log.(K0) * buffers);
f(x::Vector) = q_sat[1]*(K[1]*x[1]/(1+alpha[1]*K[1]*x[1]+alpha[2]*K[2]*x[2]))
x = [0.5; 0.5]
g = ForwardDiff.gradient(f(x), x)
The first lines define some constants, and then I define the function, which takes a vector and returns a real number. When trying to compute the gradient of f I get the error:
MethodError: objects of type Float64 are not callable
What could the problem be?
You want to differentiate f (the function), not f(x) (some scalar), so it must be
ForwardDiff.gradient(f, x)
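The distinction matters because gradient needs a callable it can evaluate at other points, whereas f(x) has already collapsed to a number. A minimal pure-Python finite-difference analogue (illustrative only; ForwardDiff itself uses forward-mode automatic differentiation with dual numbers, not finite differences):

```python
def grad(f, x, h=1e-6):
    """Central-difference gradient of f at x (x is a list of floats)."""
    g = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def f(x):
    return x[0] ** 2 + 3 * x[1]

g = grad(f, [2.0, 1.0])  # pass the function f itself, not the value f([2.0, 1.0])
```

Passing f([2.0, 1.0]) instead of f would hand grad a plain float, the same mistake the MethodError above is reporting in Julia.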

How do I display a math function in Julia?

I'm new to Julia and I'm trying to learn how to do calculus with it. If I compute the gradient of a function with ForwardDiff, as in the code below, how can I then see the resulting function?
I know that if I input some values it gives me the gradient's value at that point, but I just want to see the function itself (the gradient of f1).
julia> gradf1(x1, x2) = ForwardDiff.gradient(z -> f1(z[1], z[2]), [x1, x2])
gradf1 (generic function with 1 method)
To elaborate on Felipe Lema's comment, here are some examples using SymPy.jl for various tasks:
@vars x y z
f(x,y,z) = x^2 * y * z
VF(x,y,z) = [x*y, y*z, z*x]
diff(f(x,y,z), x) # ∂f/∂x
diff.(f(x,y,z), [x,y,z]) # ∇f, gradient
diff.(VF(x,y,z), [x,y,z]) |> sum # ∇⋅VF, divergence
J = VF(x,y,z).jacobian([x,y,z])
sum(diag(J)) # ∇⋅VF, divergence
Mx,Nx, Px, My,Ny,Py, Mz, Nz, Pz = J
[Py-Nz, Mz-Px, Nx-My] # ∇×VF
The divergence and gradient are also part of SymPy, but not exposed. Their use is more general, but cumbersome for this task. For example, this finds the curl:
import PyCall
PyCall.pyimport_conda("sympy.physics.vector", "sympy")
RF = sympy.physics.vector.ReferenceFrame("R")
v1 = get(RF,0)*get(RF,1)*RF.x + get(RF,1)*get(RF,2)*RF.y + get(RF,2)*get(RF,0)*RF.z
sympy.physics.vector.curl(v1, RF)
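The curl formula used above, [Py-Nz, Mz-Px, Nx-My] for VF = (M, N, P), can be cross-checked numerically. A pure-Python sketch (names are my own; independent of SymPy), using the example field VF = [x*y, y*z, z*x], whose analytic curl is [-y, -z, -x]:

```python
def curl(VF, p, h=1e-6):
    """Numeric curl of VF: R^3 -> R^3 at point p via central differences."""
    def d(i, j):
        # partial derivative of component i with respect to coordinate j
        pp = list(p); pm = list(p)
        pp[j] += h; pm[j] -= h
        return (VF(pp)[i] - VF(pm)[i]) / (2 * h)
    # curl = [Py - Nz, Mz - Px, Nx - My] with (M, N, P) = VF components 0, 1, 2
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

def VF(v):
    return [v[0] * v[1], v[1] * v[2], v[2] * v[0]]

c = curl(VF, [1.0, 2.0, 3.0])  # analytic curl is [-y, -z, -x] = [-2, -3, -1]
```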

Solving a non-linear System of Equations in SciLab

I am trying to solve the following system of equations in SciLab:
x^2 + y^2 = 0
x^4 + y^4 - 10 = 0
I defined the following function in SciLab:
function y=f3(x,y)
y = [x^2+y^2,x^4+y^4-10]
endfunction
That appeared to work. I found that f3(1,1) is: 2. -8.
So I then ran the following:
fsolve([0,0], f3)
and I got:
fsolve: exception caught in 'fct' subroutine.
at line 2 of function f3
in builtin fsolve
Undefined variable: y
I then defined the function fct as follows:
function y=fct(x,y)
y = [2*x+2*y, 4*x^3+4*y^3]
endfunction
I then ran the command:
fsolve([0,0], f3, fct)
and that produced the following message:
fsolve: exception caught in 'jac' subroutine.
at line 2 of function f3
in builtin fsolve
Undefined variable: y
Any additional comments? What am I doing wrong?
Checking help fsolve, you'll see that fsolve works on functions of a single argument. That means your f3 should receive a vector v instead of x and y, with x = v(1) and y = v(2). So your function should be:
function y = f3(v)
y = [v(1)^2 + v(2)^2,...
v(1)^4 + v(2)^4-10]
endfunction
This will solve the problem of not being able to run fsolve. However, a more serious problem is that your system has no real solution at all: x^2 + y^2 = 0 holds only at (0,0), and that point does not satisfy x^4 + y^4 - 10 = 0. Therefore, fsolve will not be able to find any solution:
--> [y,val,info]=fsolve([0,0],f3)
info =
4.
val =
0. -10.
y =
0. 0.
The help page says that for info == 4, "iteration is not making good progress."
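The single-vector-argument convention is the same one most root finders use. As an illustration, here is a minimal Newton iteration in pure Python on a solvable variant of the system (x^2 + y^2 - 10 = 0 and x - y = 0, with root x = y = sqrt(5); the variant system and all names are my own, not from the question):

```python
def newton(F, x, steps=50, h=1e-7):
    """Newton's method for a 2-equation system F(v) = 0, v a list of floats."""
    for _ in range(steps):
        Fx = F(x)
        n = len(x)
        # forward-difference Jacobian J[i][j] = dF_i/dx_j
        J = [[(F([x[k] + (h if k == j else 0.0) for k in range(n)])[i] - Fx[i]) / h
              for j in range(n)] for i in range(n)]
        # solve the 2x2 system J * dx = -Fx by Cramer's rule (n == 2 only)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx = [(-Fx[0] * J[1][1] + Fx[1] * J[0][1]) / det,
              (-Fx[1] * J[0][0] + Fx[0] * J[1][0]) / det]
        x = [x[0] + dx[0], x[1] + dx[1]]
    return x

def F(v):
    return [v[0] ** 2 + v[1] ** 2 - 10.0, v[0] - v[1]]

root = newton(F, [1.0, 1.0])  # converges to [sqrt(5), sqrt(5)]
```

Note that F takes one vector argument and returns one vector of residuals, exactly the shape fsolve expects of f3 above.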

Octave index out of bound error. Can't figure out why

I'm new to Octave and quite confused by this error I'm getting. My function f works for a (7,1) vector of ones, but for any other (7,1) vector I've tried, I get an index out of bound error.
To my knowledge, the indexing should be the same for the working input and the failing one; only the values at those indexes change. So why is this happening? What am I doing wrong?
Here's my code:
function asd
f([1,1,1,1,1,1,1]) #works
f([2,1,1,1,1,1,1]) #out of bound,
#same for no matter which value I replace with a 2
x = ones(7,1)
f(x) #works
x(1) = 2
f(x) #out of bound
endfunction
function y = f(x)
y = ones(7,1);
y(1) = x(1) - x(2) - x(6);
y(2) = x(2) - x(3) - x(4);
y(3) = x(3) + x(4) - x(5);
y(4) = x(5) + x(6) - x(7);
y(5) = 200((x(3))^2) - 75((x(4))^2);
y(6) = 100((x(2))^2) + 75((x(4))^2) + 100((x(5))^2) - 75((x(6))^2);
y(7) = 100((x(1))^2) + 75((x(6))^2) + 50((x(7))^2) - 10.285;
endfunction
here's the error:
error: index (4): out of bound 1
error: called from
asd>f at line 20 column 8
asd at line 3 column 3
You are trying to index the number 100, which is a single element, therefore only has index 1.
Doing 100(1) is equivalent to saying a = 100; a(1).
Therefore doing 100(2) results in an index out of bounds error.
What are you trying to do? Presumably you meant to multiply rather than index? In that case you can't write 100(something); you need 100 * (something) instead, e.g. y(5) = 200*(x(3)^2) - 75*(x(4)^2);

Comparing SAS and R results after resolving a system of differential equations

My main objective is to obtain the same results in SAS and in R. Sometimes, depending on the case, this is very easy; otherwise it is difficult, especially when computing something more complicated than usual.
To illustrate my case, I have the following system of differential equations:
y' = z
z' = b* y'+c*y
Let :
b = -2, c = -4, y(0) = 0 and z(0) = 1
To solve this system in SAS, we use PROC MODEL:
data t;
do time=0 to 40;
output;
end;
run;
proc model data=t ;
dependent y 0 z 1;
parm b -2 c -4;
dert.y = z;
dert.z = b * dert.y + c * y;
solve y z / dynamic solveprint out=out1;
run;
In R, we could write the following solution using the lsoda function of the deSolve package:
library(deSolve)
b <- -2;
c <- -4;
rigidode <- function(t, y, parms) {
with(as.list(y), {
dert.y <- z
dert.z <- b * dert.y + c * y
list(c(dert.y, dert.z))
})
}
yini <- c(y = 0, z = 1)
times <- seq(from=0,to=40,by=1)
out_ode <- ode (times = times, y = yini, func = rigidode, parms = NULL)
out_lsoda <- lsoda (times = times, y = yini, func = rigidode, parms = NULL)
Here are the results (the SAS and R output tables are omitted here):
For time t = 0,...,10 we obtain similar results, but for t = 10,...,40 differences appear. To me, these differences are significant.
To reduce them, I tightened the error tolerance in R to 1e-9 instead of 1e-6. I also verified that both tools use the same numerical integration methods and default assumptions.
Do you have any idea how to deal with this problem?
Sincerely yours,
Mily
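One way to arbitrate between two solvers is an independent baseline. Below is a fixed-step classic RK4 integrator in pure Python for the same system (a sketch; the names are my own, and the analytic solution y(t) = exp(-t)*sin(sqrt(3)*t)/sqrt(3) follows from the characteristic equation r^2 + 2r + 4 = 0 with y(0) = 0, z(0) = 1):

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classic fixed-step RK4 for y' = f(t, y), with y a list of floats."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b_ + 2 * c_ + d)
             for yi, a, b_, c_, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

b, c = -2.0, -4.0
# y' = z, z' = b*z + c*y  (the system above, with dert.y = z substituted)
f = lambda t, v: [v[1], b * v[1] + c * v[0]]
y1, z1 = rk4(f, [0.0, 1.0], 0.0, 1.0, 1000)
```

If SAS and R diverge at large t while both match such a baseline at small t, the culprit is usually accumulated error under different step-size and tolerance defaults, which supports tightening rtol/atol as you did.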
