As the title says, I want to include a block where I can run a Scilab expression/function/script given certain inputs. I can see that Xcos/Scicos can include C, Fortran and Modelica. There is an Expression block:
but the functions are pretty limited:
sin, cos, tan, exp, log, sinh, cosh, tanh, int, round, ceil, floor, sign, abs, max, min, asin, acos, atan, asinh, acosh, atanh, atan2, log10.
For example, if I want to solve a second-order equation ax^2 + bx + c = d, there is no sqrt or power/^ operator/function! Ideally I want to just run a Scilab script/function where I have complete freedom. I would appreciate your help in finding out whether such a block exists in either Xcos or Scicos.
Thanks to Rupak and Anuradha from Spoken-Tutorial IIT Bombay I found the solution. Create a Scilab function as:
function [y1, y2, ...] = myFunction(u1, u2, ...)
    // some commands ...
    yi = ...; // assign each return value
endfunction
and save it as myFunction.sci in your preferred location.
Then execute the function once (in the editor) or run the command:
exec('path\to\myFunction.sci', -1)
in the console, so that it is loaded into Scilab's memory.
Then use the Scilab function block in your block diagram:
Open the Scilab Multiple Values Request dialog by double-clicking on the block, pressing Ctrl+B, or right-clicking and selecting Block Parameters ...:
Here you can specify the sizes of the input and output matrices. For example, [1,1;2,3] refers to two matrices of sizes 1×1 and 2×3. After selecting OK, the Scilab Input Value Request dialog opens:
Here you can enter the function you just defined, y=myFunction(u);, or use any other built-in Scilab syntax. Now simply select OK four times until the settings are confirmed.
For a more elaborate example you may follow this YouTube tutorial.
I'm starting to learn Julia, and I was wondering if there is an equivalent function in Julia that does the same as the epsilon function in Fortran.
In Fortran, the epsilon function of a variable x gives the smallest number of the same kind as x that satisfies 1 + epsilon(x) > 1.
I thought the function eps() in Julia would be something similar, so I tried
eps(typeof(x))
but I got the error:
MethodError: no method matching eps(::Type{Int64})
Is there any other function that resembles the Fortran one that can be used on the different variables of the code?
If you really need eps to work for both Float and Int types, you can overload a method of eps for Int by writing Base.eps(::Type{Int}) = one(Int). Then your code will work. This is okay for personal projects, but not a good idea for code you intend to share.
As the docstring for eps says:
help?> eps
eps(::Type{T}) where T<:AbstractFloat
eps()
eps is only defined for subtypes of AbstractFloat, i.e. floating-point numbers. It seems that your variable x is an integer, as the error message no method matching eps(::Type{Int64}) says. It doesn't really make sense to define eps for integers, since "the smallest number of the same kind as x that satisfies 1 + epsilon(x) > 1" is always going to be 1 for integers.
If you did want to get a 1 of the specific type of integer you have, you can use the one function instead:
julia> x = UInt8(42)
0x2a
julia> one(typeof(x))
0x01
It seems like what you actually want is
eps(x)
not
eps(typeof(x))
For floating-point x, the former works directly on the value. One subtlety: Julia's eps(x) returns the spacing between x and the next representable number of the same type, whereas Fortran's epsilon(x) depends only on the kind of x, not its value; the two coincide when x is 1.
When I was in high school, I figured out how to program my TI-84 Plus calculator to do quadratic equations for me. Like the goody-two-shoes I was, I deleted the program before the final exam. I am trying to recreate the program now, but it's not working well. Here's my code:
:Prompt A, B, C
:(-B+√(B²-4AC))/2A→Y
:(-B-√(B²-4AC))/2A→Z
:Disp Y
:Disp Z
(→ corresponds to the STO> (store) button on the calculator, which allows a user to set a value for a given letter variable.)
As far as I can tell, this should work. The math and the parentheses seem to be in order, and the Prompt function works (after the program finishes, asking the calculator to display A, B, and C shows the values stored from the last time the program was run).
When I ask it to calculate quadratic equations that I already know the answers to, it gives me funny numbers. Entering A=1, B=-3, C=2, which should return x-intercept values of 1 and 2, returns 2 and 0 instead. The x-intercepts of 0=3x²-10x+7 are 1 and 7/3, but the calculator returns 21 and 0. I can't reproduce it right now, but this program has also returned imaginary numbers where there shouldn't have been any.
What's wrong with this code? The math works (entering the second and third lines directly in the calculator, after storing values in the variables, does return the correct values), and the Prompt and Disp functions work; so what's wrong here?
Order of operations strikes again. The expression
(-B+√(B²-4AC))/2A
is being parsed as
((-B+√(B²-4AC))/2)*A
Change /2A to /(2A) on both lines to fix this.
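The same precedence trap is easy to demonstrate outside the calculator. Here is a small Python sketch (helper names are mine; Python also evaluates / and * left to right), using the 3x²-10x+7 example from the question:

import math

def roots_buggy(a, b, c):
    # mirrors the calculator's parsing: /2A groups as (.../2)*A
    d = math.sqrt(b*b - 4*a*c)
    return (-b + d) / 2*a, (-b - d) / 2*a

def roots_fixed(a, b, c):
    d = math.sqrt(b*b - 4*a*c)
    return (-b + d) / (2*a), (-b - d) / (2*a)

print(roots_buggy(3, -10, 7))  # (21.0, 9.0) -- reproduces the reported 21
print(roots_fixed(3, -10, 7))  # (2.3333..., 1.0) -- the correct x-intercepts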
I have a function f(x,y) whose outcome is random (I take the mean of 20 random numbers that depend on x and y). I see no way to modify this function to make it symbolic.
And when I run
x,y = var('x,y')
d = plot_vector_field((f(x),x), (x,0,1), (y,0,1))
it says it can't cast a symbolic expression to a real or rational number. In fact it stops when I write:
a=matrix(RR,1,N)
a[0]=x
What is the way to make this variable real-valued from the beginning, compute f(x), and draw a vector field? Or just draw a lot of arrows with slope (f(x),x)?
I can create something sort of like yours, though with no errors; at least it likewise doesn't do what you want.
def f(m,n):
    return m*randint(100,200) - n*randint(100,200)

var('x,y')
plot_vector_field((f(x,y), f(y,x)), (x,0,1), (y,0,1))
The reason is that Python functions evaluate immediately; in this case, f(x,y) became 161*x - 114*y, though that will change with each invocation.
My suspicion is that your problem is similar: the immediate, once-and-for-all evaluation of the Python function. Instead, try lambda functions. They are annoying but very useful in this case.
var('x,y')
plot_vector_field((lambda x,y: f(x,y), lambda x,y: f(y,x)),(x,0,1),(y,0,1))
Wow, now I have to find an excuse to show off this picture, cool stuff. I hope your error ends up being very similar.
Hey there,
I have a mathematical function (multidimensional, meaning there's an index that I pass to the C++ function to select which single mathematical sub-function to return). E.g., let's say I have a mathematical function like this:
f = Vector(x^2*y^2 / y^2 / x^2*z^2)
I would implement it like this:
double myFunc(int function_index)
{
    switch(function_index)
    {
        case 1:
            return PNT[0]*PNT[0]*PNT[1]*PNT[1];
        case 2:
            return PNT[1]*PNT[1];
        case 3:
            return PNT[2]*PNT[2]*PNT[1]*PNT[1];
    }
}
where PNT is defined globally as double PNT[ NUM_COORDINATES ]. Now I want to implement the derivatives of each function for each coordinate, thus generating the derivative matrix (columns = coordinates; rows = single functions). I have already written my kernel, which works so far and which calls myFunc().
The problem is: to calculate the derivative of the mathematical sub-function i with respect to coordinate j, in sequential mode (e.g. on CPUs) I would use the following code (simplified, because usually you would decrease h until you reach a certain precision of your derivative):
f0 = myFunc(i);
PNT[ j ] += h;
derivative = (myFunc(i)-f0)/h;
PNT[ j ] -= h;
Now, as I want to do this on the GPU in parallel, the problem comes up: what to do with PNT? As I have to increase certain coordinates by h, calculate the value and then decrease them again, how do I do it without 'disturbing' the other threads? I can't modify PNT, because other threads need the 'original' point to modify their own coordinates.
The second idea I had was to store one modified point per thread, but I discarded this idea quite fast, because with some thousand threads in parallel this is quite bad and probably slow (and perhaps not realizable at all because of memory limits).
'FINAL' SOLUTION
So what I currently do is the following, which adds the value add at runtime (without storing it anywhere) via a preprocessor macro to the coordinate identified by coordinate_index.
#define X(n) ((coordinate_index == n) ? (PNT[n]+add) : PNT[n])

__device__ double myFunc(int function_index, int coordinate_index, double add)
{
    // Example: f[i] = x[i]^3
    return X(function_index)*X(function_index)*X(function_index);
}
That works quite nicely and fast. When using a derivative matrix with 10000 functions and 10000 coordinates, it takes only about 0.5 seconds. PNT is defined either globally or in constant memory as __constant__ double PNT[ NUM_COORDINATES ];, depending on the preprocessor variable USE_CONST.
The line return X(function_index)*X(function_index)*X(function_index); is just an example where every sub-function follows the same scheme; mathematically:
f = Vector(x0^3 / x1^3 / ... / xN^3)
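For illustration, here is the same 'virtual perturbation' idea as a minimal Python sketch (all names are mine, independent of the CUDA code): the offset is added on the fly instead of mutating the shared point.

def X(pnt, n, coordinate_index, add):
    # read coordinate n, virtually perturbed by add if it is the chosen one
    return pnt[n] + add if n == coordinate_index else pnt[n]

def my_func(pnt, function_index, coordinate_index=-1, add=0.0):
    # example matching the scheme above: f[i] = x[i]^3
    xi = X(pnt, function_index, coordinate_index, add)
    return xi**3

def derivative(pnt, i, j, h=1e-6):
    # first-order forward difference of sub-function i w.r.t. coordinate j,
    # without ever modifying the shared point pnt
    return (my_func(pnt, i, j, h) - my_func(pnt, i)) / h

print(derivative([1.0, 2.0, 3.0], 1, 1))  # d(x1^3)/dx1 at x1=2 is about 12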
NOW THE BIG PROBLEM ARISES:
myFunc is a mathematical function which the user should be able to implement as he likes. E.g. he could also implement the following mathematical function:
f = Vector(x0^2*x1^2*...*xN^2 / x0^2*x1^2*...*xN^2 / ... / x0^2*x1^2*...*xN^2)
so that every function looks the same. As the programmer, you should code only once, independently of the implemented mathematical function. So when the above function is implemented in C++, it looks like the following:
__device__ double myFunc(int function_index, int coordinate_index, double add)
{
    double ret = 1.0;
    for(int i = 0; i < NUM_COORDINATES; i++)
        ret *= X(i)*X(i);
    return ret;
}
And now the memory accesses are very 'weird' and bad for performance, because each thread needs access to each element of PNT twice. Surely, in such a case where every function looks the same, I could rewrite the complete algorithm that surrounds the calls to myFunc, but as I stated already: I don't want my code to depend on the user-implemented function myFunc...
Could anybody come up with an idea of how to solve this problem?
Thanks!
Rewinding back to the beginning and starting with a clean sheet, it seems you want to be able to do two things:
compute an arbitrary scalar-valued function over an input array
approximate the partial derivatives of an arbitrary scalar-valued function over the input array, using first-order accurate finite differencing (see the formula below)
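(Concretely, first-order forward differencing approximates the partial derivative as df/dx_j ≈ (f(x + h*e_j) - f(x)) / h, where e_j is the unit vector along coordinate j; the error is of order h.)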
While the function is scalar valued and arbitrary, it seems that there are, in fact, two clear forms which this function can take:
A scalar valued function with scalar arguments
A scalar valued function with vector arguments
You appear to have started with the first type of function, have put together code to deal with computing both the function and the approximate derivative, and are now wrestling with how to deal with the second case using the same code.
If this is a reasonable summary of the problem, then please indicate so in a comment and I will continue to expand it with some code samples and concepts. If it isn't, I will delete it in a few days.
In comments, I have been trying to suggest that conflating the first type of function with the second is not a good approach. The requirements for correctness in parallel execution, and the best way of extracting parallelism and performance on the GPU are very different. You would be better served by treating both types of functions separately in two different code frameworks with different usage models. When a given mathematical expression needs to be implemented, the "user" should make a basic classification as to whether that expression is like the model of the first type of function, or the second. The act of classification is what drives algorithmic selection in your code. This type of "classification by algorithm" is almost universal in well designed libraries - you can find it in C++ template libraries like Boost and the STL, and you can find it in legacy Fortran codes like the BLAS.
I have a function that takes a floating-point number and returns a floating-point number. It can be assumed that if you were to graph the output of this function, it would be 'n'-shaped, i.e. there would be a single maximum point and no other points on the function with a zero slope. We also know that the input value that yields this maximum output lies between two known points, say 0.0 and 1.0.
I need to efficiently find the input value that yields the maximum output value to some degree of approximation, without doing an exhaustive search.
I'm looking for something similar to Newton's Method which finds the roots of a function, but since my function is opaque I can't get its derivative.
I would like to down-thumb all the other answers so far, for various reasons, but I won't.
An excellent and efficient method for minimizing (or maximizing) smooth functions when derivatives are not available is parabolic interpolation. It is common to write the algorithm so that it temporarily switches to golden-section search when parabolic interpolation does not progress as fast as golden-section would; that combination is Brent's minimizer.
I wrote such an algorithm in C++. Any offers?
UPDATE: There is a C version of the Brent minimizer in GSL. The archives are here: ftp://ftp.club.cc.cmu.edu/gnu/gsl/ Note that it will be covered by some flavor of GNU "copyleft."
As I write this, the latest-and-greatest appears to be gsl-1.14.tar.gz. The minimizer is located in the file gsl-1.14/min/brent.c. It appears to have termination criteria similar to what I implemented. I have not studied how it decides to switch to golden section, but for the OP, that is probably moot.
UPDATE 2: I googled up a public domain java version, translated from FORTRAN. I cannot vouch for its quality. http://www1.fpl.fs.fed.us/Fmin.java I notice that the hard-coded machine efficiency ("machine precision" in the comments) is 1/2 the value for a typical PC today. Change the value of eps to 2.22045e-16.
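If C++ isn't a hard requirement, SciPy also ships a Brent-style bounded minimizer. A minimal sketch (my f below is just a stand-in for the OP's opaque 'n'-shaped function), maximizing by minimizing the negation:

from scipy.optimize import minimize_scalar

def f(x):
    # stand-in for the opaque unimodal function; true maximum at x = 0.3
    return -(x - 0.3)**2 + 1.0

# method='bounded' combines golden-section search with parabolic
# interpolation on the given interval, in the spirit of Brent's minimizer
res = minimize_scalar(lambda x: -f(x), bounds=(0.0, 1.0), method='bounded')
print(res.x)  # about 0.3, the input that maximizes f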
Edit 2: The method described in Jive Dadson's answer is a better way to go about this. I'm leaving my answer up since it's easier to implement, if speed isn't too much of an issue.
Use a form of binary search, combined with numeric derivative approximations.
Given the interval [a, b], let x = (a + b) / 2.
Let epsilon be something very small.
Is (f(x + epsilon) - f(x)) positive? If yes, the function is still growing at x, so you recursively search the interval [x, b]
Otherwise, search the interval [a, x].
There might be a problem if the max lies between x and x + epsilon, but you might give this a try.
Edit: The advantage of this approach is that it exploits the known properties of the function in question. That is, I assumed that by "n"-shaped you meant increasing-max-decreasing. Here's some Python code I wrote to test the algorithm:
def f(x):
    return -x * (x - 1.0)

def findMax(function, a, b, maxSlope):
    x = (a + b) / 2.0
    e = 0.0001
    slope = (function(x + e) - function(x)) / e
    if abs(slope) < maxSlope:
        return x
    if slope > 0:
        return findMax(function, x, b, maxSlope)
    else:
        return findMax(function, a, x, maxSlope)
Typing findMax(f, 0, 3, 0.01) should return 0.504, as desired.
For optimizing a concave function, which is the type of function you are talking about, without evaluating the derivative I would use the secant method.
Given the two initial values x[0]=0.0 and x[1]=1.0 I would proceed to compute the next approximations as:
def next_x(x, xprev):
    return x - f(x) * (x - xprev) / (f(x) - f(xprev))
and thus compute x[2], x[3], ... until the change in x becomes small enough.
Edit: As Jive explains, this solution is for root finding which is not the question posed. For optimization the proper solution is the Brent minimizer as explained in his answer.
The Levenberg-Marquardt algorithm is a Newton's-method-like optimizer. It has a C/C++ implementation, levmar, that doesn't require you to define the derivative function. Instead it evaluates the objective function in the current neighborhood to move toward the maximum.
BTW: this website appears to have been updated since I last visited it; I hope it's even the same one I remembered. Apparently it now also supports other languages.
Given that it's only a function of a single variable and has one extremum in the interval, you don't really need Newton's method. Some sort of line search algorithm should suffice. This wikipedia article is actually not a bad starting point, if short on details. Note in particular that you could just use the method described under "direct search", starting with the end points of your interval as your two points.
I'm not sure if you'd consider that an "exhaustive search", but I think it should actually be pretty fast for this sort of function (that is, a continuous, smooth function with only one local extremum in the given interval).
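As a concrete sketch of such a direct-search approach, here is a minimal golden-section search in Python (names are mine; it assumes, as stated, exactly one interior maximum on [a, b]):

def golden_max(f, a, b, tol=1e-6):
    # shrink [a, b] around the maximum using the golden ratio
    invphi = (5 ** 0.5 - 1) / 2  # 1/phi, about 0.618
    while b - a > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if f(c) < f(d):
            a = c  # the maximum lies in [c, b]
        else:
            b = d  # the maximum lies in [a, d]
    return (a + b) / 2

print(golden_max(lambda x: -x * (x - 1.0), 0.0, 1.0))  # about 0.5

(A production version would reuse one function evaluation per iteration; this sketch recomputes both for clarity.)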
You could reduce it to a simple linear fit on the deltas, finding the place where it crosses the x-axis. A linear fit can be done very quickly.
Or just take 3 points (left/top/right) and fit a parabola.
It depends mostly on the nature of the underlying relation between x and y, I think.
Edit: this is in case you have an array of values, as the question's title states. When you have a function, use Newton-Raphson.
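For the three-point parabola idea above, a minimal Python sketch (names are mine): fit the unique parabola through the three samples and return its vertex as the estimated maximizer.

def parabola_vertex(x1, y1, x2, y2, x3, y3):
    # Lagrange form of the quadratic through the three points
    d1 = (x1 - x2) * (x1 - x3)
    d2 = (x2 - x1) * (x2 - x3)
    d3 = (x3 - x1) * (x3 - x2)
    a = y1/d1 + y2/d2 + y3/d3  # x^2 coefficient
    b = -(y1*(x2 + x3)/d1 + y2*(x1 + x3)/d2 + y3*(x1 + x2)/d3)  # x coefficient
    return -b / (2*a)  # vertex of a*x^2 + b*x + c

# three samples of y = -x*(x - 1), whose true maximum is at x = 0.5
print(parabola_vertex(0.0, 0.0, 0.25, 0.1875, 1.0, 0.0))  # 0.5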