Could someone please help me debug this code? I am almost certain there is nothing wrong, but Julia keeps giving me an error. The code basically implements the problem statement: I discretize, then a function computes the sums that make up Erof, then I take the gradient to compute the step used in gradient descent. The debugger in Julia is a nightmare, please help.
If someone has a clue as to what the problem is, please let me know.
You can see the error line. It says no method matching colon(::Int64, ::Tuple{Int64}). This means that N in for i = 1:N is a tuple, but it should not be a tuple: N must be an integer.
N = size(U) in line 3 returns a tuple regardless of whether U is a Vector or a multi-dimensional Array.
A range endpoint must be an integer, so change your N = size(U) to N = length(U), or add the dimension argument to your size call (e.g. size(U, 1)).
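The same pitfall exists in other array libraries; here is a minimal Python/NumPy sketch of the analogous mistake and fix (the array U is a stand-in, not the asker's data):

```python
import numpy as np

# A 1-D array with 5 elements, standing in for the U in the question.
U = np.zeros(5)

# np.shape (like Julia's size) returns a tuple even for a 1-D array:
N_tuple = np.shape(U)        # (5,) -- not usable as a range endpoint

# len (like Julia's length) returns a plain integer:
N = len(U)                   # 5

# range needs an integer endpoint, so this loop works:
total = sum(U[i] for i in range(N))
```

Passing `N_tuple` to `range` would raise a `TypeError`, which is the Python analogue of the `no method matching colon` error above.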
I'm trying to solve the integral from 0 to t, where the value of t is not known but is certainly finite, of the following function:
e^(x*s) * x^(b-1) * mlf(-x^b, b, b) dx
where s is again a real-valued variable whose value is unknown, and b is the parameter of the Mittag-Leffler function, such that 0 < b < 1. I have tried with "Ryacas", but whenever I try to define a variable as symbolic, say for example:
x <- Sym("x")
the function "Sym" is not found, so I cannot even try to integrate the function.
Any suggestions will be really appreciated!
I cannot put the whole code here, and I was not able to reproduce the problem with a smaller example, but here is the beginning of the code:
using JuMP, Cbc, StatsBase
n = 3;
V = 1:(2n+1);
model = Model(with_optimizer(Cbc.Optimizer, seconds=120));
@variable(model, x[V], Bin);
...
@objective(model, Min, total_blah);
JuMP.optimize!(model)
result = termination_status(model)
JuMP.objective_value(model)
xsol = JuMP.value.(x);
The problem I have is that the solver returns a solution where some entries of xsol have values like 0.99995, where I am expecting binary values, i.e. either 0 or 1.
Can someone explain this behavior?
I looked this up: CBC has an option called integerTolerance (or integerT) that helps CBC decide whether a variable is integer valued. Using CBC.exe, I see:
Coin:integerTolerance
integerTolerance has value 1e-006
Indeed the default is 1e-6. You cannot set it to zero but you can make it smaller (valid range is 1e-020 to 0.5). (The only solver I know of that allows this tolerance to be set to zero is Cplex; usually doing that leads to longer solution times).
In general I would advise keeping it as it is. If small deviations from integer values bother you, round the integer variables in the solution before printing. This gives better-looking solutions (but this rounding step may make the solution slightly infeasible).
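The rounding step suggested above can be a one-pass helper; here is a Python sketch (the function name round_binary and the tolerance value are my own, not part of JuMP or CBC):

```python
def round_binary(values, tol=1e-4):
    """Round near-integer solver output to exact 0/1 values.

    Raises ValueError if a value is too far from an integer to
    round safely (which would hide a genuine solver problem).
    """
    rounded = []
    for v in values:
        r = round(v)
        if abs(v - r) > tol:
            raise ValueError(f"value {v} is not within {tol} of an integer")
        rounded.append(int(r))
    return rounded

# Values like those in the question round cleanly:
xsol = [0.99995, 0.0, 1.0, 2.1e-5]
cleaned = round_binary(xsol)   # [1, 0, 1, 0]
```

Raising on values far from integral, rather than silently rounding them, keeps the helper from masking an actual infeasibility.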
Can someone explain to me how the optim function works in Scilab and give me a short example of it?
What I am trying to do is to maximize this function and find the optimal value:
function [f, g, ind]=cost(x, ind)
f= -x.^2
g=2*x
endfunction
// Simplest call
x0 = [1; -1; 1];
[fopt, xopt] = optim(cost, x0)
When I try to run the function, I receive this error:
Variable returned by scilab argument function is incorrect.
I think I am making some very basic mistake but can't see where.
I think the answer is that -x.^2 does not return a scalar but a vector (x is a vector, and x.^2 is an elementwise operation). You probably want to write something like x'*x. The objective function of an optimization problem should always be scalar; otherwise we end up with a multi-objective or multi-criteria problem, which is a whole different type of problem.
Minimizing -x'*x is probably not a good idea
The gradient is not correct for f=-x'*x (but see previous point).
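For contrast, here is the well-posed version of the problem sketched in Python with SciPy (an illustration of a scalar objective with its matching analytic gradient, not Scilab's optim): minimizing x'*x, whose gradient is 2*x, from the same starting point as the question.

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    # Scalar objective: x'x (a single number, as optimizers require).
    return float(x @ x)

def grad(x):
    # Analytic gradient of x'x is 2x -- it must match the objective.
    return 2.0 * x

x0 = np.array([1.0, -1.0, 1.0])
res = minimize(cost, x0, jac=grad, method="BFGS")
# The minimum of x'x is at the origin.
```

Note that minimizing -x'*x instead would be unbounded below, which is the point made above.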
I wish to find an optimisation tool in R that lets me determine the value of an input parameter (say, a specific value between 0.001 and 0.1) that results in my function producing a desired output value.
My function takes an input parameter and computes a value. I want this output value to exactly match a predetermined number, so the function outputs the absolute of the difference between these two values; when they are identical, the output of the function is zero.
I've tried optimize(), but it seems to be set up to minimise the input parameter, not the output value. I also tried uniroot(), but it produces the error f() values at end points not of opposite sign, suggesting that it doesn't like the fact that increasing/decreasing the input parameter reduces the output value up to a point, but going beyond that point then increases it again.
Apologies if I'm missing something obvious here—I'm completely new to optimising functions.
Indeed you are missing something obvious :-) Here is how you could formulate your problem.
Assuming the function that must equal a desired output value is f.
Define a function g satisfying
g <- function(x) f(x) - output_value
Now you can use uniroot to find a zero of g. But you must provide endpoints that satisfy the requirements of uniroot, i.e. the value of g at one endpoint must be positive and the value at the other endpoint must be negative (or the other way around).
Example:
f <- function(x) x - 10
g <- function(x) f(x) - 8
then
uniroot(g,c(0,20))
will do what you want but
uniroot(g,c(0,2))
will issue the error message values at end points not of opposite sign.
You could also use an optimization function, but then you want to minimize the function g. To set you straight: optimize does not minimize the input parameter. Read the help thoroughly.
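The same root-finding approach, sketched in Python (SciPy's brentq plays the role of uniroot here; f and the target value 8 mirror the R example above):

```python
from scipy.optimize import brentq

def f(x):
    return x - 10

def g(x):
    # Zero exactly where f(x) equals the desired output value 8.
    return f(x) - 8

# The endpoints must bracket the root: g(0) = -18 < 0 and g(20) = 2 > 0.
root = brentq(g, 0, 20)   # 18.0
```

Calling brentq(g, 0, 2) instead would raise a ValueError about the signs at the endpoints, the same complaint uniroot makes.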
I have a function f(x,y) whose outcome is random (I take the mean of 20 random numbers that depend on x and y). I see no way to modify this function to make it symbolic.
And when I run
x,y = var('x,y')
d = plot_vector_field((f(x),x), (x,0,1), (y,0,1))
it says it can't cast a symbolic expression to a real or rational number. In fact it stops when I write:
a=matrix(RR,1,N)
a[0]=x
What is the way to convert this variable to real numbers at the beginning, compute f(x), and draw a vector field? Or just draw a lot of arrows with slope (f(x), x)?
I can create something sort of like yours, though with no errors; at least it doesn't do what you want.
def f(m,n):
    return m*randint(100,200)-n*randint(100,200)
var('x,y')
plot_vector_field((f(x,y),f(y,x)),(x,0,1),(y,0,1))
The reason is that Python functions evaluate immediately; in this case, f(x,y) was 161*x - 114*y, though that will change with each invocation.
My suspicion is that your problem is similar: the immediate evaluation of the Python function, once and for all. Instead, try lambda functions. They are annoying but very useful in this case.
var('x,y')
plot_vector_field((lambda x,y: f(x,y), lambda x,y: f(y,x)),(x,0,1),(y,0,1))
Wow, now I have to find an excuse to show off this picture; cool stuff. I hope your error ends up being very similar.
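The evaluate-once-versus-every-call distinction is easy to see in plain Python (the function below is a stand-in for the randomized f in the question; the seed is just to make the demonstration deterministic):

```python
import random

random.seed(0)  # deterministic for the demonstration

def f(m, n):
    return m * random.randint(100, 200) - n * random.randint(100, 200)

# Evaluating f once fixes the random draw; the same number is reused:
value_once = f(1, 0)
reused = [value_once for _ in range(3)]     # three identical values

# A lambda defers the call, so f is re-evaluated each time:
g = lambda: f(1, 0)
fresh = [g() for _ in range(3)]             # three independent draws
```

This is exactly why passing f(x,y) to plot_vector_field bakes in one random draw, while passing a lambda re-evaluates per point.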