DSolve with conditions in Mathematica

I would like to solve this equation in Mathematica:
DSolve[{p'[r] == 1/((r^2)*(((R - S)/(R^3)) - (1/(r^2)*(1 - S/r)))^(1/2))}, p[r], r]
but I have some supplementary conditions:
S is a strictly positive real,
R > 3*sqrt(3)*S/2,
and I want the solution over the interval r in ]R, +infinity[.
I am a beginner with Mathematica, so how do I specify these conditions?

Your existing code appears to produce a solution (albeit a large one) on Mathematica 8:
sol = DSolve[{p'[r] ==
1/((r^2)*(((R - S)/(R^3)) - (1/(r^2)*(1 - S/r)))^(1/2))}, p[r], r]
You can add the additional constraints on the solution, as part of the simplification. It doesn't appear to make a significant difference. Were you expecting something different?
Simplify[sol, {S, R} \[Element] Reals && S > 0 && R > 3*Sqrt[3]*S/2]
Minor Correction
FullSimplify[sol, {S, R} \[Element] Reals && S > 0 && R > 3*Sqrt[3]*S/2]
Appears to simplify some of the terms, but only a little.

Related

Define a correct constraint, like outside a 2-D rectangle in Julia with JuMP

I would like to define a constraint in an optimization problem as follows:
(x,y) not in {(x,y)|1.0 < x < 2.0, 3.0 < y < 4.0}.
What I tried is @constraint(model, (1.0 < x < 2.0 + 3.0 < y < 4.0) != 2), but it failed.
It seems that boolean operations are not allowed, so I have no idea how to proceed. Any advice is appreciated!
You should avoid introducing quadratic constraints (as in the other answer) and instead introduce binary variables. This increases the number of available solvers, and linear models generally take less time to solve.
Hence you should note that !(1.0 < x < 2.0) is equivalent to x <= 1 || x >= 2, which can be written in linear form as:
@variable(model, bx, Bin)
const M = 1000 # a number "big enough"
@constraint(model, x <= 1 + M*bx)
@constraint(model, x >= 2 - M*(1 - bx))
Here bx is a "switcher" variable that makes either the first or the second constraint binding.
The pattern for formulating the y constraint (3.0 < y < 4.0) would be the same.
Just note that you cannot have a constraint such as y != 3, since solvers obviously work with finite numerical accuracy; you would instead need to represent it as, for example, !(3-0.01 < y < 3+0.01) (still using the same pattern as above).
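A quick way to convince yourself that the big-M pair models the one-coordinate disjunction is to check, for each value of the switcher, which x values satisfy both constraints. This is a plain-Python sanity check of the logic, not JuMP code:

```python
# Sanity check of the big-M pair: for each value of the binary switcher
# bx, the two constraints should carve out exactly one branch of the
# disjunction x <= 1 OR x >= 2.
M = 1000.0  # "big enough" for the x range we probe

def feasible(x, bx):
    # both big-M constraints at once
    return x <= 1 + M * bx and x >= 2 - M * (1 - bx)

def in_disjunction(x):
    return x <= 1 or x >= 2

# x satisfies the constraints for SOME bx exactly when it lies in the disjunction
for x in [-5.0, 0.5, 1.0, 1.5, 1.99, 2.0, 7.0]:
    assert (feasible(x, 0) or feasible(x, 1)) == in_disjunction(x)
```

With bx = 0 the first constraint reduces to x <= 1 and the second becomes inactive; with bx = 1 the roles swap, which is exactly the disjunction.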
UPDATE: The previous solution in this answer turned out to be wrong (it excluded parts of the admissible region), so I felt obligated to provide another, correct solution. This solution partitions the admissible region into parts, solves a separate optimization problem for each part, and keeps the best solution. It is not an elegant approach, but if one does not have a good (commercial) solver it is one way. Commercial solvers usually go through a similar but more efficient process by the name of branch-and-bound.
using JuMP, Ipopt
function solveopt()
    bestobj = Inf
    bestx, besty = 0.0, 0.0
    for (ltside, xvar, val) in (
            (true, true, 2.0), (false, true, 3.0),
            (true, false, 3.0), (false, false, 4.0))
        m = Model(Ipopt.Optimizer)
        @variable(m, x)
        @variable(m, y)
        add_constraint(m, ScalarConstraint(xvar ? x : y,
            ltside ? MOI.LessThan(val) : MOI.GreaterThan(val)))
        # the following objective has its unconstrained optimum inside the box
        @NLobjective(m, Min, (x - 2.5)^2 + (y - 3.5)^2)
        optimize!(m)
        if objective_value(m) < bestobj
            bestobj = objective_value(m)
            bestx, besty = value(x), value(y)
        end
    end
    return bestx, besty
end
The solution for this example problem is:
julia> solveopt()
:
: lots of solver output...
:
(2.5, 3.9999999625176965)
Lastly, I benchmarked this crude method against a non-commercial solver (Pajarito) using the method from the other answer, and this one is about 2x faster (because of its simplicity, I suppose). Commercial solvers would beat both times.
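The partition idea can also be mirrored outside of any solver: with a quadratic distance objective and a single half-plane constraint, the optimum of each sub-problem is just the unconstrained minimizer clamped to the half-plane, so all four parts can be solved in closed form. The box bounds and objective center below are illustrative values, not the ones from the run above:

```python
# Minimize a squared distance over the complement of a box by solving
# one clamped sub-problem per half-plane and keeping the best result.
# Box (1, 2) x (3, 4) and center (1.5, 3.5) are illustrative values.
cx, cy = 1.5, 3.5                 # unconstrained minimizer, inside the box

def obj(x, y):
    return (x - cx) ** 2 + (y - cy) ** 2

# sub-problems covering the complement of the box: x <= 1, x >= 2, y <= 3, y >= 4
candidates = [
    (min(cx, 1.0), cy),
    (max(cx, 2.0), cy),
    (cx, min(cy, 3.0)),
    (cx, max(cy, 4.0)),
]
best = min(candidates, key=lambda p: obj(*p))
assert obj(*best) == 0.25         # every part is 0.5 away from the center here
```

Each half-plane on its own already excludes the open box, which is why solving the four sub-problems independently and taking the minimum is valid.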

How to use VariableRef as indices in Julia JuMP

In Julia when I do:
model = Model();
set_optimizer(model, Cbc.Optimizer);
N = 11;
@variable(model, X[1:N,1:N,1:N], Bin);
@variable(model, 1 <= K <= 10, Int);
for k in K
    @constraint(model, sum(X[1,j,k] for j = 1:N) == 1)
end
I get this error:
ArgumentError: invalid index: K of type VariableRef
Because I used a variable reference (K) as an index into a vector (X).
How could I fix that?
If you want K to interact with other variables, you need to make it a binary vector that sums to 1 and then use multiplication to model the interaction.
@variable(model, K[1:10], Bin);
@constraint(model, sum(K) == 1)
Now I am not exactly sure what you want to accomplish. If you want to turn equations on and off depending on the value of K, it would look like this:
@constraint(model, con[k in 1:10], sum(X[1,j,k] for j = 1:N)*K[k] == K[k])
This however makes the model non-linear and you would need to use a non-linear solver for it.
Depending on your use case, simply tying the sums to K[k] could be enough (this yields a model that is much easier for solvers, but it may or may not match your business needs):
@constraint(model, con[k in 1:10], sum(X[1,j,k] for j = 1:N) == K[k])
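To see why the one-hot trick works, here is a small plain-Python sketch (the values of K and the row sums are made up, not solver output): multiplying a constraint by K[k] makes it vacuous wherever K[k] == 0 and binding only for the selected k.

```python
# One-hot selector sketch: K is binary and sums to 1, so the constraint
# row_sums[k] * K[k] == K[k] is trivially 0 == 0 where K[k] == 0 and
# forces row_sums[k] == 1 only for the single selected k.
K = [0, 0, 1, 0]            # one-hot: equation k = 2 is switched on
assert sum(K) == 1

row_sums = [3, 0, 1, 5]     # hypothetical values of sum_j X[1, j, k]
check = [row_sums[k] * K[k] == K[k] for k in range(4)]
assert check == [True, True, True, True]   # only row_sums[2] actually had to be 1
```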

Fitting two curves with linear/non-linear regression

I need to fit two curves (which should both be cubic functions) to a set of points with JuMP.
I have managed to fit one curve, but I am struggling to fit two curves to the same dataset.
I thought that if I could distribute the points between the curves, so that each point is used by exactly one curve, I could do it as below, but it didn't work. (I know I could use much more complicated approaches; I want to keep it simple.)
This is a part of my current code:
# cubicFunc is a two-dimensional array accessed as cubicFunc[x, degree]
@variable(m, mult1[1:4]) # degrees 0:3 because it's cubic
@variable(m, mult2[1:4]) # degrees 0:3 because it's cubic
@variable(m, 0 <= includeIn1[1:numOfPoints] <= 1, Int)
@variable(m, 0 <= includeIn2[1:numOfPoints] <= 1, Int)
# some kind of hack to force one of them to 0 and the other to 1
@constraint(m, loop[i in 1:numOfPoints], includeIn1[i] + includeIn2[i] == 1)
@objective(m, Min, sum( (yPoints - cubicFunc*mult1).*includeIn1 .^2 ) + sum( (yPoints - cubicFunc*mult2).*includeIn2 .^2 ))
But it gives various errors depending on what I try: *includeIn1 and .*includeIn1 both fail, and going through @NLobjective gave me a whopping ~50 lines of errors, etc.
Is my idea realistic? Can I make it into the code?
Any help will be highly appreciated. Thank you very much.
You can write down the problem e.g. like this:
using JuMP, Ipopt
m = Model(with_optimizer(Ipopt.Optimizer))
@variable(m, mult1[1:4])
@variable(m, mult2[1:4])
@variable(m, 0 <= includeIn1[1:numOfPoints] <= 1)
@variable(m, 0 <= includeIn2[1:numOfPoints] <= 1)
@NLconstraint(m, loop[i in 1:numOfPoints], includeIn1[i] + includeIn2[i] == 1)
@NLobjective(m, Min, sum(includeIn1[i] * (yPoints[i] - sum(cubicFunc[i,j]*mult1[j] for j in 1:4))^2 for i in 1:numOfPoints) +
    sum(includeIn2[i] * (yPoints[i] - sum(cubicFunc[i,j]*mult2[j] for j in 1:4))^2 for i in 1:numOfPoints))
optimize!(m)
Given the constraints, includeIn1 and includeIn2 will be 1 or 0 at the optimum (if they are not, it means it does not matter to which group you assign the point), so we do not have to constrain them to be binary. I also use a non-linear solver, as the problem does not seem to be reformulable as a linear or quadratic optimization task.
However, I give the above code only as an example of how you can write it down. The task you have formulated does not have a unique local minimum (which would then be the global one) but several local minima. Therefore, the standard non-linear convex solvers that JuMP supports will only find one local optimum (not necessarily a global one). In order to look for global optima you need to switch to global solvers such as https://github.com/robertfeldt/BlackBoxOptim.jl.
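For intuition about what the binary assignment is doing, here is a brute-force plain-Python version on a toy example (straight lines instead of cubics, so the closed-form least-squares fit stays short; all data values are made up): enumerate every assignment of points to the two curves and keep the split with the smallest total squared error.

```python
from itertools import product

def fit_line(pts):
    # ordinary least squares for y = c0 + c1*x; returns (c0, c1, sse)
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    d = n * sxx - sx * sx
    if d == 0:                    # degenerate group (e.g. a single point)
        c0, c1 = sy / n, 0.0
    else:
        c1 = (n * sxy - sx * sy) / d
        c0 = (sy - c1 * sx) / n
    sse = sum((y - (c0 + c1 * x)) ** 2 for x, y in pts)
    return c0, c1, sse

# six noiseless points from two lines: y = x and y = -x + 4
pts = [(0, 0), (1, 1), (3, 3), (0, 4), (1, 3), (3, 1)]

best = None
for mask in product([0, 1], repeat=len(pts)):
    g1 = [p for p, m in zip(pts, mask) if m == 0]
    g2 = [p for p, m in zip(pts, mask) if m == 1]
    if not g1 or not g2:          # each curve must get at least one point
        continue
    total = fit_line(g1)[2] + fit_line(g2)[2]
    if best is None or total < best:
        best = total
assert best < 1e-9                # the right split fits both lines exactly
```

This enumerates 2^n assignments, so it only scales to small point sets, but it finds the global optimum that local non-linear solvers may miss.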

Derivatives of probability distributions w.r.t. parameters in R?

I need the (analytical) derivatives of the PDFs / log PDFs / CDFs of the most common probability distributions w.r.t. their parameters in R. Are such functions available anywhere?
The gamlss.dist package provides the derivatives of the log PDFs of many probability distributions (code for the normal distribution). Is there anything similar for PDFs/CDFs?
Edit: Admittedly, the derivatives of the PDFs can be obtained from the derivatives of the log PDFs by a simple application of the chain rule, but I don't think a similar thing is possible for the CDFs...
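To illustrate the chain-rule remark: for any density, d/dθ f = f · d/dθ log f, so the PDF derivative follows directly from the log-PDF derivative. A quick numerical confirmation for the normal density w.r.t. its mean (plain Python with illustrative values, not R):

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def dpdf_dmu(x, mu, sigma):
    # chain rule: d/dmu pdf = pdf * d/dmu log pdf,
    # and for the normal, d/dmu log pdf = (x - mu) / sigma^2
    return norm_pdf(x, mu, sigma) * (x - mu) / sigma ** 2

# compare against a central finite difference
x, mu, sigma, h = 1.3, 0.4, 0.8, 1e-6
fd = (norm_pdf(x, mu + h, sigma) - norm_pdf(x, mu - h, sigma)) / (2 * h)
assert abs(dpdf_dmu(x, mu, sigma) - fd) < 1e-6
```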
OP mentioned that calculating the derivatives once is OK, so I'll talk about that. I use Maxima but the same thing could be done with Sympy or other computer algebra systems, and it might even be possible in R; I didn't investigate.
In Maxima, probability distributions are in the distrib add-on package which you load via load(distrib). You can find documentation for all the cdf functions by entering ?? cdf_ at the interactive input prompt.
Maxima applies partial evaluation to functions -- if some variables don't have defined values, that's OK, the result has those variables undefined in it. So you can say diff(cdf_foo(x, a, b), a) to get a derivative wrt a for example, with free variables x, a, and b.
You can generate code via grind, which produces output suitable for Maxima, but other languages will understand the expressions.
There are several ways to do this stuff. Here's just a first attempt.
(%i1) load (distrib) $
(%i2) fundef (cdf_weibull);
(%o2) cdf_weibull(x, a, b) := if maybe((a > 0) and (b > 0)) = false
          then error("cdf_weibull: parameters a and b must be greater than 0")
          else (1 - exp(-(x/b)^a)) * unit_step(x)
(%i3) assume (a > 0, b > 0);
(%o3) [a > 0, b > 0]
(%i4) diff (cdf_weibull (x, a, b), a);
(%o4) -%e^-(x^a/b^a)*unit_step(x)*((log(b)*x^a)/b^a-(x^a*log(x))/b^a)
(%i5) grind (%);
-%e^-(x^a/b^a)*unit_step(x)*((log(b)*x^a)/b^a-(x^a*log(x))/b^a)$
(%o5) done
(%i6) diff (cdf_weibull (x, a, b), b);
(%o6) -a*b^((-a)-1)*x^a*%e^-(x^a/b^a)*unit_step(x)
(%i7) grind (%);
-a*b^((-a)-1)*x^a*%e^-(x^a/b^a)*unit_step(x)$
(%o7) done
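As a sanity check on the generated code, the two grind expressions (specialized to x > 0, where unit_step(x) = 1) can be compared against central finite differences of the Weibull CDF, e.g. in Python:

```python
import math

def weibull_cdf(x, a, b):
    # cdf_weibull for x > 0: 1 - exp(-(x/b)^a)
    return 1 - math.exp(-((x / b) ** a))

def dcdf_da(x, a, b):
    # transliteration of the grind output for diff(cdf_weibull(x,a,b), a)
    return -math.exp(-(x**a / b**a)) * ((math.log(b) * x**a) / b**a
                                        - (x**a * math.log(x)) / b**a)

def dcdf_db(x, a, b):
    # transliteration of the grind output for diff(cdf_weibull(x,a,b), b)
    return -a * b ** (-a - 1) * x**a * math.exp(-(x**a / b**a))

x, a, b, h = 1.7, 2.5, 1.2, 1e-6
fd_a = (weibull_cdf(x, a + h, b) - weibull_cdf(x, a - h, b)) / (2 * h)
fd_b = (weibull_cdf(x, a, b + h) - weibull_cdf(x, a, b - h)) / (2 * h)
assert abs(dcdf_da(x, a, b) - fd_a) < 1e-5
assert abs(dcdf_db(x, a, b) - fd_b) < 1e-5
```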

Mathematica DSolve diff. equation over a particular domain

I am looking for a way to solve the following differential equation:
DSolve[(1 - b*Abs[z])*f[z]/a == f''[z], f[z], z]
Therefore I tried to DSolve it while distinguishing z > 0 from z < 0, such as:
DSolve[(1 - b*z)*f[z]/a == f''[z], f[z], z > 0]
But it still does not work.
Maybe adding a domain explicitly would help, but I can't find a way to do so.
Does anyone have any idea how to do such things?
Thank you for your help and time.
You can pass your assumptions on to the solver with Refine:
Refine[DSolve[(1 - b*Abs[z])*f[z]/a == f''[z], f[z], z], z > 0]
gives
{{f[z] -> AiryAi[(1/a - (b z)/a)/(-(b/a))^(2/3)] C[1] + AiryBi[(1/a - (b z)/a)/(-(b/a))^(2/3)] C[2]}}
