In Sage's solve function, is there a way to express a disjoint conditional?

Take the following combination of constraints, for example:
var('x y', domain='real')
constraint1 = x^2 + y^2 == 1
constraint2 = x==y OR x==-y
solve([constraint1, constraint2],x,y)
This of course does not work, because OR is invalid syntax. Is it possible to mark disjoint constraints as input to Sage's solve function?
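One workaround, sketched here (please verify against your Sage version): the solution set of "x==y or x==-y" is exactly the zero set of the product (x - y)*(x + y), so the disjunction can be folded into a single polynomial equation; alternatively, solve each branch separately and concatenate the resulting solution lists.
var('x y', domain='real')
constraint1 = x^2 + y^2 == 1
# Fold the disjunction into one equation whose zero set is the union:
constraint2 = (x - y)*(x + y) == 0
solve([constraint1, constraint2], x, y)
# Or treat each branch of the OR as its own system and combine:
solve([constraint1, x == y], x, y) + solve([constraint1, x == -y], x, y)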

Related

Optimization in R - Efficient Computation of Objective and Gradient

I need to optimize a set of variables with respect to an objective function. I have the analytical gradient of the function, and would like to use it in the optimization routine. The objective and gradient have some common computations, and I would like to define the functions in the most efficient way possible. The below example demonstrates the issue.
Let f_obj, f_grad and f_common be functions for the objective, gradient and common computations, respectively. The optimization is over the vector x. The below code finds a root of the polynomial y^3 - 3*y^2 + 6*y + 1, where y is a function of c(x[1], x[2]). Note that the function f_common is called in both f_obj and f_grad. In my actual problem the common computation is much longer, so I'm looking for a way to define f_obj and f_grad so that the number of calls to f_common is minimized.
f_common <- function(x) x[1]^3*x[2]^3 - x[2]
f_obj <- function(x) {
  y <- f_common(x)
  return( (y^3 - 3*y^2 + 6*y + 1)^2 )
}
f_grad <- function(x) {
  y <- f_common(x)
  return( 2 * (y^3 - 3*y^2 + 6*y + 1) * (3*y^2 - 6*y + 6) * c(3*x[1]^2*x[2]^3, 3*x[1]^3*x[2]^2 - 1) )
}
optim(par = c(100,100), fn = f_obj, gr = f_grad, method = "BFGS")
UPDATE
I have found that the package nloptr allows the objective function and its gradient to be supplied as a list. Is there a way to use other optimizers (optim, optimx, nlminb, etc.) in a similar manner?
Thanks.
Store the value of the common function in a global variable so that it is available to the subsequent function call, as in the following:
f_common <- function(x) x[1]^3*x[2]^3 - x[2]
f_obj <- function(x) {
  y <<- f_common(x)  # <<- assigns in the function's enclosing environment
  return( (y^3 - 3*y^2 + 6*y + 1)^2 )
}
f_grad <- function(x) {
  return( 2 * (y^3 - 3*y^2 + 6*y + 1) * (3*y^2 - 6*y + 6) * c(3*x[1]^2*x[2]^3, 3*x[1]^3*x[2]^2 - 1) )
}
y <<- 0
optim(par = c(100,100), fn = f_obj, gr = f_grad, method = "BFGS")
A couple of notes are worth adding about this solution.
1) Firstly, using the <<- operator does not, strictly speaking, assign to a global variable: following R's lexical scoping, it walks outward through the function's enclosing environments (the environments where the function was defined, not where it was called) and assigns at the first existing binding it finds, defaulting to the global environment. For a function defined at the top level, as here, that is the global environment, so the approach works as intended. It is also possible to assign to the global environment explicitly with assign() and envir = globalenv(), but there is no need for that here.
2) It should also be noted that global variables are normally discouraged, because they can cause unexpected side effects if the same name is used elsewhere. To minimize that risk, choose a distinctive name such as global.f_common that will never collide with anything else; the name y is used in the example only to match the nomenclature of the original question. This is one of the rare occasions where giving a variable scope outside its function may be justified, because the desired behaviour is difficult to achieve another way.
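If writing outside the function's scope is still a concern, a closure can hold the cache instead. The following is a minimal sketch (make_funs is an illustrative name, not from any package): the objective and gradient share a cached f_common value stored in the closure's environment, so the pair can be passed unchanged to optim, optimx, or nlminb.
make_funs <- function() {
  last_x <- NULL
  last_y <- NULL
  common <- function(x) {
    if (is.null(last_x) || !identical(x, last_x)) {
      last_x <<- x                      # <<- writes to make_funs' environment
      last_y <<- x[1]^3*x[2]^3 - x[2]   # the (expensive) shared computation
    }
    last_y
  }
  obj <- function(x) {
    y <- common(x)
    (y^3 - 3*y^2 + 6*y + 1)^2
  }
  grad <- function(x) {
    y <- common(x)
    2 * (y^3 - 3*y^2 + 6*y + 1) * (3*y^2 - 6*y + 6) *
      c(3*x[1]^2*x[2]^3, 3*x[1]^3*x[2]^2 - 1)
  }
  list(obj = obj, grad = grad)
}
fns <- make_funs()
optim(par = c(100, 100), fn = fns$obj, gr = fns$grad, method = "BFGS")
Because the cache lives in the closure rather than the global environment, no name can collide with user code, and f_common runs only once per distinct parameter vector.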

Prolog Recursion (Factorial of a Power Function)

I am having some trouble with my CS assignment. I am trying to call a rule I created previously from within a new rule that computes the factorial of a power function (e.g., Y = (N^X)!). I think the problem with my code is that Y in exp(Y,X,N) is not carried over when I call factorial(Y,Z), but I am not entirely sure. I have been trying to find an example of this, but I haven't been able to find anything.
I am not expecting an answer since this is homework, but any help would be greatly appreciated.
Here is my code:
/* 1.2: Write recursive rules exp(Y, X, N) to compute mathematical function Y = X^N, where Y is used
to hold the result, X and N are non-negative integers, and X and N cannot be 0 at the same time
as 0^0 is undefined. The program must print an error message if X = N = 0.
*/
exp(_,0,0) :-
    write('0^0 is undefined').
exp(1,_,0).
exp(Y,X,N) :-
    N > 0, !, N1 is N - 1, exp(Y1, X, N1), Y is X * Y1.
/* 1.3: Write recursive rules factorial(Y,X,N) to compute Y = (X^N)! This function can be described as the
   factorial of exp. The rules must use the exp that you designed.
*/
factorial(0,X) :-
    X is 1.
factorial(N,X) :-
    N > 0, N1 is N - 1, factorial(N1,X1), X is X1 * N.
factorial(Y,X,N) :-
    exp(Y,X,N), factorial(Y,Z).
The Z variable in factorial/3 is the problem: it is mentioned only once (a so-called 'singleton variable'), so whatever factorial/2 binds it to is simply thrown away. As noted in the comments under the question, short-circuiting it to _ won't work either; you have to unify it with a sensible value. Think about what you actually want to compute: connect the head of the clause to exp and factorial through their parameters by introducing an intermediate variable that is not mentioned in the head.
Edit: I'll rename your variables for you; maybe then you'll see more clearly what you did:
factorial(Y,X,Result) :-
    exp(Y,X,Result), factorial(Y,UnusedResult).
Now you should see what your factorial/3 really computes, and how to fix it.
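For reference, here is a sketch of the structure the hint points at (the intermediate variable P is mine): compute the power into an intermediate variable first, then hand that value to factorial/2.
factorial(Y,X,N) :-
    exp(P, X, N),       % P = X^N, computed by exp/3
    factorial(P, Y).    % Y = P!, computed by factorial/2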

Recursive implicit function

I'm trying to solve a system of recursive equations. For this, the value V(m) is defined to be the solution to an equation like V(m)==g(V(m)), where g is a known function. How can I define a function in that way?
To be specific, consider the following system, which in Julia I have written as:
T=4;
beta=.95;
alpha=.3;
gamma=.3;
zeta=.3;
V(m) = m<T ? alpha+beta*V(m+1)+beta*(Z(m+1,1)+Z(m+1,2)+Z(m+1,3)+Z(m+1,4)) : 0.
U(f) = f<T ? gamma+beta*U(f+1)+beta*(Z(1,f+1)+Z(2,f+1)+Z(3,f+1)+Z(4,f+1)) : 0.
Z(m,f) = m<T && f<T ? zeta+beta*(Z(m+1,f+1)+V(m+1)+U(f+1)) : 0.
Notice that when computing V(2), for example, the right-hand side ultimately contains V(2) again (it enters through the U and Z terms), so V(2) is the solution of that equation rather than something a straightforward recursion can evaluate. How can I compute the values of V, U and Z in a setting like this?
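One way to actually compute the values, sketched below (solve_system and the index helpers iV/iU/iZ are names I introduce, and I assume the four hard-coded Z terms generalize to f = 1..T): only the finitely many values V(1..T), U(1..T) and Z(1..T,1..T) appear, and every equation is linear in them, so they can be stacked into one vector and solved as a single linear system, sidestepping both the infinite recursion and any fixed-point convergence question.
using LinearAlgebra

function solve_system(; T = 4, beta = 0.95, alpha = 0.3, gamma = 0.3, zeta = 0.3)
    iV(m)    = m                      # position of V(m) in the unknown vector
    iU(f)    = T + f                  # position of U(f)
    iZ(m, f) = 2T + (m - 1)*T + f     # position of Z(m,f)
    n = 2T + T^2
    A = Matrix{Float64}(I, n, n)      # untouched rows pin boundary values to 0
    c = zeros(n)
    for m in 1:T-1                    # V(m) = alpha + beta*V(m+1) + beta*sum_f Z(m+1,f)
        c[iV(m)] = alpha
        A[iV(m), iV(m+1)] -= beta
        for f in 1:T
            A[iV(m), iZ(m+1, f)] -= beta
        end
    end
    for f in 1:T-1                    # U(f) = gamma + beta*U(f+1) + beta*sum_m Z(m,f+1)
        c[iU(f)] = gamma
        A[iU(f), iU(f+1)] -= beta
        for m in 1:T
            A[iU(f), iZ(m, f+1)] -= beta
        end
    end
    for m in 1:T-1, f in 1:T-1        # Z(m,f) = zeta + beta*(Z(m+1,f+1) + V(m+1) + U(f+1))
        c[iZ(m, f)] = zeta
        A[iZ(m, f), iZ(m+1, f+1)] -= beta
        A[iZ(m, f), iV(m+1)] -= beta
        A[iZ(m, f), iU(f+1)] -= beta
    end
    x = A \ c
    return x[1:T], x[T+1:2T], permutedims(reshape(x[2T+1:end], T, T))
end

V, U, Z = solve_system()   # V[m], U[f], Z[m, f] now hold the solution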

How can extreme values of a functional be found using R?

I have a functional like this :
$v[y]=\int_0^2 \left(y'^2 + 23yy' + 12y^2 + 3ye^{2t}\right)\,dt$
with given start and end conditions y(0)=-1, y(2)=18.
How can I find extreme values of this functional in R? I realize how it can be done for example in Excel but didn't find appropriate solution in R.
Before trying to solve such a task in a numerical setting, it might be better to lean back and think about it for a moment.
This is a problem typically treated in the mathematical discipline of "variational calculus". A necessary condition for a function y(t) to be an extremum of the functional (i.e., of the integral) is the so-called Euler-Lagrange equation; see Calculus of Variations at Wolfram MathWorld.
Applying it to f(t, y, y') as the integrand in your request, I get (please check, I can easily have made a mistake)
y'' - 12*y - 3/2*exp(2*t) = 0
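For the record, the derivation: with the integrand F = y'^2 + 23*y*y' + 12*y^2 + 3*y*exp(2*t), the partial derivatives are dF/dy = 23*y' + 24*y + 3*exp(2*t) and dF/dy' = 2*y' + 23*y, so d/dt(dF/dy') = 2*y'' + 23*y'. The Euler-Lagrange equation d/dt(dF/dy') - dF/dy = 0 then reads 2*y'' + 23*y' - 23*y' - 24*y - 3*exp(2*t) = 0; the cross term 23*y*y' drops out entirely, and dividing by 2 gives the equation above.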
You can go now and find a symbolic solution for this differential equation (with the help of a textbook, or some CAS), or solve it numerically with the help of an R package such as 'deSolve'.
PS: Solving this as an optimization problem based on discretization is possible, but may lead you on a long and stony road. I remember solving the "brachistochrone problem" to a satisfactory accuracy only by applying several hundred variables (not in R).
Here is a numerical solution in R. First the functional:
f <- function(y, t = head(seq(0, 2, len = length(y)), -1)) {
  len <- length(y) - 1
  dy <- diff(y) * len / 2             # finite-difference approximation of y'
  y0 <- (head(y, -1) + y[-1]) / 2     # midpoint values of y on each interval
  2 * sum(dy^2 + 23*y0*dy + 12*y0^2 + 3*y0*exp(2*t)) / len
}
Now the function that does the actual optimization. The best results I got were using the BFGS optimization method, and parametrizing using dy rather than y:
findMinY <- function(points = 100,           ## number of points of evaluation
                     boundary = c(-1, 18),   ## boundary values
                     y0 = NULL,              ## optional initial value
                     method = "Nelder-Mead", ## optimization method
                     dff = TRUE)             ## if TRUE, optimizes based on dy rather than y
{
  t <- head(seq(0, 2, len = points), -1)
  if (is.null(y0) || length(y0) != points)
    y0 <- seq(boundary[1], boundary[2], len = points)
  if (dff)
    y0 <- diff(y0)
  else
    y0 <- y0[-1]
  y0 <- head(y0, -1)
  ff <- function(z) {
    if (dff)
      y <- c(cumsum(c(boundary[1], z)), boundary[2])
    else
      y <- c(boundary[1], z, boundary[2])
    f(y, t)
  }
  res <- optim(y0, ff, control = list(maxit = 1e9), method = method)
  cat("Iterations:", res$counts, "\n")
  ymin <- res$par
  if (dff)
    c(cumsum(c(boundary[1], ymin)), boundary[2])
  else
    c(boundary[1], ymin, boundary[2])
}
With 500 points of evaluation, it only takes a few seconds with BFGS:
> system.time(yy <- findMinY(500, method = "BFGS"))
Iterations: 90 18
   user  system elapsed
  2.696   0.000   2.703
The resulting function looks like this:
plot(seq(0,2,len=length(yy)),yy,type='l')
And now a solution that numerically integrates the Euler equation.
As @HansWerner pointed out, this problem boils down to applying the Euler-Lagrange equation to the integrand in the OP's question, and then solving the resulting differential equation, either analytically or numerically. In this case the relevant ODE is
y'' - 12*y = 3/2*exp(2*t)
subject to:
y(0) = -1
y(2) = 18
So this is a boundary value problem, best approached using bvpcol(...) in package bvpSolve.
library(bvpSolve)
F <- function(t, y.in, pars) {
  dy  <- y.in[2]                     # y'  (first derivative)
  d2y <- 12*y.in[1] + 1.5*exp(2*t)   # y'' from the Euler-Lagrange ODE
  return(list(c(dy, d2y)))
}
init <- c(-1, NA)   # y(0) = -1, y'(0) unknown
end  <- c(18, NA)   # y(2) = 18, y'(2) unknown
t <- seq(0, 2, by = 0.01)
sol <- bvpcol(yini = init, yend = end, x = t, func = F)
y <- function(t) {   # analytic solution...
  b  <- sqrt(12)
  a  <- 1.5/(4 - b*b)
  u  <- exp(2*b)
  C1 <- ((18*u + 1) - a*(exp(4)*u - 1))/(u*u - 1)
  C2 <- -1 - a - C1
  return(a*exp(2*t) + C1*exp(b*t) + C2*exp(-b*t))
}
par(mfrow=c(1,2))
plot(t,y(t), type="l", xlim=c(0,2),ylim=c(-1,18), col="red", main="Analytical Solution")
plot(sol[,1],sol[,2], type="l", xlim=c(0,2),ylim=c(-1,18), xlab="t", ylab="y(t)", main="Numerical Solution")
It turns out that in this very simple example, there is an analytical solution:
y(t) = a * exp(2*t) + C1 * exp(sqrt(12)*t) + C2 * exp(-sqrt(12)*t)
where a = -3/16, and C1 and C2 are determined to satisfy the boundary conditions. As the plots show, the numerical and analytic solutions agree completely, and they also agree with the solution provided by @mrip.

Recursive function analysis

I'm trying to analyze the performance of a recursive program I wrote.
The basic code is
Cost(x)
{
    1 + MIN(Cost(x-1), Cost(x-2), Cost(x-3))
}
I want to write a recurrence relation for the number of calls made to Cost(). How would I start this?
Something like T(x) = T(x/2)? But I don't think that's right.
Edit: I can represent this as a tree with a branching factor of 3, one branch for each of the 3 recursive calls to Cost(). So would it more accurately be T(x) = T(x/3)?
The number of calls made to Cost() would be:
C(x) = 1 + C(x-1) + C(x-2) + C(x-3)
So, for an input x, Cost() was called once plus the amount of times it was called for x-1, x-2, and x-3. This is assuming that your solution does not use memoization. The recurrence relation is not pretty: http://www.wolframalpha.com/input/?i=T(x)+%3D+1+%2B+T(x-1)+%2B+T(x-2)+%2B+T(x-3)
Using memoization, however, your "number of calls" becomes C(x) = x because you only need to evaluate C(i) once for all i between 0 and x. (Might be C(x) = x + 1, depending on your initial conditions)
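To make the memoization point concrete, here is a minimal sketch in Python (the base case is mine; the question does not specify one):
from functools import lru_cache

@lru_cache(maxsize=None)
def cost(x):
    if x <= 0:          # hypothetical base case; the question leaves it unspecified
        return 0
    return 1 + min(cost(x - 1), cost(x - 2), cost(x - 3))
Each distinct argument between 0 and x is evaluated exactly once (repeat calls are served from the cache), so the number of evaluations that do real work grows linearly in x, matching the C(x) = x claim above.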
You really did write a recursive solution? (I hope it is an assignment, to compare against linear iteration.)
I think
T(X) = T(X-1) + T(X-2) + T(X-3) + C
where C is a small constant.
