Solving for a matrix in SymPy

I want to solve Ka = F for the unknowns. I do this by defining a new matrix M = Ka - F and solving for where it is zero. This also works very well for a 2x2 matrix. Here I have only two unknowns, so it should be trivial, but SymPy won't solve it for some reason and I can't really figure out what the problem is. This is the code:
from sympy import *

# coefficient matrix
K = Matrix([[1/2, 0, 0, -1/2],
            [0, 1, 0, -1],
            [0, 0, 1/2, -1/2],
            [-1/2, -1, -1/2, 2]])
# two unknowns; the other entries of T are known
T1, T4 = symbols("T_1,T_4")
T = Matrix([[T1], [20], [20], [T4]])
# right-hand side
F = Matrix([[29.1666], [16.666], [12.5], [41.6666]])
# solve K*T - F = 0
M = K*T - F
solve(M)
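For what it's worth, here is a minimal diagnostic sketch (my addition, not from the original post), assuming the goal is to solve for T_1 and T_4: pass the unknowns to solve explicitly. An empty result then means the four equations in two unknowns have no common solution, which appears to be the case here: row 2 forces T_4 = 20 - 16.666 = 3.334, while row 3 forces T_4 = 20 - 2*12.5 = -5.

from sympy import Matrix, symbols, solve

T1, T4 = symbols("T_1 T_4")
K = Matrix([[1/2, 0, 0, -1/2],
            [0, 1, 0, -1],
            [0, 0, 1/2, -1/2],
            [-1/2, -1, -1/2, 2]])
T = Matrix([T1, 20, 20, T4])
F = Matrix([29.1666, 16.666, 12.5, 41.6666])
M = K*T - F

# Solving for the unknowns explicitly; an empty list means the
# overdetermined system is inconsistent, not that SymPy failed.
print(solve(M, [T1, T4]))  # -> []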

Related

Can Gekko solve vector-based dynamic optimization problems for optimal control?

I have tried many solvers but I am getting errors somewhere, so now I am going to try Gekko for my problem. Please let me know whether Gekko can handle this kind of problem, where M in the Python function takes the variable q, and all variables and parameters are in the form of vectors or matrices. q should be a function of time, and M, c, sai, and the other matrices depend on q and u. Thank you.
Here is a similar problem with an inverted pendulum. There is also a related Stack Overflow question: how to use arrays in gekko optimizer for python.

Gekko can solve path-planning problems that are subject to actuator constraints and equations of motion. One of the challenging mathematical features of this problem is how to smoothly pose the path constraints so that the robot does not pause at each intermediate path point. A potential method is to create a cubic spline of the path that helps to define the desired location at each time point. Here is one problem with matrices:
from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)
ni = 3; nj = 2; nk = 4
# solve AX=B
A = m.Array(m.Var, (ni, nj), lb=0)
X = m.Array(m.Var, (nj, nk), lb=0)
AX = np.dot(A, X)
B = m.Array(m.Var, (ni, nk), lb=0)
# equality constraints
m.Equations([AX[i, j] == B[i, j]
             for i in range(ni)
             for j in range(nk)])
m.Equation(5 == m.sum([m.sum([A[i][j] for i in range(ni)])
                       for j in range(nj)]))
m.Equation(2 == m.sum([m.sum([X[i][j] for i in range(nj)])
                       for j in range(nk)]))
# objective function
m.Minimize(m.sum([m.sum([B[i][j] for i in range(ni)])
                  for j in range(nk)]))
m.solve()
print(A)
print(X)
print(B)
Here is another test script that demonstrates dot products and the trace function with NumPy:
import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)
# Random 3x3
A = np.random.rand(3, 3)
# Random 3x1
b = np.random.rand(3, 1)
# Gekko array 3x3
p = m.Array(m.Param, (3, 3))
# Gekko array 3x1
y = m.Array(m.Var, (3, 1))
# Dot product of A p
x = np.dot(A, p)  # or A@p
# Dot product of x y
w = x@y
# Dot product of p y
z = p@y  # or np.dot(p,y)
# Trace (sum of diag) of p
t = np.trace(p)
# solve Ax = b
s = m.axb(A, b)
m.solve()

Second order ODE in Julia giving wrong results

I am trying to use DifferentialEquations.jl in Julia, and it works all right until I try to use it on a second-order ODE.
Consider, for instance, the second-order ODE
x''(t) = x'(t) + 2*x(t), with initial conditions
x'(0) = 0, x(0) = 1,
which has the analytic solution x(t) = (2/3) exp(-t) + (1/3) exp(2t).
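(For reference, the standard derivation, which is not in the original post: substituting x = e^(rt) gives the characteristic equation r^2 - r - 2 = 0 with roots r = 2 and r = -1, so x(t) = A exp(2t) + B exp(-t); the conditions x(0) = 1 and x'(0) = 0 then give A = 1/3 and B = 2/3.)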
To solve it numerically, I run the following code:
using DifferentialEquations

function f_simple(ddu, du, u, p, t)
    ddu[1] = du[1] + 2*u[1]
end

du0 = [0.]
u0 = [1.]
tspan = (0.0, 5.0)
prob2 = SecondOrderODEProblem(f_simple, du0, u0, tspan)
sol = solve(prob2, reltol=1e-8, abstol=1e-8)
With that,
sol(3)[2] = 122.57014434362732
whereas the analytic solution yields 134.50945587649028, and so I'm a bit lost here.
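(As a quick check of the quoted target, not in the original post: x(3) = (2/3) exp(-3) + (1/3) exp(6) ≈ 0.0332 + 134.4763 = 134.5095, so the analytic value is right and the default solve really is off.)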
According to the documentation for DifferentialEquations.jl, Vern7() is appropriate for high-accuracy solutions to non-stiff equations:
sol = solve(prob2, Vern7(), reltol=1e-8, abstol=1e-8)
julia> println(sol(3)[2])
134.5094558872943
On my machine, this matches the analytical solution quite closely. I'm not exactly sure which method is used by default: the documentation indicates that solve chooses an appropriate solver when one isn't specified.
For more information on Vern7(), check out Jim Verner's page on Runge-Kutta algorithms.

TypeError: can't convert complex to float (Python3, solving for algebraic equations)

I have a simple formula that I want to solve. Assuming all variables other than x are known, I am trying to solve for x with the following formula: x = [(c-a)/a]^(1/b)
The initial equation was a * x^b - a = c, and that was my way of solving for x.
Below is a snippet of my code.
a = 5000
b = 5
c = 562
x = ((c-a)/a)**(1/b)
But for some reason Python cannot handle it and raises the TypeError from the title. Any suggestions?
I think the correct formula follows from rearranging a * x^b - a = c into x^b = (c + a)/a, which gives:
x = ((c+a)/a)**(1/b)
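As a quick numerical check (a sketch of mine, not part of the original answer): with the posted values, (c - a)/a is negative, and in Python 3 a negative float raised to the fractional power 1/5 evaluates to a complex number, which is what later raises the TypeError when something tries to convert the result to a float. The corrected formula keeps the base positive:

a = 5000
b = 5
c = 562

# Original formula: (c - a)/a = -0.8876, and a negative base raised
# to a fractional power yields a complex number in Python 3.
x_bad = ((c - a) / a) ** (1 / b)
print(x_bad)  # roughly (0.79+0.574j): complex, not a float

# Corrected formula from a*x**b - a = c  =>  x**b = (c + a)/a:
x = ((c + a) / a) ** (1 / b)
print(x)  # roughly 1.0215, a real float since (c + a)/a > 0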

Nonlinear equation involving summations in R

I am having the hardest time trying to implement this equation in a nonlinear solver in R. I am trying both the nleqslv and BB packages, but so far I am getting nothing but errors. I have searched and read documentation until my eyes bled, but I cannot wrap my brain around it. The equation itself works like this:
The Equation
s2 * sum(price^(2*x+2)) - s2.bar * sum(price^(2*x)) = 0
Where s2, s2.bar and price are known vectors of equal length.
The last attempt I tried in BB was this:
gamma = function(x){
  n = length(x)
  f = numeric(n)
  f[n] = s2*sum(price^(2*x[n]+2)) - s2.bar*sum(price^(2*x[n]))
  f
}
g0 = rnorm(length(price))
results = BBsolve(par=g0, fn=gamma)
From your description of the various parts used in the function, you seem to have muddled up the formula. Your function gamma should most probably be written as:
gamma <- function(x){
  f <- s2*sum(price^(2*x+2)) - s2.bar*sum(price^(2*x))
  f
}
From your description, s2, price, and s2.bar are vectors, so the formula you gave will return a vector. Since you have not given any data, we cannot test. I have tried testing with randomly generated values for s2, price, and s2.bar; sometimes one gets a solution with both nleqslv and BB, but not always.
In the case of package nleqslv, the default method will not always work. Since the package provides several different methods, you should use the function testnslv from the package to see whether any of them finds a solution.

Solving quadratic programming using R

I would like to solve the following quadratic programming problem using the ipop function from kernlab:
min 0.5*x'*H*x + f'*x
subject to: A*x <= b
Aeq*x = beq
LB <= x <= UB
where in our example H is a 3x3 matrix, f is 3x1, A is 2x3, b is 2x1, and LB and UB are both 3x1.
edit 1
My R code is :
library(kernlab)
H <- rbind(c(1,0,0),c(0,1,0),c(0,0,1))
f = rbind(0,0,0)
A = rbind(c(1,1,1), c(-1,-1,-1))
b = rbind(4.26, -1.73)
LB = rbind(0,0,0)
UB = rbind(100,100,100)
> ipop(f,H,A,b,LB,UB,0)
Error in crossprod(r, q) : non-conformable arguments
I know from Matlab that it is something like this:
H = eye(3);
f = [0,0,0];
nsamples=3;
eps = (sqrt(nsamples)-1)/sqrt(nsamples);
A=ones(1,nsamples);
A(2,:)=-ones(1,nsamples);
b=[nsamples*(eps+1); nsamples*(eps-1)];
Aeq = [];
beq = [];
LB = zeros(nsamples,1);
UB = ones(nsamples,1).*1000;
[beta,FVAL,EXITFLAG] = quadprog(H,f,A,b,Aeq,beq,LB,UB);
and the answer is a 3x1 vector equal to [0.57, 0.57, 0.57].
However, when I try it in R using the ipop function from the kernlab library, ipop(f,H,A,b,LB,UB,0), I get Error in crossprod(r, q) : non-conformable arguments.
I appreciate any comments.
The original question asks about the error message Error in crossprod(r, q) : non-conformable arguments. The answer is that r must be specified with the same dimensions as b. So if b is 2x1 then r must also be 2x1.
A secondary question (from the comments) asks why the system presented in the original question works in Matlab but not in R. The answer is that R and Matlab specify the problem differently. Matlab allows inequality constraints to be entered separately from the equality constraints; in R (at least within the kernlab function ipop), the constraints must all be of the form b <= Ax <= b + r.

So how may we mimic the original inequality constraints? The simple way is to make b very negative and to set r' = -b + r, where r' is your new r vector. We still have the same upper bound on the constraints, because r' + b = -b + r + b = r. However, we have also put a lower bound on the constraints. My suggestion is to try solving the system with a few different values for b to see whether the solution is consistent.
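As a concrete illustration (the numbers here are mine, not from the original answer): to impose Ax <= (4.26, -1.73) in ipop's two-sided form, one could take b = (-1000, -1000) and r = (1004.26, 998.27), so that b + r = (4.26, -1.73) reproduces the original upper bounds while the artificial lower bounds of -1000 are far from binding.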
EDIT:
This is probably a better way to handle solving the program:
library(quadprog);
dvec <- -f;
Dmat <- H;
Amat <- -t(A);
bvec <- -rbind(4.26,-1.73);
solve.QP(Dmat, dvec, Amat, bvec)
where these definitions depend on the previously defined R code. The solution should come out near (0.577, 0.577, 0.577), matching the Matlab quadprog result above.
