Octave (or MATLAB) / operator - vector

I wonder what the / operator does in Octave. (I am not sure whether it works the same way in MATLAB.)
V = [1; 2; 3]
then
1 / V = [0.071429 0.142857 0.214286]
I know that ./ operator does element-wise division of vectors or matrices.
Then what does / operator do?

This behavior is described in the documentation:
x / y
Right division. This is conceptually equivalent to the expression
(inverse (y') * x')'
but it is computed without forming the inverse of y'.
If the system is not square, or if the coefficient matrix is singular, a minimum norm solution is computed.
MATLAB has exactly the same behavior, see its documentation.
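The minimum-norm behavior is easy to cross-check outside Octave. Here is a minimal sketch in Python/NumPy (not Octave code, just an illustration): `x / y` is `(y' \ x')'`, so `1 / V` solves `V' * z' = 1` for the minimum-norm `z'`, which is exactly what `lstsq` returns for an underdetermined system.

```python
import numpy as np

V = np.array([[1.0], [2.0], [3.0]])  # 3x1 column vector, as in the question

# Octave's x / y is (y' \ x')', so 1 / V solves V' * z' = 1 for the
# minimum-norm z'. np.linalg.lstsq returns that minimum-norm solution.
z, *_ = np.linalg.lstsq(V.T, np.array([1.0]), rcond=None)

print(z)  # approximately [0.071429 0.142857 0.214286], i.e. V' / (V'*V)
```

This matches the closed form V'/(V'V) = [1 2 3]/14.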

Rewriting expression using vector notation

First time user, sorry if this is the wrong forum.
I am looking for a way to express the following in pure linear algebra vector notation, i.e. remove the element-wise operations.
I am hoping that would make deriving the gradient and Hessian easier.
In MATLAB:
sum((W'*p - r).^2 .* m)
where W is a matrix, p, r and m are vectors.
In R:
sum((t(W) %*% p - r)^2 * m)
Thanks
The element-wise squaring makes the vector notation a bit more obscure in terms of everyday linear algebra operators.
The Hadamard product is the mathematical term for element-wise multiplication and will be denoted as ○.
Using the Hadamard product, we can write the expression as
m' * ((W'p - r) ○ (W'p - r))
In brackets, we have the element-wise multiplication, and then we left-multiply with m's transpose to take the sum (the vector form of the dot product).
We could also use the trace (denoted here as "tr") and diag operations to get an equivalent expression:
tr(diag(diag(W'p - r)^2 * m))
Here, we use diag to create a square matrix, square that matrix (which is a valid operation), perform a matrix-vector multiplication, diag the resulting vector, and take its trace. This one looks like code, but I think that's just because code is trying to look like it ;).
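A quick numerical sanity check that the element-wise form, the Hadamard form, and the trace/diag form agree (a sketch in Python/NumPy with random data; W, p, r, m are as in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
p = rng.normal(size=4)
r = rng.normal(size=3)
m = rng.normal(size=3)

e = W.T @ p - r                        # W'p - r

original = np.sum(e**2 * m)            # sum((W'*p - r).^2 .* m)
hadamard = m @ (e * e)                 # m' ((W'p - r) Hadamard (W'p - r))
# diag(W'p - r), squared, times m gives a vector; diag it and take the trace
trace_form = np.trace(np.diag(np.diag(e) @ np.diag(e) @ m))

assert np.isclose(original, hadamard)
assert np.isclose(original, trace_form)
```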

Reformulating a quadratic program suitable for R

My problem is one which should be quite common in statistical inference:
min{(P - k)'S(P - k)} subject to k >= 0
So my choice variable is k, a 3x1 vector. The 3x1 vector P and 3x3 matrix S are known. Is it possible to reformulate this problem so I can use R's solve.QP quadratic programming solver? This solver requires the problem to be in the form
min{-d'b + 0.5 b' D b} subject to A'b >= b_0.
So here the choice vector is b. Is there a way I can make my problem fit into solve.QP? Thanks so much for any help.
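The mapping follows from expanding the objective: (P - k)'S(P - k) = k'Sk - 2P'Sk + P'SP, and the constant P'SP does not affect the minimizer. Matching min{-d'b + 0.5 b'Db} with b = k gives D = 2S and d = 2SP (solve.QP requires D symmetric, and positive definite, so S is assumed symmetric here), with A' = I and b_0 = 0 for the k >= 0 constraint. A sketch verifying the identity in Python/NumPy (used here instead of R, just to check the algebra):

```python
import numpy as np

rng = np.random.default_rng(1)
S0 = rng.normal(size=(3, 3))
S = S0 + S0.T          # assume S symmetric, as solve.QP's D must be
P = rng.normal(size=3)

D = 2 * S              # quadratic term: 0.5 * k' D k = k' S k
d = 2 * S @ P          # linear term:   -d' k        = -2 P' S k

# For any k, the two objectives differ only by the constant P' S P
for _ in range(5):
    k = rng.normal(size=3)
    lhs = (P - k) @ S @ (P - k)
    rhs = -d @ k + 0.5 * k @ D @ k + P @ S @ P
    assert np.isclose(lhs, rhs)
```

Note that solve.QP additionally requires D to be positive definite, which here means S must be.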

Creating function from jacobian in Maxima

I want to use Maxima to do linear stability analysis as function of r:
f(x):=rx + x^3 - x^5
A:solve(f(x)=0,x)
J:jacobian([f(x)],[x])
Now for each element in A, I want to check the sign of J as a function of r. In general, I want a function of r that tells me whether there exists any eigenvalue of J with a positive real part.
Maybe you know this already, but: multiplication in Maxima is indicated by an asterisk. So you have to write:
f(x):=r*x + x^3 - x^5;
I don't see any problem with your approach so far. The Jacobian is a 1 by 1 matrix so it is trivial to compute the eigenvalue. Then substitute values of x into that, and look at the real part (function realpart).
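The same analysis can be sketched in Python with SymPy (used here in place of Maxima as a cross-check; r and x are symbolic):

```python
import sympy as sp

r, x = sp.symbols('r x', real=True)
f = r*x + x**3 - x**5

roots = sp.solve(sp.Eq(f, 0), x)   # fixed points, in terms of r
J = sp.diff(f, x)                  # the 1x1 Jacobian: r + 3x^2 - 5x^4

# The eigenvalue of a 1x1 Jacobian is just its entry; at x = 0 it equals r,
# so that equilibrium has an eigenvalue with positive real part iff r > 0.
eigen_at_origin = J.subs(x, 0)
print(eigen_at_origin)  # r
```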

Get branch points of equation

Suppose I have a general function f(z,a), where z and a are both real, and f takes on real values for all z except in some interval (z1,z2), where it becomes complex. How do I determine z1 and z2 (which will be in terms of a) using Mathematica (or is this possible)? What are the limitations?
For a test example, consider the function f[z_,a_]=Sqrt[(z-a)(z-2a)]. For real z and a, this takes on real values except in the interval (a,2a), where it becomes imaginary. How do I find this interval in Mathematica?
In general, I'd like to know how one would go about finding it mathematically for a general case. For a function with just two variables like this, it'd probably be straightforward to do a contour plot of the Riemann surface and observe the branch cuts. But what if it is a multivariate function? Is there a general approach that one can take?
What you have appears to be a Riemann surface parametrized by 'a'. Consider the algebraic (or analytic) relation g(a,z)=0 that would be spawned from this branch of a parametrized Riemann surface. In this case it is simply g^2 - (z - a)*(z - 2*a) == 0. More generally it might be obtained using GroebnerBasis, as below (no guarantee this will always work without some amount of user intervention).
grelation = First[GroebnerBasis[g - Sqrt[(z - a)*(z - 2*a)], {z, a, g}]]
Out[472]= 2 a^2 - g^2 - 3 a z + z^2
A necessary condition for the branch points, as functions of the parameter 'a', is that the zero set for 'g' not give a (single valued) function in a neighborhood of such points. This in turn means that the partial derivative of this relation with respect to g vanishes (this is from the implicit function theorem of multivariable calculus). So we find where grelation and its derivative both vanish, and solve for 'z' as a function of 'a'.
Solve[Eliminate[{grelation == 0, D[grelation, g] == 0}, g], z]
Out[481]= {{z -> a}, {z -> 2 a}}
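The same elimination can be reproduced in Python with SymPy (a sketch; names mirror the Mathematica code above):

```python
import sympy as sp

a, z, g = sp.symbols('a z g')

# The algebraic relation g^2 - (z - a)(z - 2a) == 0, expanded
grelation = 2*a**2 - g**2 - 3*a*z + z**2

# Branch points: grelation and its partial derivative with respect to g
# (which is -2g) must vanish simultaneously
sols = sp.solve([grelation, sp.diff(grelation, g)], [z, g], dict=True)
print(sols)  # z -> a and z -> 2a, each with g -> 0
```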
Daniel Lichtblau
Wolfram Research
For polynomial systems (and some class of others), Reduce can do the job.
E.g.
In[1]:= Reduce[Element[{a, z}, Reals]
&& !Element[Sqrt[(z - a) (z - 2 a)], Reals], z]
Out[1]= (a < 0 && 2a < z < a) || (a > 0 && a < z < 2a)
This type of approach also works (often giving very complicated solutions for functions with many branch cuts) for other combinations of elementary functions I checked.
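The interval Reduce returns can be cross-checked in Python with SymPy (for a concrete positive a, since fully symbolic inequalities are harder there; the answer above predicts the open interval (a, 2a)):

```python
import sympy as sp

z = sp.symbols('z', real=True)
a = 1  # concrete positive value for the check

# f is complex exactly where the radicand (z - a)(z - 2a) is negative
interval = sp.solve_univariate_inequality((z - a)*(z - 2*a) < 0, z,
                                          relational=False)
print(interval)  # Interval.open(1, 2)
```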
To find the branch cuts (as opposed to the simple class of branch points you're interested in) in general, I don't know of a good approach. The best place to find the detailed conventions that Mathematica uses is at the functions.wolfram site.
I do remember reading a good paper on this a while back... I'll try to find it....
That's right! The easiest approach I've seen for branch cut analysis uses the unwinding number. There's a paper, "Reasoning about the elementary functions of complex analysis", about this in the journal "Artificial Intelligence and Symbolic Computation". It and similar papers can be found on one of the authors' homepages: http://www.apmaths.uwo.ca/~djeffrey/offprints.html.
For general functions you cannot make Mathematica calculate it. Even for polynomials, finding an exact answer takes time; I believe Mathematica uses some sort of quantifier elimination when it uses Reduce, which takes time. Without any restrictions on your functions (are they polynomials, continuous, smooth?) one can easily construct functions which Mathematica cannot simplify further:
f[x_, y_] := Abs[Zeta[y + 0.5 + x*I]]*I
If this function is real for arbitrary x and any -0.5 < y < 0 or 0 < y < 0.5, then you will have found a counterexample to the Riemann hypothesis, and I'm sure Mathematica cannot give a correct answer.

Writing an equation in vector form using Mathematica

Is it possible to write the following equation using vector notation in Mathematica?
dp/dt= div(k1 / k2 . grad p)
Where p is a scalar, k1 is vector, and k2 is a scalar.
You can find the vector calculus operators in the VectorAnalysis package, where the Laplacian (you did mean Laplacian, right?) is Laplacian and the gradient is Grad. Both have some fancy symbolic replacements, I believe. The default Cartesian coordinates are {Xx, Yy, Zz}, so this should give what I think you are asking for:
<< VectorAnalysis`
D[p[t, Xx, Yy, Zz], t] == Laplacian[{k1x, k1y, k1z}.Grad[p[t, Xx, Yy, Zz]]]/k2
I'm assuming k2 is a scalar? The p^(0,0,0,1) etc. in the output is Mathematica's way of denoting partial derivatives. If p is actually a defined function, they will be calculated.
HTH
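For comparison, the same kind of vector-calculus manipulation can be sketched in Python with SymPy's sympy.vector module (a cross-check, not the Mathematica answer; here p is a concrete example scalar field and k2 a symbolic scalar):

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence

N = CoordSys3D('N')
k2 = sp.symbols('k2')

p = N.x**2 + N.y**2 + N.z**2         # example scalar field

# div((1/k2) * grad p): for this p, grad p = (2x, 2y, 2z),
# so the divergence is (2 + 2 + 2)/k2 = 6/k2
expr = sp.simplify(divergence(gradient(p) / k2))
print(expr)  # 6/k2
```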
