Better libraries in R for matrix operation [closed] - r

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 6 years ago.
This question is related to a question I asked before
Matrix and vector multiplication operation in R
Specifically, I find some matrix operations in R painful to do. For example, the following code needs a couple of extra steps before it will run.
f <- function(x, A, b) {
  e <- A %*% x - b
  v <- t(e) %*% e
  return(as.numeric(v))
}
A <- matrix(runif(300), ncol = 3)
b <- matrix(runif(100), ncol = 1)
x0 <- runif(3)
optimx::optimx(x0, f, A = A, b = b, method = "BFGS")
optimx only accepts a plain vector as the initial value, so I cannot write x0 as a column vector the way I do for A and b.
My function f uses matrix operations but returns a scalar; optimx also does not like that (it treats the 1x1 result as a matrix), so I need to wrap it in as.numeric().
Is there a better way to enable me to do matrix operations in R like Matlab?

I'm not optimistic that you're going to find what you want, and trying to work around the idiom of a language - rather than sucking it up/adapting to it - is often a recipe for continuing pain. A few thoughts:
c(v) and drop(v) have the same effect as as.numeric(v); c(v) is terser and drop(v) is (perhaps) semantically clearer
optim() (unlike optimx::optimx) doesn't complain about being handed a column vector (in R terms, a 1-column matrix), and works the same as in your example
crossprod(e) is equivalent to (and faster than) t(e) %*% e
You could use MATLAB (you haven't told us why you're using R), or (if you can't afford it) try Octave ...
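Putting the first three suggestions together, a sketch of the question's objective rewritten with crossprod() and drop(), run through optim() (the data here is generated the same way as in the question):

```r
# Objective using crossprod() and drop(): returns a plain scalar,
# so no as.numeric() is needed
f2 <- function(x, A, b) {
  e <- A %*% x - b
  drop(crossprod(e))  # crossprod(e) == t(e) %*% e, but faster
}

set.seed(1)
A  <- matrix(runif(300), ncol = 3)
b  <- matrix(runif(100), ncol = 1)
x0 <- runif(3)

# optim() accepts the vector starting value without complaint
res <- optim(x0, f2, A = A, b = b, method = "BFGS")
res$par  # should be close to the least-squares solution
```

Since f2 is a quadratic, res$par should agree with the direct normal-equations solution `solve(crossprod(A), crossprod(A, b))`.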

Related

I want to obtain eigenvalues of symmetric matrix in Julia in O(nmr) [closed]

Closed 12 months ago.
I am a beginner at Julia. I want to obtain the r smallest eigenvalues and eigenvectors of a symmetric n x n input matrix X, in increasing order. I heard the computational complexity is O(n^2 r).
n is around 1000-20000, r is around 100-1000. How can I obtain the eigenvalues and eigenvectors within O(n^2 r)?
I'm not an expert on this, but I would start out trying the methods in the LinearAlgebra stdlib. The LinearAlgebra.eigen function is specialized on the input matrix types SymTridiagonal, Hermitian, Symmetric, and lets you specify how many vectors/values you want:
If you have a dense matrix, A, and want the smallest r eigenvalues and vectors (eigen returns them in increasing order, which matches what you ask for):
(evals, evecs) = eigen(Symmetric(A), 1:r)
For the largest r instead, pass the index range (n-r+1):n.
You can also use eigvals and eigvecs if you just need eigenvalues or eigenvectors. Also check out eigen! if you want to save some memory.
BTW, using Symmetric(A) doesn't create a new matrix; it is just a wrapper around A that tells the linear-algebra routines that A is symmetric, so they only access the part of A on and above the diagonal.
If the version in LinearAlgebra is not the fastest in this quite general case, then it should probably be reported on Julia's github. There may be faster implementations for more specialized cases, but for general symmetric dense matrices, the implementation in the stdlib should be expected to be near optimal.

Global Optimization with bounds in R [closed]

Closed 3 years ago.
I am looking for an optimizer that minimizes a (non-linear) least-squares problem to a global minimum, subject to constraints.
I was trying to use SANN optimization in R but realised that it doesn't allow constraints. I actually just want to bound my parameter between 0 and 1.
Is there a package available for that?
Thank you very much in advance.
You could apply optim with method = "L-BFGS-B", which directly supports box constraints (via the lower and upper arguments). If the results are very sensitive to initial parameters, you could minimise over a grid of initial values supplied to par and then choose the parameters that give the best result.
You could also use "SANN" with optim (or any other unconstrained optimiser), but change your objective function so that it's automatically constrained. For example, if you really want to minimise wrt \beta, but \beta must lie between 0 and 1, then you could instead minimise wrt \tau and replace \beta by exp(\tau)/(1+exp(\tau)) (the inverse logit, or logistic, function) in your objective function. It'll always be between 0 and 1 then.
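A sketch of both approaches on a toy one-parameter least-squares problem (the data and objective below are invented for illustration; plogis() is R's built-in logistic function):

```r
# Toy data: true slope is 0.3, which lies inside (0, 1)
set.seed(42)
x <- runif(50)
y <- 0.3 * x + rnorm(50, sd = 0.05)
sse <- function(beta) sum((y - beta * x)^2)

# 1. Box constraints directly, via L-BFGS-B:
fit1 <- optim(par = 0.5, fn = sse, method = "L-BFGS-B",
              lower = 0, upper = 1)

# 2. Unconstrained SANN on tau, with beta = plogis(tau) always in (0, 1):
sse_tau <- function(tau) sse(plogis(tau))
fit2  <- optim(par = 0, fn = sse_tau, method = "SANN")
beta2 <- plogis(fit2$par)  # map back to the constrained scale

c(fit1$par, beta2)  # both estimates respect the (0, 1) bound
```

The logistic reparameterisation generalises to any unconstrained optimiser; the only cost is that the estimate can never sit exactly on the boundary.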

How to create a mathematical function from data plots [closed]

Closed 6 years ago.
I am by no means a math person, but I am really trying to figure out how to create a graphable function from some data points I measured from a chemical titration. I have been trying to learn R, and I would like to know if anyone can explain to me, or point me to a guide on, how to create a mathematical function of the titration graph below.
Thanks in advance.
What you are looking for is interpolation. I'm not an R programmer, but I'll try to answer anyway.
One of the more common ways to get the function you want is Polynomial Interpolation, which gives back an Nth-degree polynomial function, where N is the number of data points minus one (1 point gives a constant, 2 points make a line, 3 make a*x^2 + b*x + c, and so on).
Other common alternatives I've learned about, used in Computer Graphics, are Splines, B-splines, Bézier curves and Hermite interpolation. These make the curve smoother and better looking (I've been told they originated in the car industry, so they are less true to the data points).
TL;DR: I've found evidence that there is an implementation of splines in R, from the question Interpolation in R, which may lead you to your solution.
Hope you get to know your tool better and do great work.
When doing this kind of work in Computer Science we call it Numerical Methods (at least at my university). I've done some coursework and homework in this area (it can be found on GitHub), but it's nothing worth noting.
I would have added a lot of links to Wikipedia, but StackOverflow didn't allow it.
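Not part of the original answer, but in R the built-in splinefun() does exactly this kind of spline interpolation: it takes the measured points and returns a function you can evaluate and plot (the titration data below is invented for illustration):

```r
# Invented titration measurements: titrant volume (mL) vs. measured pH
vol <- c(0, 5, 10, 15, 20, 25)
pH  <- c(2.9, 3.4, 4.0, 5.2, 9.1, 11.0)

f <- splinefun(vol, pH)  # returns an interpolating function of volume
f(12.5)                  # estimated pH between two measured points
curve(f(x), from = 0, to = 25)  # smooth curve passing through every measurement
```

The spline passes exactly through each measured point; if the measurements are noisy and you want a smoothed fit instead, smooth.spline() is the usual alternative.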

gmp for R and other sols [closed]

Closed 7 years ago.
I implemented a solution to an arbitrary-precision arithmetic problem in gmp, but the results are rather strange. As part of troubleshooting, I was wondering whether there is any other package which would allow operations similar to gmp in R. I would need something like chooseZ and multiplication of numbers larger than 2^64, just to make sure that I am not making an error somewhere in this step of my script.
I need to compute numbers like
choose(2450, 765), then multiply by a floating-point number like 0.0034.
The log solution is not really working, because the expression can also be
a sum for k from 2 to K of k^2 * choose(1800, 800) * 0.089,
so I would need a way to sum over those terms.
You could just work on the logarithmic scale:
lchoose(2450,765) + log(0.0034)
#[1] 1511.433
If you exponentiate this, you get a really big number. I simply do not believe that this number would be different from Inf for any practical purpose and I believe even less that you'd need it to exact precision.
Edit:
If you want to calculate \sum_{i=2}^k{i^2 * choose(1800, 800) * 0.089}, you should see that this is the same as choose(1800, 800) * \sum_{i=2}^k{i^2 * 0.089} and then you can again work on the logarithmic scale.
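A sketch of that factoring trick: pull choose(1800, 800) out of the sum and stay on the log scale until the end (the upper limit K = 100 is an arbitrary choice for illustration):

```r
K <- 100

# log( choose(1800, 800) * sum_{k=2}^{K} k^2 * 0.089 )
#   = lchoose(1800, 800) + log( sum_{k=2}^{K} k^2 * 0.089 )
log_sum <- lchoose(1800, 800) + log(sum((2:K)^2 * 0.089))

log_sum       # finite, even though the sum itself overflows a double
exp(log_sum)  # Inf in double precision -- hence working on the log scale
```

The inner sum involves only small numbers, so only the binomial coefficient needs the log treatment.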

Application for solving linear system of equations [closed]

Closed 7 years ago.
I needed an application for solving linear systems of equations (N up to 10), so I got several codes, compiled them, and they seem to work, but I get lots of problems with precision. I mean, the solvers are really very sensitive to small changes in the system.
So, could somebody recommend a reliable command-line application for this purpose? Or some useful open-source code (and easy to compile)?
Thanks
GNU Octave is essentially a free version of Matlab (the syntax is identical for basic operations), so you can try things out there and see how they compare to the answers that you're getting.
Having said that, if your answer is very sensitive to the input, it's possible that your problem is ill-conditioned - you can check this by computing the condition number of the matrix in Octave. It's hard to say what to do in that case without knowing more specifics on the problem.
Also, you don't mention which method you're currently using. Gaussian elimination (i.e. "what you learned in math class") is notoriously numerically unstable if you don't use pivoting (see the wikipedia entry for "Pivoting"); adding that might be enough to improve the quality of the results.
Another approach is to use the numpy package in Python. You can create a 2-d array A and a 1-d vector b, then solve Ax = b for x using solve(A, b). It's part of the linalg subpackage of numpy.
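Since the related questions on this page use R, the same checks are available there too: kappa() estimates the condition number mentioned above, and solve() solves the system with a pivoted LU decomposition (the 3x3 system below is random data for illustration):

```r
set.seed(7)
A <- matrix(rnorm(9), nrow = 3)
b <- rnorm(3)

kappa(A)          # large values signal an ill-conditioned system
x <- solve(A, b)  # solves A x = b via pivoted LU

# residual check: A x should reproduce b to machine precision
all.equal(drop(A %*% x), b)
```

If kappa() is large (say 1e8 or more for double precision), no solver choice will rescue the sensitivity; the problem itself is ill-conditioned.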
