I implemented a solution to an arbitrary-precision arithmetic problem with gmp, but the results are rather strange. As part of troubleshooting, I was wondering whether there is any other R package that allows operations similar to gmp. I would need something like chooseZ and multiplication of numbers larger than 2^64, just to make sure that I am not making an error somewhere in this step of my script.
I need to compute numbers like
choose(2450, 765) and then multiply them by a floating-point number like 0.0034.
The log solution does not really work here, because the expression can also be
a sum from i = 2 to k of i^2 * choose(1800, 800) * 0.089,
so I would need a way to sum over the terms i^2 * choose(1800, 800) * 0.089.
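For reference, a minimal sketch of the exact computation in gmp itself, assuming gmp's rational type bigq coerces with bigz as documented (0.0034 is represented exactly as 34/10000):

library(gmp)
# exact rational result: C(2450, 765) * 34/10000
exact <- as.bigq(chooseZ(2450, 765)) * as.bigq(34, 10000)
exact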
You could just work on the logarithmic scale:
lchoose(2450, 765) + log(0.0034)
#[1] 1511.433
If you exponentiate this, you get a really big number. I simply do not believe that, for any practical purpose, this number would need to be distinguishable from Inf, and I believe even less that you'd need it at exact precision.
Edit:
If you want to calculate \sum_{i=2}^k{i^2 * choose(1800, 800) * 0.089}, you should see that this is the same as choose(1800, 800) * \sum_{i=2}^k{i^2 * 0.089} and then you can again work on the logarithmic scale.
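In R, a minimal sketch of that factored computation on the log scale (k = 100 is an arbitrary example value, not from the question):

k <- 100
# log of choose(1800, 800) * 0.089 * sum(i^2 for i in 2..k)
log_total <- lchoose(1800, 800) + log(0.089) + log(sum((2:k)^2))
log_total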
I am looking for an optimizer that minimizes a non-linear least-squares problem to a global minimum, subject to constraints.
I was trying to use SANN optimization in R but realised that it doesn't allow constraints. I actually just want to constrain my parameters to lie between 0 and 1.
Is there a package available for that?
Thank you very much in advance.
You could apply optim with method "L-BFGS-B", which directly supports box constraints via its lower and upper arguments. If the results are very sensitive to the initial parameters, you could run the optimiser over a grid of initial values supplied to par and then keep the parameters that give the best result.
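For example, a minimal sketch with a hypothetical one-parameter least-squares objective (f, x, and y are placeholders, not from the question):

# toy objective: squared error for a single slope parameter beta
f <- function(beta, x, y) sum((y - beta * x)^2)
set.seed(1)
x <- runif(50)
y <- 0.3 * x + rnorm(50, sd = 0.05)
optim(par = 0.5, fn = f, x = x, y = y,
      method = "L-BFGS-B", lower = 0, upper = 1)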
You could also use "SANN" with optim (or any other unconstrained optimiser), but transform your objective function so that the constraint is satisfied automatically. For example, if you want to minimise with respect to \beta where \beta must lie between 0 and 1, you could instead minimise with respect to \tau and replace \beta by exp(\tau)/(1+exp(\tau)) (the logistic, or inverse-logit, function) in your objective function. It will then always be between 0 and 1.
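A minimal sketch of that reparameterisation, reusing the same hypothetical objective (plogis() is R's built-in logistic function exp(tau)/(1+exp(tau))):

f <- function(beta, x, y) sum((y - beta * x)^2)
g <- function(tau, x, y) f(plogis(tau), x, y)  # unconstrained in tau
set.seed(1)
x <- runif(50)
y <- 0.3 * x + rnorm(50, sd = 0.05)
res <- optim(par = 0, fn = g, x = x, y = y, method = "SANN")
plogis(res$par)  # map the optimum back into (0, 1)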
This question is related to a question I asked before:
Matrix and vector multiplication operation in R
Specifically, I find some of the matrix operations in R painful. For example, with the following code there are a couple of extra steps I need before it will run.
f <- function(x, A, b) {
  e <- A %*% x - b        # residual vector
  v <- t(e) %*% e         # 1 x 1 matrix holding the sum of squares
  return(as.numeric(v))   # drop to a plain scalar for the optimiser
}
A <- matrix(runif(300), ncol = 3)
b <- matrix(runif(100), ncol = 1)
x0 <- runif(3)
optimx::optimx(x0, f, A = A, b = b, method = "BFGS")
optimx only accepts a plain vector as the initial value, so I cannot write x0 as a column vector the way I do for A and b.
My function f uses matrix operations but returns a scalar; optimx does not like that either (the result of t(e) %*% e has class matrix), so I need to wrap it in as.numeric().
Is there a better way to do Matlab-style matrix operations in R?
I'm not optimistic that you're going to find what you want, and trying to work around the idiom of a language, rather than adapting to it, is often a recipe for continuing pain. A few thoughts:
c(v) and drop(v) have the same effect as as.numeric(v); c(v) is terser and drop(v) is (perhaps) semantically clearer
optim() (unlike optimx::optimx) doesn't complain about being handed a column vector (in R terms, a 1-column matrix), and works the same as in your example
crossprod(e) is equivalent to (and faster than) t(e) %*% e
You could use MATLAB (you haven't told us why you're using R), or (if you can't afford it) try Octave ...
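Putting the first three suggestions together, a minimal sketch of the tidied-up objective (same toy data as in the question):

f <- function(x, A, b) {
  e <- A %*% x - b
  drop(crossprod(e))   # t(e) %*% e, returned as a plain scalar
}
A <- matrix(runif(300), ncol = 3)
b <- matrix(runif(100), ncol = 1)
optim(runif(3), f, A = A, b = b, method = "BFGS")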
I'm working with huge dense matrices in R (Matrix package, Matrix data type), and one of my matrices exceeds the theoretical size limit of an R matrix (it is supposed to be 58932 by 58932).
I need to conduct basic matrix operations such as addition and multiplication.
My question is: is there a package in R, or some other software, that I can use to store these huge matrices and work with them?
Thank you in advance,
Try the class big.matrix in the CRAN package bigmemory.
http://www.stat.yale.edu/~mjk56/temp/bigmemory-vignette.pdf
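A minimal sketch (the file names are arbitrary placeholders; a file-backed matrix keeps the data on disk rather than in RAM):

library(bigmemory)
X <- filebacked.big.matrix(58932, 58932, type = "double",
                           backingfile = "X.bin",
                           descriptorfile = "X.desc")
X[1, 1] <- 1.5   # element access works like an ordinary matrix

Note that, as far as I know, whole-matrix arithmetic on big.matrix objects needs companion packages (e.g. bigalgebra) rather than the base %*% operator.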
I mainly use Windows, where bigmemory does not work for me.
I wrote my own package, filematrix, which does about the same thing in pure R code.
http://cran.r-project.org/web/packages/filematrix/index.html
I tested it on matrices over 1 TB in size.
Your 60,000 x 60,000 matrix should take only 28 GB as a file.
Happy to answer any questions about it.
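A minimal sketch, assuming filematrix's documented fm.create() interface (the file name is an arbitrary placeholder):

library(filematrix)
fm <- fm.create("bigmat", nrow = 58932, ncol = 58932)  # file-backed matrix
fm[1:2, 1:2] <- matrix(1:4, 2)   # read/write works in blocks
close(fm)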
(Edited, because some thought this question was off-topic.)
I need to build a spline (approximation) through 100 points in one of the environments listed in the tags, but with an exact number of intervals: at most 6 intervals, i.e. 6 separate equations, over the whole domain. The packages/libraries I know in R and Maxima let me build a spline through these points, but with 25-30 intervals (separate equations). Does anyone know how to build a spline with a set number of intervals without coding the whole algorithm from scratch?
What you're looking for might be described as "local regression" or "localized regression"; searching for those terms might turn up some hits.
I don't know if you can find exactly what you've described. But implementing it doesn't seem too complicated: (1) Split the domain into N intervals (say N=10). For each interval, (2) make a list of the data in the interval, (3) fit a low-order polynomial (e.g. cubic) to the data in the interval using least squares.
If that sounds interesting to you, I can go into details, or maybe you can work it out yourself.
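A minimal sketch of that recipe in R (x and y are hypothetical data; 6 intervals with a cubic fit per interval):

set.seed(1)
x <- sort(runif(100))
y <- sin(2 * pi * x) + rnorm(100, sd = 0.1)
breaks <- seq(min(x), max(x), length.out = 7)   # 6 intervals
piece  <- cut(x, breaks, include.lowest = TRUE)
fits <- lapply(split(data.frame(x, y), piece),
               function(d) lm(y ~ poly(x, 3), data = d))  # cubic per piece

Note that this fits each interval independently, with no continuity constraints at the break points, which matches the recipe above; enforcing continuity would require constrained least squares.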
I needed an application for solving linear systems of equations (N up to 10), so I got several codes and compiled them, and they seem to work, but I have lots of problems with precision: the solvers are very sensitive to small changes in the system.
So, could somebody recommend a reliable command-line application for this purpose? Or some useful open-source code that is easy to compile?
Thanks
GNU Octave is essentially a free version of Matlab (the syntax is identical for basic operations), so you can try things out there and see how they compare to the answers that you're getting.
Having said that, if your answer is very sensitive to the input, it's possible that your problem is ill-conditioned; you can check this by computing the condition number of the matrix in Octave. It's hard to say what to do in that case without knowing more specifics of the problem.
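For example, the analogous check in R is kappa() (a deliberately near-singular 2 x 2 system for illustration):

A <- matrix(c(1, 1, 1, 1.0001), 2, 2)
b <- c(2, 2.0001)
kappa(A)      # condition number estimate; large values signal ill-conditioning
solve(A, b)   # tiny perturbations of A or b shift this solution wildly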
Also, you don't mention which method you're currently using. Gaussian elimination (i.e. "what you learned in math class") is notoriously numerically unstable if you don't use pivoting (see the Wikipedia entry for "Pivoting"); adding that might be enough to improve the quality of the results.
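For illustration, a minimal sketch of Gaussian elimination with partial pivoting in R (base R's solve() already does this via LAPACK, so this is only to show the idea):

gauss_pp <- function(A, b) {
  n <- nrow(A)
  Ab <- cbind(A, b)                              # augmented matrix [A | b]
  for (k in seq_len(n - 1)) {
    p <- (k - 1) + which.max(abs(Ab[k:n, k]))    # partial pivoting:
    if (p != k) Ab[c(k, p), ] <- Ab[c(p, k), ]   # bring largest |entry| up
    for (i in (k + 1):n) {
      m <- Ab[i, k] / Ab[k, k]
      Ab[i, ] <- Ab[i, ] - m * Ab[k, ]           # eliminate below the pivot
    }
  }
  x <- numeric(n)
  for (i in n:1) {                               # back substitution
    s <- if (i < n) sum(Ab[i, (i + 1):n] * x[(i + 1):n]) else 0
    x[i] <- (Ab[i, n + 1] - s) / Ab[i, i]
  }
  x
}
set.seed(1)
A <- matrix(rnorm(25), 5, 5)
b <- rnorm(5)
all.equal(gauss_pp(A, b), solve(A, b))   # should agree with base R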
Another approach is to use the numpy package in Python. You can create a 2-d matrix A and a 1-d vector b, then solve Ax = b for x using solve(A, b). It's part of the linalg subpackage of numpy.