Closed 7 years ago.
Edited, because some of us thought this question was off-topic.
I need to build a spline (an approximation) through 100 points in one of the environments listed in the tags, but with a fixed number of intervals: at most 6 intervals, i.e. 6 separate equations, over the whole domain. The packages/libraries I know in R and Maxima let me build a spline through these points, but with 25-30 intervals (separate equations). Does anyone know how to build a spline with a set number of intervals without coding the whole algorithm from scratch?
What you're looking for might be described as "local regression" or "localized regression"; searching for those terms might turn up some hits.
I don't know if you can find exactly what you've described. But implementing it doesn't seem too complicated: (1) Split the domain into N intervals (say N=10). For each interval, (2) make a list of the data in the interval, (3) fit a low-order polynomial (e.g. cubic) to the data in the interval using least squares.
If that sounds interesting to you, I can go into details, or maybe you can work it out yourself.
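A minimal sketch of those steps in R, assuming 6 intervals and cubic fits (all names are illustrative, and no continuity is enforced at the breakpoints, so this is piecewise regression rather than a true spline):

# Piecewise least-squares fit: split the domain into a fixed number of
# intervals and fit a cubic to the data in each one.
piecewise_fit <- function(x, y, n_intervals = 6, degree = 3) {
  breaks <- seq(min(x), max(x), length.out = n_intervals + 1)
  bin <- cut(x, breaks, include.lowest = TRUE)
  # One lm() per interval; each element is a separate fitted equation
  lapply(split(data.frame(x = x, y = y), bin),
         function(d) lm(y ~ poly(x, degree), data = d))
}

# Example: 100 noisy points, exactly 6 fitted equations
set.seed(1)
x <- seq(0, 10, length.out = 100)
y <- sin(x) + rnorm(100, sd = 0.1)
fits <- piecewise_fit(x, y)
length(fits)  # 6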
Closed 3 years ago.
I am looking for an optimizer that minimizes a non-linear least-squares problem to a global minimum, subject to constraints.
I was trying to use SANN optimization in R but realised that it doesn't allow constraints. I actually just want to bound my parameters to be > 0 and < 1.
Is there a package available for that?
Thank you very much in advance.
You could apply optim with method = "L-BFGS-B", which directly allows box constraints. If the results are very sensitive to the initial parameters, you could minimise over a grid of initial values supplied to par and then choose the parameters that give the best result.
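A minimal sketch with a stand-in objective (everything here is illustrative; substitute your own non-linear least-squares function). The bounds are nudged just inside (0, 1) since you want strict inequalities:

# Stand-in data and non-linear least-squares objective
set.seed(1)
x <- 1:50
y <- 0.7 * exp(-0.3 * x) + rnorm(50, sd = 0.01)
obj <- function(beta) sum((y - beta[1] * exp(-beta[2] * x))^2)

fit <- optim(par = c(0.5, 0.5), fn = obj, method = "L-BFGS-B",
             lower = 1e-8, upper = 1 - 1e-8)  # box constraints on both parameters
fit$par  # should land near c(0.7, 0.3)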
You could also use "SANN" with optim (or any other unconstrained optimiser), but change your objective function so that it is automatically constrained. For example, if you really want to minimise with respect to \beta but \beta must lie between 0 and 1, then you could instead minimise with respect to \tau and replace \beta by exp(\tau)/(1+exp(\tau)) (the inverse-logit, i.e. logistic, function) in your objective function. It will then always be between 0 and 1.
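The same toy objective under that reparameterisation; plogis() is base R's exp(\tau)/(1+exp(\tau)) and qlogis() is its inverse:

# Reusing obj from the sketch above; optimise over unconstrained tau
obj_tau <- function(tau) obj(plogis(tau))  # plogis maps any real number into (0, 1)
fit2 <- optim(par = qlogis(c(0.5, 0.5)), fn = obj_tau, method = "SANN")
plogis(fit2$par)  # back-transform; always strictly inside (0, 1)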
Closed 4 years ago.
I'm working with large data sets (matrices of dimension 6000 x 3072) and using the prcomp() function for my principal component calculations. However, the function is extremely slow. Even using the rank. argument, which limits the number of components to compute, it still takes 7-8 minutes. I need to calculate principal components 45 times, along with some other intermediate calculations that take a few minutes on their own, so I don't want to sit staring at my computer screen for 8-9 hours for this simple analysis.
What are the fastest principal component analysis packages I can use to speed up the process? I only need the first 20 components, so the majority of the computation can be skipped.
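One common route to computing only the leading components is a truncated SVD, for example via the irlba package; this is my suggestion, not something from the thread:

library(irlba)  # truncated SVD / PCA

# Smaller matrix than the question's 6000 x 3072, for a quick demo
set.seed(1)
X <- matrix(rnorm(600 * 300), nrow = 600)
pc <- prcomp_irlba(X, n = 20, center = TRUE, scale. = FALSE)
dim(pc$rotation)  # 300 x 20: only the first 20 components are computed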
Closed 4 years ago.
Are there any packages in R that will do the work of transforming a univariate or bivariate time series to be stationary?
Thanks; any help would be greatly appreciated!
Is there a single package with a bunch of different functions to convert a non-stationary time series to a stationary one? Not as far as I know.
It's all about the data and figuring out which method will work.
To check whether your time series is stationary, you can try Box.test (in base R), or adf.test and kpss.test (both in the tseries package).
Did you try diff()? diff() calculates the differences between consecutive values of a vector.
"One way to make a non-stationary time series stationary — compute the differences between consecutive observations. This is known as differencing." - from link
Another way is a log() transformation, which is often used together with diff().
Other methods include squaring, log differences, and lags. You could try different combinations of those techniques, for example a log squared difference, or try other things like Box-Cox transformations.
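A short illustration of the checks and transformations above, on a made-up trending series:

library(tseries)  # for adf.test and kpss.test

set.seed(1)
y <- ts(cumsum(rnorm(200, mean = 0.5)) + 50)  # random walk with drift: non-stationary

adf.test(y)         # null hypothesis is non-stationarity; expect a large p-value
dy  <- diff(y)      # first differences
dly <- diff(log(y)) # log differences, roughly growth rates
adf.test(dy)        # the differenced series should now look stationary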
Closed 6 years ago.
I am by no means a math person, but I am really trying to figure out how to create a graphable function from some data points I measured in a chemical titration. I have been trying to learn R, and I would like to know if anyone can explain, or point me to a guide on, how to create a mathematical function for the titration graph below.
Thanks in advance.
What you are looking for is interpolation. I'm not an R programmer, but I'll try to answer anyway.
One of the more common ways to get the function you want is polynomial interpolation, which gives back an Nth-degree polynomial, where N is the number of data points minus one (1 point gives a constant, 2 points make a line, 3 make a*x^2 + b*x + c, and so on).
Other common alternatives, which I learned are used in computer graphics, are splines, B-splines, Bézier curves, and Hermite interpolation. Those make the curve smoother and better looking (I've been told they originated in the car industry, so they are less true to the data points).
TL;DR: there is an implementation of splines in R; see the question Interpolation in R, which may lead you to your solution.
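For instance, base R's splinefun() returns an interpolating function you can evaluate and plot; the titration numbers below are made up:

vol <- c(0, 5, 10, 15, 20, 25)           # titrant volume (mL), illustrative
pH  <- c(3.0, 3.4, 4.0, 7.0, 10.5, 11.0) # measured pH, illustrative

f <- splinefun(vol, pH)  # cubic spline passing through every point
f(12.5)                  # estimated pH at an unmeasured volume
curve(f(x), from = 0, to = 25, xlab = "Volume (mL)", ylab = "pH")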
Hope you get to know your tool better and do great work.
When we do this kind of work in computer science we call it numerical methods (at least at my university). I've done some classwork and homework in this area for a numerical methods course (it can be found on GitHub), but it's nothing worth noting.
I would have added a lot of links to Wikipedia, but StackOverflow didn't allow it.
Closed 7 years ago.
I implemented a solution to a problem in arbitrary-precision arithmetic using gmp, but the results are rather strange. As part of troubleshooting, I was wondering whether there is any other package that would allow operations similar to gmp in R. I would need something like chooseZ and multiplication of numbers larger than 2^64, just to make sure that I am not making an error somewhere in this step of my script.
I need to compute numbers like choose(2450, 765) and then multiply them by a floating-point number like 0.0034.
The log solution is not really working, because the expression can also be
\sum_{i=2}^k{i^2 * choose(1800, 800) * 0.089},
so I would need a way to sum over those terms.
You could just work on the logarithmic scale:
lchoose(2450,765) + log(0.0034)
#[1] 1511.433
If you exponentiate this, you get a really big number; in double precision anything above about exp(709) overflows to Inf. I simply do not believe that this number would be different from Inf for any practical purpose, and I believe even less that you'd need it to exact precision.
Edit:
If you want to calculate \sum_{i=2}^k{i^2 * choose(1800, 800) * 0.089}, you should see that this is the same as choose(1800, 800) * \sum_{i=2}^k{i^2 * 0.089} and then you can again work on the logarithmic scale.
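On the log scale the factored form becomes, with an arbitrary k for illustration:

k <- 100  # illustrative upper limit
# log of choose(1800, 800) * sum_{i=2}^k i^2 * 0.089, computed without overflow
log_total <- lchoose(1800, 800) + log(0.089) + log(sum((2:k)^2))
log_total  # exponentiating this would overflow, but the log is exact enough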