The renewal function m(t) for the Weibull distribution with t = 10 is given below.
I want to find the value of m(t), so I wrote the following R code to compute it:
last_term = NULL
gamma_k = NULL
n = 50
for(k in 1:n){
  gamma_k[k] = gamma(2*k + 1)/factorial(k)
}
for(j in 1:(n-1)){
  prev = gamma_k[n-j]
  last_term[j] = gamma(2*j + 1)/factorial(j)*prev
}
final_term = NULL
find_value = function(n){
  for(i in 2:n){
    final_term[i] = gamma_k[i] - sum(last_term[1:(i-1)])
  }
  return(final_term)
}
all_k = find_value(n)
af_sum = NULL
m_t = function(t){
  for(k in 1:n){
    af_sum[k] = (-1)^(k-1) * all_k[k] * t^(2*k)/gamma(2*k + 1)
  }
  return(sum(na.omit(af_sum)))
}
m_t(20)
The output is m(t) = 2.670408e+93. Is my iterative procedure correct? Thanks.
I don't think it will work. First, let's move Γ(2k+1) from the denominator of m(t) into A_k. Then A_k will behave roughly as 1/k!.
The numerator of each m(t) term contains t^(2k), so roughly speaking you're computing a sum whose terms look like
100^k / k!
From Stirling's formula,
k! ~ k^k,
so the terms behave like
(100/k)^k.
So yes, they will eventually start to decrease and the sum will converge to something, but only after roughly the 100th term.
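As a rough sanity check of that growth estimate, you could look at the simplified terms 100^k / k! in log space (this is just my own quick check of the argument above):

# magnitude of the simplified terms 100^k / k!; log space avoids overflow
k <- 1:200
log10_term <- (k * log(100) - lfactorial(k)) / log(10)
k[which.max(log10_term)]   # the largest term sits near k = 100
max(log10_term)            # and is on the order of 10^42 under this simplification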
Anyway, here is the code. You could try to improve it, but it breaks down at k ~ 70:
N <- 20
A <- rep(0, N)
# compute A_k/gamma(2k+1) terms
ps <- 0.0 # previous sum
A[1] = 1.0
for(k in 2:N) {
  ps <- ps + A[k-1]*gamma(2*(k-1) + 1)/factorial(k-1)
  A[k] <- 1.0/factorial(k) - ps/gamma(2*k+1)
}
print(A)

t <- 10.0
t2 <- t*t
r <- 0.0
for(k in 1:N){
  r <- r + (-t2)^k*A[k]
}
print(-r)
UPDATE
OK, I calculated A_k as in your question and got the same answer. I want to estimate the terms A_k/Γ(2k+1) in m(t); I believe they are pretty much dominated by the 1/k! factor. To check that, I made another array, k!·A_k/Γ(2k+1), which should be close to one.
Code
N <- 20
A <- rep(0.0, N)

psum <- function(pA, k) {
  ps <- 0.0
  if (k >= 2) {
    jmax <- k - 1
    for(j in 1:jmax) {
      ps <- ps + (gamma(2*j+1)/factorial(j))*pA[k-j]
    }
  }
  ps
}

# compute A_k terms (as defined in the question)
A[1] = gamma(3)
for(k in 2:N) {
  A[k] <- gamma(2*k+1)/factorial(k) - psum(A, k)
}
print(A)

B <- rep(0.0, N)
for(k in 1:N) {
  B[k] <- (A[k]/gamma(2*k+1))*factorial(k)
}
print(B)
This shows that:
- I got the same A_k values as you did.
- B_k is indeed very close to 1.
It means that the term A_k/Γ(2k+1) can be replaced by 1/k! to get a quick estimate of what we might get; with that replacement,
m(t) ~= -Sum(k=1..Infinity) (-1)^k (t^2)^k / k! = 1 - Sum(k=0..Infinity) (-t^2)^k / k!
This is actually a well-known sum: it is the exponential with a negative argument (after adding the k = 0 term), so
m(t) ~= 1 - exp(-t^2)
Conclusions
The approximate value is positive, and it will probably stay positive in the end, since A_k/Γ(2k+1) differs only a little from 1/k!.
We're talking about 1 - exp(-100), which is 1 - 3.72*10^-44! And we're trying to compute it precisely by summing and subtracting values on the order of 10^100 or even higher. Even with MPFR I don't think this is possible.
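To see how hopeless that kind of cancellation is in double precision, here is a tiny illustration (example values of my own, not from the series itself):

(1e20 + 1) - 1e20   # 0: the "+ 1" is completely lost next to a value of order 1e20
1 - exp(-100)       # exactly 1: the 3.72e-44 correction is far below machine epsilon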
Another approach is needed
OK, so I ended up going down a pretty different road on this. I have implemented a simple discretization of the integral equation which defines the renewal function:
m(t) = F(t) + integrate (m(t - s)*f(s), s, 0, t)
The integral is approximated with the rectangle rule. Approximating the integral for different values of t gives a system of linear equations. I wrote a function to generate the equations and extract a matrix of coefficients from it. After looking at some examples, I guessed a rule to define the coefficients directly and used that to generate solutions for some examples. In particular I tried shape = 2, t = 10, as in OP's example, with step = 0.1 (so 101 equations).
I found that the result agrees pretty well with an approximate result which I found in a paper (Baxter et al., cited in the code). Since the renewal function is the expected number of events, for large t it is approximately equal to t/mu where mu is the mean time between events; this is a handy way to know if we're anywhere in the neighborhood.
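For shape = 2, scale = 1, and t = 10, that ballpark value is easy to compute in R (my own quick check, using the Baxter et al. formula that appears near the end of the code below):

shape <- 2; scale <- 1; t <- 10
mu     <- scale * gamma(1 + 1/shape)                              # Weibull mean
sigma2 <- scale^2 * (gamma(1 + 2/shape) - gamma(1 + 1/shape)^2)   # Weibull variance
t/mu + sigma2/(2*mu^2) - 1/2                                      # roughly 10.9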
I was working with Maxima (http://maxima.sourceforge.net), which is not efficient for numerical stuff, but which makes it very easy to experiment with different aspects. At this point it would be straightforward to port the final, numerical stuff to another language such as Python.
Thanks to OP for suggesting the problem, and S. Pappadeux for insightful discussions. Here is the plot I got comparing the discretized approximation (red) with the approximation for large t (blue). Trying some examples with different step sizes, I saw that the values tend to increase a little as step size gets smaller, so I think the red line is probably a little low, and the blue line might be more nearly correct.
Here is my Maxima code:
/* discretize weibull renewal function and formulate system of linear equations
* copyright 2020 by Robert Dodier
* I release this work under terms of the GNU General Public License
*
* This is a program for Maxima, a computer algebra system.
* http://maxima.sourceforge.net/
*/
"Definition of the renewal function m(t):" $
renewal_eq: m(t) = F(t) + 'integrate (m(t - s)*f(s), s, 0, t);
"Approximate integral equation with rectangle rule:" $
discretize_renewal (delta_t, k) :=
  if equal(k, 0)
    then m(0) = F(0)
    else m(k*delta_t) = F(k*delta_t)
                      + m(k*delta_t)*f(0)*(delta_t / 2)
                      + sum (m((k - j)*delta_t)*f(j*delta_t)*delta_t, j, 1, k - 1)
                      + m(0)*f(k*delta_t)*(delta_t / 2);
make_eqs (n, delta_t) :=
  makelist (discretize_renewal (delta_t, k), k, 0, n);
make_vars (n, delta_t) :=
  makelist (m(k*delta_t), k, 0, n);
"Discretized integral equation and variables for n = 4, delta_t = 1/2:" $
make_eqs (4, 1/2);
make_vars (4, 1/2);
make_eqs_vars (n, delta_t) :=
  [make_eqs (n, delta_t), make_vars (n, delta_t)];
load (distrib);
subst_pdf_cdf (shape, scale, e) :=
  subst ([f = lambda ([x], pdf_weibull (x, shape, scale)), F = lambda ([x], cdf_weibull (x, shape, scale))], e);
matrix_from (eqs, vars) :=
  (augcoefmatrix (eqs, vars),
   [submatrix (%%, length(%%) + 1), - col (%%, length(%%) + 1)]);
"Subsitute Weibull pdf and cdf for shape = 2 into discretized equation:" $
apply (matrix_from, make_eqs_vars (4, 1/2));
subst_pdf_cdf (2, 1, %);
"Just the right-hand side matrix:" $
rhs_matrix_from (eqs, vars) :=
  (map (rhs, eqs),
   augcoefmatrix (%%, vars),
   [submatrix (%%, length(%%) + 1), col (%%, length(%%) + 1)]);
"Generate the right-hand side matrix, instead of extracting it from equations:" $
generate_rhs_matrix (n, delta_t) :=
  [delta_t * genmatrix (lambda ([i, j], if i = 1 and j = 1 then 0
                                        elseif j > i then 0
                                        elseif j = i then f(0)/2
                                        elseif j = 1 then f(delta_t*(i - 1))/2
                                        else f(delta_t*(i - j))), n + 1, n + 1),
   transpose (makelist (F(k*delta_t), k, 0, n))];
"Generate numerical right-hand side matrix, skipping over formulas:" $
generate_rhs_matrix_numerical (shape, scale, n, delta_t) :=
  block ([f, F, numer: true], local (f, F),
         f: lambda ([x], pdf_weibull (x, shape, scale)),
         F: lambda ([x], cdf_weibull (x, shape, scale)),
         [genmatrix (lambda ([i, j], delta_t * if i = 1 and j = 1 then 0
                                               elseif j > i then 0
                                               elseif j = i then f(0)/2
                                               elseif j = 1 then f(delta_t*(i - 1))/2
                                               else f(delta_t*(i - j))), n + 1, n + 1),
          transpose (makelist (F(k*delta_t), k, 0, n))]);
"Solve approximate integral equation (shape = 3, t = 1) via LU decomposition:" $
fpprintprec: 4 $
n: 20 $
t: 1;
[AA, bb]: generate_rhs_matrix_numerical (3, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Iterative solution of approximate integral equation (shape = 3, t = 1):" $
xx: bb;
for i thru 10 do xx: AA . xx + bb;
xx - (AA.xx + bb);
xx_iterative: xx;
"Should find iterative and LU give same result:" $
xx_diff: xx_iterative - xx_by_lu[1];
sqrt (transpose(xx_diff) . xx_diff);
"Try shape = 2, t = 10:" $
n: 100 $
t: 10 $
[AA, bb]: generate_rhs_matrix_numerical (2, 1, n, t/n);
xx_by_lu: linsolve_by_lu (ident(n + 1) - AA, bb, floatfield);
"Baxter, et al., Eq. 3 (for large values of t) compared to discretization:" $
/* L.A. Baxter, E.M. Scheuer, D.J. McConalogue, W.R. Blischke.
* "On the Tabulation of the Renewal Function,"
* Technometrics, vol. 24, no. 2 (May 1982).
* H(t) is their notation for the renewal function.
*/
H(t) := t/mu + sigma^2/(2*mu^2) - 1/2;
tx_points: makelist ([float (k/n*t), xx_by_lu[1][k, 1]], k, 1, n);
plot2d ([H(u), [discrete, tx_points]], [u, 0, t]), mu = mean_weibull(2, 1), sigma = std_weibull(2, 1);
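As noted above, porting the numerical part to another language is straightforward. Here is a rough R sketch of the same discretization (my own port, not the original code): it builds the coefficient matrix from the same rule as generate_rhs_matrix_numerical and solves (I - A) m = b directly.

# Rough R port of the discretization above (a sketch, not the original Maxima code)
renewal_weibull <- function(shape, scale, t, n) {
  dt <- t / n
  f <- function(u) dweibull(u, shape = shape, scale = scale)
  F <- function(u) pweibull(u, shape = shape, scale = scale)
  A <- matrix(0, n + 1, n + 1)
  for (i in 1:(n + 1)) {
    for (j in 1:(n + 1)) {
      A[i, j] <- dt * (
        if (i == 1 && j == 1) 0
        else if (j > i) 0
        else if (j == i) f(0) / 2
        else if (j == 1) f(dt * (i - 1)) / 2
        else f(dt * (i - j))
      )
    }
  }
  b <- F((0:n) * dt)
  m <- solve(diag(n + 1) - A, b)   # solve the linear system (I - A) m = b
  data.frame(t = (0:n) * dt, m = m)
}

res <- renewal_weibull(shape = 2, scale = 1, t = 10, n = 100)
tail(res, 1)   # should be roughly comparable to the Baxter et al. value of about 10.9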
Summary of problem
My objective is to create a function called newton.raphson to implement the Newton-Raphson root-finding algorithm.
Root-finding algorithm: x_1 = x_0 - f(x_0)/f'(x_0)
I have 2 arguments:
iter = number of iterations (value = 10^5)
epsilon = the tolerance (value = 10^-10)
The function cannot depend on variables outside of it.
newton.raphson <- function(f, x0, iter=1e5, epsilon=1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- f((x+h)) - f((x-h)) / (2 * h)
    update <- x - f(x)/ drvt
    if (abs(update) < epsilon) {
      break
    }
    x <- update
  }
  root <- x
  return(root)
}

# Define some function to test
f <- function(x) {
  x^2 - 4 * x - 7
}
I get the following results:
> newton.raphson(f, 0)
[1] 2.000045
> newton.raphson(f, 3)
[1] 5.000024
But results should be:
-1.316625
5.316625
Your derivative calculation is a little bit broken - you forgot parentheses around the difference between f(x+h) and f(x-h):
drvt <- ( f(x+h) - f(x-h) ) / (2 * h)
Also, you should compare the difference between the old and new root approximations to the tolerance. To make things clearer, rename the misleading update variable to something like new.x. Then you should check if (abs(new.x - x) < epsilon).
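Putting both fixes together, something like this should work (just a sketch; it keeps the question's finite-difference step h = 1e-5):

newton.raphson <- function(f, x0, iter = 1e5, epsilon = 1e-10) {
  x <- x0
  h <- 1e-5
  for (t in 1:iter) {
    drvt <- (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative
    new.x <- x - f(x) / drvt
    if (abs(new.x - x) < epsilon) {           # compare successive approximations
      return(new.x)
    }
    x <- new.x
  }
  x
}

f <- function(x) x^2 - 4 * x - 7
newton.raphson(f, 0)   # approximately -1.316625
newton.raphson(f, 3)   # approximately  5.316625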
I'm trying to write a square root function in R. The function is supposed to behave like sqrt() but not use that function of course. I'm supposed to use Newton's method for computing the square root, which is:
y(a+1) = [y(a) + x / y(a)]/2
Here x is the number I'm trying to calculate the square root of and y(0) would be the initial guess of the square root of x.
The function is supposed to take in four arguments: x (the number I'm trying to compute the square root of), eps (the difference in value between iterations that are considered be equal), iter (the max number of iterations), and verbose (says I want to output intermediate results).
My issue is that I am not very well versed in writing functions in R. I have experience in C++, but things are slightly different in R.
I believe I'm supposed to write something that goes like this:
Ask the user to input a number as the initial guess for the square root of x. Then make a for loop from 1 to iter with two if statements: 1) stop the function and output the y value if the maximum number of iterations has been reached; 2) stop the function and output the y value if the difference between successive iterations is less than eps.
Here is the code I have so far:
MySqrt <- function (x, eps = 1e-6, iter = 100, verbose = TRUE) {
  for (i in 0:itmax) {
    y[0] <- readline(prompt="Please enter your initial square root guess: ")
    y[i + 1] = (y[i] + x / y[i])/2
    if (i == 100) {
      stop (return(y[i + 1]))
    }
    if (abs(y[i + 1] - y[i]) < eps) {
      stop (return(y[i + 1]))
    }
  }
  return(y[i + 1])
}
Here is the error I receive after entering the initial square root guess: Error in y[0] <- readline(prompt = "Please enter your initial square root guess: ") :
object 'y' not found
Honestly, I didn't expect the code to work, because I'm sure there is more than one error.
You should use iter instead of itmax.
I initialized y within the function, and the input for y should be converted to a number rather than left as a character string. You could also simplify the two if statements by combining them with || (or).
I also added a cat() call so you can see what i is before the function prints out the square root value.
MySqrt <- function (x, eps = 1e-6, iter = 100, verbose = TRUE) {
  y = 0
  y[1] = as.numeric(readline(prompt="Please enter your initial square root guess: "))
  for (i in 1:iter) {
    y[i+1] = as.numeric((y[i] + (x/y[i]))/2)
    if (i == 100 || abs(y[i+1] - y[i]) < eps) {
      cat("This is", i, "th try: \n")
      return(y[i+1])
    }
  }
}
Try this simply:
newton.raphson <- function(x, start, epsilon=0.0001, maxiter=100) {
  y <- c(start) # initial guess
  a <- 1        # number of iterations
  while (TRUE) {
    y <- c(y, (y[a] + x / y[a])/2)
    if (abs(y[a+1] - y[a]) < epsilon | a > maxiter) { # converged or exceeded maxiter
      return(y[a+1])
    }
    a <- a + 1
  }
}
newton.raphson(2, 0.5, 0.01)
# [1] 1.414234
newton.raphson(3, 0.5, 0.01)
# [1] 1.732051
Since sqrt(y) <= y/2 for y >= 4, y/2 works as a starting guess; iterate until x*x is within a small tolerance of y:
sqrnt = function(y){
  x = y/2
  while (abs(x*x - y) > 1e-10) {
    x = (x + y/x)/2
  }
  x
}
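For example (a quick check of my own, not part of the original answer):

sqrnt(2)    # 1.414214, close to sqrt(2)
sqrnt(81)   # 9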
In Newton's method, if you want to know the square root of a, you start with an estimate x (for example a/2), and you compute a better estimate with the following formula:
y = (x + a / x) / 2
If y != x, you set x = y and repeat until y == x. Then you have the square root of a. Please see the code below:
square_root <- function(a) {
  x <- a/2
  while (TRUE) {
    y <- (x + a / x) / 2
    if (y == x) break
    x <- y
  }
  return(y)
}
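For instance (my own quick check, not in the original answer):

square_root(9)     # 3
square_root(144)   # 12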
What is behind approx and approxfun? I know that these two functions perform linear interpolation; however, I didn't find any reference on how they do it. I guess they use a least-squares regression model, but I am not sure.
Finally, if it's true that they use a least-squares regression model, what is the difference between them and lm + predict?
As commented, you should read the source code. The interpolation problem is:
Find y(v), given (x,y)[i], i = 0,..,n-1
approxfun uses a simple algorithm for linear interpolation:
1. find the correct interval (i,j) by bisection
2. use i, j for linear interpolation
Here is R code that paraphrases the C function approx1:
approx1 <- function(v, x, y) {
  ## Approximate y(v), given (x,y)[i], i = 0,..,n-1
  i <- 1
  j <- length(x)
  ij <- 0
  ## find the correct interval by bisection
  while (i < (j-1)) {
    ij <- floor((i + j)/2)
    if (v < x[ij])
      j <- ij
    else
      i <- ij
  }
  ## linear interpolation
  if (v == x[j]) return(y[j])
  if (v == x[i]) return(y[i])
  return(y[i] + (y[j] - y[i]) * ((v - x[i])/(x[j] - x[i])))
}
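A quick check against the built-in approx() (example data of my own):

x <- c(1, 2, 4, 7)
y <- c(10, 20, 15, 30)
approx1(3, x, y)            # 17.5, halfway between (2, 20) and (4, 15)
approx(x, y, xout = 3)$y    # 17.5 as well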
So, I'm supposed to write the code to execute Newton's Method to calculate the square root of any arbitrary number to a specified precision (tolerance).
Here is my code:
MySqrt <- function(x, eps = 1e-6, itmax = 100, verbose = TRUE) {
  GUESS <- 11
  myvector <- integer(0)
  i <- 1
  if (x < 0) {
    stop("Square root of negative value")
  }
  else {
    myvector[i] <- GUESS
    while (i <= itmax) {
      GUESS <- (GUESS + (x/GUESS)) * 0.5
      myvector[i+1] <- GUESS
      if (abs(GUESS-myvector[i]) < eps) {
        break()
      }
      if (verbose) {
        cat("Iteration: ", formatC(i, width = 1), formatC(GUESS, digits = 10, width = 12), "\n")
      }
      i <- i + 1
    }
  }
  myvector[i]
}
eps is the tolerance. When I use the function to calculate the square root of, say, 21, I get this as output:
> MySqrt(21, eps = 1e-1, verbose = TRUE)
Iteration: 1 6.454545455
Iteration: 2 4.854033291
Iteration: 3 4.59016621
I'm not sure if the function stops carrying out iterations when it is supposed to, however. Can someone verify if my code is correct? This would be greatly appreciated!
Your code is almost correct. It is iterating the correct number of times. The only bug is that you don't increment i until after the break statement, so you are not returning the most recent approximation. Instead you are returning the previous one.
In order to verify that it is stopping at the right time, you can move the tracing line up above the break. You can also add GUESS-myvector[i] to the trace, so you can watch it halt as soon as the difference gets small enough. If you do this and run the function, the fact that it is stopping at the right time, as well as the fact that it is returning the wrong value, will be obvious:
> MySqrt(21,eps=1e-1)
Iteration: 1 6.454545 -4.545455
Iteration: 2 4.854033 -1.600512
Iteration: 3 4.590166 -0.2638671
Iteration: 4 4.582582 -0.007584239
[1] 4.590166
While your code is (almost) correct, it is not written in very good R style. For example, unless you want to return the entire vector of estimates, there is no reason to keep them all around. Also, rather than using a while loop, it would make more sense to use a for loop here. Here is one possible improved version of your function:
MySqrt <- function(x, eps = 1e-6, itmax = 100, verbose = TRUE) {
  GUESS <- 11
  if (x < 0) {
    stop("Square root of negative value")
  }
  for (i in 1:itmax) {
    nextGUESS <- (GUESS + (x/GUESS)) * 0.5
    if (verbose)
      cat("Iteration: ", i, nextGUESS, nextGUESS-GUESS, "\n")
    if (abs(GUESS-nextGUESS) < eps)
      break
    GUESS <- nextGUESS
  }
  nextGUESS
}
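For example (my own quick check, not from the original answer):

MySqrt(21, eps = 1e-1, verbose = FALSE)
# [1] 4.582582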