FOR loops and range in Julia

When I try to use a range with a step smaller than 1 in a for loop, I get errors. For example, the following code:
i = linspace(0, 3, 200)
graph = zeros(length(i), 1)
for j in 0:0.015:3
    graph[j] = j*cos(j^2)
end
Reports the following error: ERROR: BoundsError()
Why is that?

As StefanKarpinski noted, it is not the for-loop variable that only takes integers, but the array index: you cannot access the 0.015th (or the 0th) element of an array.
How about this:
x = range(0, stop=3, length=200)
y = zeros(length(x))
for i = 1:length(x)
    j = x[i]
    y[i] = j*cos(j^2)
end
Or even:
x = range(0, stop=3, length=200)
y = zeros(length(x))
for (i, j) in enumerate(x)
    y[i] = j * cos(j * j)
end

IMHO, the for loop takes more space without being clearer. Not sure what is considered "Julian", but in the Python world I think most people would go for a list comprehension:
tic()
x = linspace(0, 3, 200)
y = [j*cos(j*j) for j in x]
toc()
elapsed time: 0.014455408 seconds
Even nicer to my eyes and faster is:
tic()
x = linspace(0, 3, 200)
y = x.*cos(x.^2)
toc()
elapsed time: 0.000600354 seconds
where the . in .* or .^ indicates you're applying the method/function element by element.
Not sure why this is faster; a Julia expert may want to help us with that.
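For anyone on current Julia (1.x): linspace and tic()/toc() are gone, and elementwise application now needs a dot on the function call as well. A rough equivalent of the two snippets above (timings will differ by machine) is:
using BenchmarkTools

x = range(0, stop=3, length=200)

# comprehension
@btime [j * cos(j * j) for j in $x]

# fully broadcast (note the dot on cos as well)
@btime $x .* cos.($x .^ 2)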

Related

Why is my Julia code getting slower as the iteration count increases?

I wrote a main function which uses a stochastic optimization algorithm (Particle Swarm Optimization) to find the optimal solution for an ODE system. I run it 50 times to make sure the optimum can be found. At first it operates normally, but now I find that the calculation time increases as the iteration count increases.
It costs less than 300 s for each of the first ten calculations, but this grows to about 500 s for the final calculation; each calculation seems to take 3~5 seconds longer than the previous one. I have followed the performance tips to optimize my code, but it doesn't help.
I am sorry I didn't know how to upload my code before; here is the code I wrote, below. In this code the experimental data is not loaded, so I may need to find a way to upload the data. In the main function, as i increases, the time cost of each calculation increases.
Oh, by the way, I found another interesting phenomenon. I changed the number of calculations and the calculation time changed again. For the first 20 calculations in the main loop, each calculation costs about 300 s and the memory usage fluctuates significantly. But then something I don't understand happens and it speeds up: each calculation takes about a quarter of the time, roughly 80 s, and the memory usage becomes a flat line.
I know Julia compiles on the first run and then speeds up, but this situation seems different. It looks as if Julia runs slowly for the first 20 calculations, then finds a good way to optimize the memory usage and speeds up, and from then on the program just runs at full speed.
using CSV, DataFrames
using BenchmarkTools
using DifferentialEquations
using Statistics
using Dates
using Base.Threads
using Suppressor
function uniform(dim::Int, lb::Array{Float64, 1}, ub::Array{Float64, 1})
arr = rand(Float64, dim)
    @inbounds for i in 1:dim; arr[i] = arr[i] * (ub[i] - lb[i]) + lb[i] end
return arr
end
mutable struct Problem
cost_func
dim::Int
lb::Array{Float64,1}
ub::Array{Float64,1}
end
mutable struct Particle
position::Array{Float64,1}
velocity::Array{Float64,1}
cost::Float64
best_position::Array{Float64,1}
best_cost::Float64
end
mutable struct Gbest
position::Array{Float64,1}
cost::Float64
end
function PSO(problem, data_dict; max_iter=100,population=100,c1=1.4962,c2=1.4962,w=0.7298,wdamp=1.0)
dim = problem.dim
lb = problem.lb
ub = problem.ub
cost_func = problem.cost_func
gbest, particles = initialize_particles(problem, population, data_dict)
# main loop
for iter in 1:max_iter
    @threads for i in 1:population
particles[i].velocity .= w .* particles[i].velocity .+
c1 .* rand(dim) .* (particles[i].best_position .- particles[i].position) .+
c2 .* rand(dim) .* (gbest.position .- particles[i].position)
particles[i].position .= particles[i].position .+ particles[i].velocity
particles[i].position .= max.(particles[i].position, lb)
particles[i].position .= min.(particles[i].position, ub)
particles[i].cost = cost_func(particles[i].position,data_dict)
if particles[i].cost < particles[i].best_cost
particles[i].best_position = copy(particles[i].position)
particles[i].best_cost = copy(particles[i].cost)
if particles[i].best_cost < gbest.cost
gbest.position = copy(particles[i].best_position)
gbest.cost = copy(particles[i].best_cost)
end
end
end
w = w * wdamp
if iter % 50 == 1
println("Iteration " * string(iter) * ": Best Cost = " * string(gbest.cost))
println("Best Position = " * string(gbest.position))
println()
end
end
gbest, particles
end
function initialize_particles(problem, population,data_dict)
dim = problem.dim
lb = problem.lb
ub = problem.ub
cost_func = problem.cost_func
gbest_position = uniform(dim, lb, ub)
gbest = Gbest(gbest_position, cost_func(gbest_position,data_dict))
particles = []
for i in 1:population
position = uniform(dim, lb, ub)
velocity = zeros(dim)
cost = cost_func(position,data_dict)
best_position = copy(position)
best_cost = copy(cost)
push!(particles, Particle(position, velocity, cost, best_position, best_cost))
if best_cost < gbest.cost
gbest.position = copy(best_position)
gbest.cost = copy(best_cost)
end
end
return gbest, particles
end
function get_dict_label(beta::Int)
beta_str = lpad(beta,2,"0")
T_label = "Temperature_" * beta_str
M_label = "Mass_" * beta_str
MLR_label = "MLR_" * beta_str
return T_label, M_label, MLR_label
end
function get_error(x::Vector{Float64}, y::Vector{Float64})
numerator = sum((x.-y).^2)
denominator = var(x) * length(x)
numerator/denominator
end
function central_diff(x::AbstractArray{Float64}, y::AbstractArray{Float64})
# Central difference quotient
dydx = Vector{Float64}(undef, length(x))
dydx[2:end] .= diff(y) ./ diff(x)
    @views dydx[2:end-1] .= (dydx[2:end-1] .+ dydx[3:end])./2
# Forward and Backward difference
dydx[1] = (y[2]-y[1])/(x[2]-x[1])
dydx[end] = (y[end]-y[end-1])/(x[end]-x[end-1])
return dydx
end
function decomposition!(dm,m,p,T)
    # A -> residue + volatile
# B-> residue + volatile
beta,A1,E1,n1,k1,A2,E2,n2,k2,m1,m2 = p
R = 8.314
rxn1 = -m1 * exp(A1-E1/R/T) * max(m[1]/m1,0)^n1 / beta
rxn2 = -m2 * exp(A2-E2/R/T) * max(m[2]/m2,0)^n2 / beta
dm[1] = rxn1
dm[2] = rxn2
dm[3] = -k1 * rxn1 - k2 * rxn2
dm[4] = dm[1] + dm[2] + dm[3]
end
function read_file(file_path)
df = CSV.read(file_path, DataFrame)
data_dict = Dict{String, Vector{Float64}}()
for beta in 5:5:21
T_label, M_label, MLR_label = get_dict_label(beta)
T_data = collect(skipmissing(df[:, T_label]))
M_data = collect(skipmissing(df[:, M_label]))
T = T_data[T_data .< 780]
M = M_data[T_data .< 780]
data_dict[T_label] = T
data_dict[M_label] = M
data_dict[MLR_label] = central_diff(T, M)
end
return data_dict
end
function initial_condition(beta::Int64, ode_parameters::Array{Float64,1})
m_FR_initial = ode_parameters[end]
m_PVC_initial = 1 - m_FR_initial
T_span = (300.0, 800.0) # temperature range
p = [beta; ode_parameters; m_PVC_initial]
m0 = [p[end-1], p[end], 0.0, 1.0] # initial mass
return m0, T_span, p
end
function cost_func(ode_parameters, data_dict)
total_error = 0.0
for beta in 5:5:21
T_label, M_label, MLR_label= get_dict_label(beta)
T = data_dict[T_label]::Vector{Float64}
M = data_dict[M_label]::Vector{Float64}
MLR = data_dict[MLR_label]::Vector{Float64}
m0, T_span, p = initial_condition(beta,ode_parameters)
prob = ODEProblem(decomposition!,m0,T_span,p)
sol = solve(prob, AutoVern9(Rodas5(autodiff=false)),saveat=T,abstol=1e-8,reltol=1e-8,maxiters=1e4)
if sol.retcode != :Success
# println(1)
return Inf
else
        M_sol = @view sol[end, :]
MLR_sol = central_diff(T, M_sol)::Array{Float64,1}
error1 = get_error(MLR, MLR_sol)::Float64
error2 = get_error(M, M_sol)::Float64
total_error += error1 + error2
end
end
total_error
end
function main()
flush(stdout)
total_time = 0
best_costs = []
file_path = raw"F:\17-Fabric\17-Fabric (Smoothed) TG.csv"
data_dict = read_file(file_path)
dimension = 9
lb = [5, 47450, 0.0, 0.0, 24.36, 148010, 0.0, 0.0, 1e-5]
ub = [25.79, 167700, 5, 1, 58.95, 293890, 5, 1, 0.25]
problem = Problem(cost_func,dimension,lb,ub)
global_best_cost = Inf
println("-"^100)
println("Running PSO ...")
population = 50
max_iter = 1001
println("The population is: ", population)
println("Max iteration is:", max_iter)
for i in 1:50 # The number of calculation
start_time = Dates.now()
println("Current iteration is: ", string(i))
gbest, particles = PSO(problem, data_dict, max_iter=max_iter, population=population)
if gbest.cost < global_best_cost
global_best_cost = gbest.cost
global_best_position = gbest.position
end
end_time = Dates.now()
time_duration = round(end_time-start_time, Second)
total_time += time_duration.value
push!(best_costs, gbest.cost)
println()
println("The Best is:")
println(gbest.cost)
println(gbest.position)
println("The calculation time is: " * string(time_duration))
println()
println("-"^50)
end
println('-'^100)
println("Global best cost is: ", global_best_cost)
println("Global best position is: ", global_best_position)
println(total_time)
best_costs
end
@suppress_err begin
    @time global best_costs = main()
end
So, what is the possible mechanism for this? Is there a way to avoid this problem? Because if I increase the population and the maximum number of iterations of the particles, the increase in time would be extremely large and thus unacceptable.
And what is the possible mechanism behind the speed-up I mentioned above? How can I trigger that mechanism?
As the parameters of an ODE are optimized, they can completely change its characteristics. Your equation could be getting more stiff and require different ODE solvers. There are many other related effects, but you can see how changing parameters could give such a performance issue. It's best to use methods like AutoTsit5(Rodas5()) and the like in such estimation cases, because it's hard to know or guess in advance what the performance will be like, and so adaptiveness in the method choice can be crucial.
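For instance, a minimal sketch of that change inside the cost function above (everything else as in the question; only the solver argument differs, with AutoVern9 swapped for AutoTsit5 as suggested):
# Sketch: auto-switching solver inside cost_func.
# AutoTsit5 starts with the non-stiff Tsit5 and hands over to Rodas5 when
# stiffness is detected, so parameter sets that make the ODE stiff do not
# blow up the solve time as badly.
prob = ODEProblem(decomposition!, m0, T_span, p)
sol = solve(prob, AutoTsit5(Rodas5(autodiff=false));
            saveat=T, abstol=1e-8, reltol=1e-8, maxiters=Int(1e4))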

Continued fractions and Pell's equation - numerical issues

Mathematical background
Continued fractions are a way to represent numbers (rational or not), with a basic recursion formula to compute them. Given a number r, we define r[0] = r and have:
for n in range(0..N):
    a[n] = floor(r[n])
    if r[n] == a[n]: break
    r[n+1] = 1 / (r[n] - a[n])
where a is the final representation. We can also define a series of convergents by
h[-2,-1] = [0, 1]
k[-2, -1] = [1, 0]
h[n] = a[n]*h[n-1]+h[n-2]
k[n] = a[n]*k[n-1]+k[n-2]
where h[n]/k[n] converge to r.
Pell's equation is a problem of the form x^2-D*y^2=1 where all numbers are integers and D is not a perfect square in our case. A solution for a given D that minimizes x is given by continued fractions. Basically, for the above equation, it is guaranteed that this (fundamental) solution is x=h[n] and y=k[n] for the lowest n found which solves the equation in the continued fraction expansion of sqrt(D).
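To make the recursion concrete, here is a small sketch in Julia (not from the original post; the thread itself uses Python and C) that runs the same recurrence on sqrt(D) with BigFloat precision so the floor stays accurate, and stops at the fundamental Pell solution. With 256 bits it should find the solution x = 1766319049, y = 226153980 shown in the output further below.
# Sketch only: continued fraction of sqrt(D) with extended precision,
# tracking the convergents h[n]/k[n] and testing Pell's equation.
setprecision(BigFloat, 256) do
    D = 61
    r = sqrt(BigFloat(D))
    h = (big(0), big(1))      # h[n-2], h[n-1]
    k = (big(1), big(0))      # k[n-2], k[n-1]
    for n in 1:25
        a = BigInt(floor(r))
        x = a*h[2] + h[1]     # h[n]
        y = a*k[2] + k[1]     # k[n]
        if x^2 - D*y^2 == 1   # fundamental solution of Pell's equation
            println("x = $x, y = $y")
            break
        end
        h = (h[2], x)
        k = (k[2], y)
        r = 1/(r - a)
    end
end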
Problem
I am failing to get this simple algorithm to work for D=61. I first noticed it did not solve Pell's equation within 100 coefficients, so I compared it against Wolfram Alpha's convergents and continued fraction representation and noticed that the 20th element fails: the coefficient there is 3 on Wolfram compared to the 4 that I get, yielding different convergents - h[20]=335159612 on Wolfram compared to 425680601 for me.
I tested the code below in two languages (though to be fair, Python is C under the hood, I guess), on two systems, and get the same result - a difference at loop 20. I'll note that the convergents are still accurate and converge! Why am I getting different results compared to Wolfram Alpha, and is it possible to fix it?
For testing, here's a Python program to solve Pell's equation for D=61, printing first 20 convergents and the continued fraction representation cf (and some extra unneeded fluff):
from math import floor, sqrt # Can use mpmath here as well.

def continued_fraction(D, count=100, thresh=1E-12, verbose=False):
    cf = []
    h = (0, 1)
    k = (1, 0)
    r = start = sqrt(D)
    initial_count = count
    x = (1+thresh+start)*start
    y = start
    while abs(x/y - start) > thresh and count:
        i = int(floor(r))
        cf.append(i)
        f = r - i
        x, y = i*h[-1] + h[-2], i*k[-1] + k[-2]
        if verbose is True or verbose == initial_count-count:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
        if x**2 - D*y**2 == 1:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
            print(cf)
            return
        count -= 1
        r = 1/f
        h = (h[1], x)
        k = (k[1], y)
    print(cf)
    raise OverflowError(f"Converged on {x} {y} with count {count} and diff {abs(start-x/y)}!")

continued_fraction(61, count=20, verbose=True, thresh=-1) # We don't want to stop on account of thresh in this example
A c program doing the same:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

int main() {
    long D = 61;
    double start = sqrt(D);
    long h[] = {0, 1};
    long k[] = {1, 0};
    int count = 20;
    float thresh = 1E-12;
    double r = start;
    long x = (1+thresh+start)*start;
    long y = start;
    while (fabs(x/(double)y - start) > -1 && count) {
        long i = floor(r);
        double f = r - i;
        x = i * h[1] + h[0];
        y = i * k[1] + k[0];
        printf("%ld\u00B2-%ldx%ld\u00B2 = %ld\n", x, D, y, x*x - D*y*y);
        r = 1/f;
        --count;
        h[0] = h[1];
        h[1] = x;
        k[0] = k[1];
        k[1] = y;
    }
    return 0;
}
mpmath, Python's multi-precision library, can be used. Just be careful that all the important numbers are in mp format.
In the code below, x, y and i are standard multi-precision integers. r and f are multi-precision real numbers. Note that the initial count is set higher than 20.
from mpmath import mp, mpf

mp.dps = 50 # precision in number of decimal digits

def continued_fraction(D, count=22, thresh=mpf(1E-12), verbose=False):
    cf = []
    h = (0, 1)
    k = (1, 0)
    r = start = mp.sqrt(D)
    initial_count = count
    x = 0 # some dummy starting values, they will be overwritten early in the while loop
    y = 1
    while abs(x/y - start) > thresh and count > 0:
        i = int(mp.floor(r))
        cf.append(i)
        x, y = i*h[-1] + h[-2], i*k[-1] + k[-2]
        if verbose or initial_count == count:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
        if x**2 - D*y**2 == 1:
            print(f'{x}\u00B2-{D}x{y}\u00B2 = {x**2-D*y**2}')
            print(cf)
            return
        count -= 1
        f = r - i
        r = 1/f
        h = (h[1], x)
        k = (k[1], y)
    print(cf)
    raise OverflowError(f"Converged on {x} {y} with count {count} and diff {abs(start-x/y)}!")

continued_fraction(61, count=22, verbose=True, thresh=mpf(1e-100))
Output is similar to Wolfram's:
...
335159612²-61x42912791² = 3
1431159437²-61x183241189² = -12
1766319049²-61x226153980² = 1
[7, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1, 14, 1, 4, 3, 1, 2, 2, 1, 3, 4, 1]

JuMPDict change of dimension

I am using Julia 0.6.2 and JuMP 0.18.5 (I can't use a more recent version since I need to use an old package).
Creating JuMP variables with conditions on the indices leads to a JuMPDict instead of an Array.
For example:
m = Model(solver = CplexSolver())
# type of x: JuMP.JuMPDict{JuMP.Variable,2}
@variable(m, x[i in 1:3, j in 1:3; i < j] >= 0)
# type of y: JuMP.JuMPDict{JuMP.Variable,3}
@variable(m, y[i in 1:3, j in 1:3, k in 1:3; i < j] >= 0)
I would like to apply a function f to x and to y[:, :, k] for all k in 1:3. However, I don't know how to define such a generic function.
I tried to set the argument type of f to JuMP.JuMPDict{JuMP.Variable,2}:
function f(input::JuMP.JuMPDict{JuMP.Variable,2})
    ...
end
I can use the function on x but not on y:
f(x) # Works
for k in 1:3
    f(y[:, :, k]) # does not work as y is not an array
end
My last idea was to convert y into several JuMP.JuMPDict{JuMP.Variable,2}:
function convertTo2D(dict3D::JuMP.JuMPDict{JuMP.Variable,3}, k::Int)
    dict2D = JuMP.JuMPDict{JuMP.Variable,2}() # This line returns "ERROR: KeyError: key :model not found"
    for (key, value) in keys(dict3D)
        if key[3] == k
            dict2D[(key[1], key[2])] = value # Not sure if it will work
        end
    end
    return dict2D
end
If this was working I could use:
for k in 1:3
    f(convertTo2D(y, k))
end
Do you know how I could fix convertTo2D or do what I want another way?
Anonymous variables solved my problem. Thanks to them, I can create the variables of y successively in a for loop. Variable y is now an array of "2D dictionaries" rather than a "3D dictionary":
y = Array{JuMP.JuMPDict{JuMP.Variable,2}, 1}([])
for k in 1:3
    yk = @variable(m, [i in 1:3, j in 1:3; i < j] >= 0)
    f(yk)
    push!(y, yk)
end

Julia loops are as slow as R loops

The code below, in Julia and R, is meant to show that the naive estimator of the population variance (dividing by n) is biased: it depends on the sample size, and no matter how many times we average over different observations, for a small number of data points it does not equal the population variance.
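For reference, the bias being demonstrated is
$$\mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2\right] = \frac{n-1}{n}\,\sigma^2,$$
so for each sample size n the averaged estimates should approach (n-1)/n times the true variance rather than the variance itself.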
Julia takes ~10 seconds to finish the two loops, while R does it in ~7 seconds.
If I leave the code inside the loops commented out, then the loops in R and Julia take the same time, and if I only sum the iterators with s = s + i + j, Julia finishes in ~0.15 s and R in ~0.5 s.
Is it that Julia loops are slow or R became fast?
How can I improve the speed of the code below for Julia?
Can the R code become faster?
Julia:
using Plots
trials = 100000
sample_size = 10;
sd = Array{Float64}(trials,sample_size-1)
tic()
for i = 2:sample_size
    for j = 1:trials
        res = randn(i)
        sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res))
    end
end
toc()
sd2 = mean(sd,1)
plot(sd2[1:end])
R:
trials = 100000
sample_size = 10
sd = matrix(, nrow = trials, ncol = sample_size-1)
start_time = Sys.time()
for(i in 2:sample_size){
    for(j in 1:trials){
        res <- rnorm(n = i, mean = 0, sd = 1)
        sd[j,i-1] = (1/(i))*(sum(res*res))-(1/((i)*i))*(sum(res)*sum(res))
    }
}
end_time = Sys.time()
end_time - start_time
sd2 = apply(sd,2,mean)
plot(sqrt(sd2))
The plot, in case anybody is curious!
One way I could achieve much higher speed is to use a parallel loop, which is very easy to implement in Julia:
using Plots
trials = 100000
sample_size = 10;
sd = SharedArray{Float64}(trials,sample_size-1)
tic()
@parallel for i = 2:sample_size
    for j = 1:trials
        res = randn(i)
        sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res))
    end
end
toc()
sd2 = mean(sd,1)
plot(sd2[1:end])
Using global variables in Julia is in general slow, and that is why you get speed comparable to R. You should wrap your code in a function to make it fast.
Here is a timing from my laptop (I cut out only the relevant part):
julia> function test()
           trials = 100000
           sample_size = 10;
           sd = Array{Float64}(trials,sample_size-1)
           tic()
           for i = 2:sample_size
               for j = 1:trials
                   res = randn(i)
                   sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res))
               end
           end
           toc()
       end
test (generic function with 1 method)
julia> test()
elapsed time: 0.243233887 seconds
0.243233887
Additionally, in Julia, if you use randn! instead of randn you can speed it up even more, as you avoid reallocating the res vector (I am not doing other optimizations to the code, as this optimization is specific to Julia in comparison to R; all other possible speedups in this code would help Julia and R in a similar way):
julia> function test2()
           trials = 100000
           sample_size = 10;
           sd = Array{Float64}(trials,sample_size-1)
           tic()
           for i = 2:sample_size
               res = zeros(i)
               for j = 1:trials
                   randn!(res)
                   sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res))
               end
           end
           toc()
       end
test2 (generic function with 1 method)
julia> test2()
elapsed time: 0.154881137 seconds
0.154881137
Finally, it is better to use the BenchmarkTools package to measure execution time in Julia. First, the tic and toc functions will be removed from Julia 0.7. Second, you mix compilation and execution time if you use them (when running the test function twice you will see that the time is reduced on the second run, as Julia does not spend time compiling the functions again).
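For example, a minimal sketch (assuming the BenchmarkTools package is installed and the test/test2 functions above are defined):
using BenchmarkTools

# @btime runs the expression many times and reports the minimum time,
# so the one-off compilation cost of the first call is excluded.
@btime test()
@btime test2()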
EDIT:
You can keep trials, sample_size and sd as global variables but then you should prefix them with const. Then it is enough to wrap a loop in a function like this:
const trials = 100000;
const sample_size = 10;
const sd = Array{Float64}(trials,sample_size-1);
function f()
    for i = 2:sample_size
        for j = 1:trials
            res = randn(i)
            sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res))
        end
    end
end
tic()
f()
toc()
Now for @parallel:
First, you should use @sync before @parallel to make sure all works correctly (i.e. that all workers have finished before you move to the next instruction). To see why this is needed, run the following code on a system with more than one worker:
sd = SharedArray{Float64}(10^6);
@parallel for i = 1:2
    if i < 2
        sd[i] = 1
    else
        for j in 2:10^6
            sd[j] = 1
        end
    end
end
minimum(sd) # most probably prints 0.0
sleep(1)
minimum(sd) # most probably prints 1.0
while this:
sd = SharedArray{Float64}(10^6);
@sync @parallel for i = 1:2
    if i < 2
        sd[i] = 1
    else
        for j in 2:10^6
            sd[j] = 1
        end
    end
end
minimum(sd) # always prints 1.0
Second, the speed improvement is due to the @parallel macro, not SharedArray. If you try your code on Julia with one worker it is also faster. The reason, in short, is that @parallel internally wraps your code inside a function. You can check it by using @macroexpand:
julia> @macroexpand @sync @parallel for i = 2:sample_size
           for j = 1:trials
               res = randn(i)
               sd[j,i-1] = (1/(i))*(sum(res.^2))-(1/((i)*i))*(sum(res)*sum(res))
           end
       end
quote # task.jl, line 301:
(Base.sync_begin)() # task.jl, line 302:
#19#v = (Base.Distributed.pfor)(begin # distributed\macros.jl, line 172:
function (#20#R, #21#lo::Base.Distributed.Int, #22#hi::Base.Distributed.Int) # distributed\macros.jl, line 173:
for i = #20#R[#21#lo:#22#hi] # distributed\macros.jl, line 174:
begin # REPL[22], line 2:
for j = 1:trials # REPL[22], line 3:
res = randn(i) # REPL[22], line 4:
sd[j, i - 1] = (1 / i) * sum(res .^ 2) - (1 / (i * i)) * (sum(res) * sum(res))
end
end
end
end
end, 2:sample_size) # task.jl, line 303:
(Base.sync_end)() # task.jl, line 304:
#19#v
end

Newton's Method in R Precision/Output

So, I'm supposed to write the code to execute Newton's Method to calculate the square root of any arbitrary number to a specified precision (tolerance).
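For reference, the update rule being implemented is Newton's iteration applied to f(g) = g^2 - x:
$$g_{n+1} = g_n - \frac{g_n^2 - x}{2g_n} = \frac{1}{2}\left(g_n + \frac{x}{g_n}\right).$$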
Here is my code:
MySqrt <- function(x, eps = 1e-6, itmax = 100, verbose = TRUE) {
    GUESS <- 11
    myvector <- integer(0)
    i <- 1
    if (x < 0) {
        stop("Square root of negative value")
    }
    else {
        myvector[i] <- GUESS
        while (i <= itmax) {
            GUESS <- (GUESS + (x/GUESS)) * 0.5
            myvector[i+1] <- GUESS
            if (abs(GUESS-myvector[i]) < eps) {
                break()
            }
            if (verbose) {
                cat("Iteration: ", formatC(i, width = 1), formatC(GUESS, digits = 10, width = 12), "\n")
            }
            i <- i + 1
        }
    }
    myvector[i]
}
eps is the tolerance. When I use the function to calculate the square root of, say, 21, I get this as output:
> MySqrt(21, eps = 1e-1, verbose = TRUE)
Iteration: 1 6.454545455
Iteration: 2 4.854033291
Iteration: 3 4.59016621
I'm not sure if the function stops carrying out iterations when it is supposed to, however. Can someone verify if my code is correct? This would be greatly appreciated!
Your code is almost correct. It is iterating the correct number of times. The only bug is that you don't increment i until after the break statement, so you are not returning the most recent approximation. Instead you are returning the previous one.
In order to verify that it is stopping at the right time, you can move the tracing line up above the break. You can also add GUESS-myvector[i] to the trace, so you can watch it halt as soon as the difference gets small enough. If you do this and run the function, the fact that it is stopping at the right time, as well as the fact that it is returning the wrong value, will be obvious:
> MySqrt(21,eps=1e-1)
Iteration: 1 6.454545 -4.545455
Iteration: 2 4.854033 -1.600512
Iteration: 3 4.590166 -0.2638671
Iteration: 4 4.582582 -0.007584239
[1] 4.590166
While your code is (almost) correct, it is not written in very good R style. For example, unless you want to return the entire vector of estimates, there is no reason to keep them all around. Also, rather than using a while loop, it would make more sense here to use a for loop. Here is one possible improved version of your function:
MySqrt <- function(x, eps = 1e-6, itmax = 100, verbose = TRUE) {
    GUESS <- 11
    if (x < 0) {
        stop("Square root of negative value")
    }
    for (i in 1:itmax) {
        nextGUESS <- (GUESS + (x/GUESS)) * 0.5
        if (verbose)
            cat("Iteration: ", i, nextGUESS, nextGUESS-GUESS, "\n")
        if (abs(GUESS-nextGUESS) < eps)
            break
        GUESS <- nextGUESS
    }
    nextGUESS
}
