Julia on Float versus Octave on Float

Version: v"0.5.0-dev+1259"
Context: The goal is to calculate the Rademacher penalty bound on a given number of data points n, with respect to the VC dimension dvc and a probability expressed by delta.
Please consider the Julia code:
#Growth function on any n points with respect to VC-dimension
function mh(n, dvc)
    if n <= dvc
        2^n #A
    else
        n^dvc #B
    end
end
#Rademacher penalty bound
function rademacher_penalty_bound(n::Int, dvc::Int, delta::Float64)
    sqrt((2.0*log(2.0*n*mh(n,dvc)))/n) + sqrt((2.0/n)*log(1.0/delta)) + 1.0/n
end
and the equivalent code in Octave/Matlab:
%Growth function on n points for a given VC dimension (dvc)
function md = mh(n, dvc)
    if n <= dvc
        md = 2^n;
    else
        md = n^dvc;
    end
end
%Rademacher penalty bound
function epsilon = rademacher_penalty_bound (n, dvc, delta)
    epsilon = sqrt((2*log(2*n*mh(n,dvc)))/n) + sqrt((2/n)*log(1/delta)) + 1/n;
end
Problem:
When I start testing it I receive the following results:
Julia first:
julia> rademacher_penalty_bound(50, 50, 0.05) #50 points
1.619360057204432
julia> rademacher_penalty_bound(500, 50, 0.05) #500 points
ERROR: DomainError:
[inlined code] from math.jl:137
in rademacher_penalty_bound at none:2
in eval at ./boot.jl:264
Now Octave:
octave:17> rademacher_penalty_bound(50, 50, 0.05)
ans = 1.6194
octave:18> rademacher_penalty_bound(500, 50, 0.05)
ans = 1.2387
Question: According to Noteworthy differences from MATLAB, I think I followed the rule of thumb ("literal numbers without a decimal point (such as 42) create integers instead of floating point numbers..."). The code crashes when the number of points exceeds 51 (line #B in mh). Can someone with more experience look at the code and say what I should improve/change?

While BigInt and BigFloat will work here, they're serious overkill. The real issue is that you're doing integer exponentiation in Julia and floating-point exponentiation in Octave/Matlab. So you just need to change mh to use floats instead of integers for exponents:
mh(n, dvc) = n <= dvc ? 2^float(n) : n^float(dvc)
rademacher_penalty_bound(n, dvc, δ) =
    √((2log(2n*mh(n,dvc)))/n) + √(2log(1/δ)/n) + 1/n
With these definitions, you get the same results as Octave/Matlab:
julia> rademacher_penalty_bound(50, 50, 0.05)
1.619360057204432
julia> rademacher_penalty_bound(500, 50, 0.05)
1.2386545010981596
In Octave/Matlab, even when you enter a literal without a decimal point, you still get a float – you would have to cast explicitly to get an integer type. Exponentiation in Octave/Matlab likewise always converts to float first. In Julia, n^dvc with integer arguments is computed entirely in integer arithmetic (like repeated multiplication), so there is no conversion to floating point and the result silently overflows once it exceeds typemax(Int64).
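To see concretely why mh(500, 50) fails: 500^50 needs roughly 449 bits, far beyond a 64-bit Int, and Julia's integer exponentiation wraps around instead of erroring (a minimal illustration, not from the original question):
# Int64 exponentiation wraps on overflow; Float64 exponentiation does not.
2^62        # => 4611686018427387904 (still fits in Int64)
2^63        # => -9223372036854775808 (wrapped around to typemin(Int64))
500.0^50    # ≈ 8.88e134, well within Float64 range
The wrapped (negative) value is what later makes log throw the DomainError.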

Although BigInt and BigFloat are excellent tools when they are necessary, they should usually be avoided, since they are overkill and slow.
In this case, the problem is indeed the difference between Octave, which treats everything as a floating-point number, and Julia, which treats e.g. 2 as an integer.
So the first thing to do is to use floating-point numbers in Julia too:
function mh(n, dvc)
    if n <= dvc
        2.0 ^ n
    else
        Float64(n) ^ dvc
    end
end
This already helps, e.g. mh(50, 50) works.
However, the correct solution for this problem is to look at the code more carefully, and realise that the function mh only occurs inside a log:
log(2.0*n*mh(n,dvc))
We can use the laws of logarithms to rewrite this as
log(2.0*n) + log_mh(n, dvc)
where log_mh is a new function, which returns the logarithm of the result of mh. Of course, this should not be implemented directly as log(mh(n, dvc)) (which would still overflow); it is rather a new function:
function log_mh(n, dvc)
    if n <= dvc
        n * log(2.0)
    else
        dvc * log(n)
    end
end
In this way, you will be able to use huge numbers without overflow.
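Putting it together, the whole bound can be computed in log space. A sketch of the corresponding rademacher_penalty_bound (my rewrite, using the log_mh above):
#Rademacher penalty bound, computed via log_mh to avoid huge intermediates
function rademacher_penalty_bound(n::Int, dvc::Int, delta::Float64)
    sqrt((2.0*(log(2.0*n) + log_mh(n, dvc)))/n) + sqrt((2.0/n)*log(1.0/delta)) + 1.0/n
end
rademacher_penalty_bound(500, 50, 0.05)  # no overflow for any n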

I don't know whether a BigFloat result is acceptable, but in the Julia part you can use BigInt:
#Growth function on any n points with respect to VC-dimension
function mh(n, dvc)
    if n <= dvc
        (BigInt(2))^n #A
    else
        n^dvc #B
    end
end
#Rademacher penalty bound
function rademacher_penalty_bound(n::BigInt, dvc::BigInt, delta::Float64)
    sqrt((2.0*log(2.0*n*mh(n,dvc)))/n) + sqrt((2.0/n)*log(1.0/delta)) + 1.0/n
end
rademacher_penalty_bound(BigInt(500), BigInt(500), 0.05)
# => 1.30055251010957621105182244420.....

By default a Julia Int is a "machine-size" integer, a 64-bit integer on the common x86-64 platform, whereas Octave uses floating point, so in Julia mh(500,50) overflows. You can fix it by replacing mh() as follows:
function mh(n, dvc)
    n2 = BigInt(n) # Or n2 = Float64(n)
    if n <= dvc
        2^n2 #A
    else
        n2^dvc #B
    end
end
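With either replacement the intermediate value no longer wraps; for instance (a quick check, assuming the BigInt variant):
mh(500, 50)           # exact BigInt value of 500^50
ndigits(mh(500, 50))  # => 135 decimal digits, far beyond what Int64 can hold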

Related

Reversing exponents in lua

I have an algorithm to calculate the EXP required to reach a level, but I don't know how to get the level back from a given EXP.
local function NextLevelXP(level)
    return math.floor(1000 * (level ^ 2.0))
end
print(NextLevelXP(7)) -- output: 49000
Now I want to compute the level from the EXP, with something like:
local function MagicFunctionThatWillLetMeKnowTheLevel(exp)
    return --[[math magics here]]
end
print(MagicFunctionThatWillLetMeKnowTheLevel(49000)) --output: 7
print(MagicFunctionThatWillLetMeKnowTheLevel(48999)) --output: 6
I've tried some weird algorithms, but with no success.
This can be simplified:
local function NextLevelXP(level)
    return math.floor(1000 * (level ^ 2.0))
end
The math.floor is not necessary if level will always be an integer.
To reverse the formula you can do:
local function CurrentLevelFromXP(exp)
    return math.floor((exp / 1000) ^ (1/2))
end
Here it is necessary to floor the value, as you will get values between levels (like 7.1, 7.5, 7.8). To reverse the multiplication by 1000 we divide by 1000; to reverse the exponent we use its inverse, so in this case 2 becomes 1/2 (i.e. 0.5).
Also, as a special case, for ^2 you can simply math.sqrt the value.
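The question's code is Lua, but the forward/inverse pair is easy to sanity-check in any language; a minimal Julia sketch of the same two functions (hypothetical names):
next_level_xp(level) = floor(Int, 1000 * level^2)   # forward: level -> EXP
current_level(xp)    = floor(Int, sqrt(xp / 1000))  # inverse: EXP -> level
next_level_xp(7)      # => 49000
current_level(49000)  # => 7
current_level(48999)  # => 6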

BigInt calculations on the GPU in Julia

I need to perform calculations on random batches of very large integers. I have a function that compares the numbers for certain properties and returns a value based on those properties. Since the batches and the numbers themselves can be very large, I want to speed up the process by utilizing the GPU.
Here is a short version of what I have running purely on the CPU now.
using Statistics
function check(M)
    val = 0
    #some code that calculates val based on M, e.g. the mean
    val = mean(M)
    return val
end
function distribution(N, n, exp) # N=batchsize, n=# of batches, exp=exponent of the upper limit of the integers
    avg = 0
    M = zeros(BigInt, N)
    for i = 1 : n
        M = rand(1 : BigInt(10) ^ exp, N)
        avg += check(M)
    end
    avg /= n
    println(avg, ":", N)
end
#example
distribution(10 ^ 3, 10 ^ 6, 100)
I have briefly used CUDAnative in Julia, but I don't know how to implement the BigInt calculations. That package would be preferred, but others are fine as well. Any help is appreciated.
BigInts are CPU-only, since they are not implemented in Julia itself (BigInt wraps the GMP C library), so they cannot be compiled for the GPU.
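If exactness can be sacrificed, one workaround is to do the batch arithmetic in Float64 on the GPU instead. A hedged sketch using CUDA.jl (assumes a working CUDA setup; this approximates, rather than reproduces, the BigInt sampling):
using CUDA, Statistics
# Stand-in for rand(1:BigInt(10)^100, N): uniform Float64 magnitudes up to 1e100.
M = CUDA.rand(Float64, 10^3) .* 1e100
val = mean(M)  # the reduction runs on the device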

periodic boundary conditions - finite differences

I have code below that solves nonlinear coupled PDEs. However, I need to implement periodic boundary conditions, and these are troubling me: what should I add to my code to enforce them? Updated based on the modular-arithmetic suggestions below.
Note that t >= 0 and x is in the interval [0,1]. Here are the coupled equations (given as an image in the original post), where a, b > 0; below them I provide my code.
The initial conditions are also given as an image, but now I need to impose periodic boundary conditions. These can be written mathematically as u(0,t) = u(1,t) and du(0,t)/dx = du(1,t)/dx, and the same holds for f(x,t). The du/dx in the boundary conditions are really meant to be partial derivatives.
My code is below
program coupledPDE
    integer, parameter :: n = 10, A = 20
    real, parameter :: h = 0.1, k = 0.05
    real, dimension(0:n-1) :: u,v,w,f,g,d
    integer :: i,m,ip,il
    real :: t, R, x,c1,c2,c3,c4,aa,b
    real :: z

    R = (k/h)**2.
    aa = 2.0
    b = 1.0
    c1 = (2.+aa*k**2.-2.0*R)/(1+k/2.)
    c2 = R/(1.+k/2.)
    c3 = (1.0-k/2.)/(1.0+k/2.)
    c4 = b*k**2./(1+k/2.)

    do i = 0,n-1 !loop over all space points
        x = real(i)*h
        w(i) = z(x) !u(x,0)
        d(i) = z(x) !f(x,0)
    end do

    do i = 0,n-1
        ip = mod(i+1,n)
        il = modulo(i-1,n)
        v(i) = (c1/(1.+c3))*w(i) + (c2/(1.+c3))*(w(ip)+w(il)) - (c4/(1.+c3))*w(i)*((w(i))**2.+(d(i))**2.) !\partial_t u(x,0)=0
        g(i) = (c1/(1.+c3))*d(i) + (c2/(1.+c3))*(d(ip)+d(il)) - (c4/(1.+c3))*d(i)*((w(i))**2.+(d(i))**2.) !\partial_t f(x,0)=0
    end do

    do m = 1,A
        do i = 0,n-1
            ip = mod(i+1,n)
            il = modulo(i-1,n)
            u(i) = c1*v(i)+c2*(v(ip)+v(il))-c3*w(i)-c4*v(i)*((v(i))**2.+(g(i))**2.)
            f(i) = c1*g(i)+c2*(g(ip)+g(il))-c3*d(i)-c4*g(i)*((v(i))**2.+(g(i))**2.)
        end do
        print*, "the values of u(x,t+k) for all m=",m
        print "(//3x,i5,//(3(3x,e22.14)))",m,u
        do i = 0,n-1
            w(i) = v(i)
            v(i) = u(i)
            d(i) = g(i)
            t = real(m)*k
            x = real(i)*h
        end do
    end do
end program coupledPDE

function z(x)
    real, intent(in) :: x
    real :: pi
    pi = 4.0*atan(1.0)
    z = sin(pi*x)
end function z
Thanks for reading; if I should reformat my question in a more proper way, please let me know.
One option for boundary conditions in a PDE discretization is to use ghost (halo) cells (grid points). It may not be the cleverest one for periodic BCs, but it can be used for all other boundary condition types too.
So you declare your arrays as
real, dimension(-1:n) :: u,v,w,f,g,d
but you solve your PDE only at points 0..n-1 (point n is identical to point 0). You could also go from 1..n and declare the arrays from 0..n+1.
Then you set
u(-1) = u(n-1)
and
u(n) = u(0)
and the same for all other arrays.
At each time-step you set this again for u and f or all other fields that are modified during the solution:
do m=1,A
    u(-1) = u(n-1)
    u(n) = u(0)
    f(-1) = f(n-1)
    f(n) = f(0)
    do i=0,n-1 !Discretization equation for all times after the 1st step
        u(i) = ...
        f(i) = ...
    end do
end do
All of the above assumes explicit temporal discretization and finite-difference spatial discretization, and it assumes that x(0) = 0 and x(n) = 1 are your boundary points.
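The answer's code is Fortran like the question, but the ghost-cell pattern itself is language-independent; a minimal Julia sketch of the same update cycle (the stencil line is a placeholder, not the question's equations):
function step_periodic!(u, n)
    u[1]     = u[n + 1]  # left ghost  <- last physical point
    u[n + 2] = u[2]      # right ghost <- first physical point
    unew = copy(u)
    for i in 2:n+1
        unew[i] = u[i] + 0.5 * (u[i-1] - 2u[i] + u[i+1])  # placeholder stencil
    end
    u .= unew
end
n = 10
u = [sin(pi * i / n) for i in -1:n]  # physical points plus the two ghosts
for m in 1:20
    step_periodic!(u, n)
end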

Strange result in gaussian elimination with scilab

My function for Gaussian elimination (without partial pivoting) in Scilab is returning a strange result for operations like
0.083333 - 1.000000*0.083333 = -0.000000 (minus zero, which I really don't understand)
And when I access this result in the matrix, the number shown is -1.388D-17. Does someone have an idea why this happens? Below is my code for Gaussian elimination; A is the augmented matrix (A | b).
function [r] = gaussian_elimination(A)
    //Get a tuple representing matrix dimension
    [row, col] = size(A)
    if ( (row ~= 1) & (col ~= 2) ) then
        for k = 1:row
            disp(A)
            if A(k, k) ~= 0 then
                for i = k+1:row
                    m = real(A(i, k)/A(k, k))
                    for j = 1:col
                        a = A(k, j)
                        new_element = A(i, j) - m*a
                        printf("New Element A(%d, %d) = %f - %f*%f = %f\n", i, j, A(i,j), m, a, new_element)
                        A(i,j) = 0
                        A(i,j) = new_element
                    end
                end
            else
                A = "Inconsistent system"
                break
            end
        end
    else
        A = A(1,1)
    end
    r = A
endfunction
The strangest thing is that for some matrices this does not happen.
This is a rounding error. See "What Every Computer Scientist Should Know About Floating-Point Arithmetic" for more background information. In short: since the numbers you are working with are not exactly representable in base 2, but are stored in base 2, the stored values are sometimes slightly off. Figure out the significant digits and round off the results.
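The same representation error shows up in any IEEE-754 language, not just Scilab; a one-line Julia illustration:
0.1 + 0.2 - 0.3  # => 5.551115123125783e-17, not 0.0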
Take for instance the example from here:
// fround(x,n)
// Round the floating point numbers x to n decimal places
// x may be a vector or matrix
// n is the integer number of places to round to
function [y] = fround(x,n)
    y = round(x*10^n)/10^n;
endfunction
-->fround(%pi,5)
ans = 3.14159
Beware: n is the number of decimals, not the number of numerical digits.
If this is only a rounding error, you can clean up your results with the clean function, which rounds the small entries of a matrix to zero. With this function you can also set the cleaning tolerance in absolute or relative magnitude.
clean(r); //it will round to zero the very small elements e.g. 1.388D-17
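For readers following along in another language, the equivalent of clean is a one-liner; a hedged Julia sketch (hypothetical helper name):
# Round to zero every entry smaller in magnitude than tol, like Scilab's clean().
clean_small(A; tol = 1e-10) = map(x -> abs(x) < tol ? zero(x) : x, A)
clean_small([1.0 -1.388e-17; 2.0 3.0])  # => [1.0 0.0; 2.0 3.0]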

Fast, inaccurate sin function without lookup

For an ocean shader, I need a fast function that computes a very approximate value for sin(x). The only requirements are that it is periodic, and roughly resembles a sine wave.
The Taylor series of sin is too slow, since I'd need to compute up to the 9th power of x just to get a full period.
Any suggestions?
EDIT: Sorry, I didn't mention: I can't use a lookup table, since this is in the vertex shader. A lookup table would involve a texture sample, which in the vertex shader is slower than the built-in sin function.
It doesn't have to be in any way accurate, it just has to look nice.
Use a Chebyshev approximation with as many terms as you need. This is particularly easy if your input angles are constrained to be well behaved (-π .. +π or 0 .. 2π), so you do not have to reduce the argument to a sensible value first. You might use 2 or 3 terms instead of 9.
You can make a look-up table with sin values for some inputs and use linear interpolation between those values.
A rational algebraic function approximation to sin(x), valid from zero to π/2 is:
f = (C1 * x) / (C2 * x^2 + 1.)
with the constants:
c1 = 1.043406062
c2 = .2508691922
These constants were found by least-squares curve fitting. (Using subroutine DHFTI, by Lawson & Hanson).
If the input is outside [0, 2π], you'll need to take x mod 2π.
To handle negative numbers, you'll need to write something like:
t = MOD(t, twopi)
IF (t < 0.) t = t + twopi
Then, to extend the valid input range to the full 0 to 2π, fold the argument into [0, π/2] with something like:
IF (t < pi) THEN
    IF (t < pi/2) THEN
        x = t
    ELSE
        x = pi - t
    END IF
ELSE
    IF (t < 1.5 * pi) THEN
        x = t - pi
    ELSE
        x = twopi - t
    END IF
END IF
Then calculate:
f = (C1 * x) / (C2 * x*x + 1.0)
IF (t > pi) f = -f
The results should be within about 5% of the real sine.
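Putting the constants and the folding logic above together (a direct transcription into Julia; the snippets above are Fortran-flavored pseudocode):
const C1 = 1.043406062
const C2 = 0.2508691922
function fast_sin(t)
    t = mod(t, 2pi)  # in Julia, mod with a positive divisor is never negative,
                     # so the separate negative-input fix-up above is not needed
    x = t < pi  ? (t < pi/2  ? t : pi - t) :
                  (t < 1.5pi ? t - pi : 2pi - t)  # fold into [0, pi/2]
    f = (C1 * x) / (C2 * x^2 + 1.0)
    t > pi ? -f : f  # restore the sign for the second half-period
end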
Well, you don't say how accurate you need it to be. The sine can be approximated by straight lines of slopes 2/π and -2/π on the intervals [0, π/2], [π/2, 3π/2], and [3π/2, 2π]. This approximation can be had for the cost of a multiplication and an addition after reducing the angle mod 2π.
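For reference, the piecewise-linear version in code (a minimal Julia sketch of the description above):
# Triangle-wave sine: slope 2/pi on [0, pi/2] and [3pi/2, 2pi], -2/pi in between.
function tri_sin(x)
    t = mod(x, 2pi)  # one mod, then one multiply and one add per branch
    t < pi/2  ? (2/pi) * t     :
    t < 3pi/2 ? 2 - (2/pi) * t :
                (2/pi) * t - 4
end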
Using a lookup table is probably the best way to control the tradeoff between speed and accuracy.
