I am trying to write a skewness function in F# using Knuth's recursive formula, based on the formula for the variance in Jon Harrop's F# for Scientists.
Here is my code (with an auxiliary function):
let skewness_aux (m, m2, m3, k) x =
    let m' = m + (x - m)/k
    let m2' = m2 + ((x - m)*(x - m)*(k - 1.0))/k
    m', m2', m3 + (x - m)*(x - m)*(x - m)*(k - 1.0)*(k - 2.0)/(k*k) - (3.0*(x - m)*m2)/k, k + 1.0;;
let skewness xs =
    let _, m2, m3, n2 = Seq.fold skewness_aux (0.0, 0.0, 0.0, 1.0) xs
    (sqrt(n2) * m3)/(sqrt (m2*m2*m2));;
And finally a little test -
skewness [|2.0; 2.0; 3.0|];;
This should return 1/sqrt(2) ≈ 0.707107, but instead it gives me 0.8164965809.
Any wiser heads than mine got any advice on why it isn't working? The formulas look correct. I am using the Wikipedia page on algorithms for calculating higher moments, as well as Pebay's 2008 paper on the subject, to cross-check.
Many thanks in advance for any and all help.
Your skewness_aux function returns m, m2, m3, and k + 1. Since the fold starts the counter at 1.0 and increments it after every element, n2 ends up as n + 1. Therefore, you need to use sqrt(n2 - 1.0), not sqrt(n2).
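A minimal fix, keeping skewness_aux as-is (an untested sketch):
let skewness xs =
    let _, m2, m3, n2 = Seq.fold skewness_aux (0.0, 0.0, 0.0, 1.0) xs
    let n = n2 - 1.0  // the counter ends one past the element count
    (sqrt n * m3) / (sqrt (m2*m2*m2));;

skewness [|2.0; 2.0; 3.0|];;  // ≈ 0.7071067812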
I am trying to solve the following ODE using DifferentialEquations.jl:
Where P is a matrix used for a projection. I am having a hard time imagining how to solve this problem. Is there a way to directly solve it using Julia? Or should I try and rearrange the equation by hand (which I already tried) to fit the usual differential equation format?
I already started by writing down some equations which can be found below but I am not getting very far.
function ODE(u, p, t)
    g, N = p
    Jacg = ForwardDiff.jacobian(g, u)
    sum = zeros(size(N, 1))
    for i in 1:size(Jacg, 1)
        sum = sum + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
    end
    Proj_N(N) * sum
    nothing
end
prob = ODEProblem(ODE, u0, (0.0, 3.0), (g, N))
sol = solve(prob)
Any help is appreciated and thanks in advance.
If you want to use the out-of-place form, you have to return the derivative, i.e.:
function ODE(u, p, t)
    g, N = p
    Jacg = ForwardDiff.jacobian(g, u)
    sum = zeros(size(N, 1))
    for i in 1:size(Jacg, 1)
        sum = sum + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
    end
    Proj_N(N) * sum
end
I think you were just mixing up the mutating and non-mutating derivative forms.
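If you instead want the mutating (in-place) form, the signature takes du as the first argument and you write the result into it. A minimal sketch under the same assumptions as your snippet (Proj_N, g, N, and u0 defined elsewhere):
function ODE!(du, u, p, t)
    g, N = p
    Jacg = ForwardDiff.jacobian(g, u)
    s = zeros(size(N, 1))
    for i in 1:size(Jacg, 1)
        s = s + Jacg[i, :] .* (u / norm(u)) .* N[:, i]
    end
    du .= Proj_N(N) * s  # write the derivative into du instead of returning it
    nothing
end

prob = ODEProblem(ODE!, u0, (0.0, 3.0), (g, N))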
I'm trying to programmatically emulate my calculator's summation function without the use of loops; the reason is that loops become very expensive once the function gets bloated. So far, I know of the formula n(n+1)/2, but that only works if the function looks like:
from X = 1 to 100, Σ (X), result = 5050.
Without a loop, is there a way to implement a function where:
from X = 1 to 100, Σ (X^2+X)?
EDIT: Note that the formula must account for all possible function bodies.
Thanks for the answers
The sum Σ (X^2+X) is equal to Σ (X^2) + Σ (X). You already know how to calculate Σ (X).
As for Σ (X^2), these are the square pyramidal numbers. You can see a longer explanation here, but the formula is:
n^3/3 + n^2/2 + n/6
Together, that's
n^3/3 + n^2/2 + n/6 + n(n+1)/2
Or
(n^3 + 2n)/3 + n^2
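For what it's worth, here is the closed form as code (a Python sketch; the function name is mine). It runs in constant time regardless of n:
def sum_x_plus_x_squared(n):
    # Closed form for sum_{X=1}^{n} (X^2 + X): (n^3 + 2n)/3 + n^2
    # n^3 + 2n is always divisible by 3, so integer division is exact.
    return (n**3 + 2*n) // 3 + n**2

print(sum_x_plus_x_squared(100))             # 343400
print(sum(x*x + x for x in range(1, 101)))   # 343400 (loop version, for comparison)
Note this approach works because the body is a polynomial: each power of X has its own closed form (Faulhaber's formulas), so any polynomial body can be summed term by term. It does not cover arbitrary function bodies.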
Let's define:
Tower(1) of n is: n.
Tower(2) of n is: n^n (= power(n,n)).
Tower(10) of n is: n^n^n^n^n^n^n^n^n^n.
And also given two functions:
f(n) = [Tower(log n) of n] = n^n^n^n^n^n^....^n (a tower of n's whose height is log n).
g(n) = [Tower(n) of log n] = log(n)^log(n)^....^log(n) (a tower of log(n)'s whose height is n).
Three questions:
How are the functions f(n) and g(n) related to each other asymptotically (as n → infinity),
in terms of Theta, Big O, little o, Big Omega, and little omega?
Please describe the exact method of solution, not only the eventual result.
Does the base of the log (e.g. 0.5, 2, 10, log n, or n) affect the result?
If not, why?
If so, how?
I'd like to know whether any real (even hypothetical) application has complexity that looks like f(n) or g(n) above. Please describe such a case, if one exists.
P.S.
I tried to substitute log n = a, so that n = 2^a or 10^a,
and got confused counting the height of the resulting "towers".
I won't provide a full solution, because you have to work on your homework, but here are some hints that may interest other people too.
1) Mathematics:
log(a^x) = x*log(a)
This will make the towers manageable.
2) Mathematics:
log_x(y) = log_2(y) / log_2(x) = log_10(y) / log_10(x)
Of course, if x is constant, then log_2(x) and log_10(x) are constants.
3) Recursion + a stopping condition.
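To illustrate hints 1 and 3 together (a worked step of my own, not the full solution): since Tower(h) of n = n^(Tower(h-1) of n), taking one logarithm peels exactly one level off the tower:
\log\bigl(\mathrm{Tower}(h)\text{ of } n\bigr) = \bigl(\mathrm{Tower}(h-1)\text{ of } n\bigr)\cdot \log n
Apply this repeatedly to both f(n) and g(n); the stopping condition is a tower of height 1, and comparing what remains gives the asymptotic relationship.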
All,
I've just been starting to play around with the Julia language and am enjoying it quite a bit. At the end of the 3rd tutorial there's an interesting problem: genericize the quadratic formula such that it solves for the roots of any n-order polynomial equation.
This struck me as (a) an interesting programming problem and (b) an interesting Julia problem. Has anyone out there solved this one? For reference, here is the Julia code with a couple toy examples. Again, the idea is to make this generic for any n-order polynomial.
Cheers,
Aaron
function derivative(f)
    return function(x)
        # pick a small value for h
        h = x == 0 ? sqrt(eps(Float64)) : sqrt(eps(Float64)) * x
        # floating point arithmetic gymnastics
        xph = x + h
        dx = xph - x
        # evaluate f at x + h
        f1 = f(xph)
        # evaluate f at x
        f0 = f(x)
        # divide the difference by h
        return (f1 - f0) / dx
    end
end
function quadratic(f)
    f1 = derivative(f)
    # recover the coefficients of a*x^2 + b*x + c from point evaluations
    c = f(0.0)            # constant term
    b = f1(0.0)           # slope at zero
    a = f(1.0) - b - c    # what remains of f(1) = a + b + c
    return (-b + sqrt(b^2 - 4a*c + 0im))/2a, (-b - sqrt(b^2 - 4a*c + 0im))/2a
end
quadratic((x) -> x^2 - x - 2)
quadratic((x) -> x^2 + 2)
The package PolynomialRoots.jl provides the function roots() to find all (real and complex) roots of polynomials of any order. The only mandatory argument is the array with coefficients of the polynomial in ascending order.
For example, in order to find the roots of
6x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1
after loading the package (using PolynomialRoots) you can use
julia> roots([1, 2, 3, 4, 5, 6])
5-element Array{Complex{Float64},1}:
0.294195-0.668367im
-0.670332+2.77556e-17im
0.294195+0.668367im
-0.375695-0.570175im
-0.375695+0.570175im
The package is a Julia implementation of the root-finding algorithm described in this paper: http://arxiv.org/abs/1203.1034
PolynomialRoots.jl also supports arbitrary-precision calculation. This is useful for solving equations that cannot be solved in double precision. For example,
julia> r = roots([94906268.375, -189812534, 94906265.625]);
julia> (r[1], r[2])
(1.0000000144879793 - 0.0im,1.0000000144879788 + 0.0im)
gives the wrong result for the polynomial; passing the input array in arbitrary precision instead forces arbitrary-precision calculations that provide the right answer (see https://en.wikipedia.org/wiki/Loss_of_significance):
julia> r = roots([BigFloat(94906268.375), BigFloat(-189812534), BigFloat(94906265.625)]);
julia> (Float64(r[1]), Float64(r[2]))
(1.0000000289759583,1.0)
There are no algebraic formulae for general polynomials of degree five and above (in fact, there can't be; see here). So, theoretically, you could proceed using the same methodology for the solutions to cubics and quartics, but even that would be a lot of hard work given the very unwieldy formulae for the roots of quartics. You could also use a CAS like SymPy to find those formulae.
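As an illustration of the CAS route (a Python/SymPy sketch; the cubic is my own toy example):
from sympy import symbols, solve

x = symbols('x')
# Radical formulae exist up to quartics; for degree >= 5, solve()
# generally returns symbolic RootOf objects instead.
print(solve(x**3 - 6*x**2 + 11*x - 6, x))  # [1, 2, 3]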
For an ocean shader, I need a fast function that computes a very approximate value for sin(x). The only requirements are that it is periodic, and roughly resembles a sine wave.
The Taylor series of sin is too slow, since I'd need to compute up to the 9th power of x just to get a full period.
Any suggestions?
EDIT: Sorry, I didn't mention: I can't use a lookup table since this is on the vertex shader. A lookup table would involve a texture sample, which on the vertex shader is slower than the built-in sin function.
It doesn't have to be in any way accurate, it just has to look nice.
Use a Chebyshev approximation for as many terms as you need. This is particularly easy if your input angles are constrained to be well behaved (-π .. +π or 0 .. 2π) so you do not have to reduce the argument to a sensible value first. You might use 2 or 3 terms instead of 9.
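Not a Chebyshev fit as such, but in the same low-order spirit, here is a widely used parabola-based approximation, sketched in Python (in a shader this is just two multiply-adds once the angle is reduced to [-pi, pi]):
import math

def fast_sin(x):
    # Parabolic approximation to sin(x), valid for x in [-pi, pi].
    B = 4.0 / math.pi
    C = -4.0 / (math.pi * math.pi)
    return B * x + C * x * abs(x)

print(fast_sin(math.pi / 2), math.sin(math.pi / 2))  # ~1.0 vs 1.0
print(fast_sin(1.0), math.sin(1.0))                  # ~0.868 vs ~0.841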
You can make a look-up table of sin values at some sample points and use linear interpolation between them.
A rational algebraic function approximation to sin(x), valid from zero to π/2 is:
f = (C1 * x) / (C2 * x^2 + 1.)
with the constants:
C1 = 1.043406062
C2 = 0.2508691922
These constants were found by least-squares curve fitting. (Using subroutine DHFTI, by Lawson & Hanson).
If the input is outside [0, 2π], you'll need to take x mod 2π.
To handle negative numbers, you'll need to write something like:
t = MOD(t, twopi)
IF (t < 0.) t = t + twopi
Then, to extend the formula's range from [0, π/2] to the full [0, 2π], fold the input into [0, π/2] using the sine's symmetries:
IF (t < pi) THEN
    IF (t < pi/2) THEN
        x = t
    ELSE
        x = pi - t
    END IF
ELSE
    IF (t < 1.5 * pi) THEN
        x = t - pi
    ELSE
        x = twopi - t
    END IF
END IF
Then calculate:
f = (C1 * x) / (C2 * x*x + 1.0)
IF (t > pi) f = -f
The results should be within about 5% of the real sine.
Well, you don't say how accurate you need it to be. The sine can be approximated by straight lines of slopes 2/pi and -2/pi on intervals [0, pi/2], [pi/2, 3*pi/2], [3*pi/2, 2*pi]. This approximation can be had for the cost of a multiplication and an addition after reducing the angle mod 2*pi.
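A sketch of that piecewise-linear (triangle-wave) idea in Python (the function name is mine; in a shader you would express the same thing branchlessly):
import math

def tri_sin(t):
    # Triangle-wave approximation to sin(t): slope 2/pi up, -2/pi down.
    t = t % (2.0 * math.pi)  # reduce the angle mod 2*pi
    if t < math.pi / 2:
        return (2.0 / math.pi) * t
    elif t < 3.0 * math.pi / 2:
        return 2.0 - (2.0 / math.pi) * t
    else:
        return (2.0 / math.pi) * t - 4.0

print(tri_sin(math.pi / 2))      # 1.0
print(tri_sin(math.pi))          # 0.0
print(tri_sin(3 * math.pi / 2))  # -1.0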
Using a lookup table is probably the best way to control the tradeoff between speed and accuracy.