Efficient computation of bivariate empirical cdf in R/Fortran

Given an n*2 data matrix X I'd like to calculate the bivariate empirical cdf for each observation, i.e. for each i in 1:n, return the percentage of observations with 1st element not greater than X[i,1] and 2nd element not greater than X[i,2].
Because of the nested search involved it gets terribly slow for n ~ 100k, even after porting it to Fortran. Does anyone know if there's a better way of handling sample sizes like this?
Edit: I believe this problem is similar (in terms of complexity) to finding Kendall's tau, which is of order O(n^2). In that case Knight (1966) has an algorithm to reduce it to O(n log(n)). Just wondering if there's any O(n*log(n)) algorithm for finding bivariate ecdf already out there.
Edit 2: This is the code I have in Fortran, as requested. This is called in R in the usual way, so the R code is omitted here. The code is meant for arbitrary dimensions, but for the specific thing I'm doing a bivariate one is good enough.
! Calculates the multivariate empirical cdf for each point
! n: number of observations
! d: dimension (>=2)
! umat: n-by-d data matrix
! outvec: vector of ecdf values
subroutine mecdf(n,d,umat,outvec)
  implicit none
  integer :: n, d, i, j, k, tempsum
  double precision, dimension(n) :: outvec
  double precision, dimension(n,d) :: umat
  logical :: flag
  do i = 1,n
    tempsum = 0
    do j = 1,n
      ! flag stays .true. iff observation j is componentwise <= observation i
      flag = .true.
      do k = 1,d
        if (umat(i,k) < umat(j,k)) then
          flag = .false.
          exit
        end if
      end do
      if (flag) then
        tempsum = tempsum + 1
      end if
    end do
    outvec(i) = dble(tempsum)/dble(n)  ! dble() keeps the division in double precision
  end do
  return
end subroutine
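For reference, the R side is the usual .Fortran glue; a minimal sketch, assuming the subroutine is compiled with R CMD SHLIB mecdf.f90 (the file and wrapper names here are illustrative):

dyn.load("mecdf.so")  # "mecdf.dll" on Windows
mecdf.r <- function(X) {
  .Fortran("mecdf",
           n = as.integer(nrow(X)),
           d = as.integer(ncol(X)),
           umat = as.double(X),      # flattens column-major, matching the Fortran layout
           outvec = double(nrow(X)))$outvec
}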

I think my first effort was not really an ecdf, although it did map the points to the interval [0,1]. The example, a 50 x 2 matrix, was generated with:
#M <- matrix(runif(100), ncol=2)
M <-
structure(c(0.0468267474789172, 0.296053855214268, 0.205678076483309,
0.467400068417192, 0.968577065737918, 0.435642971657217, 0.929023026255891,
0.038406387437135, 0.304360694251955, 0.964778139721602, 0.534192910650745,
0.741682186257094, 0.0848641532938927, 0.405901980120689, 0.957696850644425,
0.384813814423978, 0.639882878866047, 0.231505588628352, 0.271994129288942,
0.786155494628474, 0.349499785574153, 0.279077709652483, 0.206662984099239,
0.777465222170576, 0.705439242534339, 0.643429880728945, 0.887209519045427,
0.0794123203959316, 0.849177583120763, 0.704594585578889, 0.736909110797569,
0.503158083418384, 0.49449566937983, 0.408533290959895, 0.236613316927105,
0.297427259152755, 0.0677345870062709, 0.623845702270046, 0.139933609170839,
0.740499466424808, 0.628097783308476, 0.678438259987161, 0.186680511338636,
0.339367639739066, 0.373212536331266, 0.976724133593962, 0.94558056560345,
0.610417427960783, 0.887977657606825, 0.663434249348938, 0.447939050383866,
0.755168803501874, 0.478974275058135, 0.737040047068149, 0.429466919740662,
0.0021107573993504, 0.697435079608113, 0.444197302218527, 0.108997165458277,
0.856855363817886, 0.891898229718208, 0.93553287582472, 0.991948011796921,
0.630414301762357, 0.0604106825776398, 0.908968194155023, 0.0398679254576564,
0.251426834380254, 0.235532913124189, 0.392070295521989, 0.530511683085933,
0.319339724024758, 0.534880011575297, 0.92030712752603, 0.138276003766805,
0.213625695323572, 0.407931711757556, 0.605797187192366, 0.424798395251855,
0.471233424032107, 0.0105366336647421, 0.625802840106189, 0.524665891425684,
0.0375960320234299, 0.54812005511485, 0.0105806747451425, 0.438266788609326,
0.791981092421338, 0.363821814302355, 0.157931488472968, 0.47945317090489,
0.906797411618754, 0.762243523262441, 0.258681379957125, 0.308056800393388,
0.91944490163587, 0.412255838746205, 0.347220918396488, 0.68236422073096,
0.559149842709303), .Dim = c(50L, 2L))
So the task is to do a single summation of a two-part logical test on N items, which I suspect is O(3N) per point (hence O(N^2) over all N points). It might be marginally faster if implemented in Rcpp, but these are vectorized operations.
# Wrong: ecdf2d <- function(m,i,j) { ord <- rank(m[ , 1]^2+m[ , 2]^2)
#   ord[i]/nrow(m) }  # scales to [0,1] interval but is not an ecdf
ecdf2d.v2 <- function(obj, x, y) sum(obj[,1] <= x & obj[,2] <= y)/nrow(obj)  # "not greater than" => non-strict <=
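As for the asker's follow-up about an O(n log(n)) algorithm: the same device that Knight (1966) uses for Kendall's tau works here. Sort the points by the first column, sweep through them in that order, and maintain a Fenwick (binary indexed) tree of counts over the dense ranks of the second column, so that "how many points seen so far have second coordinate not greater than mine" becomes an O(log n) prefix query. A minimal sketch in plain R (written for this answer, not taken from a package; bitwAnd requires R >= 3.0.0):

biv_ecdf <- function(X) {
  n <- nrow(X)
  r2 <- match(X[, 2], sort(unique(X[, 2])))   # dense ranks of the 2nd column
  tree <- integer(n)                          # Fenwick tree of counts over those ranks
  add <- function(i) while (i <= n) { tree[i] <<- tree[i] + 1L; i <- i + bitwAnd(i, -i) }
  qry <- function(i) { s <- 0L; while (i > 0) { s <- s + tree[i]; i <- i - bitwAnd(i, -i) }; s }
  out <- numeric(n)
  ord <- order(X[, 1])                        # sweep points in order of the 1st column
  i <- 1
  while (i <= n) {
    j <- i                                    # find the block of ties in the 1st column
    while (j < n && X[ord[j + 1], 1] == X[ord[i], 1]) j <- j + 1
    for (k in i:j) add(r2[ord[k]])            # insert the whole tie block first,
    for (k in i:j) out[ord[k]] <- qry(r2[ord[k]]) / n   # then query each member
    i <- j + 1
  }
  out
}

Each tie block is inserted before it is queried, so every point counts itself and any duplicates; on the matrix M above, biv_ecdf(M) should agree with ecdf2d.v2 applied row by row. Interpreted R closures carry a large constant factor, so for n ~ 100k the same sweep would be worth porting to Fortran or Rcpp like the code above.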

Related

Puzzling result from boundary condition code in Julia BVP solver

I am trying to solve a boundary value problem in Julia, following the example found here, using the BoundaryValueDiffEq package. In the boundary condition function, the example requires a for loop to update each index individually, à la
function bc1!(residual, u, p, t)
    for i in 1:n
        residual[i] = u[end][i] - 10
    end
end
I would like to use the following code, which should be more efficient:
function bc1!(residual, u, p, t)
    residual = u[end] .- 10
end
Though the resulting value of residual is the same for both versions of the code, the solver gives the correct result in the first case and an incorrect result in the second case.
All I can think of is that there is some difference between updating residual
index by index and assigning a new vector to it, even if the result is identical in value and in type. Why is this the case, and is it possible to make the code more efficient while preserving the correct result?
Here is the full code in case it helps.
using BoundaryValueDiffEq, Plots
n = 3
f(t) = .1
F(t) = .1*t
function du!(du,u,p,t)
    fn(i) = 1/(u[i]-t)
    for i in 1:n
        du[i] = 1/(n-1)*F(u[i])/f(u[i])*((2-n)/(u[i]-t)+sum(map(fn, vcat(1:i-1,i+1:n))))
    end
end
function bc1!(residual, u, p, t)
    #residual = u[end] .- 10
    for i in 1:n
        residual[i] = u[end][i]-10
    end
end
# exact solution
xvals = LinRange(0,20/3,200)
yvals = 1.5*xvals
# solving BVP
tspan = (0.0,20/3)
bvp1 = BVProblem(du!, bc1!, 10*ones(Int8,n), tspan)
sol1 = solve(bvp1, GeneralMIRK4(), dt=.2)
# plotting computed solution vs actual solution
plot(sol1,vars=(0,1))
plot!(xvals,yvals,label="Exact solution")
You rebound the name residual to a new array instead of mutating the existing one. You need to use .= to update it in place.
function bc1!(residual, u, p, t)
    residual .= u[end] .- 10
end
or safer:
function bc1!(residual, u, p, t)
    @. residual = u[end] - 10
end
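The underlying distinction, on a toy example (illustrative sketch):

function rebind!(v)
    v = [0, 0, 0]   # rebinds the local name `v`; the caller's array is untouched
end

function mutate!(v)
    v .= 0          # writes into the caller's array in place
end

x = [1, 2, 3]
rebind!(x); @show x   # x = [1, 2, 3]
mutate!(x); @show x   # x = [0, 0, 0]

The solver keeps a reference to the residual array it passed in, so only the mutating version is visible to it.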

How to use NLopt in Julia with equality_constraint

I'm struggling to amend the Julia-specific tutorial on NLopt to meet my needs and would be grateful if someone could explain what I'm doing wrong or failing to understand.
I wish to:
Minimise the value of some objective function myfunc(x); where
x must lie in the unit hypercube (just 2 dimensions in the example below); and
the sum of the elements of x must be one.
Below I make myfunc very simple - the square of the distance from x to [2.0, 0.0] so that the obvious correct solution to the problem is x = [1.0,0.0] for which myfunc(x) = 1.0. I have also added println statements so that I can see what the solver is doing.
testNLopt = function()
    origin = [2.0,0.0]
    n = length(origin)
    # Returns square of the distance between x and "origin", and amends grad in-place
    myfunc = function(x::Vector{Float64}, grad::Vector{Float64})
        if length(grad) > 0
            grad = 2 .* (x .- origin)
        end
        xOut = sum((x .- origin).^2)
        println("myfunc: x = $x; myfunc(x) = $xOut; ∂myfunc/∂x = $grad")
        return(xOut)
    end
    # Constrain the sums of the x's to be 1...
    sumconstraint = function(x::Vector{Float64}, grad::Vector{Float64})
        if length(grad) > 0
            grad = ones(length(x))
        end
        xOut = sum(x) - 1
        println("sumconstraint: x = $x; constraint = $xOut; ∂constraint/∂x = $grad")
        return(xOut)
    end
    opt = Opt(:LD_SLSQP,n)
    lower_bounds!(opt, zeros(n))
    upper_bounds!(opt, ones(n))
    equality_constraint!(opt, sumconstraint, 0)
    #xtol_rel!(opt,1e-4)
    xtol_abs!(opt, 1e-8)
    min_objective!(opt, myfunc)
    maxeval!(opt, 20)  # to ensure code always terminates; remove this line when code working correctly?
    optimize(opt, ones(n)./n)
end
I have read this similar question and documentation here and here, but still can't figure out what's wrong. Worryingly, each time I run testNLopt I see different behaviour, as in this screenshot including occasions when the solver uselessly evaluates myfunc([NaN,NaN]) many times.
You aren't actually writing to the grad parameter in place, despite what your comments say;
grad = 2 .* (x .- origin)
just rebinds the local variable, not the array contents -- and I guess that's why you see these df/dx = [NaN, NaN] everywhere. The simplest way to fix that would be with broadcasting assignment (note the dot):
grad .= 2 .* (x .- origin)
and so on. You can read about that behaviour here and here.
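Put together, the two callbacks with in-place gradient updates would look like this (a sketch of the fix applied to the question's code, not independently tested):

myfunc = function(x::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        grad .= 2 .* (x .- origin)   # in place: note the dot in .=
    end
    return sum((x .- origin).^2)
end

sumconstraint = function(x::Vector{Float64}, grad::Vector{Float64})
    if length(grad) > 0
        grad .= 1.0                  # the gradient of sum(x) is a vector of ones
    end
    return sum(x) - 1
end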

Julia: swap gives errors

I'm using Julia 0.3.4
I'm trying to write LU-decomposition using Gaussian elimination. So I have to swap rows. And here's my problem:
If I'm using a,b = b,a I get an error,
but if I'm using:
function swapRows(row1, row2)
    temp = row1
    row1 = row2
    row2 = temp
end
then everything works just fine.
Am I doing something wrong, or is it a bug?
Here's my source code:
function lu_t(A::Matrix)
    # input value: (A), where A is a matrix
    # return value: (L,U), where L,U are matrices
    function swapRows(row1, row2)
        temp = row1
        row1 = row2
        row2 = temp
        return null
    end
    if size(A)[1] != size(A)[2]
        throw(DimException())
    end
    n = size(A)[1]                       # matrix dimension
    U = copy(A)                          # upper triangular matrix
    L = eye(n)                           # lower triangular matrix
    for k = 1:n-1                        # direct Gaussian elimination for each column `k`
        (val,id) = findmax(U[k:end,k])   # find max pivot element and its row `id`
        if val == 0                      # check matrix for singularity
            throw(SingularException())
        end
        swapRows(U[k,k:end],U[id,k:end]) # swap rows `k` and `id`
        # U[k,k:end],U[id,k:end] = U[id,k:end],U[k,k:end] - error
        for i = k+1:n                    # for each row `i` > `k`
            μ = U[i,k] / U[k,k]          # find elimination coefficient `μ`
            L[i,k] = μ                   # save to the appropriate position in lower triangular matrix `L`
            for j = k:n                  # update each value of the row `i`
                U[i,j] = U[i,j] - μ*U[k,j]
            end
        end
    end
    return (L,U)
end
###### main code ######
A = rand(4,4)
@time (L,U) = lu_t(A)
@test_approx_eq(L*U, A)
The swapRows function is a no-op and has no effect whatsoever – all it does is swap around some local variable names. See various discussions of the difference between assignment and mutation:
https://groups.google.com/d/msg/julia-users/oSW5hH8vxAo/llAHRvvFVhMJ
http://julia.readthedocs.org/en/latest/manual/faq/#i-passed-an-argument-x-to-a-function-modified-it-inside-that-function-but-on-the-outside-the-variable-x-is-still-unchanged-why
http://julia.readthedocs.org/en/latest/manual/faq/#why-does-x-y-allocate-memory-when-x-and-y-are-arrays
The constant null doesn't mean what you think it does – in Julia v0.3 it's a function that computes the null space of a linear transformation; in Julia v0.4 it still means this but has been deprecated and renamed to nullspace. The "uninteresting" value in Julia is called nothing.
I'm not sure what's wrong with your commented out row swapping code, but this general approach does work:
julia> X = rand(3,4)
3x4 Array{Float64,2}:
0.149066 0.706264 0.983477 0.203822
0.478816 0.0901912 0.810107 0.675179
0.73195 0.756805 0.345936 0.821917
julia> X[1,:], X[2,:] = X[2,:], X[1,:]
(
1x4 Array{Float64,2}:
0.478816 0.0901912 0.810107 0.675179,
1x4 Array{Float64,2}:
0.149066 0.706264 0.983477 0.203822)
julia> X
3x4 Array{Float64,2}:
0.478816 0.0901912 0.810107 0.675179
0.149066 0.706264 0.983477 0.203822
0.73195 0.756805 0.345936 0.821917
Since this creates a pair of temporary arrays that we can't yet eliminate the allocation of, this isn't the most efficient approach. If you want the most efficient code here, looping over the two rows and swapping pairs of scalar values will be faster:
function swapRows!(X, i, j)
    for k = 1:size(X,2)
        X[i,k], X[j,k] = X[j,k], X[i,k]
    end
end
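For example (illustrative):

X = rand(3, 4)
swapRows!(X, 1, 3)   # rows 1 and 3 are exchanged in place, with no temporaries allocated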
Note that it is conventional in Julia to name functions that mutate one or more of their arguments with a trailing !. Currently, closures (i.e. inner functions) have some performance issues, so you'll want such a helper function to be defined at the top-level scope instead of inside of another function the way you've got it.
Finally, I assume this is an exercise since Julia ships with carefully tuned generic (i.e. it works for arbitrary numeric types) LU decomposition: http://docs.julialang.org/en/release-0.3/stdlib/linalg/#Base.lu.
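For reference, in v0.3 that is a one-liner (sketch; see the linked docs):

L, U, p = lu(A)   # pivoted LU such that A[p,:] == L*U (up to floating point)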
-
It's quite simple
julia> A = rand(3,4)
3×4 Array{Float64,2}:
0.241426 0.283391 0.201864 0.116797
0.457109 0.138233 0.346372 0.458742
0.0940065 0.358259 0.260923 0.578814
julia> A[[1,2],:] = A[[2,1],:]
2×4 Array{Float64,2}:
0.457109 0.138233 0.346372 0.458742
0.241426 0.283391 0.201864 0.116797
julia> A
3×4 Array{Float64,2}:
0.457109 0.138233 0.346372 0.458742
0.241426 0.283391 0.201864 0.116797
0.0940065 0.358259 0.260923 0.578814

(in R) Why is the result of ksvm using a user-defined linear kernel different from that of ksvm using "vanilladot"?

I wanted to use a user-defined kernel function for ksvm in R, so as practice I tried to write a vanilladot kernel and compare it with the "vanilladot" kernel built into kernlab.
I wrote my kernel as follows.
#
###vanilla kernel with class "kernel"
#
kfunction.k <- function(){
  k <- function (x,y){crossprod(x,y)}
  class(k) <- "kernel"
  k
}
l<-0.1 ; C<-1/(2*l)
###use kfunction.k
tmp<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel=kfunction.k(), C = C)
alpha(tmp)[[1]]
ind<-alphaindex(tmp)[[1]]
x.s<-x[ind,] ; y.s<-y[ind]
w.class.k<-t(alpha(tmp)[[1]]*y.s)%*%x.s
w.class.k
I thought the result of this operation would be equal to that of the following.
However, it isn't.
#
###use "vanilladot"
#
l<-0.1 ; C<-1/(2*l)
tmp1<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel="vanilladot", C = C)
alpha(tmp1)[[1]]
ind1<-alphaindex(tmp1)[[1]]
x.s<-x[ind1,] ; y.s<-y[ind1]
w.tmp1<-t(alpha(tmp1)[[1]]*y.s)%*%x.s
w.tmp1
I think this problem may be related to the kernel class.
The problem occurs when the class is set to "kernel".
However, when the class is set to "vanillakernel", the result of ksvm using the user-defined kernel is equal to that of ksvm using the "vanilladot" built into kernlab.
#
###vanilla kernel with class "vanillakernel"
#
kfunction.v.k <- function(){
  k <- function (x,y){crossprod(x,y)}
  class(k) <- "vanillakernel"
  k
}
# The only difference between kfunction.k and kfunction.v.k is "class(k)".
l<-0.1 ; C<-1/(2*l)
###use kfunction.v.k
tmp<-ksvm(x,factor(y),scaled=FALSE, type = "C-svc", kernel=kfunction.v.k(), C = C)
alpha(tmp)[[1]]
ind<-alphaindex(tmp)[[1]]
x.s<-x[ind,] ; y.s<-y[ind]
w.class.v.k<-t(alpha(tmp)[[1]]*y.s)%*%x.s
w.class.v.k
I don't understand why the result differs from "vanilladot" when the class is set to "kernel".
Is there an error in my operation?
First, it seems like a really good question!
Now to the point. In the sources of ksvm we can find where the line is drawn between using a user-defined kernel and the built-ins:
if (type(ret) == "spoc-svc") {
if (!is.null(class.weights))
weightedC <- class.weights[weightlabels] * rep(C,
nclass(ret))
else weightedC <- rep(C, nclass(ret))
yd <- sort(y, method = "quick", index.return = TRUE)
xd <- matrix(x[yd$ix, ], nrow = dim(x)[1])
count <- 0
if (ktype == 4)
K <- kernelMatrix(kernel, x)
resv <- .Call("tron_optim", as.double(t(xd)), as.integer(nrow(xd)),
as.integer(ncol(xd)), as.double(rep(yd$x - 1,
2)), as.double(K), as.integer(if (sparse) xd@ia else 0),
as.integer(if (sparse) xd@ja else 0), as.integer(sparse),
as.integer(nclass(ret)), as.integer(count), as.integer(ktype),
as.integer(7), as.double(C), as.double(epsilon),
as.double(sigma), as.integer(degree), as.double(offset),
as.double(C), as.double(2), as.integer(0), as.double(0),
as.integer(0), as.double(weightedC), as.double(cache),
as.double(tol), as.integer(10), as.integer(shrinking),
PACKAGE = "kernlab")
reind <- sort(yd$ix, method = "quick", index.return = TRUE)$ix
alpha(ret) <- t(matrix(resv[-(nclass(ret) * nrow(xd) +
1)], nclass(ret)))[reind, , drop = FALSE]
coef(ret) <- lapply(1:nclass(ret), function(x) alpha(ret)[,
x][alpha(ret)[, x] != 0])
names(coef(ret)) <- lev(ret)
alphaindex(ret) <- lapply(sort(unique(y)), function(x)
which(alpha(ret)[,
x] != 0))
xmatrix(ret) <- x
obj(ret) <- resv[(nclass(ret) * nrow(xd) + 1)]
names(alphaindex(ret)) <- lev(ret)
svindex <- which(rowSums(alpha(ret) != 0) != 0)
b(ret) <- 0
param(ret)$C <- C
}
The important part is twofold. If we provide ksvm with our own kernel, then ktype = 4 (while for vanillakernel, ktype = 0), which makes two changes:
in the case of a user-defined kernel, the kernel matrix is computed up front instead of the kernel being invoked on demand
the tron_optim routine is run with this information about the kernel
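You can check which path will be taken simply by inspecting the S4 class of the kernel object (an illustrative snippet, reusing kfunction.k from the question):

library(kernlab)
k.user  <- kfunction.k()   # class "kernel"        -> user-defined path (ktype = 4)
k.built <- vanilladot()    # class "vanillakernel" -> built-in linear path (ktype = 0)
class(k.user); class(k.built)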
Now, in svm.cpp we can find the tron routines, and in tron_run (called from tron_optim) the LINEAR kernel has a separate optimization routine:
if (param->kernel_type == LINEAR)
{
    /* lots of code here */
    while (Cpj < Cp)
    {
        totaliter += s.Solve(l, prob->x, minus_ones, y, alpha, w,
                             Cpj, Cnj, param->eps, sii, param->shrinking,
                             param->qpsize);
        /* lots of code here */
    }
    totaliter += s.Solve(l, prob->x, minus_ones, y, alpha, w, Cp, Cn,
                         param->eps, sii, param->shrinking, param->qpsize);
    delete[] w;
}
else
{
    Solver_B s;
    s.Solve(l, BSVC_Q(*prob,*param,y), minus_ones, y, alpha, Cp, Cn,
            param->eps, sii, param->shrinking, param->qpsize);
}
As you can see, the linear case is treated in a more complex, more detailed way: there is an inner optimization loop calling the solver many times. It would require a really deep analysis of the actual optimization performed there, but at this point one can answer your question as follows:
There is no error in your operation.
kernlab's SVM has a separate routine for training an SVM with a linear kernel, chosen based on the type of the kernel object passed in; changing the class from "kernel" to "vanillakernel" made ksvm think it was actually working with the built-in vanillakernel, so it took this separate optimization path.
This does not in fact appear to be a bug: the linear SVM really is very different from the kernelized version in terms of efficient optimization techniques. The number of heuristics and numerical issues that have to be handled is large, so some approximations are required, and these can lead to different results. While for a rich feature space (like the one induced by an RBF kernel) it should not really matter, for simple kernels like the linear one these simplifications can lead to significant changes in the output.

Why do I get this error? - MATLAB

I have the image and the vector
a = imread('Lena.tiff');
v = [0,2,5,8,10,12,15,20,25];
and this M-file
function y = Funks(I, gama, c)
[m n] = size(I);
for i=1:m
    for j=1:n
        J(i, j) = (I(i, j) ^ gama) * c;
    end
end
y = J;
imshow(y);
when I'm trying to do this:
f = Funks(a,v,2)
I am getting this error:
??? Error using ==> mpower
Integers can only be combined with integers of the same class, or scalar doubles.
Error in ==> Funks at 5
J(i, j) = (I(i, j) ^ gama) * c;
Can anybody help me, with this please?
The error is caused because you're trying to raise a number to a vector power. Translated (i.e. replacing formal arguments with actual arguments in the function call), it would be something like:
J(i, j) = (a(i, j) ^ [0,2,5,8,10,12,15,20,25]) * 2
Element-wise power .^ won't work either, because you'd be trying to stuff a vector into a scalar container.
Later edit: If you want to apply each gamma to your image, maybe this loop is more intuitive (though not the most efficient):
a = imread('Lena.tiff');       % Pics or GTFO
v = [0,2,5,8,10,12,15,20,25];  % Gamma (ar)ray -- this will burn any picture
f = cell(1, numel(v));         % Prepare container for your results
for k=1:numel(v)
    f{k} = Funks(a, v(k), 2);  % Save result from your function
end
% (Afterwards you use cell array f for further processing)
Or you may take a look at the other (more efficient if maybe not clearer) solutions posted here.
Later(er?) edit: If your tiff file is CMYK, then the result of imread is an MxNx4 color array, which must be handled differently than usual (because it is 3-dimensional).
There are two ways I would follow:
1) arrayfun
results = arrayfun(@(i) I(:).^gama(i)*c, 1:numel(gama), 'UniformOutput', false);
J = cellfun(@(x) reshape(x, size(I)), results, 'UniformOutput', false);
2) bsxfun
results = bsxfun(@power, double(I(:)), gama)*c;   % cast needed: integer .^ non-scalar double raises the same error
results = num2cell(results, 1);
J = cellfun(@(x) reshape(x, size(I)), results, 'UniformOutput', false);
What you're trying to do makes no sense mathematically. You're trying to assign a vector to a number. Your problem is not the MATLAB programming, it's in the definition of what you're trying to do.
If you're trying to produce several images J, each of which corresponds to a certain gamma applied to the image, you should do it as follows:
function J = Funks(I, gama, c)
[m n] = size(I);
% get the number of images to produce
k = length(gama);
% Pre-allocate the output
J = zeros(m,n,k);
for i=1:m
    for j=1:n
        % cast to double: an integer pixel can't be raised to a vector of doubles
        J(i, j, :) = (double(I(i, j)) .^ gama) * c;
    end
end
In the end you will get images J(:,:,1), J(:,:,2), etc.
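Usage with the data from the question would then be (an untested sketch):

a = imread('Lena.tiff');
v = [0,2,5,8,10,12,15,20,25];
J = Funks(a, v, 2);
imshow(J(:,:,3), []);   % view the image for gamma = 5, rescaled for display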
If this is not what you want to do, then figure out your equations first.
