Optimization of values in a matrix - r

I typically use Rsolnp for optimization but I am having trouble figuring out how to ask R to find values to fill a matrix (instead of a vector). Is that possible with Rsolnp or any other optimizer?
Here is a simplified example that is not working:
library(Rsolnp)

a <- matrix(rnorm(9), ncol = 3)
b <- matrix(rnorm(9), ncol = 3)

f1 <- function(H) {
  return(sum(H * a))
}
f2 <- function(H) {
  return(sum(H * b))
}

lH <- matrix(rep(0, 9), ncol = 3)
uH <- matrix(rep(1, 9), ncol = 3)
pars <- uH
target <- 1.2

sol <- gosolnp(pars, fixed = NULL, fun = f1, eqfun = f2, eqB = target,
               LB = lH, UB = uH, distr = uH,
               n.restarts = 10, n.sim = 20000, cluster = NULL)
As you can see from the output, Rsolnp seems to be confused by the request:
> sol
$values
[1] 1e+10
$convergence
[1] 0
$pars
[1] NA NA NA NA NA NA NA NA NA
$start.pars
[1] 0.90042133 0.33262541 0.94586530 0.02083822 0.99953060 0.10720068 0.14302770 0.67162637 0.25463806
$rseed
[1] 1487866229

It seems that gosolnp() does not work with matrices. I went through the function in debugging mode, and there is a call to solnp() that fails with the message:
Error in pb/cbind(vscale[(neq + 2):(neq + mm + 1)], vscale[(neq + 2):(neq + :
non-conformable arrays
But since a matrix is just a vector with the dimension attribute set, you can always reformulate your problem in terms of vectors. In your case this is very easy, because you never do anything that actually requires a matrix (such as a matrix product). Simply omitting matrix() everywhere works fine.
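For the simplified example above, that vector-only reformulation could look like the following sketch (my illustration, not code from the original answer; it keeps the same gosolnp() arguments but drops every matrix() call):

library(Rsolnp)

# plain vectors instead of 3x3 matrices
a <- rnorm(9)
b <- rnorm(9)
f1 <- function(H) sum(H * a)
f2 <- function(H) sum(H * b)

sol <- gosolnp(pars = rep(1, 9), fun = f1, eqfun = f2, eqB = 1.2,
               LB = rep(0, 9), UB = rep(1, 9), distr = rep(1, 9),
               n.restarts = 10, n.sim = 20000)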
But I assume that this is just a property of your simplified problem and your actual problem indeed needs to be expressed in terms of matrices. You could get around the problem by converting your vectors into matrices only inside the functions f1() and f2() as follows:
f1 <- function(H) {
  return(sum(matrix(H, ncol = 3) * a))
}
f2 <- function(H) {
  return(sum(matrix(H, ncol = 3) * b))
}
You can then define a and b as matrices as before, but lH and uH must be vectors:
a <- matrix(rnorm(9), ncol=3)
b <- matrix(rnorm(9), ncol=3)
lH <- rep(0, 9)
uH <- rep(1, 9)
pars <- uH
target <- 1.2
And now you can call gosolnp():
sol <- gosolnp(pars, fixed = NULL, fun = f1, eqfun = f2,
               eqB = target, LB = lH, UB = uH, distr = uH,
               n.restarts = 10, n.sim = 20000, cluster = NULL)
sol$pars
## [1] 3.917819e-08 9.999997e-01 4.748336e-07 1.000000e+00 5.255060e-09 5.114680e-10
## [7] 4.899963e-01 1.000000e+00 9.260947e-08

Related

Matrix calculations within an R function

I am trying to code a function which identifies which row of an n x m matrix M is closest to a vector y of length m.
What am I doing wrong in my code, please? I am aiming for the function to produce a vector of length n giving the distance between each row of the matrix and the vector y, and then to output the row number of the matrix that is the closest point to the vector.
closest.point <- function(M, y) {
  p <- length(y)
  k <- nrow(M)
  T <- matrix(nrow=k)
  T <- for(i in 1:n)
    for(j in 1:m) {
      (X[i,j] - x[j])^2 + (X[i,j] - x[j])^2
    }
  W <- rowSums(T)
  max(W)
  df[which.max(W),]
}
Even though there is already a better approach to the problem (avoiding for loops when dealing with matrices), I would like to give you a solution based on your for-loop approach.
There were some mistakes in your function: some variables, such as n, m and X, are never defined.
Also, try to avoid naming variables T, because R interprets T as TRUE. It works, but it can lead to errors if T is later used as TRUE elsewhere in the code.
When looping, you need to index the variable you are updating, writing T.matrix[i, j] and not just T.matrix, as the latter overwrites T.matrix at every iteration.
closest.point <- function(M, y) {
  k <- nrow(M)
  m <- ncol(M)
  T.matrix <- matrix(nrow = k, ncol = m)
  for (i in 1:k) {
    for (j in 1:m) {
      T.matrix[i, j] <- (M[i,j] - y[j])^2 + (M[i,j] - y[j])^2
    }
  }
  W <- rowSums(T.matrix)
  return(which.min(W))
}
# example 1
closest.point(M = rbind(c(1, 1, 1),
                        c(1, 2, 5)),
              y = cbind(c(1, 2, 5)))
# [1] 2

# example 2
closest.point(M = rbind(c(1, 1, 1, 1),
                        c(1, 2, 5, 7)),
              y = cbind(c(2, 2, 6, 2)))
# [1] 2
You should try to avoid using for loops for operations on vectors and matrices. The base function dist() calculates distances, and which.min() then gives you the index of the minimal distance.
set.seed(0)
M <- matrix(rnorm(100), ncol = 5)
y <- rnorm(5)

closest_point <- function(M, y) {
  dist_mat <- as.matrix(dist(rbind(M, y)))
  all_distances <- dist_mat[1:nrow(M), ncol(dist_mat)]
  which.min(all_distances)
}
closest_point(M, y)
#>
#> 14
Created on 2021-12-10 by the reprex package (v2.0.1)
Hope this makes sense, let me know if you have questions.
There are a number of problems here:
p is defined but never used.
Although not wrong, T does not really have to be a matrix. It would be sufficient for it to be a vector.
Although not wrong, using T as a variable is dangerous because T also means TRUE.
The code defines T and then immediately throws it away in the next statement, overwriting it. The prior statement defining T is never used.
A for loop always has the value NULL, so assigning it to T is pointless.
The double for loop doesn't do anything. There are no assignments in it, so the loops have no effect.
The loops refer to m, n, X and x, but these are nowhere defined.
(X[i,j] - x[j])^2 is repeated. It is only needed once.
Writing max(W) on a line by itself has no effect. It causes printing only when done directly in the console; inside a function it does nothing. If you meant to print it, write print(max(W)).
We want the closest point, not the farthest point, so max should be min.
df is used in the last line but is not defined anywhere.
The question is incomplete without a test run.
I have tried to make the minimum changes to make this work:
closest.point <- function(M, y) {
  nr <- nrow(M)
  nc <- ncol(M)
  W <- numeric(nr)  # vector of nr zeros
  for(i in 1:nr) {
    for(j in 1:nc) {
      W[i] <- W[i] + (M[i,j] - y[j])^2
    }
  }
  print(W)
  print(min(W))
  M[which.min(W),]
}
set.seed(123)
M <- matrix(rnorm(12), 4); M
## [,1] [,2] [,3]
## [1,] -0.56047565 0.1292877 -0.6868529
## [2,] -0.23017749 1.7150650 -0.4456620
## [3,] 1.55870831 0.4609162 1.2240818
## [4,] 0.07050839 -1.2650612 0.3598138
y <- rnorm(3); y
## [1] 0.4007715 0.1106827 -0.5558411
closest.point(M, y)
## [1] 0.9415062 2.9842785 4.6316069 2.8401691 <--- W
## [1] 0.9415062 <--- min(W)
## [1] -0.5604756 0.1292877 -0.6868529 <-- closest row
That said, the calculation of the closest row can be done with a one-line function body. We transpose M and subtract y from it, which subtracts y from each column; since the columns of the transpose are the rows of M, this subtracts y from each row. Then take the column sums of the squared differences, find which one is least, and subscript M with that index.
closest.point2 <- function(M, y) {
  M[which.min(colSums((t(M) - y)^2)), ]
}
closest.point2(M, y)
## [1] -0.5604756 0.1292877 -0.6868529 <-- closest row

Strange behavior in `matrix` in R

I'm getting strange behavior where matrix dimensions are not working as expected. Here is a toy example:
n <- 10
delt <- 0.00001
s <- n/delt + 1
print(s)
## [1] 1000001
x <- matrix(0, nrow = s, ncol = 2)
dim(x)
## [1] 1000000       2
However, if I type
x <- matrix(0, nrow = 1000001, ncol = 2)
dim(x)
I get what I expect: 1000001 2.
This is why:
print(s, digits = 20L)  ## s is slightly under 1000001 after all
## [1] 1000000.9999999998836
as.integer(s)           ## truncates to 1000000
## [1] 1000000
The documentation on matrix() doesn't explicitly say it, but the nrow and ncol arguments are internally coerced to integer.
Also see the question "Why are these numbers not equal?".
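A simple guard (my own suggestion, not part of the original answer) is to round the computed size before using it as a dimension, so the tiny floating-point error cannot change the truncation:

n <- 10
delt <- 0.00001
s <- round(n/delt) + 1   # round() repairs the floating-point error before coercion
x <- matrix(0, nrow = s, ncol = 2)
dim(x)
## [1] 1000001       2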

R Cran ldei: error in svd(V2, nu = 0, nv = unsolvable) : a dimension is zero

I am quite new to R. I would like to solve a linear inverse model with constraints. I am using the ldei() function in the limSolve package.
Here are my linear system and constraints:
A x X = C
G x X >= H, with G = I and H = 0 (basically: X >= 0)
where:
A is a 2x2 matrix;
X (nrow=2, ncol=n) is the unknown vector (matrix);
C (nrow=2, ncol=n) is the known constant vector (matrix).
The matrix G is the identity matrix and H is a vector (matrix) of zeros.
The constraint X >= 0 needs to be satisfied, as X represents concentrations of nitrogen dioxide and ozone in outdoor air, which cannot be negative.
Here is my R code; I assume n = 10:
library(limSolve)

A <- matrix(data = NA, nrow = 2, ncol = 2)
A[1, c(1:2)] <- c(-3.956946e-05, -1.558643e-05)
A[2, c(1:2)] <- c(-8.785099e-05,  1.540414e-04)

ctmp1 <- c(-3.000286e-04, -0.0003545647, -0.0002958569, -0.000356863, -0.0003602479,
           -0.0004177914, -0.0004280350, -0.0003890670, -0.0004984785, -0.0005695379)
ctmp2 <- c(-6.462205e-05, -0.0007740174, -0.0006427914, -0.001056369, -0.0009569179,
           -0.0008562010, -0.0005402486, -0.0005043381, -0.0006366220, -0.0009332219)
inC <- rbind(ctmp1, ctmp2)
C <- matrix(data = inC, nrow = 2, ncol = 10)

G <- matrix(ncol = 2, nrow = 2, data = c(1, 0, 0, 1))
inH <- rbind(rep(0, length.out = 10), rep(0, length.out = 10))
H <- matrix(data = inH, nrow = 2, ncol = 10)

# I am aware I need to use the apply family instead of do loops -- this is my work in progress.
for (i in 1:10) {
  print(i)
  E <- A
  FF <- c(C[1, i], C[2, i])
  GG <- G
  HH <- H[, i]
  res <- ldei(E, FF, GG, HH)
  print(res$X)
}
### Here is the output:
[1] 1
[1] 6.326385 3.188475
[1] 2
[1] 8.93305028 0.06985077
[1] 3
[1] 7.44753875 0.07454004
[1] 4
Error in svd(V2, nu = 0, nv = unsolvable) : a dimension is zero
I would be very grateful for any hint to solve the issue:
"Error in svd(V2, nu = 0, nv = unsolvable) : a dimension is zero"
Many thanks for your help!
Firstly, welcome to R!
If you want to see the code for a function, you can just type the name of the function without parentheses and R shows the code. So, for svd, we can see that it gives the error you see when there aren't enough dimensions... which means as.matrix(x) hasn't produced an object with dimensions, which means its input x must not be quite what was expected.
You can edit these functions too... for debugging purposes. I normally just paste into a text editor, edit and paste back into R.
svd <- function (x, nu = min(n, p), nv = min(n, p), LINPACK = FALSE)
{
  x <- as.matrix(x)
  if (any(!is.finite(x)))
    stop("infinite or missing values in 'x'")
  dx <- dim(x)
  n <- dx[1L]
  p <- dx[2L]
  if (!n || !p) {
    print("x")
    print(x)
    print("dx")
    print(dx)
    print("dx1L")
    print(dx[1L])
    print("dx2L")
    print(dx[2L])
    stop("a dimension is zero")
  }
  La.res <- La.svd(x, nu, nv)
  res <- list(d = La.res$d)
  if (nu)
    res$u <- La.res$u
  if (nv) {
    if (is.complex(x))
      res$v <- Conj(t(La.res$vt))
    else res$v <- t(La.res$vt)
  }
  res
}
I've made it print out the values of interest...
[1] "x"
[1,]
[2,]
[1] "dx"
[1] 2 0
[1] "dx1L"
[1] 2
[1] "dx2L"
[1] 0
As suspected, the input has no data...
I pulled the same trick in ldei to get the values of E and V2 (which are the x given to svd). I won't put that code here as it's a bit much.
The upshot of it is that rnd in
V2 <- V2 %*% rnd
is width 0 because
unsolvable <- Nx - solvable
...
rnd <- matrix(data = runif(Nx * unsolvable), nrow = Nx,
ncol = unsolvable)
and both Nx and solvable are equal to 2!
The reason it even reaches that part of the code is that not all of CC are bigger than -tol... in this case:
[1] "CC"
[,1]
[1,] 9.570040
[2,] -1.399828
[1] "-tol"
[1] -1.490116e-08
Ultimately, the difference in CC is because of the value of F.
So my guess is that the case where FF = c(C[1,4],C[2,4]) produces an error that is not handled by the function.
Sorry I can't do better!
Following the tips Jimi provided, which I am really grateful for, I contacted the maintainer directly. Below are her suggestions for "skipping" the error; they worked for my case.
"The error means that the system is not solvable with ldei, and the function does not catch the error. What you can do is:
for (i in 1:10) {
  print(i)
  E <- A
  FF <- c(C[1,i], C[2,i])
  GG <- G
  HH <- H[,i]
  res <- try(ldei(E, FF, GG, HH))
  if (!class(res) %in% "try-error")
    print(res$X)
}
Therefore, when class(res) %in% "try-error" is TRUE, the system is not solvable (likely at the requested precision)."
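Since the question mentions wanting to move from the for loop to the apply family, the same guarded call could also be written with lapply(). This is only a sketch of the idea (my addition, reusing the objects A, C, G and H defined in the question):

# Run ldei() column by column, keeping NULL for the columns that are not solvable
results <- lapply(seq_len(ncol(C)), function(i) {
  res <- try(ldei(E = A, F = C[, i], G = G, H = H[, i]), silent = TRUE)
  if (inherits(res, "try-error")) NULL else res$X
})

results is then a list of length 10 holding either the solution vector or NULL for each column of C.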

Functional programming in R: illustration with the Vandermonde matrix

I would like to get a feel for functional programming in R.
To that end, I would like to write the Vandermonde matrix computation, as it can involve a few constructs.
In imperative style that would be:
vandermonde.direct <- function (alpha, n)
{
  if (!is.vector(alpha)) stop("argument alpha is not a vector")
  if (!is.numeric(alpha)) stop("argument alpha is not numeric")
  m <- length(alpha)
  V <- matrix(0, nrow = m, ncol = n)
  V[, 1] <- rep(1, m)
  j <- 2
  while (j <= n) {
    V[, j] <- alpha^(j - 1)
    j <- j + 1
  }
  return(V)
}
How would you write that elegantly in R, in functional style? The following does not work:
x10 <- runif(10)
n <- 3
Reduce(cbind,
       aaply(seq_len(n-1), 1, function (i) { function (x) { x**i } }),
       matrix(1, length(x10), 1))
It gives me Error: Results must have one or more dimensions. for the list of functions that map each i in seq_len(3-1) to the function x -> x**i.
It does not seem very natural to use Reduce for this task.
The error message is caused by aaply, which tries to return an array:
You can use alply instead; you also need to actually call your functions somewhere, as sketched below.
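A sketch of what that could look like (my own illustration of the hint above, assuming plyr is loaded):

library(plyr)

x10 <- runif(10)
n <- 3

# Build the list of power functions with alply(), then call each one on x10
# and cbind the results onto an initial column of ones.
fns <- alply(seq_len(n - 1), 1, function(i) function(x) x^i)
V <- Reduce(cbind, lapply(fns, function(f) f(x10)), matrix(1, length(x10), 1))
dim(V)
## [1] 10  3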
Here are a few idiomatic alternatives:
outer( x10, 0:n, `^` )
t(sapply( x10, function(u) u^(0:n) ))
sapply( 0:3, function(k) x10^k )
Here it is with Reduce:
m <- as.data.frame(Reduce(f=function(left, right) left * x10,
x=1:(n-1), init=rep(1,length(x10)), accumulate=TRUE))
names(m) <- 1:n - 1
Here's another option that uses the environment features of R:
vdm <- function(a)
{
function(i, j) a[i]^(j-1)
}
This will work for arbitrary n (the number of columns).
To create the "Vandermonde functional" for a given a, use this:
v <- vdm(a=c(10,100))
To build a matrix all at once, use this:
> outer(1:3, 1:4, v)
     [,1] [,2]  [,3]  [,4]
[1,]    1   10   100 1e+03
[2,]    1  100 10000 1e+06
[3,]    1   NA    NA    NA
Note that index a[3] is out of bounds and gives NA, so the third row is NA (except for the first column, where NA^0 evaluates to 1).

how to calculate the Euclidean norm of a vector in R?

I tried norm, but I think it gives the wrong result: the norm of c(1, 2, 3) is sqrt(1*1+2*2+3*3), but it returns 6.
x1 <- 1:3
norm(x1)
# Error in norm(x1) : 'A' must be a numeric matrix
norm(as.matrix(x1))
# [1] 6
as.matrix(x1)
# [,1]
# [1,] 1
# [2,] 2
# [3,] 3
norm(as.matrix(x1))
# [1] 6
Does anyone know what's the function to calculate the norm of a vector in R?
norm(c(1,1), type="2") # 1.414214
norm(c(1, 1, 1), type="2") # 1.732051
This is a trivial function to write yourself:
norm_vec <- function(x) sqrt(sum(x^2))
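For the vector from the question, a quick check (my addition):

norm_vec(1:3)
## [1] 3.741657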
I was surprised that nobody had tried profiling the suggested methods, so I did that. I used a random uniform function to generate a list and used that for repetition (just a simple back-of-the-envelope benchmark):
> uut <- lapply(1:100000, function(x) {runif(1000, min=-10^10, max=10^10)})
> norm_vec <- function(x) sqrt(sum(x^2))
> norm_vec2 <- function(x){sqrt(crossprod(x))}
>
> system.time(lapply(uut, norm_vec))
user system elapsed
0.58 0.00 0.58
> system.time(lapply(uut, norm_vec2))
user system elapsed
0.35 0.00 0.34
> system.time(lapply(uut, norm, type="2"))
user system elapsed
6.75 0.00 6.78
> system.time(lapply(lapply(uut, as.matrix), norm))
user system elapsed
2.70 0.00 2.73
It seems that squaring and taking sqrt manually is faster than the built-in norm, at least for real-valued vectors. This is probably because norm internally does an SVD:
> norm
function (x, type = c("O", "I", "F", "M", "2"))
{
if (identical("2", type)) {
svd(x, nu = 0L, nv = 0L)$d[1L]
}
else .Internal(La_dlange(x, type))
}
and the SVD function internally converts the vector into a matrix, and does more complicated stuff:
> svd
function (x, nu = min(n, p), nv = min(n, p), LINPACK = FALSE)
{
x <- as.matrix(x)
...
EDIT (20 Oct 2019):
There have been some comments pointing out a correctness issue which the above test case doesn't bring out:
> norm_vec(c(10^155))
[1] Inf
> norm(c(10^155), type="2")
[1] 1e+155
This happens because large numbers are considered as infinity in R:
> 10^309
[1] Inf
So, it looks like: squaring and taking sqrt manually is faster than the built-in norm for real-valued vectors, but only for small enough numbers. How small? Small enough that the sum of squares doesn't overflow.
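To put a rough number on "small enough" (my addition): a component whose square exceeds the largest representable double overflows to Inf, so trouble starts around sqrt(.Machine$double.xmax).

sqrt(.Machine$double.xmax)   # largest magnitude whose square is still finite
## [1] 1.340781e+154
(1.4e154)^2                  # beyond that, squaring overflows
## [1] Inf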
norm(x, type = c("O", "I", "F", "M", "2"))
The default is "O".
"O", "o" or "1" specifies the one norm, (maximum absolute column sum);
"F" or "f" specifies the Frobenius norm (the Euclidean norm of x treated as if it were a vector);
norm(as.matrix(x1),"o")
The result is 6, same as norm(as.matrix(x1))
norm(as.matrix(x1),"f")
The result is sqrt(1*1+2*2+3*3)
So, norm(as.matrix(x1), "f") is the answer.
We can also find the norm as:
Result <- sum(abs(x)^2)^(1/2)
or equivalently:
Result <- sqrt(t(x) %*% x)
Both give the same value (the second returns it as a 1x1 matrix).
I'mma throw this out there too as an equivalent R expression
norm_vec <- function(x) sqrt(crossprod(x))
Don't confuse R's crossprod with the similarly named vector/cross product. That naming is known to cause confusion, especially for those with a physics/mechanics background.
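One small usage note (my addition): crossprod(x) returns a 1x1 matrix, so the result above is a 1x1 matrix as well; wrap it in drop() if you want a plain scalar.

x <- c(1, 2, 3)
sqrt(crossprod(x))        # 1x1 matrix containing 3.741657
drop(sqrt(crossprod(x)))  # plain scalar 3.741657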
An answer for the Euclidean length of a vector (k-norm) with scaling to avoid destructive underflow and overflow is
norm <- function(x, k) { max(abs(x))*(sum((abs(x)/max(abs(x)))^k))^(1/k) }
See below for explanation.
1. Euclidean length of a vector with no scaling:
norm() is a vector-valued function which computes the length of the vector(s). It takes two arguments: the vector x of class matrix (one vector per column) and the type of norm k of class integer.
norm <- function(x, k) {
  # x = matrix with column vector(s), with dimensions m x 1 or m x n
  # k = type of norm, an integer from 1 to +Inf
  stopifnot(k >= 1)          # check that k is at least 1
  stopifnot(length(k) == 1)  # check that k has length 1; k is not vectorized
  if(k == Inf) {
    # infinity norm
    return(apply(x, 2, function(vec) max(abs(vec))))
  } else {
    # k-norm
    return(apply(x, 2, function(vec) (sum((abs(vec))^k))^(1/k)))
  }
}
x <- matrix(c(1,-2,3,-4)) # column matrix
sapply(c(1:4, Inf), function(k) norm(x = x, k = k))
# [1] 10.000000 5.477226 4.641589 4.337613 4.000000
The 1-norm (10.0) decreases towards the infinity-norm (4.0) as k grows.
The 2-norm is what is usually called the "Euclidean norm" in Euclidean n-dimensional space.
Note:
In the norm() function definition, for vectors with real components, the absolute values can be dropped whenever the exponent k is even (the 2-norm, 4-norm, and so on).
If you are confused with the norm function definition, you can read each one individually as given below.
norm_1   <- function(x) sum(abs(x))
norm_2   <- function(x) (sum((abs(x))^2))^(1/2)
norm_3   <- function(x) (sum((abs(x))^3))^(1/3)
norm_4   <- function(x) (sum((abs(x))^4))^(1/4)
norm_k   <- function(x, k) (sum((abs(x))^k))^(1/k)
norm_inf <- function(x) max(abs(x))
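A quick check of the note above about dropping abs() when the exponent is even (my addition):

x <- c(1, -2, 3, -4)
(sum(x^2))^(1/2)       # abs() dropped
## [1] 5.477226
(sum(abs(x)^2))^(1/2)  # with abs()
## [1] 5.477226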
2. Euclidean length of a vector with scaling to avoid destructive overflow and underflow issues:
Note-2:
The only problem with this norm() is that it does not guard against overflow or underflow, as alluded to above.
Fortunately, this problem has already been solved for the 2-norm (Euclidean length) in the BLAS (Basic Linear Algebra Subprograms) Fortran library. A description of the problem can be found in the textbook "Numerical Methods and Software" by Kahaner, Moler and Nash, Chapter 1, Section 1.3, pages 7-9.
The Fortran subroutine is dnrm2.f, which handles destructive overflow and underflow in the norm by scaling with the maximum of the vector components. The problem arises from the power and root operations in the norm() function.
I will show how to implement dnrm2.f in R below.
#1. find the maximum among components of vector-x
max_x <- max(x)
#2. scale or divide the components of vector by max_x
scaled_x <- x/max_x
#3. take square of the scaled vector-x
sq_scaled_x <- (scaled_x)^2
#4. sum the square of scaled vector-x
sum_sq_scaled_x <- sum(sq_scaled_x)
#5. take square root of sum_sq_scaled_x
rt_sum_sq_scaled_x <- sqrt(sum_sq_scaled_x)
#6. multiply the maximum of vector x with rt_sum_sq_scaled_x
max_x*rt_sum_sq_scaled_x
A one-liner for the above 6 steps of dnrm2.f in R is:
# Euclidean length of vector - 2norm
max(x)*sqrt(sum((x/max(x))^2))
Let's try example vectors (see other solutions in this thread) to compute the 2-norm:
x = c(-8e+299, -6e+299, 5e+299, -8e+298, -5e+299)
max(x)*sqrt(sum((x/max(x))^2))
# [1] 1.227355e+300
x <- (c(1,-2,3,-4))
max(x)*sqrt(sum((x/max(x))^2))
# [1] 5.477226
Therefore, the recommended way to implement a generalized k-norm in R is the single line below, which guards against destructive overflow and underflow. To improve this one-liner, you could use norm() without scaling for vectors whose components are neither too small nor too large, and knorm() with scaling otherwise, since scaling every vector costs extra computation. I did not implement this improvement in the knorm() given below.
# One-liner for the k-norm - generalized form for all norms including the infinity-norm:
max(abs(x))*(sum((abs(x)/max(abs(x)))^k))^(1/k)

# knorm() function using the above one-liner.
knorm <- function(x, k) {
  # x = matrix with column vector(s), with dimensions m x 1 or m x n
  # k = type of norm, an integer from 1 to +Inf
  stopifnot(k >= 1)          # check that k is at least 1
  stopifnot(length(k) == 1)  # check that k has length 1; k is not vectorized
  # convert elements of the matrix to their absolute values
  x <- abs(x)
  if(k == Inf) {  # infinity-norm
    return(apply(x, 2, function(vec) max(vec)))
  } else {        # k-norm
    return(apply(x, 2, function(vec) {
      max_vec <- max(vec)
      return(max_vec*(sum((vec/max_vec)^k))^(1/k))
    }))
  }
}
# 2-norm
x <- matrix(c(-8e+299, -6e+299, 5e+299, -8e+298, -5e+299))
sapply(2, function(k) knorm(x = x, k = k))
# [1] 1.227355e+300
# 1-norm, 2-norm, 3-norm, 4-norm, and infinity-norm
sapply(c(1:4, Inf), function(k) knorm(x = x, k = k))
# [1] 2.480000e+300 1.227355e+300 9.927854e+299 9.027789e+299 8.000000e+299
x <- matrix(c(1,-2,3,-4))
sapply(c(1:4, Inf), function(k) knorm(x = x, k = k))
# [1] 10.000000 5.477226 4.641589 4.337613 4.000000
x <- matrix(c(1,-2,3,-4, 0, -8e+299, -6e+299, 5e+299, -8e+298, -5e+299), nc = 2)
sapply(c(1:4, Inf), function(k) knorm(x = x, k = k))
#           [,1]          [,2]          [,3]          [,4]   [,5]
# [1,]  1.00e+01  5.477226e+00  4.641589e+00  4.337613e+00  4e+00
# [2,] 2.48e+300 1.227355e+300 9.927854e+299 9.027789e+299 8e+299
If you have a data.frame or a data.table DT and want to compute the Euclidean norm (2-norm) of each row, the apply function can be used.
apply(X = DT, MARGIN = 1, FUN = norm, '2')
Example:
>DT
accx accy accz
1: 9.576807 -0.1629486 -0.2587167
2: 9.576807 -0.1722938 -0.2681506
3: 9.576807 -0.1634264 -0.2681506
4: 9.576807 -0.1545590 -0.2681506
5: 9.576807 -0.1621254 -0.2681506
6: 9.576807 -0.1723825 -0.2682434
7: 9.576807 -0.1723825 -0.2728810
8: 9.576807 -0.1723825 -0.2775187
> apply(X = DT, MARGIN = 1, FUN = norm, '2')
[1] 9.581687 9.582109 9.581954 9.581807 9.581932 9.582114 9.582245 9.582378
Following AbdealiJK's answer,
I experimented further to gain some insight.
Here's one.
x = c(-8e+299, -6e+299, 5e+299, -8e+298, -5e+299)
sqrt(sum(x^2))
norm(x, type='2')
The first result is Inf and the second is 1.227355e+300, which is correct, as I show in the code below.
library(Rmpfr)
y <- mpfr(x, 120)
sqrt(sum(y*y))
The result is 1227354879.... I didn't count the number of digits, but it looks right. I know there is another way around this overflow problem, which is to first apply the log function to all the numbers and sum those, but I do not have time to implement it!
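For what it's worth, here is a sketch of that log-based idea (my addition, with a hypothetical helper name log_norm2): work with 2*log|x_i| and use the log-sum-exp trick so the intermediate sum never overflows.

log_norm2 <- function(x) {
  lx <- 2 * log(abs(x))                     # logs of the squared components
  m  <- max(lx)                             # log-sum-exp: factor out the largest term
  exp(m / 2 + 0.5 * log(sum(exp(lx - m))))  # sqrt(sum(x^2)) recovered on the log scale
}
log_norm2(c(-8e+299, -6e+299, 5e+299, -8e+298, -5e+299))
## [1] 1.227355e+300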
Create your matrix column-wise using cbind(); then the norm() function works well with the Frobenius norm (the Euclidean norm) as an argument.
x1<-cbind(1:3)
norm(x1,"f")
[1] 3.741657
sqrt(1*1+2*2+3*3)
[1] 3.741657
