I have an 8-core RHEL Linux machine running R 4.0.2.
If I ask R for the number of cores, I can confirm that 8 are available.
> print(future::availableWorkers())
[1] "localhost" "localhost" "localhost" "localhost" "localhost" "localhost"
[7] "localhost" "localhost"
> print(parallel::detectCores())
[1] 8
However, if I run this simple example
f <- function(out=0) {
for (i in 1:1e10) out <- out + 1
}
output <- parallel::mclapply(1:8, f, mc.cores = 8)
the output of top indicates that only 1 core is being used (so each worker gets 1/8th of that core, or 1/64th of the entire machine).
%Cpu0 :100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 2.0 us, 0.0 sy, 0.0 ni, 98.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu6 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu7 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32684632 total, 28211076 free, 2409992 used, 2063564 buff/cache
KiB Swap: 16449532 total, 11475052 free, 4974480 used. 29213180 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3483 user 20 0 493716 57980 948 R 1.8 0.2 0:18.09 R
3479 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R
3480 user 20 0 493716 57980 948 R 1.5 0.2 0:18.08 R
3481 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R
3482 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R
3484 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R
3485 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R
3486 user 20 0 493716 57980 948 R 1.5 0.2 0:18.09 R
Does anyone know what might be going on here? Another StackOverflow question that documents similar behavior is here. It's clear that I messed up the install somehow; I followed these install instructions for RHEL 7. I'm guessing a dependency is missing, but I have no idea where to look. Any suggestions for diagnostics to run would be most appreciated.
For further context, I have R 3.4.1 also installed on my machine, and when I run this code, everything works fine. (I installed that version through yum.)
I also installed R 4.0.3 yesterday using the same instructions linked above, and it suffers from the same problem.
First run
system(sprintf("taskset -p 0xffffffff %d", Sys.getpid()))
then your simple example
f <- function(out=0) { for (i in 1:1e10) out <- out + 1 }
output <- parallel::mclapply(1:8, f, mc.cores = 8)
works on all 8 cores.
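The taskset call works because the parent R process's CPU affinity mask was restricted to a single core, and children forked by mclapply inherit that mask. As a rough, language-neutral sketch (Python on Linux here, since its affinity API is easy to exercise; this is not part of the R fix itself), the same inspect-and-reapply step looks like:

```python
import os

# CPUs this process is allowed to run on; a mask like {0} would explain
# eight workers sharing one core, since forked children inherit it.
allowed = os.sched_getaffinity(0)  # 0 means "this process"
print("allowed CPUs:", sorted(allowed))

# Re-applying (or widening) the mask is the programmatic equivalent of
# `taskset -p <mask> <pid>`. Here we re-apply the current mask as a no-op
# demonstration; pass range(os.cpu_count()) to allow every CPU.
os.sched_setaffinity(0, allowed)
```

A freshly widened mask is then inherited by any workers forked afterwards, which is why running taskset before mclapply (rather than after) matters.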
I've been doing some exercises in Julia, and I am currently trying to @assert a vector that has been diagonalized into a matrix against a "solution matrix" given in the exercise notebook. However, I get an AssertionError when asserting my code against the provided solution. Example of my code:
julia> using LinearAlgebra
julia> A =
[
140 97 74 168 131
97 106 89 131 36
74 89 152 144 71
168 131 144 54 142
131 36 71 142 36
]
5×5 Matrix{Int64}:
140 97 74 168 131
97 106 89 131 36
74 89 152 144 71
168 131 144 54 142
131 36 71 142 36
julia> A_eigv = eigen(A).values
5-element Vector{Float64}:
-128.49322764802145
-55.887784553057
42.752167279318854
87.16111477514494
542.4677301466137
julia> A_diag = Diagonal(A_eigv)
5×5 Diagonal{Float64, Vector{Float64}}:
-128.493 ⋅ ⋅ ⋅ ⋅
⋅ -55.8878 ⋅ ⋅ ⋅
⋅ ⋅ 42.7522 ⋅ ⋅
⋅ ⋅ ⋅ 87.1611 ⋅
⋅ ⋅ ⋅ ⋅ 542.468
julia> @assert A_diag == [-128.493 0.0 0.0 0.0 0.0;
0.0 -55.8878 0.0 0.0 0.0;
0.0 0.0 42.7522 0.0 0.0;
0.0 0.0 0.0 87.1611 0.0;
0.0 0.0 0.0 0.0 542.468]
AssertionError: A_diag == [-128.493 0.0 0.0 0.0 0.0; 0.0 -55.8878 0.0 0.0 0.0; 0.0 0.0 42.7522 0.0 0.0; 0.0 0.0 0.0 87.1611 0.0; 0.0 0.0 0.0 0.0 542.468]
Stacktrace:
 [1] top-level scope
   @ In[90]:1
 [2] eval
   @ ./boot.jl:360 [inlined]
 [3] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
   @ Base ./loading.jl:1094
My initial assumption was that a difference in the number of displayed decimals was the cause of the error. I therefore replaced == with ≈ (\approx). However, as the code example below shows, the error persists:
julia> @assert A_diag ≈  # \approx
[-128.493 0.0 0.0 0.0 0.0;
0.0 -55.8878 0.0 0.0 0.0;
0.0 0.0 42.7522 0.0 0.0;
0.0 0.0 0.0 87.1611 0.0;
0.0 0.0 0.0 0.0 542.468]
AssertionError: A_diag ≈ [-128.493 0.0 0.0 0.0 0.0; 0.0 -55.8878 0.0 0.0 0.0; 0.0 0.0 42.7522 0.0 0.0; 0.0 0.0 0.0 87.1611 0.0; 0.0 0.0 0.0 0.0 542.468]
Stacktrace:
 [1] top-level scope
   @ In[97]:1
 [2] eval
   @ ./boot.jl:360 [inlined]
 [3] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
   @ Base ./loading.jl:1094
I've been reading through my code multiple times now, and I am at a loss. The values in my diagonal matrix (A_diag) are seemingly identical to those in the solution matrix. Furthermore, switching the comparison to approximately equal (\approx) produces the same error, so I assume I can rule out rounding error.
My main question is: What causes the AssertionError?
No, the dots are treated as 0 for the purpose of testing equality.
julia> Diagonal(1:2) == [1 0; 0 2]
true
Your problem is that your arrays are actually not equal; -128.49322764802145 is not the same as -128.493. (The pretty-printed version of the array truncates floats for display, but those are not the true underlying values!)
[Edit:]
Using ≈ (\approx) will also fail in this case. The reason for this is explained in the documentation for isapprox()
The binary operator ≈ is equivalent to isapprox with the default arguments,
if an atol > 0 is not specified, rtol defaults to the square root of eps of the type of x or y, whichever is bigger (least precise).
Essentially, this means that ≈ tests for approximate equality with a relative tolerance of √eps(), which is approximately 1.5e-8, or 0.0000015%. That tolerance is far stricter than the six significant digits shown in the solution matrix, and loosening it resolves the issue. E.g.:
# Option 1: Absolute tolerance. Set to a reasonable max deviation:
julia> isapprox(A_diag, sol_mat, atol = 1e-3)
true
# Option 2: Relative tolerance. Setting rtol = 1e-n, where n is the number of significant digits in either matrix, will work in most cases.
julia> isapprox(A_diag, sol_mat, rtol = 1e-6)
true
Since the solution matrix provides the values to six significant digits, another alternative is to round the values in A_diag to that number of digits and test for equality. E.g.:
julia> round.(A_diag, RoundNearestTiesUp, sigdigits=6) ==
[-128.493 0.0 0.0 0.0 0.0;
0.0 -55.8878 0.0 0.0 0.0;
0.0 0.0 42.7522 0.0 0.0;
0.0 0.0 0.0 87.1611 0.0;
0.0 0.0 0.0 0.0 542.468]
true
Given a list of rows and columns, I want to produce a submatrix of a Cholesky factorization. Example:
julia> A = rand(10,10)
julia> R = chol(A'*A)
julia> ind = [1,3,6,8,9]
julia> R[ind,ind]
However, this results in an error:
ERROR: BoundsError: attempt to access 5x5
UpperTriangular{Float64,Array{Float64,2}}:
1.28259 0.0 0.0 0.0 0.0
0.0 6.51646e-314 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
at index [2,1]
in _unsafe_getindex at multidimensional.jl:197
in getindex at abstractarray.jl:483
I understand that this would work for a typical matrix, but the UpperTriangular type obviously requires something different... I can't find documentation on this.
Looks like Triangular matrices haven't been updated to take advantage of fallback non-scalar indexing in 0.4 (this was a missing method error in 0.3).
The easiest ways around this for now are converting to a full array before indexing:
julia> full(R)[ind,ind]
5x5 Array{Float64,2}:
2.2261 1.28096 1.69087 1.26135 1.50703
0.0 1.03681 0.115735 0.559855 0.70766
0.0 0.0 0.702936 -0.111155 -0.61263
0.0 0.0 0.0 0.661491 0.33661
0.0 0.0 0.0 0.0 0.159691
Or by using a SubArray, which creates a view into the original data (so modifications will propagate):
julia> sub(R, ind, ind)
5x5 SubArray{Float64,2,UpperTriangular{Float64,Array{Float64,2}},Tuple{Array{Int64,1},Array{Int64,1}},0}:
2.2261 1.28096 1.69087 1.26135 1.50703
0.0 1.03681 0.115735 0.559855 0.70766
0.0 0.0 0.702936 -0.111155 -0.61263
0.0 0.0 0.0 0.661491 0.33661
0.0 0.0 0.0 0.0 0.159691
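On current toolchains the same extraction is plain fancy indexing. As a rough cross-language sketch, assuming NumPy is available (the name ind is carried over from the question, shifted to 0-based indexing):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10, 10))

# Upper-triangular Cholesky factor of the (symmetric positive definite) A'A.
# np.linalg.cholesky returns the lower factor L; its transpose matches chol's R.
R = np.linalg.cholesky(A.T @ A).T

ind = [0, 2, 5, 7, 8]       # 0-based equivalent of Julia's [1,3,6,8,9]
sub = R[np.ix_(ind, ind)]   # 5x5 submatrix; a copy, not a view
print(sub.shape)  # (5, 5)
```

Note that, as with full(R)[ind, ind] in the answer above, this produces a copy; modifying sub does not write back into R.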
I have a zoo object called z.
> z["2013-12",1]
Allerona
2013-12-01 0.0
2013-12-02 0.0
2013-12-03 0.0
2013-12-04 0.0
2013-12-05 0.2
2013-12-06 0.0
2013-12-07 0.0
2013-12-08 0.2
2013-12-09 0.0
....
It stores daily rainfall values.
I'm able to compute a rolling accumulation (a 3-day window here, via width = 3) using rollapply:
m=rollapply(z, width=3, FUN=sum, by=1, by.column=TRUE, fill=NA, align="right")
It looks ok
> m["2013-12",1]
Allerona
2013-12-01 0.0
2013-12-02 0.0
2013-12-03 0.0
2013-12-04 0.0
2013-12-05 0.2
2013-12-06 0.2
2013-12-07 0.2
2013-12-08 0.2
2013-12-09 0.2
...
How can I calculate, for each day, the mean over the five previous years?
Thanks
Doesn't SMA(x, n = 5*365) do the trick?
I solved my problem.
The solution was to pass a list to the width parameter of rollapply.
Here is the code:
mean5year=rollapply(as.zoo(m), list(-365*5:1), function(x) {mean(x,na.rm = TRUE)},fill=NA)
where
list(-365*5:1)
selects the same day in each of the five previous years (note that : binds tighter than *, so -365*5:1 expands to the offsets -1825, -1460, -1095, -730, -365). I also use na.rm = TRUE so the mean is still computed when NAs appear in the sequence.
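The offset arithmetic can be sketched in plain Python (used here only as a language-neutral stand-in for the rollapply call): one value per day in a flat list, None standing in for R's NA, and the helper name five_year_mean is made up for illustration.

```python
# Offsets R builds from list(-365*5:1), i.e. -365 * (5:1).
offsets = [-365 * k for k in range(5, 0, -1)]
print(offsets)  # [-1825, -1460, -1095, -730, -365]

def five_year_mean(values, i, offsets=offsets):
    """Mean over the same calendar position in the five previous years.

    `values` holds one reading per day; entries may be None (R's NA).
    Mimics mean(x, na.rm = TRUE): None entries and out-of-range offsets
    are skipped, and None is returned if nothing remains.
    """
    window = [values[i + o] for o in offsets
              if 0 <= i + o and values[i + o] is not None]
    return sum(window) / len(window) if window else None

# Toy series: constant 1.0 rainfall, with one missing day.
series = [1.0] * (365 * 6)
series[365] = None
print(five_year_mean(series, 365 * 5 + 10))  # 1.0
```

This mirrors the rollapply semantics: a fixed list of backward offsets rather than a contiguous window, with missing values dropped before averaging.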
I found a weird result in some data I am working on and decided to test closeness and shortest.paths functions with the following matrix.
test<-c(0,0.3,0.7,0.9,0.3,0,0,0,0.7,0,0,0.5,0.9,0,0.5,0)
test<-matrix(test,nrow=4)
colnames(test)<-c("A","B","C","D")
rownames(test)<-c("A","B","C","D")
test
A B C D
A 0.0 0.3 0.7 0.9
B 0.3 0.0 0.0 0.0
C 0.7 0.0 0.0 0.5
D 0.9 0.0 0.5 0.0
grafo=graph.adjacency(abs(test),mode="undirected",weighted=TRUE,diag=FALSE)
When I measure closeness() I get this:
> closeness(grafo)
A B C D
0.5263158 0.4000000 0.4545455 0.3846154
which is based merely on the sum of the weights, NOT on the distances (1 - weight).
> 1/(0.7+(0.7+0.3)+0.5)
[1] 0.4545455
When I define distance as 1-weight, I get this
> 1/((1-0.7)+((1-0.7)+(1-0.3))+(1-0.5))
[1] 0.5555556
In the igraph manual, the formula says closeness is based on the sum of distances. My question is: does the function actually treat the weights as distances (in which case the output above suggests a bug), or should we convert our graphs' edge weights into distances ourselves before running this function?
The same issue occurs with the shortest.paths function, by the way: it gives me sums of the weights, not of distances.
> shortest.paths(grafo)
A B C D
A 0.0 0.3 0.7 0.9
B 0.3 0.0 1.0 1.2
C 0.7 1.0 0.0 0.5
D 0.9 1.2 0.5 0.0
Thanks.
I'd coded a Julia function with an array bounds error:
function wrong()
alphas = [ 0.5, 1, 1.25, 2.0 ] ;
theta = 0:0.02:1 * pi ;
U = zeros( length(theta), 4 ) ;
i = 1 ;
j = 1 ;
for a = alphas
kd = pi * a ;
for t = theta
v = (cos( kd * cos( t ) ) - cos( kd ))/sin( t ) ;
U[i, j] = v ;
i = i + 1 ;
end
j = j + 1 ;
end
end
Here i = 1 should be inside the outer loop: since i is never reset for each new column, it eventually runs past the number of rows. I get:
julia> wrong()
ERROR: BoundsError()
in setindex! at array.jl:308 (repeats 2 times)
Is there any way to get the Julia interpreter to give more detailed information about exceptions when they are hit, or to debug into the failing statement and see what's going on? For example, knowing the index values that caused the bounds error when it occurred would have helped me debug this.
Bounds error reporting has improved in Julia v0.4 via this pull request: https://github.com/JuliaLang/julia/pull/9534. In Julia 0.4 the array as well as the index you were trying to access are printed by default:
julia> wrong()
ERROR: BoundsError: attempt to access 158x4 Array{Float64,2}:
NaN 0.0 0.0 0.0
0.0157085 0.0 0.0 0.0
0.0314201 0.0 0.0 0.0
0.047138 0.0 0.0 0.0
0.0628651 0.0 0.0 0.0
0.0786045 0.0 0.0 0.0
0.094359 0.0 0.0 0.0
0.110131 0.0 0.0 0.0
0.125924 0.0 0.0 0.0
0.141739 0.0 0.0 0.0
⋮
0.127183 0.0 0.0 0.0
0.111388 0.0 0.0 0.0
0.0956143 0.0 0.0 0.0
0.0798585 0.0 0.0 0.0
0.064118 0.0 0.0 0.0
0.04839 0.0 0.0 0.0
0.0326715 0.0 0.0 0.0
0.0169595 0.0 0.0 0.0
0.00125087 0.0 0.0 0.0
at index [159,2]
in wrong at none:15
I don't know whether you can backport the changes to your Julia version, but switching to 0.4 should solve your problem.
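For reference, the fix for the original bug is to reset the row counter inside the outer loop, or to avoid manual counters entirely. Below is a minimal Python analogue of the corrected fill pattern (t = 0 is skipped here to avoid dividing by sin(0), a wrinkle the original snippet also has, visible as the NaN in its first row):

```python
import math

alphas = [0.5, 1.0, 1.25, 2.0]
thetas = [k * 0.02 for k in range(1, 158)]  # 0.02 .. 3.14, skipping t = 0

# One column per alpha, one row per theta. The row index restarts for
# every column, which is exactly what the buggy version forgot to do.
U = [[0.0] * len(alphas) for _ in thetas]
for j, a in enumerate(alphas):
    kd = math.pi * a
    for i, t in enumerate(thetas):  # enumerate resets i for each column
        U[i][j] = (math.cos(kd * math.cos(t)) - math.cos(kd)) / math.sin(t)

print(len(U), len(U[0]))  # 157 4
```

Using enumerate (or eachindex/zip in Julia) instead of hand-maintained counters makes this class of bounds error impossible by construction.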