Is it normal for Tokio to open this many threads?
top - 19:50:25 up 5:55, 2 users, load average: 0.00, 0.00, 0.00
Threads: 9 total, 0 running, 9 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 19963.7 total, 19046.5 free, 268.3 used, 648.9 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 19449.8 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1665 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 nfwd
1666 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 tokio-runtime-w
1667 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 tokio-runtime-w
1668 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 tokio-runtime-w
1669 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 tokio-runtime-w
1670 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 tokio-runtime-w
1671 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 tokio-runtime-w
1672 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 tokio-runtime-w
1673 root 20 0 17556 4 0 S 0.0 0.0 0:00.00 tokio-runtime-w
All I'm doing is using tokio::net::UnixListener to listen on a socket, but as you can see in top, Tokio has spawned eight threads named tokio-runtime-w.
What is causing this? Is it necessary to have this many, and can the number be limited or omitted?
use std::env::temp_dir;
use std::error::Error;
use std::io::ErrorKind;
use tokio::fs::remove_file;
use tokio::net::UnixListener;

pub struct StreamServer {
    pub socket: UnixListener,
}

impl StreamServer {
    pub async fn new() -> Result<Self, Box<dyn Error>> {
        let directory = temp_dir();
        let path = directory.as_path().join("d");
        if path.exists() {
            remove_file(&path).await?;
        }
        let socket = UnixListener::bind(&path)?;
        Ok(Self { socket })
    }

    pub async fn run(&mut self) -> Result<(), Box<dyn Error>> {
        loop {
            match self.socket.accept().await {
                Ok((stream, _addr)) => loop {
                    stream.readable().await?;
                    let mut buf = Vec::with_capacity(1024);
                    match stream.try_read_buf(&mut buf) {
                        Ok(0) => break, // connection closed, go back to accept()
                        Ok(n) => {
                            let msg = String::from_utf8(buf[..n].to_vec())?;
                            println!("read {} bytes: {}", n, msg);
                        }
                        // readable() can yield a false positive; just retry
                        Err(e) if e.kind() == ErrorKind::WouldBlock => continue,
                        Err(e) => todo!("{:?}", e),
                    }
                },
                Err(e) if e.kind() == ErrorKind::WouldBlock => continue,
                Err(e) => todo!("{:?}", e),
            }
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let mut server = StreamServer::new().await?;
    server.run().await
}
Yes, this looks like a normal result.
There are a fixed number of worker_threads:
The default value is the number of cores available to the system.
And a dynamic number of blocking_threads:
The default value is 512.
You can configure the Runtime using the methods linked above, or set worker_threads directly in the attribute like so:
#[tokio::main(worker_threads = 2)]
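If you build the runtime by hand instead of through the macro, the same knobs live on tokio::runtime::Builder. A minimal sketch (the thread counts here are arbitrary examples; note max_blocking_threads caps the separate blocking pool, it does not affect the worker threads):

```rust
use tokio::runtime::Builder;

fn main() {
    // Multi-threaded runtime with 2 workers instead of one per core;
    // the blocking pool (default limit 512) is capped at 32 here.
    let rt = Builder::new_multi_thread()
        .worker_threads(2)
        .max_blocking_threads(32)
        .enable_all()
        .build()
        .expect("failed to build runtime");

    rt.block_on(async {
        println!("running on a 2-worker runtime");
    });
}
```

If you want no extra threads at all, Builder::new_current_thread() runs everything on the thread that calls block_on.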
Any idea how to draw this in PostScript using a for loop and an ifelse conditional?
My idea was to draw a large red circle, then a smaller white circle, then a smaller red circle again...
We can also see that the color gets darker, so the shade should be kept in a variable that darkens on each step.
50 50 translate
/coordinate_system {0.5 0.3 0 0 setcmykcolor
gsave
2 setlinewidth 500 0 moveto 0 0 lineto 0 500 lineto stroke
grestore
gsave
0.3 setlinewidth
9 { 30 100 moveto 500 100 lineto stroke 0 50 translate } repeat
grestore
gsave
0.3 setlinewidth
10 { 100 20 moveto 100 500 lineto stroke 50 0 translate } repeat
grestore
gsave
/tekst 3 string def /Helvetica findfont 10 scalefont setfont
100 100 500 { /y exch def 5 y 2 sub moveto y tekst cvs show } for
90 100 500 { /x exch def x 5 moveto x 10 add tekst cvs show } for
grestore
0 setgray } bind def
/s { mark pstack pop } def
coordinate_system
And this is the code so far...
100 100 translate
%100 -3 0 {{1 0 0 setrgbcolor exch 0 exch 0 360 arc stroke}{0 0 0 setrgbcolor exch 0 exch 0 360 arc stroke} ifelse} for
3 4 lt {1 0 0 setrgbcolor 0 0 50 0 360 arc stroke}{0 0 0 setrgbcolor 0 0 100 0 360 arc stroke} ifelse
The following code loops for i = 1, 2, ..., 10, using i to control the radius and color of each circle.
/i 1 def
{
i 10 gt { exit } if % leave the loop once i > 10
i 0.1 mul 0 0 setrgbcolor % RGB (i*0.1, 0, 0)
300 300 % center at (300, 300)
20 i mul % radius 20*i
drawcircle
/i i 1 add def % i = i + 1
} loop
drawcircle code:
/drawcircle % XO YO R
{
newpath
0 360 arc
closepath
stroke
} bind def
3 setlinewidth
My output (cropped a bit) is:
I am loading some large .csv files, requiring circa 73 GB, on an RStudio Server. The server runs Ubuntu 18.04.5 LTS with 125 GB of RAM, so in theory I should be able to load more data into RAM and run computations that need it. However, RStudio Server refuses with the following error message:
"Error: cannot allocate vector of size 3.9 Gb".
Looking at free -mh, there appear to be 42 GB of available memory, but RStudio does not utilise them.
~:$ free -mh
total used free shared buff/cache available
Mem: 125G 81G 1.2G 28M 43G 42G
Swap: 979M 4.0M 975M
Why is this happening and how can I utilise these 42GB for computations?
top output:
top - 11:05:58 up 21:19, 1 user, load average: 0.24, 0.22, 0.22
Tasks: 307 total, 1 running, 201 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.6 us, 0.2 sy, 0.1 ni, 99.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 13170917+total, 1216824 free, 85389312 used, 45103040 buff/cache
KiB Swap: 1003516 total, 999420 free, 4096 used. 45033504 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4292 user1 30 10 39956 5052 4272 S 2.6 0.0 0:06.94 galaxy
1171 root 20 0 488892 44216 7540 S 1.3 0.0 12:24.39 cbdaemon
5998 user1 20 0 44428 3936 3168 R 1.0 0.0 0:00.07 top
6153 user1 20 0 275740 101112 29396 S 0.7 0.1 25:55.97 nxagent
1207 root 20 0 377116 163944 68132 S 0.3 0.1 5:37.13 evl_manager
1247 root 20 0 1262092 63308 14956 S 0.3 0.0 0:35.77 mcollectived
6623 user1 20 0 126640 5852 3212 S 0.3 0.0 0:02.48 sshd
7413 user1 20 0 80.623g 0.078t 53712 S 0.3 63.8 20:14.46 rsession
1 root 20 0 225756 8928 6320 S 0.0 0.0 0:23.05 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
...
Surprisingly, the problem was not in the OS or in R/RStudio Server, but in data.table, which I used to open the .csv files. It apparently prevented the session from using the available memory. Once I moved to dplyr, I had no problem utilising all the memory the system has.
[root@0cd6bfb7e363 app]# s6-svc /etc/services.d/uwsgi/
s6-svc: fatal: unable to control /etc/services.d/uwsgi/: supervisor not listening
[root@0cd6bfb7e363 app]# s6-svc -r /etc/services.d/uwsgi/
s6-svc: fatal: unable to control /etc/services.d/uwsgi/: supervisor not listening
however
[root@0cd6bfb7e363 app]# ps aufx
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 204 4 pts/0 Ss 01:42 0:00 s6-svscan -t0 /var/run/s6/services
root 30 0.0 0.0 188 8 pts/0 S 01:42 0:00 foreground if /etc/s6/init/init-stage2-redirfd foreground if if s6-echo -n --
root 40 0.0 0.0 184 4 pts/0 S 01:42 0:00 \_ foreground s6-setsid -gq -- with-contenv backtick -D 0 -n S6_LOGGING printcontenv S6_LO
root 261 0.0 0.0 15540 3864 pts/0 S 01:42 0:00 \_ /bin/bash
root 516 0.0 0.0 55204 3932 pts/0 R+ 01:52 0:00 \_ ps aufx
root 31 0.0 0.0 204 4 pts/0 S 01:42 0:00 s6-supervise s6-fdholderd
root 241 0.0 0.0 204 4 pts/0 S 01:42 0:00 s6-supervise nginx
root 246 0.0 0.0 56840 7228 ? Ss 01:42 0:00 \_ nginx: master process /usr/sbin/nginx
nginx 267 0.0 0.0 57484 5048 ? S 01:42 0:00 \_ nginx: worker process
root 243 0.0 0.0 204 4 pts/0 S 01:42 0:00 s6-supervise uwsgi
root 245 0.0 0.0 15140 3012 ? Ss 01:42 0:00 \_ bash ./run
root 255 0.2 0.0 259944 82060 ? S 01:42 0:01 \_ uwsgi /etc/uwsgi.ini
root 339 0.0 0.0 259944 70452 ? S 01:43 0:00 \_ uwsgi /etc/uwsgi.ini
root 340 0.0 0.0 259944 70452 ? S 01:43 0:00 \_ uwsgi /etc/uwsgi.ini
root 341 0.1 0.0 297500 96408 ? S 01:43 0:01 \_ uwsgi /etc/uwsgi.ini
root 342 0.0 0.0 259944 70452 ? S 01:43 0:00 \_ uwsgi /etc/uwsgi.ini
What could be wrong?
Because s6-overlay copies the services to a different folder. Look at the first line in the ps dump:
[root@0cd6bfb7e363 app]# ps aufx
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 204 4 pts/0 Ss 01:42 0:00 s6-svscan -t0 /var/run/s6/services
The s6 supervisor is watching the services under /var/run/s6/services, so what you should run is:
$ s6-svc -u /var/run/s6/services/uwsgi
It is also explained in the README file:
Stage 2.iii) Copy user services (/etc/services.d) to the folder where s6 is running its supervision and signal it so that it can properly start supervising them.
and
You may want to use a Read-Only Root Filesystem, since that ensures s6-overlay copies files into the /var/run/s6 directory rather than use symlinks.
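To confirm that the copied service is actually under supervision before sending it commands, a quick sketch (the paths assume the stock s6-overlay layout visible in the ps output above):

```shell
# Query the supervision state of the copy s6 actually runs
s6-svstat /var/run/s6/services/uwsgi

# Control commands go to that directory, not /etc/services.d
s6-svc -r /var/run/s6/services/uwsgi   # restart the service
s6-svc -d /var/run/s6/services/uwsgi   # bring it down
```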
Program R
I want to create a function in R that takes the column names of a data frame and creates a new function whose argument names are those column names, with as many arguments as there are columns.
my_function <- function(data_frame) {
  return(new_function <- function(#column_name_1, #column_name_2, #column_name_3, #column_name_x) {
    [function does something cool]
  })
}
For example, if I have a data frame:
> dimorphandra
genus_species temp germinability waterpotential
1 Dimorphandra_mollis 30.8 86 0.0
2 Dimorphandra_mollis 32.5 94 0.0
3 Dimorphandra_mollis 35.0 74 0.0
4 Dimorphandra_mollis 37.0 44 0.0
5 Dimorphandra_mollis 39.0 2 0.0
6 Dimorphandra_mollis 41.0 0 0.0
Then I would apply my_function to it:
my_function(dimorphandra)
Which would output:
new_function <- function(genus_species, temp, germinability, waterpotential) {
[function does something cool]
}
How can I create my_function?
Thank you very much!
I am submitting a job with qsub that runs parallelized R. My intention is to have the R programme run on 4 cores rather than 8. Here are some of my settings in the PBS file:
#PBS -l nodes=1:ppn=4
....
time R --no-save < program1.R > program1.log
When I issue the command ta job_id, I see that 4 cores are listed. However, the job occupies a large amount of memory (31944900k used vs 32949628k total). If I were to use 8 cores, the job would hang due to the memory limitation.
top - 21:03:53 up 77 days, 11:54, 0 users, load average: 3.99, 3.75, 3.37
Tasks: 207 total, 5 running, 202 sleeping, 0 stopped, 0 zombie
Cpu(s): 30.4%us, 1.6%sy, 0.0%ni, 66.8%id, 0.0%wa, 0.0%hi, 1.2%si, 0.0%st
Mem: 32949628k total, 31944900k used, 1004728k free, 269812k buffers
Swap: 2097136k total, 8360k used, 2088776k free, 6030856k cached
Here is a snapshot when issuing command ta job_id
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1794 x 25 0 6247m 6.0g 1780 R 99.2 19.1 8:14.37 R
1795 x 25 0 6332m 6.1g 1780 R 99.2 19.4 8:14.37 R
1796 x 25 0 6242m 6.0g 1784 R 99.2 19.1 8:14.37 R
1797 x 25 0 6322m 6.1g 1780 R 99.2 19.4 8:14.33 R
1714 x 18 0 65932 1504 1248 S 0.0 0.0 0:00.00 bash
1761 x 18 0 63840 1244 1052 S 0.0 0.0 0:00.00 20016.hpc
1783 x 18 0 133m 7096 1128 S 0.0 0.0 0:00.00 python
1786 x 18 0 137m 46m 2688 S 0.0 0.1 0:02.06 R
How can I prevent other users from using the other 4 cores? I would like to make it appear that my job is using 8 cores, with 4 of them idling.
Could anyone kindly help me out with this? Can it be solved using PBS?
Many thanks
"How can I prevent other users from using the other 4 cores? I like to mask somehow that my job is using 8 cores with 4 cores idling."
Maybe a simple way around it is to send a 'sleep' job to the other 4 cores? Seems hackish, though! (And a warning: my PBS is rusty!)
Why not do the following: ask PBS for ppn=4 and, additionally, ask for all the memory on the node, i.e.
#PBS -l nodes=1:ppn=4 -l mem=31944900k
This might not be possible on your setup.
I am not sure how your R code is parallelized, but if it uses OpenMP you could definitely ask for 8 cores and set OMP_NUM_THREADS to 4.
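Putting both suggestions together, the PBS script might look like this (a sketch only; the memory figure and file names are taken from the question, and your scheduler may require a different memory syntax):

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=4
#PBS -l mem=31944900k
# Requesting (nearly) all the node's memory discourages the scheduler from
# placing other jobs on the remaining cores; capping OpenMP keeps R at 4
# threads even if its parallel code could use more.
export OMP_NUM_THREADS=4

cd "$PBS_O_WORKDIR"
time R --no-save < program1.R > program1.log
```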