I'm trying to set the maximum memory usage of a command with ulimit.
My command is currently:
ulimit -m 1024 -v 0; python file.py
The Python process uses over 5 MB of memory. What have I done wrong with the ulimit command?
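For reference (a general sketch, not a claim about the exact setup above): bash's ulimit takes sizes in kilobytes for -v and -m, the limit applies to the shell that sets it and to its children, and on modern Linux kernels only the virtual-memory limit (-v) is actually enforced, while -m (resident set size) is ignored. A cap of roughly 500 MB could therefore look like:
# sketch: sizes are in KiB; the subshell keeps the limit from sticking to the login shell
( ulimit -v 512000; python file.py )   # 512000 KiB ~= 500 MiB; allocations beyond the cap fail (Python typically raises MemoryError)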
I wrote code in Python that uses mpi4py to scatter chunks of data across the processors of a cluster. Each processor writes its chunk of data to a .txt file, and all these .txt files are then merged into one.
Everything is working as expected.
However, for very large .txt files, the cluster is complaining about memory:
mpiexec noticed that process ... rank ... on node ... exited on signal 9 (Killed)
I'm trying to set the parameters in the PBS file in a way which avoids this issue. So far, this is not working:
#!/bin/bash
#PBS -S /bin/bash
## job name and output file
#PBS -N test
#PBS -j oe
#PBS -o job.o
#PBS -V
###########################################################
# USER PARAMETERS
##PBS -l select=16:mpiprocs=1:mem=8000mb
#PBS -l select=4:ncpus=16:mem=4gb
#PBS -l walltime=03:00:00
###########################################################
ulimit -Hn
# number of processes
NPROC=64
echo $NPROC
CURRDIR=$PBS_O_WORKDIR
echo $CURRDIR
cd $CURRDIR
module load anaconda/2019.10
source activate py3
cat $PBS_NODEFILE
echo starting run in current directory $CURRDIR
echo " "
mpiexec -n $NPROC -hostfile $PBS_NODEFILE python $CURRDIR/test.py
echo "finished successfully"
Any idea?
MPI uses distributed memory: if you have more data than fits in one process, you spread it over multiple processes, for instance on multiple computers. So "scattering" data often doesn't make sense: it assumes that all of that too-large dataset actually fits in a single process to begin with. In a true MPI program, each process creates its own data, or reads its own part from a file, but you never have all the data in one place.
So if you're dealing with lots of data, a scattering approach will of course run out of memory, but it's the wrong way to approach the problem in the first place. Rewrite your program and make it truly distributed-memory parallel.
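As a shell-level illustration of that idea (the file name, NPROC value, and rank-based naming scheme below are assumptions, not taken from the question): pre-split the input so that each rank opens only its own piece, instead of one rank loading everything and scattering it:
# sketch: split the input into one piece per rank (GNU split), so no single
# process ever holds the whole dataset in memory
NPROC=64
split -n l/${NPROC} -d -a 3 --additional-suffix=.txt big_input.txt chunk_
# test.py would then open chunk_XXX.txt based on its own rank, e.g. the file
# whose index equals comm.Get_rank() in mpi4py
mpiexec -n ${NPROC} -hostfile ${PBS_NODEFILE} python test.py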
I am trying to test a script I developed locally on an interactive HPC node, and I keep running into this strange issue where mclapply works only on a single core. I see several R processes spawned in htop (as many as the number of cores), but they all occupy only one core.
Here is how I obtain the interactive node:
srun -n 16 -N 1 -t 5 --pty bash -il
Is there a setting I am missing? How can I make this work? What can I check?
P.S. I just tested, and other programs that rely on forking for parallel processing (say, pigz) are affected by the same issue as well. Those that rely on MPI and message passing seem to work properly.
Yes, you are missing a setting. Try:
srun -N 1 -n 1 -c 16 -t 5 --pty bash -il
The problem is that srun -n 16 requests 16 single-CPU tasks, and the interactive bash shell runs as just one of those tasks, confined to a single core; everything it forks (the mclapply workers, pigz, and so on) inherits that one-core binding. With -n 1 -c 16 you instead request a single task with 16 CPUs, so the shell and anything it spawns can use all of the cores.
Alternatively, you can first allocate your resources with salloc and, once you have them, run your actual command. For instance:
salloc -N 1 -n 1 -c 16 -t 5
srun pigz file.ext
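To confirm that the interactive shell really has all 16 cores, a quick check from inside it (a sketch, assuming a Linux node with taskset available):
nproc            # CPUs this shell is allowed to use; should print 16
taskset -cp $$   # CPU affinity of the current bash process; should list all 16 CPUs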
Since the beginning of November I have been stuck trying to run a parallel job on a Linux cluster. I have already searched a lot on the internet but simply can't make progress. When I started looking into parallelism in R on a cluster, I discovered Rmpi. It looked quite simple, but now I don't know what to do next. I have a script to submit my job:
#PBS -S /bin/bash
#PBS -N ANN_residencial
#PBS -q linux.q
#PBS -l nodes=8:ppn=8
cd $PBS_O_WORKDIR
source /hpc/modulos/bash/R-3.3.0.sh
export LD_LIBRARY_PATH=/hpc/nlopt-2.4.2/lib:$LD_LIBRARY_PATH
export CPPFLAGS='-I/hpc/nlopt-2.4.2/include '$CPPFLAGS
export PKG_CONFIG_PATH=/hpc/nlopt-2.4.2/lib/pkgconfig:$PKG_CONFIG_PATH
# OPENMPI 1.10 + GCC 5.3
source /hpc/modulos/bash/openmpi-1.10-gcc53.sh
mpiexec --mca orte_base_help_aggregate 0 -np 1 -hostfile ${PBS_NODEFILE} /hpc/R-3.3.0/bin/R --slave -f sunhpc_mpi.r
And this is the beginning of my R program:
library(caret)
library(Rmpi)
library(doMPI)
cl <- startMPIcluster()
registerDoMPI(cl)
So here are my questions:
1- Is this the way I should initialize the processes (i.e., calling startMPIcluster() without a parameter and using -np 1 on the command line)?
2- Why does MPI complain with the following message when I use these commands?
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process....
Note: it says this for all 64 processes (because there are 8 nodes with 8 CPUs each and I'm creating 63 worker processes).
3- Why, when I use these commands on a machine with 60 CPUs, does it spawn only two workers?
Finally, I got it!
To run a parallel program in R using Rmpi on a cluster, you need to configure the job script according to the system. The command line that was:
mpiexec --mca orte_base_help_aggregate 0 -np 1 -hostfile ${PBS_NODEFILE} /hpc/R-3.3.0/bin/R --slave -f sunhpc_mpi.r
has to be modified to:
mpiexec -np NUM_PROC -hostfile ${PBS_NODEFILE} /hpc/R-3.3.0/bin/R --slave -f sunhpc_mpi.r
In the R code you must not pass anything to startMPIcluster(), so the code stays exactly as I wrote above.
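A small refinement (my own addition, not part of the original answer): NUM_PROC can be derived from the PBS nodefile instead of being hard-coded, since PBS typically writes one line per allocated slot (8 nodes with ppn=8 gives 64 lines here):
NUM_PROC=$(wc -l < ${PBS_NODEFILE})    # number of slots PBS allocated for this job
mpiexec -np ${NUM_PROC} -hostfile ${PBS_NODEFILE} /hpc/R-3.3.0/bin/R --slave -f sunhpc_mpi.r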
I am trying to run a compiled program that is supposed to run on multiple processors. But with the same data, sometimes this program runs in parallel and sometimes it won't (with an identical PBS script file!). I suspect that something is wrong with some of the compute nodes that prevents it from running in parallel (I don't get to choose which compute node I get). How can I troubleshoot whether this is a bug in the program or a problem with the compute node?
As per the sysadmin's advice, I am using ulimit -s 100000, but this doesn't change anything. Also, this program is not an MPI program (it runs only on a single node, with multiple processors).
The code that I run is as follows:
quorum_error_correct_reads -q 68 \
--contaminant=/data004/software/GIF/packages/masurca/2.3.0rc1/bin/../share/adapter.jf \
-m 1 -s 1 -g 1 -a 3 --thread=32 -w 10 -e 3 \
quorum_mer_db.jf aa.renamed.fastq ab.renamed.fastq ac.renamed.fastq ad.renamed.fastq ae.renamed.fastq af.renamed.fastq ag.renamed.fastq \
--no-discard -o pe.cor --verbose
Thanks for any advice you can offer. I will greatly appreciate your help!
PS: I don't have sudo access.
EDIT: I know it is supposed to be using multiple processors because, when I SSH into the node and run top -c, I can see the above command sometimes running at around 3200% CPU the whole time and sometimes at only 100% CPU the whole time. This is the only step involved and there are no other sub-processes within this program. Also, I am using an HPC system where I submit the job to a compute node; each node has 32 processors and 512 GB of RAM.
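One way to narrow this down when it happens again (a troubleshooting sketch; the pgrep pattern and the affinity check are my assumptions, not the sysadmin's advice): SSH into the node and inspect whether the process was pinned to a single core and how many threads it actually started:
PID=$(pgrep -f quorum_error_correct_reads | head -n 1)   # assumes a single matching process
taskset -cp "$PID"                # allowed CPU list; a single CPU here would explain the steady 100% CPU
grep Cpus_allowed_list /proc/"$PID"/status
ps -o nlwp= -p "$PID"             # number of threads; with --thread=32 this should be close to 32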
I'm trying to install Riak on my OS X 10.8.5 machine, but the riak-admin test command always fails. I can't find a solution for it!
Using sudo riak-admin test doesn't help either.
I installed Riak (1.4.2) through brew.
>riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>riak ping
pong
>riak-admin test
Failed to write test value: {error,timeout}%
I also tried the Riak (1.4.2) precompiled tarball:
>curl -O http://s3.amazonaws.com/downloads.basho.com/riak/1.4/1.4.2/osx/10.8/riak-1.4.2-OSX-x86_64.tar.gz
>tar xzvf riak-1.4.2-osx-x86_64.tar.gz
>cd riak-1.4.2
>bin/riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>bin/riak ping
pong
>bin/riak-admin test
Failed to write test value: {error,timeout}%
I also tried the Riak (1.4.1) precompiled tarball:
>curl -O http://s3.amazonaws.com/downloads.basho.com/riak/1.4/1.4.1/osx/10.8/riak-1.4.1-OSX-x86_64.tar.gz
>tar xzvf riak-1.4.1-osx-x86_64.tar.gz
>cd riak-1.4.1
>bin/riak start
!!!!
!!!! WARNING: ulimit -n is 256; 4096 is the recommended minimum.
!!!!
>bin/riak ping
pong
>bin/riak-admin test
Failed to read test value: {error,{insufficient_vnodes,0,need,1}}%
Solution
Following this procedure http://docs.basho.com/riak/... solved my issue.
It has to do with the open files limit on Mac OS X.
Before
To check the current limits on your Mac OS X system, run:
>launchctl limit maxfiles
maxfiles 256 unlimited
Edit (or create) /etc/launchd.conf
Edit (or create) /etc/launchd.conf and increase the limits. Add lines
that look like the following (using values appropriate to your
environment):
limit maxfiles 16384 32768
Restart the system
Save the file, and restart the system for the new limits to take
effect. After restarting, verify the new limits with the launchctl
limit command:
>launchctl limit maxfiles
maxfiles 16384 32768
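After the reboot the per-shell limit should follow; a quick sanity check before retrying Riak (a sketch, assuming the tarball layout used above):
ulimit -n            # should now report far more than the 256 that triggered Riak's warning
ulimit -n 4096       # the soft limit can still be raised explicitly, up to the new hard limit
bin/riak start       # the ulimit warning should no longer be printed
bin/riak-admin test  # should now succeed instead of timing out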