Rscript not working in qsub cluster

I have two R scripts named iHS.hist.R and Fst.hist.R, and I know both scripts work. When I use the following commands in my directory in my Ubuntu terminal, I get a histogram plot from each script (two in total if I run both):
module load R
Rscript iHS.hist.R
or I could do Rscript Fst.hist.R
The point is I know they both work.
The problem is that each R script takes about 20 minutes to run because my data is pretty big, and unfortunately it's only going to get bigger. I have access to a cluster and would like to make use of it. I have created two .sh scripts to send to the cluster with qsub, but I am running into issues. Here is my iHS.hist.sh script for my iHS.hist.R script:
#PBS -N iHS.plots
#PBS -S /bin/bash
#PBS -l walltime=2:00:00
#PBS -l nodes=1:ppn=8
#PBS -l mem=4gb
#PBS -o $HOME/${PBS_JOBNAME}.o${PBS_JOBID}.log
#PBS -e $HOME/${PBS_JOBNAME}.e${PBS_JOBID}.err
###############related commands
###edit it
#code in qsub
###############cut columns we don't need
###
cut -f1,2,3,4 /group/stranger-lab/ebeiter/test/SNPsnap_mdd_5_100/matched_snps_annotated.txt > /group/stranger-lab/ebeiter/test/SNPsnap_mdd_5_100/cut.matched_snps_annotated.txt
cut -f1,2 /group/stranger-lab/ebeiter/test/SNPsnap_mdd_5_100/input_snps_insufficient_matches.txt > /group/stranger-lab/ebeiter/test/SNPsnap_mdd_5_100/cut.input_snps_insufficient_matches.txt
###
###############only needed columns remain
cd /group/stranger-lab/ebeiter
module load R
Rscript iHS.hist.R
The cuts in the beginning are for setting up the data in the right format.
I have tried qsub iHS.hist.sh and it gives me a job. I check on it, and after about 10 minutes it finishes, so I'm assuming it's running my R script. I check the error file and it's empty. I check the log file and it does not give me the usual null device 1 that I get after my jpeg is completed in my R script. I don't get the output jpeg file from the R script when the cluster job is done, but I do get it if I just run the Rscript on its own as at the top of this post. Any idea what is going on?
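Not a diagnosis, but a quick way to see what the job is actually doing: have the script log its working directory and the Rscript exit status, so the job log shows how far it got. A minimal sketch of the tail of the job script, reusing the paths and module from above:
cd /group/stranger-lab/ebeiter || exit 1   # fail loudly if the cd fails
module load R
echo "running in $(pwd)"                   # confirm where the job thinks it is
Rscript iHS.hist.R
echo "Rscript exit status: $?"             # 0 means R finished cleanly
If the exit status is non-zero, or the directory is not the expected one, the log file will now say so.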

Related

Error reading in large file using shell/bash script?

I have a text file that has 185405149 lines and a header. I am reading in this file within this bash script:
#!/bin/bash
#PBS -N R_Job
#PBS -l walltime=4:00:00
#PBS -l vmem=20gb
module load R/4.2.1
cd filepath/
R --save -q -f script.R
Part of the script is below:
# import the gtex data
gtex_data <- read.table("/filepath/file.txt", header=TRUE)
I get the error: Error: cannot allocate vector of size 2.0 Gb
Execution halted.
It's got nothing to do with the directory/filepath; I suspect it's to do with memory. Even after zipping the file (e.g. file.txt.gz) and using the command:
gtex_data <- read.table(gzfile("/filepath/file.txt.gz"), header=TRUE)
It doesn't read the data.
I've tried with a smaller file, e.g. reading the first 100 lines of file.txt, renaming it, and loading it, and that works fine.
I've even tried increasing vmem, but I'm not sure what else to do. I would be grateful for advice/help.
I've also checked the size of the file.
ls -lh file.txt
-rw-r--r-- 1 ... 107M Oct 26 16:50 file.txt
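For what it's worth, read.table can need several times the on-disk size in working memory while it parses, and gzfile only shrinks the file on disk, not the resulting data frame. A lighter-weight reader, assuming the data.table package is available on the cluster:
# fread memory-maps the input and is much more frugal than read.table
library(data.table)
gtex_data <- fread("/filepath/file.txt", header = TRUE)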

How to scatter chunks of data on cluster nodes without running out of memory

I wrote code in Python that uses mpi4py to scatter chunks of data across the processors of a cluster. Each processor writes its chunk of data into a .txt file, then all these .txt files are merged into one.
Everything is working as expected.
However, for very large .txt files, the cluster is complaining about memory:
mpiexec noticed that process ... rank ... on node ... exited on signal 9 (Killed)
I'm trying to set the parameters in the PBS file in a way which avoids this issue. So far, this is not working:
#!/bin/bash
#PBS -S /bin/bash
## job name and output file
#PBS -N test
#PBS -j oe
#PBS -o job.o
#PBS -V
###########################################################
# USER PARAMETERS
##PBS -l select=16:mpiprocs=1:mem=8000mb
#PBS -l select=4:ncpus=16:mem=4gb
#PBS -l walltime=03:00:00
###########################################################
ulimit -Hn
# number of processes
NPROC=64
echo $NPROC
CURRDIR=$PBS_O_WORKDIR
echo $CURRDIR
cd $CURRDIR
module load anaconda/2019.10
source activate py3
cat $PBS_NODEFILE
echo starting run in current directory $CURRDIR
echo " "
mpiexec -n $NPROC -hostfile $PBS_NODEFILE python $CURRDIR/test.py
echo "finished successfully"
Any idea?
MPI uses distributed memory: if you have more data than fits in one process, you spread it over multiple processes, for instance on multiple computers. So "scattering" data often doesn't make sense, because it assumes that all of this too-large dataset first fits in one process. In a true MPI program each process creates its own data, or reads its share from a file, but you never have all the data in one place.
So if you're dealing with lots of data, a scattering approach will of course run out of memory, but it's the wrong way to approach your problem to begin with. Rewrite your program to make it truly distributed-memory parallel.
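A sketch of that pattern with mpi4py: each rank produces (or reads) only its own slice and writes it out, so no process ever holds the full dataset. The record count and file names here are made up for illustration:
from mpi4py import MPI
import shutil

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_total = 10_000_000                      # total number of records (example)
lo = rank * n_total // size               # this rank's share of the work
hi = (rank + 1) * n_total // size

# each rank generates or reads only its own slice -- nothing is scattered
with open("chunk_%04d.txt" % rank, "w") as f:
    for i in range(lo, hi):
        f.write("%d\n" % i)

comm.Barrier()                            # wait until every rank has written
if rank == 0:
    with open("merged.txt", "w") as out:  # merge by streaming, not by loading
        for r in range(size):
            with open("chunk_%04d.txt" % r) as f:
                shutil.copyfileobj(f, out)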

r + hpc + git question: submitting multiple jobs with different values for a parameter list [duplicate]

I am running R on a multiple node Linux cluster. I would like to run my analysis on R using scripts or batch mode without using parallel computing software such as MPI or snow.
I know this can be done by dividing the input data such that each node runs different parts of the data.
My question is how do I go about this exactly? I am not sure how I should code my scripts. An example would be very helpful!
I have been running my scripts so far using PBS, but they only seem to run on one node, since R is a single-threaded program. Hence, I need to figure out how to adjust my code so it distributes the work to all of the nodes.
Here is what I have been doing so far:
1) command line:
qsub myjobs.pbs
2) myjobs.pbs:
#!/bin/sh
#PBS -l nodes=6:ppn=2
#PBS -l walltime=00:05:00
#PBS -l arch=x86_64

pbsdsh -v $PBS_O_WORKDIR/myscript.sh
3) myscript.sh:
#!/bin/sh
cd $PBS_O_WORKDIR
R CMD BATCH --no-save my_script.R
4) my_script.R:
library(survival)
...
write.table(test, "TESTER.csv", sep=",", row.names=F, quote=F)
Any suggestions will be appreciated! Thank you!
-CC
This is rather a PBS question; I usually make an R script (with the Rscript path after #!) and have it read a parameter (via the commandArgs function) that controls which part of the job the current instance should do. Because I use multicore a lot I usually need only 3-4 nodes, so I just submit a few jobs calling this R script, each with one of the possible control-argument values.
On the other hand, your use of pbsdsh should do its job... Then the value of PBS_TASKNUM can be used as the control parameter.
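A sketch of that control-parameter pattern, assuming pbsdsh sets PBS_TASKNUM per task (as Torque does) and reusing the script names from the question. myscript.sh, launched once per task by pbsdsh:
#!/bin/sh
cd $PBS_O_WORKDIR
Rscript my_script.R $PBS_TASKNUM
and at the top of my_script.R:
# pick this instance's share of the work from the task number
task <- as.integer(commandArgs(trailingOnly = TRUE)[1])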
This was an answer to a related question - but it's an answer to the comment above (as well).
For most of our work we do run multiple R sessions in parallel using qsub (instead).
If it is for multiple files I normally do:
while read infile rest
do
    qsub -v infile=$infile call_r.pbs
done < list_of_infiles.txt
call_r.pbs:
...
R --vanilla -f analyse_file.R $infile
...
analyse_file.R:
args <- commandArgs()
infile=args[5]
outfile=paste(infile,".out",sep="")...
Then I combine all the output afterwards...
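A sketch of that combine step (assuming every output file ends in .out and all files share the same layout):
cat *.out > combined_results.txt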
This problem seems very well suited for use of GNU parallel. GNU parallel has an excellent tutorial here. I'm not familiar with pbsdsh, and I'm new to HPC, but to me it looks like pbsdsh serves a similar purpose to GNU parallel. I'm also not familiar with launching R from the command line with arguments, but here is my guess at how your PBS file would look:
#!/bin/sh
#PBS -l nodes=6:ppn=2
#PBS -l walltime=00:05:00
#PBS -l arch=x86_64
...
parallel -j2 --sshloginfile $PBS_NODEFILE --wd $PBS_O_WORKDIR \
    Rscript myscript.R {} :::: infilelist.txt
where infilelist.txt lists the data files you want to process, e.g.:
inputdata01.dat
inputdata02.dat
...
inputdata12.dat
Your myscript.R would access the command line argument to load and process the specified input file.
My main purpose with this answer is to point out the availability of GNU parallel, which came about after the original question was posted. Hopefully someone else can provide a more tangible example. Also, I am still wobbly with my usage of parallel, for example, I'm unsure of the -j2 option. (See my related question.)

GNUPlot cannot be executed after mpirun command in PBS script

I have a PBS script something like this:
#PBS -N marcell_single_cell
#PBS -l nodes=1:ppn=1
#PBS -l walltime=20000:00:00
#PBS -e stderr.log
#PBS -o stdout.log
# Specify the shell type
#PBS -S /bin/bash
# Specify the queue type
#PBS -q dque
#uncomment this if you want to debug the process
#set -vx
cd $PBS_O_WORKDIR
ulimit -s unlimited
NPROCS=`wc -l < $PBS_NODEFILE`
#export PATH=$PBS_O_PATH
echo This job has allocated $NPROCS nodes
echo Cleaning old files...
rm -rf *.png *.plt *.log
echo Cleaning success
/opt/Lib/openmpi-2.1.3/bin/mpirun -np $NPROCS /scratch4/marcell/CellMLSimulator/bin/CellMLSimulator -ionmodel grandi2010 -solverType CVode -irepeat 4 -dt 0.01
gnuplot -p plotting.gnu
It throws an error like this into the PBS error log:
/var/spool/torque/mom_priv/jobs/6265.node01.SC: line 28: gnuplot: command not found
I've already made sure that the gnuplot path has been added to the PATH environment variable.
However, the strange part is that if I swap the order of the commands, gnuplot first and then mpirun, there isn't any error. I suspect that some commands after mpirun need some special configuration, but I don't know how to do that.
I already tried following this solution, to no avail:
sleep command not found in torque pbs but works in shell
EDITED:
it seems that gnuplot fails both before and after mpirun. This is the output of which:
which: no gnuplot in (/opt/intel/composer_xe_2011_sp1.9.293/bin/intel64:/opt/intel/composer_xe_2011_sp1.9.293/bin/intel64:/opt/pgi/linux86-64/9.0-4/bin:/opt/openmpi/bin:/usr/kerberos/bin:/prog/tools/grace/grace/bin:/home/prog/ansys_inc/v121/fluent/bin:/bin:/usr/bin:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/bin/intel64:/opt/intel/composer_xe_2011_sp1.9.293/mpirt/bin/intel64:/scratch7/feber/jdk1.8.0_101:/scratch7/feber/code/apache-maven/bin:/usr/local/bin:/scratch7/cml/bin)
It's strange, since when I look for gnuplot, there is one in /usr/local/bin:
ls -l /usr/local/bin/gnuplot
-rwxr-xr-x 1 root root 3262113 Sep 18 2017 /usr/local/bin/gnuplot
Moreover, if I run those commands without PBS, they execute as I expect:
/scratch4/marcell/CellMLSimulator/bin/CellMLSimulator -ionmodel grandi2010 -solverType CVode -irepeat 4 -dt 0.01
gnuplot -p plotting.gnu
It's very likely that your system has different "login/head nodes" and "compute nodes". This is a commonly used practice in many supercomputing clusters. While you build and launch your application from the head node, it gets executed on one or more compute nodes.
The compute nodes can have different hardware and software compared to the head nodes. In your case, gnuplot is installed only on the head node, as you can see from the different outputs of which gnuplot. To solve this, you have three approaches:
1. Request the system administrators to install gnuplot on the compute nodes.
2. Build and install your own copy of gnuplot in a filesystem that is accessible from the compute nodes. It could be your home directory or somewhere else, depending on your cluster; in general the filesystem where your application lives will be available, so in your case anywhere under /scratch4/marcell/ would probably work.
3. Run gnuplot on the head node after the MPI job finishes, as a post-processing step. PBS/Torque does not provide a direct way to do this; you'll need a separate bash (not PBS) script to do it.
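For the second approach, a sketch of what the job script would gain (the install prefix here is hypothetical):
# make a user-installed gnuplot visible on the compute node
export PATH=/scratch4/marcell/tools/gnuplot/bin:$PATH
which gnuplot          # should now resolve to the user install
gnuplot -p plotting.gnu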

Going from multi-core to multi-node in R

I've gotten accustomed to doing R jobs on a cluster with 32 cores per node. I am now on a cluster with 16 cores per node, and I'd like to maintain (or improve) the performance I had been getting by using more than one node at a time.
As can be seen from my dummy shell script and dummy function (below), parallelization on a single node is really easy. Is it similarly easy to extend this to multiple nodes? If so, how would I modify my scripts?
R script:
library(plyr)
library(doMC)
registerDoMC(16)
dothisfunctionmanytimes = function(d){
  print(paste("my favorite number is", d$x, "and my favorite letter is", d$y))
}
# name the columns so d$x and d$y exist inside the function
d = expand.grid(x = 1:1000, y = letters)
# d_ply requires .variables; splitting on both columns gives one row per piece
d_ply(.data = d, .variables = .(x, y), .fun = dothisfunctionmanytimes, .parallel = TRUE)
Shell script:
#!/bin/sh
#PBS -N runR
#PBS -q normal
#PBS -l nodes=1:ppn=32
#PBS -l walltime=5:00:00
#PBS -j oe
#PBS -V
#PBS -M email
#PBS -m abe
. /etc/profile.d/modules.sh
module load R
#R_LIBS=/home/diag/opt/R/local/lib
R_LIBS_USER=${HOME}/R/x86_64-unknown-linux-gnu-library/3.0
OMP_NUM_THREADS=1
export R_LIBS R_LIBS_USER OMP_NUM_THREADS
cd $PBS_O_WORKDIR
R CMD BATCH script.R
(The shell script gets submitted by qsub script.sh)
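For reference, one common way to extend this pattern across nodes without MPI is a PSOCK cluster built from the PBS node file. A minimal sketch, assuming R is installed on every allocated node and passwordless ssh works between them:
library(plyr)
library(doParallel)

# PBS_NODEFILE lists one line per allocated core, so this
# starts one worker per core across all nodes
hosts <- readLines(Sys.getenv("PBS_NODEFILE"))
cl <- parallel::makePSOCKcluster(hosts)
registerDoParallel(cl)

d <- expand.grid(x = 1:1000, y = letters)
d_ply(.data = d, .variables = .(x, y),
      .fun = function(d) print(paste("number", d$x, "letter", d$y)),
      .parallel = TRUE)

parallel::stopCluster(cl)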
