parallel download of 7000 files - r

Please could you advise on an effective method to download a large number of files from EBI: https://github.com/eQTL-Catalogue/eQTL-Catalogue-resources/tree/master/tabix
We can use wget sequentially on each file. I have seen some information about using a Python script: How to parallelize file downloads?
although there might be complementary approaches using a bash script or R?

If you don't strictly need R here, the xargs command-line utility allows parallel execution. (I'm using the Linux version from the findutils set of utilities. I believe this is also supported by the version of xargs that ships with git-bash. I don't know whether the macOS binary is installed by default or whether it includes this option, ymmv.)
For proof, I'll create a mywget script that prints the start time (and args) and then passes all arguments to wget.
(mywget)
echo "$(date) :: ${#}"
wget "${#}"
I also have a text file urllist with one URL per line (it's crafted so that I don't have to encode anything or worry about spaces, etc.). (Because I'm using a personal remote server to benchmark this, and I don't want the slashdot effect, I'll obfuscate the URLs here ...)
(urllist)
https://somedomain.com/quux0
https://somedomain.com/quux1
https://somedomain.com/quux2
First, no parallelization, simply consecutive (default). (The -a urllist is to read items from the file urllist instead of stdin. The -q is to be quiet, not required but certainly very helpful when doing things in parallel, since the typical verbose option has progress bars that will overlap each other.)
$ time xargs -a urllist ./mywget -q
Tue Feb 1 17:27:01 EST 2022 :: -q https://somedomain.com/quux0
Tue Feb 1 17:27:10 EST 2022 :: -q https://somedomain.com/quux1
Tue Feb 1 17:27:12 EST 2022 :: -q https://somedomain.com/quux2
real 0m13.375s
user 0m0.210s
sys 0m0.958s
Second, adding -P 3 so that I run up to 3 simultaneous processes. The -n1 is required so that each call to ./mywget gets only one URL. You can adjust this if you want a single call to download multiple files consecutively.
$ time xargs -n1 -P3 -a urllist ./mywget -q
Tue Feb 1 17:27:46 EST 2022 :: -q https://somedomain.com/quux0
Tue Feb 1 17:27:46 EST 2022 :: -q https://somedomain.com/quux1
Tue Feb 1 17:27:46 EST 2022 :: -q https://somedomain.com/quux2
real 0m13.088s
user 0m0.272s
sys 0m1.664s
In this case, as BenBolker suggested in a comment, parallel download saved me nothing; it still took 13 seconds. However, you can see that in the first block the downloads started sequentially, 9 seconds and then 2 seconds apart. (We can infer that the first file is much larger, taking about 9 seconds, and that the second file took about 2 seconds.) In the second block, all three started at the same time.
(Side note: this doesn't require a shell script at all; you can use R's system or the processx::run function to call xargs -n1 -P3 wget -q with a text file of URLs that you create in R. So you can still do this comfortably from the warmth of your R console.)

I had a similar task and my approach was the following:
I used Python, Redis and supervisord.
I pushed all the paths/URLs of the files I needed onto a Redis list (I just created a small Python script to read my CSV and push it to a Redis queue/list).
Then I created another Python script to read (pop) one item from the Redis list and download it.
Using supervisord, I simply launched 10 parallel Python workers that pulled file paths from Redis and downloaded the files.
It might be too complicated for you, but this solution is very scalable and can use multiple servers, etc.
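For a rough idea of the same queue-based setup without writing Python, here is a sketch using redis-cli (assuming a local Redis on the default port and a files.txt with one URL per line; the list name download_queue is made up):
# producer: push every URL onto a Redis list
while read -r url; do redis-cli RPUSH download_queue "$url" > /dev/null; done < files.txt
# worker (start several copies, e.g. under supervisord): pop one URL at a time and fetch it
while url=$(redis-cli LPOP download_queue); [ -n "$url" ]; do
    wget -q "$url"
done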

Thank you all. I have investigated a few other ways to do it :
#!/bin/bash
############################
while read -r file; do
wget "${file}" &
done < files.txt
###########################
while read -r file; do
wget "${file}" -b
done < files.txt
##########################
cat files.txt | xargs -n 1 -P 10 wget -q
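A variation on the first loop that caps the number of simultaneous downloads instead of backgrounding all 7000 at once - only a sketch; wait -n needs bash 4.3+, and files.txt again holds one URL per line:
#!/bin/bash
MAXJOBS=10
while read -r file; do
    # block while MAXJOBS downloads are already running
    while [ "$(jobs -rp | wc -l)" -ge "$MAXJOBS" ]; do
        wait -n
    done
    wget -q "$file" &
done < files.txt
wait   # let the last downloads finish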

Related

rsync : how to copy only latest file from target to source

We have a main Linux server, say M, where we have files like below (for 2 months, and new files arriving daily)
Folder1
PROCESS1_20211117.txt.gz
PROCESS1_20211118.txt.gz
..
..
PROCESS1_20220114.txt.gz
PROCESS1_20220115.txt.gz
We want to copy only the latest file on our processing server, say P.
So as of now, we were using the below command, on our processing server.
rsync --ignore-existing -azvh -rpgoDe ssh user@M:${TargetServerPath}/${PROCSS_NAME}_*txt.gz ${SourceServerPath}
This process worked fine until now, but from now on the processing server can keep files for only up to 3 days, whereas our main server can keep files for 2 months.
So when we remove older files from the processing server, the rsync command copies all files from main server to the processing server.
How can I change rsync command to copy only latest file from Main server?
*Note: the example above is only for one file. We have multiple files on which we have to use the same command. Hence we cannot hardcode any filename.
What I tried:
There are multiple solutions, but all of them seem to be for when I want to copy the latest file from the server I am running rsync on, not from the remote server.
I also tried running the command below to get the latest file from the main server, but I cannot pass variables to ssh in my company, as it is not allowed. So the command below works if I pass an individual path/file name, but it cannot work with variables.
ssh M 'ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz|tail -1'
Would really appreciate any suggestions on how to implement this solution.
OS: Linux 3.10.0-1160.31.1.el7.x86_64
ssh quoting is confusing - to quote something properly for the remote side, you have to double-quote it locally.
The handy printf %q trick helps here - quote the relevant parts.
file=$(
ssh M "ls -1 $(printf "%q" "${TargetServerPath}/${PROCSS_NAME}")_*.txt.gz" |
tail -1
)
rsync --ignore-existing -azvh -rpgoDe ssh user@M:"$file" "${SourceServerPath}"
or, maybe nicer, run tail -n1 on the remote side so that the minimum amount of data is transferred (we only need one filename, not all of them), invoke an explicit shell, and pass the variables as shell arguments:
file=$(ssh M "$(printf "%q " bash -c \
    'ls -1 "$1"_*.txt.gz | tail -n1' \
    '_' "${TargetServerPath}/${PROCSS_NAME}"
)")
Overall, I recommend doing a function and using declare -f :
sshqfunc() { echo "bash -c $(printf "%q" "$(declare -f "$1"); $1 \"\$@\"")"; };
work() {
ls -1 "$1"_*txt.gz | tail -1
}
tmp=$(ssh M "$(sshqfunc work)" _ "${TargetServerPath}/${PROCSS_NAME}")
or you can also use the mighty declare to transfer variables to remote - then run your command inside single quotes:
ssh M "
$(declare -p TargetServerPath PROCSS_NAME);
"'
ls -1 ${TargetServerPath}/${PROCSS_NAME}_*txt.gz | tail -1
'
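Whichever variant you pick, the captured name ($file or $tmp above) then replaces the glob in the original rsync call; a minimal sketch using the question's variables:
# copy only the single newest file to the processing server
rsync --ignore-existing -azvh -rpgoD -e ssh "user@M:${file}" "${SourceServerPath}"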

Concatenating input to svn list command with output, then pass it to grep

I currently have the following shell command which is only partially working:
svn list $myrepo/libs/ |
xargs -P 10 -L 1 -I {} echo $myrepo/libs/ {} trunk |
sed 's/ //g' |
xargs -P 20 -L 1 svn list --depth infinity |
grep .xlsx
where $myrepo corresponds to the svn server address.
The libs folder contains a number of subfolders (currently about 30, although eventually up to 100), each of which contains a number of tags, branches and a trunk. I wish to get a list of xlsx files contained only within the trunk folder of each of these subfolders. The command above works fine; however, it only returns the relative path from $myrepo/libs/subfolder/trunk/, so I get this back:
1/2/3/file.xlsx
Because of the potentially large number of files I would have to search through, I am performing it in two parallel steps by using xargs -P (I do not have and cannot use GNU parallel). I am also trying to do this in one command so it can be used from php/perl/etc. and avoid multiple system calls.
What I would like to do is concatenate the input to this part of the command:
xargs -P 20 -L 1 svn list --depth infinity
with the output from it, to give the following:
$myrepo/libs/subfolder/trunk/1/2/3/file.xlsx
Then pass this to the grep to find the xlsx files.
I appreciate any assistance that could be provided.
If I manage to correctly divine your intention, something like this might work for you.
svn list "$myrepo/libs/" |
xargs -P 20 -n 1 sh -c 'svn list -R "$0/trunk/$1" |
sed -n "s%.*\.xlsx$%$0/trunk/$1/&%p"' "$myrepo"
Briefly, we postprocess the output from the inner svn list to filter it down to just .xlsx files and tack the full SVN path back on at the same time. This way, the processing happens where the repo path is still known.
We hack things a bit by passing in "$myrepo" as "$0" to the subordinate sh so we don't have to export this variable. The input from the outer svn list comes as $1.
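To see that $0/$1 plumbing in isolation, here is a toy sketch with made-up values:
# with sh -c, the first argument after the script becomes $0 and the next becomes $1
sh -c 'echo "base=$0 item=$1"' "https://svn.example.com/repo" "subfolder/"
# prints: base=https://svn.example.com/repo item=subfolder/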
(The repos I have access to have a slightly different layout so there could be a copy/paste error somewhere.)

How to find out what files were output by a list of ksh scripts?

Is it possible to find out what files were produced by a particular script just by having its PID?
input:
scriptA.ksh pid: 1234
output:
scriptA.log
OS version: AIX
You could use truss (similar to strace on Linux) for this.
truss scriptA.ksh 2>&1 | grep open
You'll have to sift through some unrelated calls to open(), but your log files will be in there.
Also, truss can attach to existing processes by using the -p switch.
Note: I speak from experience with strace, but it looks like this all holds for truss...
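A sketch of the attach case, using the PID from the question:
# attach to the already-running process and watch for open() calls
truss -p 1234 2>&1 | grep open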

Killing the process with the lowest PID in Unix

I have 2 processes with the same name but different PIDs. I need to find the process with the lowest PID among these 2 and kill it. How do I do that?
A bit contrived, but this does the trick (using bash as an example):
pidof bash | grep -o "[0-9]*" | sort -n | sed '1q'
or
pidof bash | tr -s " " "\n" | sort -n | sed '1q'
Keep in mind that the "lowest PID" doesn't really tell you anything about startup order unless PIDs have never wrapped around from the maximum back down to the low unused numbers. A better (and probably more complex) way of doing this would be to kill either the older process or the newer process, depending on which one is the bad one.
You can find some inspiration here How do you kill all Linux processes that are older than a certain age?
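If pgrep/pkill are available on your system, the oldest matching process (usually the lowest PID when there has been no wraparound) can be targeted directly; a sketch, again using bash as the example name:
# -o selects the oldest matching process; -n would select the newest
kill "$(pgrep -o bash)"
# or in one step
pkill -o bash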
Unix, or a *nix with a /proc directory?
If you have /proc support, parse through /proc/[0-9]+/cmdline to look for the processes whose command matches what you're looking for; the directory name (after /proc) is the id.
opendir() and readdir() will be your tools to parse through the directory.
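A quick shell sketch of that /proc scan (assuming a Linux-style /proc and a simplistic match on the name bash):
# print the lowest PID whose command line mentions "bash"
for d in /proc/[0-9]*; do
    tr '\0' ' ' < "$d/cmdline" 2>/dev/null | grep -qw bash && echo "${d#/proc/}"
done | sort -n | head -n1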
If you don't have /proc support, you can popen("ps -options here", "r"); to read the output of ps (with whatever options are appropriate for your system) to parse through the process list.

Most powerful examples of Unix commands or scripts every programmer should know

There are many things that all programmers should know, but I am particularly interested in the Unix/Linux commands that we should all know. For accomplishing tasks that we may come up against at some point such as refactoring, reporting, network updates etc.
The reason I am curious is that, having previously worked as a software tester at a software company while studying for my degree, I noticed that all of the developers (who were developing Windows software) had 2 computers.
To their left was their Windows XP development machine, and to the right was a Linux box. I think it was Ubuntu. Anyway they told me that they used it because it provided powerful unix operations that Windows couldn't do in their development process.
This makes me curious to know, as a software engineer what do you believe are some of the most powerful scripts/commands/uses that you can perform on a Unix/Linux operating system that every programmer should know for solving real world tasks that may not necessarily relate to writing code?
We all know what sed, awk and grep do. I am interested in some actual Unix/Linux scripting pieces that have solved a difficult problem for you, so that other programmers may benefit. Please provide your story and source.
I am sure there are numerous examples like this that people keep in their 'Scripts' folder.
Update: People seem to be misinterpreting the question. I am not asking for the names of individual unix commands, rather UNIX code snippets that have solved a problem for you.
Best answers from the Community
Traverse a directory tree and print out paths to any files that match a regular expression:
find . -exec grep -l -e 'myregex' {} \; >> outfile.txt
Invoke the default editor (Nano/Vim)
(works on most Unix systems including Mac OS X)
The default editor is whatever your EDITOR environment variable is set to, e.g. export EDITOR=/usr/bin/pico, which is located in ~/.profile under Mac OS X.
Ctrl+x Ctrl+e
List all running network connections (including which app they belong to)
lsof -i -nP
Clear the Terminal's search history (Another of my favourites)
history -c
I find commandlinefu.com to be an excellent resource for various shell scripting recipes.
Examples
Common
# Run the last command as root
sudo !!
# Rapidly invoke an editor to write a long, complex, or tricky command
ctrl-x ctrl-e
# Execute a command at a given time
echo "ls -l" | at midnight
Esoteric
# output your microphone to a remote computer's speaker
dd if=/dev/dsp | ssh -c arcfour -C username@host dd of=/dev/dsp
How to exit VI
:wq
Saves the file and ends the misery.
Alternative of ":wq" is ":x" to save and close the vi editor.
grep
awk
sed
perl
find
A lot of Unix power comes from its ability to manipulate text files and filter data. Of course, you can get all of these commands for Windows. They are just not native in the OS, like they are in Unix.
and the ability to chain commands together with pipes etc. This can create extremely powerful single lines of commands from simple functions.
Your shell is the most powerful tool you have available
being able to write simple loops etc
understanding file globbing (e.g. *.java etc.)
being able to put together commands via pipes, subshells. redirection etc.
Having that level of shell knowledge allows you to do enormous amounts on the command line, without having to record info via temporary text files, copy/paste etc., and to leverage off the huge number of utility programs that permit slicing/dicing of data.
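For instance, a throwaway pipeline built from globbing and a couple of filters, with no temporary files (paths here are hypothetical):
# list the five *.java files with the most lines under the current tree
find . -name '*.java' -print0 | xargs -0 wc -l | grep -v ' total$' | sort -n | tail -n 5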
Unix Power Tools will show you so much of this. Every time I open my copy I find something new.
I use this so much I am actually ashamed of myself. Remove spaces from all filenames and replace them with an underscore:
[removespaces.sh]
#!/bin/bash
find . -type f -name "* *" | while IFS= read -r file
do
mv "$file" "${file// /_}"
done
My personal favorite is the lsof command.
"lsof" can be used to list opened file descriptors, sockets, and pipes.
I find it extremely useful when trying to figure out which processes have used which ports/files on my machine.
Example: List all internet connections without hostname resolution and without port to port name conversion.
lsof -i -nP
http://www.manpagez.com/man/8/lsof/
If you make a typo in a long command, you can rerun the command with a substitution (in bash):
mkdir ~/aewseomeDirectory
you can see that "awesome" is misspelled; you can type the following to rerun the command with the typo corrected
^aew^awe
it then outputs what it substituted (mkdir ~/aweseomeDirectory) and runs the command. (don't forget to undo the damage you did with the incorrect command!)
The tr command is the most under-appreciated command in Unix:
#Convert all input to upper case
ls | tr a-z A-Z
#take the output and put into a single line
ls | tr "\n" " "
#get rid of all numbers
ls -lt | tr -d 0-9
When solving problems on faulty Linux boxes, by far the most common key sequence I end up typing is Alt+SysRq R E I S U B.
The power of these tools (grep, find, awk, sed) comes from their versatility, so giving a particular case seems quite useless.
man is the most powerful command, because then you can understand what you type instead of just blindly copy-pasting from Stack Overflow.
Examples are welcome, but there are already topics for this.
My most used :
grep something_to_find * -R
which can be replaced by ack and
find | xargs
find with results piped into xargs can be very powerful
Some of you might disagree with me, but nevertheless, here's something to talk about. If one learns gawk (other variants as well) thoroughly, one can skip learning and using grep/sed/wc/cut/paste and a few other *nix tools. All you need is one good tool to do the job of many combined.
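For example, one awk invocation can stand in for a grep | cut pipeline; a small sketch against the standard /etc/passwd layout:
# print the login names of all accounts whose shell is bash
# (replaces: grep ':/bin/bash$' /etc/passwd | cut -d: -f1)
awk -F: '$NF == "/bin/bash" { print $1 }' /etc/passwd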
Some way to search (multiple) badly formatted log files, in which the search string may be found on an "orphaned" next line. For example, to display both the 1st, and a concatenated 3rd and 4th line when searching for id = 110375:
[2008-11-08 07:07:01] [INFO] ...; id = 110375; ...
[2008-11-08 07:07:02] [INFO] ...; id = 238998; ...
[2008-11-08 07:07:03] [ERROR] ... caught exception
...; id = 110375; ...
[2008-11-08 07:07:05] [INFO] ...; id = 800612; ...
I guess there must be better solutions (yes, add them...!) than the following concatenation of the two lines using sed prior to actually running grep:
#!/bin/bash
if [ $# -ne 1 ]
then
echo "Usage: `basename $0` id"
echo "Searches all myproject's logs for the given id"
exit -1
fi
# When finding "caught exception" then append the next line into the pattern
# space by using "N", and next replace the newline with a colon and a space
# to ensure a single line starting with a timestamp, to allow for sorting
# the output of multiple files:
ls -rt /var/www/rails/myproject/shared/log/production.* \
| xargs cat | sed '/caught exception$/N;s/\n/: /g' \
| grep "id = $1" | sort
...to yield:
[2008-11-08 07:07:01] [INFO] ...; id = 110375; ...
[2008-11-08 07:07:03] [ERROR] ... caught exception: ...; id = 110375; ...
Actually, a more generic solution would append all (possibly multiple) lines that do not start with some [timestamp] to its previous line. Anyone? Not necessarily using sed, of course.
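One possible generic take, as a sketch (not heavily tested), reusing the same log path and $1 id argument from the script above: let awk buffer each [timestamp] line and glue any continuation lines onto it, then grep as before:
ls -rt /var/www/rails/myproject/shared/log/production.* \
  | xargs cat \
  | awk '/^\[/ { if (buf != "") print buf; buf = $0; next }
               { buf = buf ": " $0 }
         END   { if (buf != "") print buf }' \
  | grep "id = $1" | sort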
for card in `seq 1 8`; do
    for ts in `seq 1 31`; do
        echo $card $ts >> /etc/tuni.cfg
    done
done
was better than writing the silly 248 lines of config by hand.
Needed to drop some leftover tables that were all prefixed with 'tmp'
for table in `echo show tables | mysql quotiadb | grep ^tmp`; do
    echo drop table $table
done
Review the output, rerun the loop and pipe it to mysql
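Once the echoed statements look right, the same loop can feed mysql directly - a sketch, with semicolons added so each statement is terminated:
for table in `echo show tables | mysql quotiadb | grep ^tmp`; do
    echo "drop table $table;"
done | mysql quotiadb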
Finding PIDs without the grep itself showing up
export CUPSPID=`ps -ef | grep cups | grep -v grep | awk '{print $2;}'`
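(Where pgrep is available, it avoids the grep -v grep dance; a rough equivalent:)
export CUPSPID=`pgrep -f cups`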
Repeat your previous command in bash using !!. I oftentimes run chown otheruser: -R /home/otheruser and forget to use sudo. If you forget sudo, using !! is a little easier than arrow-up and then home.
sudo !!
I'm also not a fan of automatically resolved hostnames and names for ports, so I keep an alias for iptables mapped to iptables -nL --line-numbers. I'm not even sure why the line numbers are hidden by default.
Finally, if you want to check if a process is listening on a port as it should, bound to the right address you can run
netstat -nlp
Then you can grep the process name or port number (-n gives you numeric).
I also love to have the aid of colors in the terminal. I like to add this to my bashrc to remind me whether I'm root without even having to read it. This actually helped me a lot, I never forget sudo anymore.
red='\033[1;31m'
green='\033[1;32m'
none='\033[0m'
if [ $(id -u) -eq 0 ]; then
    PS1="[\[$red\]\u\[$none\]@\H \w]$ "
else
    PS1="[\[$green\]\u\[$none\]@\H \w]$ "
fi
Those are all very simple commands, but I use them a lot. Most of them even deserved an alias on my machines.
Grep (try Windows Grep)
sed (try Sed for Windows)
In fact, there's a great set of ports of really useful *nix commands available at http://gnuwin32.sourceforge.net/. If you have a *nix background and now use windows, you should probably check them out.
You would be better off keeping a cheat sheet with you... there is no single command that can be termed the most useful. If a particular command does your job, it is useful and powerful.
Edit: you want powerful shell scripts? Shell scripts are programs. Get the basics right, build on individual commands, and you'll get what is called a powerful script. The one that serves your need is powerful; otherwise it's useless. It would have been better had you mentioned a problem and asked how to solve it.
Sort of an aside, but you can get PowerShell on Windows. It's really powerful and can do a lot of the *nix-type stuff. One cool difference is that you work with .NET objects instead of text, which can be useful if you're using the pipeline for filtering etc.
Alternatively, if you don't need the .NET integration, install Cygwin on the Windows box. (And add its directory to the Windows PATH.)
The fact you can use -name and -iname multiple times in a find command was an eye opener to me.
[findplaysong.sh]
#!/bin/bash
cd ~
echo Matched...
find /home/musicuser/Music/ -type f -iname "*$1*" -iname "*$2*" -exec echo {} \;
echo Sleeping 5 seconds
sleep 5
find /home/musicuser/Music/ -type f -iname "*$1*" -iname "*$2*" -exec mplayer {} \;
exit
When things work on one server but are broken on another the following lets you compare all the related libraries:
export MYLIST=`ldd amule | awk ' { print $3; }'`; for a in $MYLIST; do cksum $a; done
Compare this list with the one between the machines and you can isolate differences quickly.
To run several processes in parallel without overloading the machine too much (on a multiprocessor architecture):
NP=`cat /proc/cpuinfo | grep processor | wc -l`
# your loop here
if [ `jobs | wc -l` -gt $NP ]; then
    wait
fi
launch_your_task_in_background &
# finish your loop here
Start all WebService(s)
find -iname '*webservice*' | xargs -I {} service {} restart
Search for a local class in the java subdirectories
find -iname '*.java' | xargs grep 'class Pool'
Find all items from a file recursively in subdirectories of the current path:
cat searches.txt | xargs -I {} -d, -n 1 grep -r {}
P.S. searches.txt: first,second,third, ... ,million
:() { :|: &} ;:
Fork Bomb without root access.
Try it at your own risk.
You can do anything with this...
gcc
