I have a scenario in which I need to download files with the curl command, and I want my script to pause for some time before downloading the next one. I used the sleep command like this:
sleep 3m
but it is not working.
Any ideas?
Thanks in advance.
Make sure your text editor is not putting a \r\n instead of only a \n for every new line. This is typical if you are writing the script on Windows.
Use Notepad++ (Windows) and go to Edit | EOL Conversion | UNIX, then save it. If you are stuck with the file as it is, I have read here [talk.maemo.org/showthread.php?t=67836] that you can use [tr -d "\r" < oldname.sh > newname.sh] to remove the problem. If you want to check whether a file has the problem, use [od -c yourscript.sh]; a \r will appear before every \n.
Other problems I have seen this cause: cd /dir/dir fails with [cd: 1: can't cd to /dir/dir], and copying scriptfile.sh to newfilename produces a file called newfilenameX, where X is an invisible character (make sure you can delete it before trying that); if the file is on a network share, a Windows machine can see the character. For a reliable test, make sure the command you check is not on the last line of the script, since the last line may lack the line ending entirely.
Until I figured it out (I knew I had to ask Google about something that can manifest in so many different ways), I thought there was an issue with the Linux version I was using (sleep not working in a script?!).
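To illustrate (a throwaway example; the exact od output will differ slightly by system):

$ printf 'sleep 3m\r\n' > broken.sh    # simulate a script saved with Windows line endings
$ od -c broken.sh
0000000   s   l   e   e   p       3   m  \r  \n
0000012
$ tr -d '\r' < broken.sh > fixed.sh    # fixed.sh now has plain \n endings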
Are you sure you are using sleep the right way? Based on your description, you should be invoking it as:
sleep 180
Is this the way you are doing it? (Note that the m suffix in sleep 3m is a GNU coreutils extension; portable POSIX sleep only accepts a plain number of seconds.)
You might also want to consider the wget command, as it has an explicit --wait flag, so you might avoid the loop in the first place.
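For instance, a minimal sketch, assuming the URLs are listed one per line in a file (urls.txt is a made-up name):

wget --wait=180 --input-file=urls.txt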
If you stick with curl, something like this? (Your curl options were elided in the post; -O here is just a stand-in.)

while read -r urlname
do
    curl -O "$urlname"    # -O is a stand-in; substitute your actual curl options
    sleep 180             # 180 seconds is 3 minutes
done < file_with_several_url_to_be_fetched
Finally getting around to switching to zsh (from bash)... I'm trying to understand a bit more about the completion system and could use a quick pointer. I have been able to get other completions to work for command arguments, but I'm struggling with path completions.
I use a simple function (cdp) to jump to project directories. I've set up a very basic completion script, which almost works. I just can't seem to get the behavior that I'm hoping for.
Ideally, typing cdp in{tab} would expand to all projects starting with in, such as:
~/Projects/indigo ~/Projects/instant
Instead, I can only get cdp {tab} to complete the ~/Projects path itself; from there it will expand the first-level directories. I'd like standard cd completion to take over once the project directory is expanded.
Here is the completion script, saved as _cdp and added to fpath:
#compdef cdp
basedir="$HOME/Projects"

# the function for jumping to directories...
cdp() {
    if [ -z "$1" ]; then
        cd "$basedir"
    else
        cd "$1"
    fi
}

# completion helper...
_alternative "directories:user directory:($basedir/*)"
It's pretty basic; I'm just stuck trying to sort out where to go next. Any thoughts or pointers would be great. Thanks!
UPDATE
I'm finding that cdpath works fine for most of what I need... It would still be interesting to know how to complete this simple function, but for now at least I have a working solution using cdpath and auto_cd.
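For reference, the cdpath/auto_cd setup amounts to roughly this in ~/.zshrc (both are standard zsh options; adjust the directory to taste):

setopt auto_cd
cdpath=(~/Projects)

With that, cd in{tab} completes against the directories under ~/Projects from anywhere, and with auto_cd just typing a project name jumps into it.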
Before anyone else checks: I am confident this is not a duplicate of the existing question about adding a header to multiple files in Unix (that question is here: Adding header into multiple text files). This is more about optimisation of the solution I am currently using for the issue.
I have numerous directories, each containing over 20000 files, and I want to add the same header to every file.
What I have been doing is:
sed -i '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' *.txt
Now, this does work exactly as I want it to, but there have been a couple of issues.
First, this seems to be an extremely slow method, and it can take a pretty long time to get through all 20K+ files.
Second, and more frustratingly, my connection to the server has occasionally timed out during this long process, so the command doesn't finish and I end up with half the files having the header and half not. If I then start from the top again, a number of files would get the header twice, so I actually have to recreate the files and add the header all at once.
So, I am wondering whether there is a better/quicker solution to this problem. The question I linked above looks like it would actually be slower (it loops, so there is more for the command line to do at each file), so it doesn't seem like it would fix this.
Don't use -i. It confuses things when you get interrupted. Instead, use
mkdir -p ../output-dir
for file in *.txt; do
sed '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' "$file" > ../output-dir/"$file"
done
When you're done, you can rename the directories if you wish. This doesn't address the connection issue (ThoriumBR's suggestion of nohup is good for that), but when it happens you can recover state more easily.
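If an interruption does strike, a small guard makes the loop safe to re-run (a sketch, reusing the header line from above; the .tmp suffix is my own convention):

for file in *.txt; do
    [ -e "../output-dir/$file" ] && continue    # already processed on a previous run
    sed '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' "$file" > "../output-dir/$file.tmp" &&
        mv "../output-dir/$file.tmp" "../output-dir/$file"    # rename only on success
done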
First, adding a header is slow because the entire file contents have to be rewritten to insert something at the start. Appending a trailer would be very fast.
Second, use nohup:
nohup - run a command immune to hangups, with output to a non-tty
Using nohup sed -i '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene' *.txt & will keep the command running in the background even if the server logs you out (note the trailing &; nohup by itself only makes the command immune to hangups).
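If the sheer number of sed invocations is the bottleneck, you could also fan the work out in parallel (a sketch, assuming GNU xargs and GNU sed, and accepting the -i caveats discussed above):

printf '%s\0' *.txt | xargs -0 -n 100 -P 4 \
    sed -i '1ichr\tpos\tref\talt\treffrq\tinfo\trs\tpval\teffalt\tgene'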
This is probably obvious, but Google seems to have let me down. I need to create zero-byte files with arbitrary names on Unix (AIX, ksh). What is a good command to do this? Something I can script, obviously.
Just to be clear, I'm not doing anything stupid. This is a script to generate certain testing scenarios (checking proper behavior when handling 0-byte files).
I figured it out:
touch "$filename" will do it.
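For generating a batch of test files, a plain loop on top of touch works (a sketch; the empty_$i names are made up):

i=0
while [ "$i" -lt 10 ]; do
    touch "empty_$i"    # creates a 0-byte file if it does not already exist
    i=$((i+1))
done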
Save your keystrokes and the unnecessary process; redirection will do:
$ > file
$ ls
file
Use dd:
dd if=/dev/null of=zero.out count=0
I am appending the standard output and error of a shell script execution on a Unix box like this:
/home/mydir/shellScript.sh >> /home/mydir/shellScript.log 2>&1
Now I am wondering how to keep logs going back only about 30 days; otherwise the log file size will keep increasing.
I would appreciate any recommendations.
This kind of thing is generally done with a tool such as logrotate.
For example, with Apache's logs, I've seen it used to:
Once per day, move the current file to another (so there is one log file per day), gzipping the file from the day before
Delete archived files that were more than one week old
So, I suppose you might be able to use it to get what you're asking.
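For this case, the configuration might look something like the sketch below, dropped into /etc/logrotate.d/ (the directives are standard logrotate; adjust the rotation count and schedule to taste):

/home/mydir/shellScript.log {
    daily
    rotate 30
    compress
    missingok
    notifempty
}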
Is this a long-running script (e.g. a daemon), or does it do something and exit quickly? You could dynamically build the log file's name based on today's date, so a new file gets generated whenever the date changes:
#!/bin/sh
now=$(date +%F)    # e.g. 2024-01-31
/home/mydir/shellScript.sh >> "/home/mydir/shellScript-$now.log" 2>&1
previous=$(date --date='30 days ago' +%F)    # --date is a GNU date extension
rm -f "/home/mydir/shellScript-$previous.log"
(added stale log removal).
Pascal MARTIN is correct: it is a simple matter to put a configuration file into /etc/logrotate.d, or add an entry at the end of /etc/logrotate.conf, as logrotate comes stock on most Linux systems. The configuration format is very easy to understand and takes roughly five minutes with the man page. I recommend it as the easiest and most maintainable solution.
There's not a lot of context included with your problem.
I agree with both of the offered solutions.
I would also point you to my two rather long-winded ;-) discourses on naming and managing logfiles:
Bash piping output and input of a program
command line wisdom for 2 panel file manager user
I hope these help.
I'm trying to use the Unix command paste, which is like a column-appending form of cat, and came across a puzzle I've never known how to solve in Unix.
How can you use the outputs of two different programs as the input for another program (without using temporary files)?
Currently, I do this with temporary files, and ideally I'd avoid them:
./progA > tmpA
./progB > tmpB
paste tmpA tmpB
This seems to come up relatively frequently for me, but I can't figure out how to use the output of two different programs (progA and progB) as input to another without the temporary files (tmpA and tmpB).
For commands like paste, simply using paste $(./progA) $(./progB) (in bash notation) won't do the trick, because command substitution passes the programs' output as arguments, while paste expects to read from files named as arguments or from stdin.
The reason I'm wary of temporary files is that I don't want jobs running in parallel to clash by using the same file; ensuring a unique file name is sometimes difficult.
I'm currently using bash, but would be curious to see solutions for any Unix shell.
And most importantly, am I even approaching the problem in the correct way?
Cheers!
You do not need temp files under bash; try this:
paste <(./progA) <(./progB)
See "Process Substitution" in the Bash manual.
Use named pipes (FIFOs) like this:
mkfifo fA fB    # create two named pipes
progA > fA &    # each writer blocks until paste opens its pipe for reading
progB > fB &
paste fA fB     # reads both pipes in parallel
rm fA fB
Process substitution in Bash does the same thing transparently, so use named pipes only if you have a different shell.
Holy moly, I recently found out that in some instances you can get process substitution to work inside a bash script by setting the following (should you need to):
set +o posix
http://www.linuxjournal.com/content/shell-process-redirection
From the link:
"Process substitution is not a POSIX compliant feature and so it may have to be enabled via: set +o posix"
I was stuck for many hours until I did this. Here's hoping this additional tidbit will help.
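In script form, that amounts to something like this sketch (progA and progB stand in for your own programs):

#!/bin/bash
set +o posix    # re-enable process substitution if the script runs in POSIX mode
paste <(./progA) <(./progB)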
Grouping works in all shells:
{
    progA
    progB
} | paste
Note, though, that this feeds both outputs sequentially into a single stream (like cat), so paste sees only one input and won't produce side-by-side columns; it's useful when the downstream command just needs the combined output.
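And if you ever do fall back to temporary files, mktemp sidesteps the unique-name worry from the question (a sketch; mktemp is standard on Linux and most modern Unix systems):

tmpA=$(mktemp)
tmpB=$(mktemp)
./progA > "$tmpA"
./progB > "$tmpB"
paste "$tmpA" "$tmpB"
rm -f "$tmpA" "$tmpB"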