wget not writing to my specified directory - always goes to root directory

Can someone tell me what is wrong with the wget statement I am running from cron?
wget -N --header="If-Modified-Since: `date -r testing.zip -P /home/test/public_html/resources/ --utc --rfc-2822 2>/dev/null || date --utc --rfc-2822 --date='1 week ago'`" http://www.test.com/files/zz666/testing.zip
The file gets retrieved OK, but it is written to the /home directory and not to /home/test/public_html/resources/. The file will already exist in the target directory, so I am not sure if it is an overwrite issue. I have tried with a / on the end and without.
The date on the file that exists is always a week behind the file being downloaded.
Any help and advice appreciated.

I think you have the -P parameter inside the header calculation:
wget -N -P /home/test/public_html/resources/ --header="If-Modified-Since: `date -r /home/test/public_html/resources/testing.zip --utc --rfc-2822 2>/dev/null || date --utc --rfc-2822 --date='1 week ago'`" http://www.test.com/files/zz666/testing.zip
There is no -P option in the date command, so that date call fails (falling back to the '1 week ago' date, which also explains why the file's date is always a week behind), and wget itself never receives -P, which is why the file lands in the current working directory.

The reason for using wget is to pick up the latest scientiamobile.com WURFL.zip file. After I contacted their support, they confirmed that their original wget documentation was wrong, and they have corrected it. Having changed mine accordingly, I can confirm it now works perfectly. The wget instruction is as follows:
WURFL_DIR=/home/test/public_html/resources; wget -N -P "$WURFL_DIR" --header="If-Modified-Since: $(date -r $WURFL_DIR/wurfl.zip --utc --rfc-2822 2>/dev/null || date --utc --rfc-2822 --date='1 week ago')" http://www.scientiamobile.com/wurfl/xxxx/wurfl.zip
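For reference, a crontab entry running this might look like the following (the schedule and output redirection here are just an example, since the question only says it runs from cron):
15 3 * * * WURFL_DIR=/home/test/public_html/resources; wget -N -P "$WURFL_DIR" --header="If-Modified-Since: $(date -r $WURFL_DIR/wurfl.zip --utc --rfc-2822 2>/dev/null || date --utc --rfc-2822 --date='1 week ago')" http://www.scientiamobile.com/wurfl/xxxx/wurfl.zip >/dev/null 2>&1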
Hope this helps someone else.
Regards, Chris.

Related

Using xargs to create a remote directory

I want to use the xargs command to read the standard output from my date command. The following pipe works, creating a directory "2019-12-03" in the current directory.
date "+%Y-%m-%d" -r ../IDNumber/IDNumber.txt | xargs mkdir
What I would like it to do, however, is use the standard output from the date command to make a directory in a remote location, username@archivalstorage.university.edu:/remote/folder/path/, resulting in username@archivalstorage.university.edu:/remote/folder/path/2019-12-03.
Running the following command:
date "+%Y-%m-%d" -r ../IDNumber/IDNumber.txt | xargs mkdir -p username#archivalstorage.university.edu:/remote/folder/path/
This command does not give any kind of error, but no folder is actually created in the remote location I have specified.
mkdir doesn't support remote directories. Use ssh instead, with command substitution:
ssh user@host 'cd /path/to/dir && mkdir `date +%Y-%m-%d`'
or with xargs:
date +%Y-%m-%d | xargs -I _ ssh user@host 'cd /tmp && mkdir _'
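If the directory name should come from a file's modification time, as in the question, a sketch that computes the date locally and creates the directory remotely (host and paths taken from the question):
ssh username@archivalstorage.university.edu \
  "mkdir -p /remote/folder/path/$(date '+%Y-%m-%d' -r ../IDNumber/IDNumber.txt)"
The double quotes matter: the command substitution runs on the local machine, where the file exists, before ssh sends the resulting mkdir command to the remote host.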

compadd failure during optparse-applicative zsh completion script

So I'm not exactly sure whether this is something wrong with optparse-applicative's script or if I'm using it wrong.
The optparse-applicative README states that programs are made available with automatic completion scripts, including options for zsh. For my program, setup:
$> setup --zsh-completion-script `which setup`
Outputs:
#compdef setup
local request
local completions
local word
local index=$((CURRENT - 1))
request=(--bash-completion-enriched --bash-completion-index $index)
for arg in ${words[@]}; do
  request=(${request[@]} --bash-completion-word $arg)
done
IFS=$'\n' completions=($( /Users/anrothan/.local/bin/setup "${request[@]}" ))
for word in $completions; do
  local -a parts
  # Split the line at a tab if there is one.
  IFS=$'\t' parts=($( echo $word ))
  if [[ -n $parts[2] ]]; then
    if [[ $word[1] == "-" ]]; then
      local desc=("$parts[1] ($parts[2])")
      compadd -d desc -- $parts[1]
    else
      local desc=($(print -f "%-019s -- %s" $parts[1] $parts[2]))
      compadd -l -d desc -- $parts[1]
    fi
  else
    compadd -f -- $word
  fi
done
I'm running the following in my zshrc (I use oh-my-zsh, but I removed it and this still happens in a bare-minimum config with only a small PATH addition to get the setup script).
autoload -U +X compinit && compinit
autoload -U +X bashcompinit && bashcompinit
source <(setup --zsh-completion-script `which setup`)
I get the following error several times:
/dev/fd/11:compadd:24: can only be called from completion function
I've run compinit, and the completion script looks right to me, and I've looked around, but I can't figure out why this error is happening.
You don't need to source zsh completion scripts; they just need to be added to your fpath parameter.
So just place the output of setup --zsh-completion-script $(which setup) in a file called _setup in $HOME/.config/zsh/completions.
fpath=($HOME/.config/zsh/completions $fpath)
autoload -U compinit && compinit
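Putting that together, a one-time install might look like this (using the directory from the answer above):
mkdir -p "$HOME/.config/zsh/completions"
setup --zsh-completion-script "$(which setup)" > "$HOME/.config/zsh/completions/_setup"
Then start a new zsh so that compinit picks up the new completion function.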

Download all files of a particular type from a website using wget stops at the starting URL

The following did not work.
wget -r -A .pdf home_page_url
It stops with the following message:
....
Removing site.com/index.html.tmp since it should be rejected.
FINISHED
I don't know why it stops at the starting URL and does not go into its links to search for the given file type.
Is there any other way to recursively download all PDF files from a website?
It may be blocked by robots.txt. Try adding -e robots=off.
Other possible problems are cookie-based authentication or user-agent rejection of wget.
See these examples.
EDIT: The dot in ".pdf" is wrong according to sunsite.univie.ac.at
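Combining these suggestions into one command might look like this (the user-agent string is just an example):
wget -r -A pdf -e robots=off --user-agent="Mozilla/5.0" home_page_url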
The following command works for me; it will download PDFs and pictures from a site:
wget -A pdf,jpg,png -m -p -E -k -K -np http://site/path/
This is certainly because the links in the HTML don't end with /.
Wget will not follow this, as it thinks it's a file (but it doesn't match your filter):
<a href="page">page</a>
But it will follow this:
<a href="page/">page</a>
You can use the --debug option to see if it's the actual problem.
I don't know any good solution for this. In my opinion this is a bug.
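To confirm whether the trailing-slash behaviour is what you are hitting, something like this should show how each link is classified (the exact log wording varies between wget versions):
wget --debug -r -A pdf home_page_url 2>&1 | less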
In my version of wget (GNU Wget 1.21.3), the -A/--accept and -r/--recursive flags don't play nicely with each other.
Here's my script for scraping a domain for PDFs (or any other filetype):
wget --no-verbose --mirror --spider https://example.com -o - | while read line
do
  [[ $line == *'200 OK' ]] || continue
  [[ $line == *'.pdf'* ]] || continue
  echo $line | cut -c25- | rev | cut -c7- | rev | xargs wget --no-verbose -P scraped-files
done
Explanation: Recursively crawl https://example.com and pipe the log output (containing all scraped URLs) to a while read block. When a line from the log output contains a PDF URL, strip the leading timestamp (25 characters) and trailing request info (7 characters), then use wget to download the PDF.
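Since the character offsets depend on wget's log format, a variant that extracts the URLs by pattern may be more robust (a sketch, assuming a grep with -o support):
wget --no-verbose --mirror --spider https://example.com -o - \
  | grep -o 'https://[^ ]*\.pdf' \
  | xargs -n 1 wget --no-verbose -P scraped-files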

mutt command with multiple attachments in single mail unix

My requirement is to attach all the .csv files in a folder and send them in a single mail.
Here is what I have tried:
mutt -s "subject" -a *.csv -- abc@gmail.com < subject.txt
The above command does not work (it does not recognize multiple files) and throws the error:
Error sending message, child exited 67 (User unknown.).
Could not send the message.
Then I tried using multiple -a options as follows:
mutt -s "subject" -a aaa.csv -a bbb.csv -- abc@gmail.com < subject.txt
This works as expected.
But this is not feasible for, say, 100 files. I should be able to use a file mask (like *.csv to take all CSV files). Is there any way to use something like *.csv in a single command?
Thanks
Mutt doesn't support such syntax, but that doesn't mean it's impossible. You just have to build the mutt command:
mutt -s "subject" $( printf -- '-a %q ' *.csv ) ...
The command in $( ... ) produces something like this:
-a aaa.csv -a bbb.csv -a ...
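An alternative sketch that sidesteps the quoting problems discussed below is to let the shell expand the glob into an array; mutt accepts several file names after a single -a, as long as the list is terminated with --:
attachments=(*.csv)
mutt -s "subject" -a "${attachments[@]}" -- abc@gmail.com < subject.txt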
Here is an example of sending multiple files using a single command:
mutt -s "Subject" -i "Mail_body text" email_id@abc.com -c email_cc_id@abc.com -a attachment1.pdf -a attachment2.pdf
Use -a at the end of the command line for the attachments.
Some Linux systems have an attachment size limit, which is often fairly small.
Additionally, I'm getting backslashes ( \ ) in the attachment names:
Daily_Batch_Status{20131003}.PDF
Daily_System_Monitoring{20131003}.PDF
printf -- '-a %q ' *.PDF
-a Daily_Batch_Status\{20131003\}.PDF -a Daily_System_Monitoring\{20131003\}.PDF
(This happens because %q escapes the braces, and command substitution output is only word-split, not quote-removed, so mutt receives the backslashes literally.)
#!/bin/bash
from="me#address.com"
to="target#address.com"
subject="pdfs $(date +%B) $(date +%Y)"
body="You can find the pdfs from $(date +%B) $(date +%Y)"
# here comes the attachments
mutt -s "$subject" $( printf -- ' -a %q' $PWD/*.pdf ) -- $to <<EOF
Dear Mr and Ms,
$(echo $body)
$(cat ~/.signature)
EOF
But it does not work with escape characters in file names, like "\[5\]", which can occur on macOS.
I created this as a script; I collect the needed PDFs in a folder and just run the script from that location. That way the monthly reports are sent no matter how many PDFs there are (the number can vary), but the file names must not contain whitespace.

Determining age of a file in shell script

G'day,
I need to see if a specific file is more than 58 minutes old from a sh shell script. I'm talking straight vanilla Solaris shell with some POSIX extensions it seems.
I've thought of doing a
touch -t YYYYMMDDHHmm.SS /var/tmp/toto
where the timestamp is 58 minutes ago and then doing a
find ./logs_dir \! -newer /var/tmp/toto -print
We need to postprocess some log files that have been retrieved from various servers using mirror. Waiting for the files to be stable is the way this team decides if the mirror is finished and hence that day's logs are now complete and ready for processing.
Any suggestions gratefully received.
cheers,
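For reference, the approach sketched in the question, assembled into something runnable (this assumes GNU date for the '58 minutes ago' arithmetic, which the stock Solaris date lacks):
# Stamp a reference file 58 minutes in the past, then list anything older.
touch -t "$(date -d '58 minutes ago' '+%Y%m%d%H%M.%S')" /var/tmp/toto
find ./logs_dir ! -newer /var/tmp/toto -print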
I needed something to test the age of a specific file, so as not to re-download too often. Using GNU date and bash:
# if file's modtime hour is less than current hour:
[[ $(date +%k -r GPW/mstall.zip) -lt $(date +%k) ]] && \
  wget -S -N \
    http://bossa.pl/pub/metastock/mstock/mstall.zip
Update: this version works much better for me, and is more accurate and understandable:
[[ $(date +%s -r mstall.zip) -lt $(date +%s --date="77 min ago") ]] && echo File is older than 1hr 17min
The BSD variant (tested on a Mac) is:
[[ $(stat -f "%m" mstall.zip) -lt $(date -j -v-77M +%s) ]] && echo File is older than 1hr 17min
Some versions of find accept different units in the time tests, for example:
find . -mtime +0h55m
This will return any files with modification dates older than 55 minutes ago.
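With GNU find, the minutes-based test is -mmin instead, for example:
find . -mmin +55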
This is now an old question, sorry, but for the sake of others searching for a good solution as I was...
The best method I can think of is to use the find(1) command, which is the only Un*x command I know of that can directly test file age:
if [ "$(find $file -mmin +58)" != "" ]
then
... regenerate the file ...
fi
The other option is to use the stat(1) command to return the modification time of the file in seconds and the date command to return the time now in seconds. Combined with the bash shell math operators, working out the age of the file becomes quite easy:
age=$(stat -c %Y $file)
now=$(date +"%s")
if (( (now - age) > (58 * 60) ))
then
... regenerate the file ...
fi
You could do the above without the two variables, but they make things clearer, as does use of bash math (which could also be replaced). I've used the find(1) method quite extensively in scripts over the years and recommend it unless you actually need to know age in seconds.
A piece of the puzzle might be using stat. You can pass -r or -s to get a parseable representation of all file metadata.
find . -print -exec stat -r '{}' \;
AFAICR, the 10th column will show the mtime.
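For example, a sketch combining this with date to test the 58-minute threshold (assumes a BSD-style stat, where field 10 of stat -r is st_mtime in seconds since the epoch):
mtime=$(stat -r /path/to/file | awk '{print $10}')
now=$(date +%s)
[ $((now - mtime)) -gt $((58 * 60)) ] && echo "file is older than 58 minutes"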
Since you're looking to test the time of a specific file, you can start by using test and comparing it to your specially created file:
test /path/to/file -nt /var/tmp/toto
or:
touch -t YYYYMMDDHHmm.SS /var/tmp/toto
if [ /path/to/file -nt /var/tmp/toto ]
...
You can use ls and awk to get what you need as well. Awk has a C-ish printf that will allow you to format the columns any way you want.
I tested this in bash on Linux and ksh on Solaris.
Fiddle with the options to get the best values for your application, especially "--full-time" with GNU ls and "-E" with Solaris ls.
bash
ls -l foo | awk '{printf "%3s %1s\n", $6, $7}'
2011-04-19 11:37
ls --full-time foo | awk '{printf "%3s %1s\n", $6, $7}'
2011-04-19 11:37:51.211982332
ksh
ls -l bar | awk '{printf "%3s %1s %s\n", $6, $7, $8}'
May 3 11:19
ls -E bar | awk '{printf "%3s %1s %s\n", $6, $7, $8}'
2011-05-03 11:19:23.723044000 -0400
