My requirement is to attach all the .csv files in a folder and send them in a single mail.
Here is what I have tried:
mutt -s "subject" -a *.csv -- abc#gmail.com < subject.txt
The above command is not working (it does not recognize multiple files) and throws this error:
Error sending message, child exited 67 (User unknown.).
Could not send the message.
Then I tried using multiple -a options, as follows:
mutt -s "subject" -a aaa.csv -a bbb.csv -- abc#gmail.com < subject.txt
This works as expected.
But this is not feasible for, say, 100 files. I should be able to use a file mask (like *.csv, to take all CSV files). Is there any way to use something like *.csv in a single command?
Thanks
Mutt doesn't support such syntax, but that doesn't mean it's impossible; you just have to build the mutt command yourself:
mutt -s "subject" $( printf -- '-a %q ' *.csv ) ...
The command in $( ... ) produces something like this:
-a aaa.csv -a bbb.csv -a ...
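If some file names might contain spaces or other shell metacharacters, building the argument list in a Bash array is sturdier than %q, because nothing re-parses the escapes afterwards. A minimal sketch (abc@gmail.com stands in for the real recipient):
args=()
for f in *.csv; do
    args+=( -a "$f" )   # one -a per file; quoting survives intact
done
mutt -s "subject" "${args[@]}" -- abc@gmail.com < subject.txt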
Here is an example of sending multiple files using a single command:
mutt -s "Subject" -i "Mail_body text" email_id#abc.com -c email_cc_id#abc.com -a attachment1.pdf -a attachment2.pdf
Use -a for the attachments at the end of the command line.
Note that some Linux systems impose a limit on attachment size, and it is often fairly small.
Additionally, I'm getting backslashes (\) in the generated arguments. With these files:
Daily_Batch_Status{20131003}.PDF
Daily_System_Monitoring{20131003}.PDF
printf -- '-a %q ' *.PDF
-a Daily_Batch_Status\{20131003\}.PDF -a Daily_System_Monitoring\{20131003\}.PDF
#!/bin/bash
from="me#address.com"
to="target#address.com"
subject="pdfs $(date +%B) $(date +%Y)"
body="You can find the pdfs from $(date +%B) $(date +%Y)"
# here comes the attachments
mutt -s "$subject" $( printf -- ' -a %q' $PWD/*.pdf ) -- $to <<EOF
Dear Mr and Ms,
$body
$(cat ~/.signature)
EOF
But it does not work with escape characters in file names, such as "\[5\]", which can occur on macOS.
I turned this into a script: I collect the needed PDFs in a folder and just run the script from that location, and the monthly reports get sent. It does not matter how many PDFs there are (the number can vary), but the file names must not contain whitespace.
Related
I am trying to create a script which detects whether files in a directory contain non-UTF-8 characters and, if they do, grabs the encoding of that particular file and performs an iconv operation on it.
The code is as follows:
find <directory> | sed '1d' > <directory>/filelist.txt
while read filename
do
file_nm=${filename%%.*}
ext=${filename#*.}
echo $filename
q=`grep -axv '.*' $filename|wc -l`
echo $q
r=`file -i $filename|cut -d '=' -f 2`
echo $r
#file_repair=$file_nm
if [ $q -gt 0 ]; then
iconv -f $r -t utf-8 -c ${file_nm}.${ext} >${file_nm}_repaired.${ext}
mv ${file_nm}_repaired.${ext} ${file_nm}.${ext}
fi
done< <directory>/filelist.txt
While running the code, there are several files that turn into 0 byte files and .bak gets appended to the file name.
ls| grep 'bak' | wc -l
36
Where am I making a mistake?
Thanks for the help.
It's really not clear what some parts of your script are supposed to do.
Probably the error is that you are assuming file -i will output a string which always contains =; but it often doesn't.
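For reference, here is the shape of output the script relies on; a sketch of typical GNU file -i output (the file name is a stand-in):
$ file -i report.csv
report.csv: text/plain; charset=us-ascii
On errors ("No such file or directory"), or with BSD file, where -i can mean something different entirely, there may be no = at all, which is why the check below matters.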
find <directory> |
# avoid temporary file
sed '1d' |
# use IFS='' read -r
while IFS='' read -r filename
do
# indent loop body
file_nm=${filename%%.*}
ext=${filename#*.}
# quote variables, print diagnostics to stderr
echo "$filename" >&2
# use grep -q instead of useless wc -l; don't enter condition needlessly; quote variable
if grep -qaxv '.*' "$filename"; then
# indent condition body
# use modern command substitution syntax, quote variable
# check if result contains =
r=$(file -i "$filename")
case $r in
*=*)
# only perform decoding if we can establish encoding
echo "$r" >&2
iconv -f "${r#*=}" -t utf-8 -c "${file_nm}.${ext}" >"${file_nm}_repaired.${ext}"
mv "${file_nm}_repaired.${ext}" "${file_nm}.${ext}" ;;
*)
echo "$r: could not establish encoding" >&2 ;;
esac
fi
done
See also Why is testing “$?” to see if a command succeeded or not, an anti-pattern? (tangential, but probably worth reading) and useless use of wc
The grep regex is kind of mysterious. I'm guessing you want to check if the file contains non-empty lines? grep -qa . "$filename" would do that.
I wrote a script in R that has several arguments. I want to iterate over 20 directories and execute my script on each while passing in a substring from the file path as my -n argument using sed. I ran the following:
find . -name 'xray_data' -exec sh -c 'Rscript /Users/Caitlin/Desktop/DeMMO_Pubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f {} -b "{}/SEM_images" -c "{}/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "`sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/' "{}"`"' sh {} \;
which results in this error:
ubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f {} -b "{}/SEM_images" -c "{}/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "`sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/' "{}"`"' sh {} \;
sh: command substitution: line 0: syntax error near unexpected token `('
sh: command substitution: line 0: `sed -e s/.*DeMMO.*[/](.*)_.*[/]xray_data/1/ "./DeMMO1/D1T3rep_Dec2019_Ellison/xray_data"'
When I try to use sed with my pattern on an example file path, it works:
echo "./DeMMO1/D1T1exp_Dec2019_Poorman/xray_data" | sed -e 's/.*DeMMO.*[/]\(.*\)_.*[/]xray_data/\1/'
which produces the correct substring:
D1T1exp_Dec2019
I think there's an issue with trying to use single quotes inside the interpreted string, but I don't know how to deal with this. I have tried replacing the single quotes around the sed pattern with double quotes, as well as removing the single quotes; both result in this error:
sed: RE error: illegal byte sequence
How should I extract the substring from the file path dynamically in this case?
To loop through the output of find:
while IFS= read -ru "$fd" -d '' files; do
echo "$files" ##: do whatever you want to do with the files here.
done {fd}< <(find . -type f -name 'xray_data' -print0)
No commands embedded in quotes.
It uses a shell-allocated fd ({fd}) just in case something inside the loop is eating/slurping stdin.
Also, -print0 delimits the files with null bytes, so it should be safe enough to handle spaces, tabs, and newlines in the paths and file names.
A good start is always to put an echo in front of every command you want to run on the files, so you have an idea of what's going to be executed before it happens.
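For this specific case, the same null-delimited loop can drive the Rscript call directly. A sketch (script path shortened; it borrows the basename/dirname extraction from the workaround that follows), with the echo left in as a dry run:
while IFS= read -ru "$fd" -d '' dir; do
    sampleID=$(basename "$(dirname "$dir")" | cut -f1 -d'_')
    # dry run: remove the echo once the printed commands look right
    echo Rscript dataStitchR.R -f "$dir" -b "$dir/SEM_images" -n "$sampleID"
done {fd}< <(find . -name 'xray_data' -print0)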
This is the solution that ultimately worked for me due to issues with quotes in sed:
for dir in `find . -name 'xray_data'`;
do sampleID="`basename $(dirname $dir) | cut -f1 -d'_'`";
Rscript /Users/Caitlin/Desktop/DeMMO_Pubs/DeMMO_NativeRock/DeMMO_NativeRock/R/scipts/dataStitchR.R -f "$dir" -b "$dir/SEM_images" -c "$dir/../coordinates.txt" -z ".tif" -m ".tif" -a "Unknown|SEM|Os" -d "overview" -y "overview" --overview "overview.*tif" -p FALSE -n "$sampleID";
done
When downloading a file using curl, how would I follow a link location and use that for the output filename (without knowing the remote filename in advance)?
For example, if one clicks on the link below, you would download a file named "pythoncomplete.vim". However, using curl's -O and -L options, the filename is simply the original remote name: a clumsy "download_script.php?src_id=10872".
curl -O -L http://www.vim.org/scripts/download_script.php?src_id=10872
In order to download the file with the correct filename you would have to know the name of the file in advance:
curl -o pythoncomplete.vim -L http://www.vim.org/scripts/download_script.php?src_id=10872
It would be excellent if you could download the file without knowing the name in advance, and if not, is there another way to quickly pull down a redirected file via command line?
The remote side sends the filename using the Content-Disposition header.
curl 7.21.2 or newer does this automatically if you specify --remote-header-name / -J.
curl -O -J -L $url
The expanded version of the arguments would be:
curl --remote-name --remote-header-name --location $url
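One caveat worth noting: URLs like this contain ? (and often &), so it is safest to quote them to keep the shell from globbing or splitting the command. For example, with the URL from the question:
url="http://www.vim.org/scripts/download_script.php?src_id=10872"
curl -O -J -L "$url"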
If you have a recent version of curl (7.21.2 or later), see @jmanning2k's answer.
If you have an older version of curl (like 7.19.7, which came with Snow Leopard), do two requests: a HEAD to get the file name from the response headers, then a GET:
url="http://www.vim.org/scripts/download_script.php?src_id=10872"
filename=$(curl -sI $url | grep -o -E 'filename=.*$' | sed -e 's/filename=//')
curl -o $filename -L $url
If you can use wget instead of curl:
wget --content-disposition $url
I wanted to comment on jmanning2k's answer, but as a new user I can't; I tried to edit his post instead, which is allowed, but the edit was rejected with a note that it should have been a comment. Sigh.
Anyway, please read this as a comment on his answer. Thanks.
This seems to only work if the header looks like filename=pythoncomplete.vim, as in the example, but some sites send a header that looks like filename*=UTF-8''filename.zip; that one isn't recognized by curl 7.28.0.
I wanted a solution that worked on both older and newer Macs, and the legacy code David provided for Snow Leopard did not behave well under Mavericks. Here's a function I created based on David's code:
function getUriFilename() {
header="$(curl -sI "$1" | tr -d '\r')"
filename="$(echo "$header" | grep -o -E 'filename=.*$')"
if [[ -n "$filename" ]]; then
echo "${filename#filename=}"
return
fi
filename="$(echo "$header" | grep -o -E 'Location:.*$')"
if [[ -n "$filename" ]]; then
basename "${filename#Location\:}"
return
fi
return 1
}
With this defined, you can run:
url="http://www.vim.org/scripts/download_script.php?src_id=10872"
filename="$(getUriFilename $url)"
curl -L $url -o "$filename"
Please note that certain misconfigured web servers will serve the name using "Filename" as the key, where RFC 2183 specifies it should be "filename". curl only handles the latter case.
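If you need to cope with such servers, one possible tweak to getUriFilename (an assumption on my part, not something curl does for you) is to match the header key case-insensitively and strip everything up to the first =:
filename="$(echo "$header" | grep -o -i -E 'filename=.*$')"
if [[ -n "$filename" ]]; then
    echo "${filename#*=}"   # works for both "filename=" and "Filename="
    return
fi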
I had the same problem as John Cooper: I got no filename back, only a Location header. His answer also works, but it takes two commands.
This oneliner worked for me....
url="https://download.mozilla.org/?product=firefox-latest-ssl&os=linux64&lang=de";url=$(curl -L --head -w '%{url_effective}' $url 2>/dev/null | tail -n1) ; curl -O $url
Stolen and added some stuff from
https://unix.stackexchange.com/questions/126252/resolve-filename-from-a-remote-url-without-downloading-a-file
An example using the answer above for an Apache Archiva artifact repository, to pull the latest version. curl returns the Location line, and the filename is at the end of that line; the trailing CR needs to be removed from the file name.
url="http://archiva:8080/restServices/archivaServices/searchService/artifact?g=com.imgur.backup&a=snapshot-s3-util&v=LATEST"
filename=$(curl --silent -sI -u user:password $url | grep Location | awk -F\/ '{print $NF}' | sed 's/\r$//')
curl --silent -o $filename -L -u user:password $url
Instead of applying grep and other Unix-fu operations, curl ships with a built-in "write out" variable[1] specifically for such a case, e.g.
$ curl -OJsL "http://www.vim.org/scripts/download_script.php?src_id=10872" -w "%{filename_effective}"
pythoncomplete.vim
[1] https://everything.curl.dev/usingcurl/verbose/writeout#available-write-out-variables
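Because -w writes to stdout, the effective name can also be captured into a variable for later use. A sketch (note that older curl releases did not always populate %{filename_effective} when the name came from -J, so treat this as version-dependent):
url="http://www.vim.org/scripts/download_script.php?src_id=10872"
filename=$(curl -OJsL -w '%{filename_effective}' "$url")
echo "saved as $filename"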
Using the solution proposed above, I wrote this helper function curl2file.
[UPDATED]
function curl2file() {
url="$1"
url=$(curl -o /dev/null -L --head -w '%{url_effective}' "$url" 2>/dev/null | tail -n1) ; curl -O "$url"
}
Usage:
curl2file https://cloud.tsinghua.edu.cn/f/4666d28af98a4e63afb5/?dl=1
I would like to extract the first part of this hostname testsrv1
from testsrv1.main.corp.loc.domain.com in UNIX, within a shell script.
What command can I use? It would be anything before the first period (.).
Do you have the server name in a shell variable? Are you using a sh-like shell? If so,
${SERVERNAME%%.*}
will do what you want.
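For example (SERVERNAME is a stand-in name for your variable):
SERVERNAME=testsrv1.main.corp.loc.domain.com
echo "${SERVERNAME%%.*}"   # prints: testsrv1
The %%.* deletes the longest suffix starting at a dot, leaving everything before the first period.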
You can use cut:
echo "testsrv1.main.corp.loc.domain.com" | cut -d"." -f1
To build upon pilcrow's answer, there's no need for a new variable; just use the built-in $HOSTNAME.
echo $HOSTNAME         # --> my.server.domain
echo ${HOSTNAME%%.*}   # --> my
Tested on two fairly different Linux's.
2.6.18-371.4.1.el5, GNU bash, version 3.2.25(1)-release (i386-redhat-linux-gnu)
3.4.76-65.111.amzn1.x86_64, GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
try the -s switch:
hostname -s
I use the commands cut, awk, and sed, or Bash variable expansion.
Operation
Via cut
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | cut -d. -f1
testsrv1
[flying@lempstacker ~]$
Via awk
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | awk -v FS='.' '{print $1}'
testsrv1
[flying@lempstacker ~]$
Via sed
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | sed -r 's#([^.]*)\.(.*)#\1#g'
testsrv1
[flying@lempstacker ~]$
Via Bash Variables
[flying@lempstacker ~]$ hostName='testsrv1.main.corp.loc.domain.com'
[flying@lempstacker ~]$ echo ${hostName%%.*}
testsrv1
[flying@lempstacker ~]$
You could also use "uname -n" to get just the hostname.
You can use IFS to split text by whichever token you want. For domain names, we can use the dot/period character.
#!/usr/bin/env bash
shorthost() {
  # Set IFS to dot, so that we can split $@ on dots instead of spaces.
  local IFS='.'
  # Break up arguments passed to shorthost so that each domain zone is
  # a new index in an array.
  zones=($@)
  # Echo out our first zone
  echo "${zones[0]}"
}
If this is in your script then, for instance, you'll get test when you run shorthost test.example.com. You can adjust this to fit your use case, but knowing how to break the zones into the array is the big thing here, I think.
I wanted to provide this solution because I feel like spawning another process is overkill when you can do it easily and completely within your shell with IFS. One thing to watch out for is that some users will recommend doing things like hostname -s, but that doesn't work in the BSD userland. For instance, macOS users don't have the -s flag, I don't think.
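For example, with the hostname from the question:
$ shorthost testsrv1.main.corp.loc.domain.com
testsrv1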
Assuming the variable $HOSTNAME exists, try echo ${HOSTNAME%%.*} to get the left-most part of the fully-qualified hostname. Hope it helps.
If interested, the hint comes from the partial /etc/bashrc quoted below, from an RHEL 7 host:
if [ -e /etc/sysconfig/bash-prompt-screen ]; then
PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
else
PROMPT_COMMAND='printf "\033k%s#%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
fi
G'day,
I need to see if a specific file is more than 58 minutes old from a sh shell script. I'm talking straight vanilla Solaris shell with some POSIX extensions it seems.
I've thought of doing a
touch -t YYYYMMDDHHmm.SS /var/tmp/toto
where the timestamp is 58 minutes ago and then doing a
find ./logs_dir \! -newer /var/tmp/toto -print
We need to postprocess some log files that have been retrieved from various servers using mirror. Waiting for the files to be stable is the way this team decides if the mirror is finished and hence that day's logs are now complete and ready for processing.
Any suggestions gratefully received.
cheers,
I needed something to test age of a specific file, to not re-download too often. So using GNU date and bash:
# if file's modtime hour is less than current hour:
[[ $(date +%k -r GPW/mstall.zip) -lt $(date +%k) ]] && \
wget -S -N \
http://bossa.pl/pub/metastock/mstock/mstall.zip
Update--this version works much better for me, and is more accurate and understandable:
[[ $(date +%s -r mstall.zip) -lt $(date +%s --date="77 min ago") ]] && echo File is older than 1hr 17min
The BSD variant (tested on a Mac) is:
[[ $(stat -f "%m" mstall.zip) -lt $(date -j -v-77M +%s) ]] && echo File is older than 1hr 17min
With BSD find (as shipped on macOS and FreeBSD), you can use unit suffixes in the -mtime test, for example:
find . -mtime +0h55m
will return any files with modification times older than 55 minutes ago.
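GNU find does not accept these unit suffixes; there, the closest equivalent is the minute-based test (a sketch, assuming GNU findutils):
find . -mmin +55 -print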
This is now an old question, sorry, but for the sake of others searching for a good solution as I was...
The best method I can think of is to use the find(1) command which is the only Un*x command I know of that can directly test file age:
if [ "$(find $file -mmin +58)" != "" ]
then
... regenerate the file ...
fi
The other option is to use the stat(1) command to return the age of the file in seconds and the date command to return the time now in seconds. Combined with bash arithmetic, working out the age of the file becomes quite easy:
age=$(stat -c %Y "$file")
now=$(date +"%s")
if (( (now - age) > (58 * 60) ))
then
... regenerate the file ...
fi
You could do the above without the two variables, but they make things clearer, as does use of bash math (which could also be replaced). I've used the find(1) method quite extensively in scripts over the years and recommend it unless you actually need to know age in seconds.
A piece of the puzzle might be using stat. You can pass -r or -s to get a parseable representation of all file metadata.
find . -print -exec stat -r '{}' \;
AFAICR, the 10th column will show the mtime.
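Putting that together, here is a sketch assuming BSD stat, where field 10 of stat -r is st_mtime in epoch seconds:
mtime=$(stat -r /path/to/file | awk '{print $10}')
now=$(date +%s)
if [ $((now - mtime)) -gt $((58 * 60)) ]; then
    echo "file is more than 58 minutes old"
fi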
Since you're looking to test the time of a specific file you can start by using test and comparing it to your specially created file:
test /path/to/file -nt /var/tmp/toto
or:
touch -t YYYYMMDDHHmm.SS /var/tmp/toto
if [ /path/to/file -nt /var/tmp/toto ]
...
You can use ls and awk to get what you need as well. awk has a C-ish printf that will allow you to format the columns any way you want.
I tested this in bash on linux and ksh on solaris.
Fiddle with the options to get the best values for your application, especially "--full-time" on Linux and "-E" on Solaris.
bash
ls -l foo | awk '{printf "%3s %1s\n", $6, $7}'
2011-04-19 11:37
ls --full-time foo | awk '{printf "%3s %1s\n", $6, $7}'
2011-04-19 11:37:51.211982332
ksh
ls -l bar | awk '{printf "%3s %1s %s\n", $6, $7, $8}'
May 3 11:19
ls -E bar | awk '{printf "%3s %1s %s\n", $6, $7, $8}'
2011-05-03 11:19:23.723044000 -0400