Unix sendmail attachment not working

I have the script below, but it sends the email without any attachment. What is wrong?
sendmail /A "/home/dd/data/list.txt" "dd@gmail.com" -t << EOF
To:dd@gmail.com
Subject:List of ids
This is the message
[new line]
Everything else works as expected. Thanks.

The here document is never terminated; there is no closing EOF.
sendmail /A "/home/dd/data/list.txt" "dd@gmail.com" -t <<-EOF
To:dd@gmail.com
Subject:List of ids
This is the message
EOF
Use <<-EOF so the body and the trailing EOF can be indented with tabs instead of having to start in the leftmost column.
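For reference, a minimal sketch of the <<- form; the indentation must be real tab characters (not spaces) for the stripping to work:
cat <<-EOF
	this body line and the closing EOF are indented with a tab
	EOF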
Try this, I just tested it:
/usr/sbin/sendmail -tv me@myplace.com <<%%
Subject: test of sendmail
This is the note
$(uuencode attachment.file newname.txt)
%%
I did not have time to get back to this earlier. The email address goes on line 1.

Try the script below:
#!/bin/sh
# send/include list.txt file after "here document" (email headers + start of email body)
cat - "/home/dd/data/list.txt" | /usr/sbin/sendmail -i -- "dd#gmail.com" <<END
To: dd#gmail.com
Subject: List of ids
This is the message
END
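Combining the two answers above (sendmail -t reading the recipient from the headers, uuencode supplying the attachment), a minimal sketch; many, though not all, mail clients will decode the uuencoded block as an attachment:
(
  # headers, blank line, body, then the uuencoded file
  printf "To: dd@gmail.com\nSubject: List of ids\n\nThis is the message\n"
  uuencode "/home/dd/data/list.txt" "list.txt"
) | /usr/sbin/sendmail -t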

Related

Unix grep multiple patterns every hour

Suppose there are 4 different patterns (errors) in a log, each of which may occur from time to time, e.g. "timeout exception", "ldap error", "db error", "error four". Can anyone provide a script showing how to grep for multiple patterns in a log every hour? If the script finds any pattern, it should send an alert to me only once, with no duplicate alerts. Please help me. Thank you
#!/bin/bash
while true; do
    # collect any matching error lines from the log
    ERRORS=$(grep -E "timeout exception|ldap error|db error|error four" YOUR_LOG_FILE)
    if [ -n "$ERRORS" ]; then
        # sendmail or any other kind of "alert" you prefer.
        echo "$ERRORS" | sendmail "your@email.com"
    fi
    sleep 1h
done
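Since this loops forever, it is meant to be started once and left running in the background, for example (the script name here is just a placeholder):
nohup ./monitor_log.sh >/dev/null 2>&1 &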
Make a crontab entry that will run once an hour. That entry can call your script:
logfile=/path/to/logfile/application.out
function send_alert {
    # Some sendmail or other tool to send your alert using the args
    printf "I want to alert about %s\n" "$*"
}
# Solution only announcing errors without sending them
grep -qE "timeout exception|ldap error|db error|error four" "${logfile}" &&
    send_alert "grep found something"
# Solution sending the number of error lines
errorlinecount=$(grep -cE "timeout exception|ldap error|db error|error four" "${logfile}")
if [ "${errorlinecount}" -gt 0 ]; then
    send_alert "grep found ${errorlinecount} disturbing lines"
fi
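The hourly crontab entry itself could look like this (the path is just a placeholder for wherever you save the script above, marked executable):
0 * * * * /path/to/check_log.sh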

Parameter value truncated from second word onwards in ksh

I am calling a function in commonfuncs to send email as below:
#!/usr/bin/ksh
. commonfuncs
emailsend 'test mail' 'body of the mail' 'abc.efg@domain.com'
the function is as below:
function emailsend
{
esubject=$1
etext=$2
etolist=$3
efromid="from.id#domain.com"
echo $etext >email.txt
cat email.txt | mailx -r $efromid -s $esubject $etolist
}
The email is sent fine, but the subject comes through as just test instead of test mail. I tried with double quotes too, but to no avail.
Wrap the parameters of the mailx command in double quotes:
cat email.txt | mailx -r "$efromid" -s "$esubject" "$etolist"
because an unquoted space character acts as a delimiter between command-line arguments.
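Putting it together, the function with quoting applied throughout (a sketch based on the code above; reading email.txt via redirection also avoids the unnecessary cat):
function emailsend
{
    esubject=$1
    etext=$2
    etolist=$3
    efromid="from.id@domain.com"
    echo "$etext" > email.txt
    mailx -r "$efromid" -s "$esubject" "$etolist" < email.txt
}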

Error Handling while running sqlplus from shell scripts

I need to validate whether the DB connection succeeds or fails.
This is my code:
report=`sqlplus -S /nolog << EOF
WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT SQL.SQLCODE;
connect <<username>>/<<Password>>@hostname:port
set linesize 1500
set trimspool on
set verify off
set termout off
set echo off
set feedback off
set heading on
set pagesize 0
spool extract.csv
<<My SQL Query>>
spool off;
exit;
EOF`
I have tried the option below, based on the thread Managing error handling while running sqlplus from shell scripts, but it's picking up the first cell value rather than the connection status.
if [ "$report" != 0 ]
then
    echo "Connection Issue"
    echo "Error code $sql_return_code"
    exit 0
fi
Please advise.
I needed something similar but executed it a bit differently.
First, I have list.txt which contains the databases that I would like to test. I am using wallet connections but this could be edited to hold username/password.
list.txt:
DB01 INSTANCE1.SCHEMA1
DB02 INSTANCE2.SCHEMA2
DB03 INSTANCE3.SCHEMA3
DB04 INSTANCE4.SCHEMA4
I have OK.sql which contains the query that I want to run on each database.
OK.sql:
select 'OK' from dual;
exit
Last, I use test.sh to read list.txt, attempt to connect and run OK.sql for each line, and record the result in (drumroll) result.txt.
test.sh:
. /etc/profile
rm result.txt
while read -r name wallet; do
echo "BEGIN-"$name
if (sqlplus -S /@$wallet @OK.sql < /dev/null | grep -e 'OK'); then
echo $name "GOOD" >> result.txt
else
echo $name "BAD" >> result.txt
fi
echo "END-"$name
done < list.txt
After the run, check your result.txt.
result.txt:
DB01 BAD
DB02 GOOD
DB03 GOOD
DB04 GOOD
I hope this helps.
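For the original question: because the script already uses WHENEVER SQLERROR EXIT SQL.SQLCODE and WHENEVER OSERROR EXIT 9, sqlplus itself exits with a non-zero status when the connection or the SQL fails, so a sketch (reusing the question's placeholders) is to test $? right after the here document instead of parsing $report:
report=`sqlplus -S /nolog << EOF
WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT SQL.SQLCODE;
connect <<username>>/<<Password>>@hostname:port
-- remaining settings, spool and query exactly as in the question
exit;
EOF`
sql_return_code=$?
if [ "$sql_return_code" -ne 0 ]; then
    echo "Connection Issue"
    echo "Error code $sql_return_code"
    exit 1
fi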

Output shell script to log file

I have a scheduled unix script that I want to log the output of. I am unable to edit the cron file due to the user interface restrictions, and I am unable to add >> logfile to the command. Is there something I can add within the script itself to send the output to a log?
{
printf poo
#Do not change
PRINTF=/usr/bin/printf
MSMTP=/usr/local/bin/msmtp
MSMTPCONF=/var/etc/msmtp.conf
#Can be changed
FROM="nas4free#usinfosec.com"
TO="dpatino#usinfosec.com"
MDIR="CaseData"
SUBJECT="$MDIR Backup Report"
} > /mnt/support/logs/$SUBJECT.log
#BODY="$(cat /mnt/support/logs/test.log)"
#$PRINTF "From:$FROM\nTo:$TO\nSubject:$SUBJECT\n\n$BODY" | $MSMTP --file=$MSMTPCONF -t
Try:
#!/bin/bash
exec > /tmp/myLog.log 2>&1
set -x
echo 'Hello World!'
The log shows:
+ echo 'Hello World!'
Hello World!
One way is to wrap your script in braces and redirect the output as shown below:
#!/bin/bash
{
# script contents here
echo running script
} > logfile
Add the following lines at the start of your script:
log_file_path="/tmp/output.log"
log() { while IFS='' read -r line; do echo "$line" >> "$log_file_path"; done; };
exec > >(tee >(log)) 2>&1
Your script with the modifications applied:
PRINTF=/usr/bin/printf
MSMTP=/usr/local/bin/msmtp
MSMTPCONF=/var/etc/msmtp.conf
FROM="nas4free#usinfosec.com"
TO="dpatino#usinfosec.com"
MDIR="CaseData"
SUBJECT="$MDIR Backup Report"
{
printf poo
} > "/mnt/support/logs/$SUBJECT.log"
#BODY="$(cat /mnt/support/logs/test.log)"
#$PRINTF "From:$FROM\nTo:$TO\nSubject:$SUBJECT\n\n$BODY" | $MSMTP --file=$MSMTPCONF -t
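Once the log file exists, the two commented-out lines would send it, roughly like this (a sketch, assuming msmtp is configured as in the question and the log path matches the one used above):
BODY="$(cat "/mnt/support/logs/$SUBJECT.log")"
$PRINTF "From:$FROM\nTo:$TO\nSubject:$SUBJECT\n\n$BODY" | $MSMTP --file=$MSMTPCONF -t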

Executing SQL statement in ASEISQL with UNIX scripts

I am new to unix scripting. I am running a SQL statement in ASE ISQL, and if the SQL statement returns a result, I need to mail that result to particular users. If the SQL returns no result, no mail should be sent.
The sample script I have written is:
#!/bin/ksh
isql -U$DBO -S$DSQUERY -D$DBNAME -P$PASSWORD << END
go
select * from 'Table'
go
if (##rowcount !=0)
mailx -s "Hello" XYZ#gmail.com
END
Please let me know where I am going wrong?
I think you need to capture the output of the SQL into a shell variable, and then test the result before sending the email, roughly like:
#!/bin/ksh
num=$(isql -U$DBO -S$DSQUERY -D$DBNAME -P$PASSWORD << END
select count(*) from 'Table'
go
END
)
if [ "$num" -gt 0 ]
then mailx -s "Hello" XYZ@gmail.com < /dev/null
fi
I am assuming that the isql program will only print the number and not any headings or other information. If it is more verbose, then you have to do a more sensitive test.
Note, too, that COUNT(*) is quicker and more accurately what you're after than your 'select everything and count how many rows there were' version.
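If isql does print headings or dashes around the count, a more defensive version (just a sketch; the exact isql output format varies, so the filter may need adjusting) keeps only the purely numeric line:
num=$(isql -U$DBO -S$DSQUERY -D$DBNAME -P$PASSWORD <<END | grep -E '^[[:space:]]*[0-9]+[[:space:]]*$' | tr -d '[:space:]'
select count(*) from 'Table'
go
END
)
if [ "${num:-0}" -gt 0 ]
then mailx -s "Hello" XYZ@gmail.com < /dev/null
fi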
Actually, my problem is that the result set should be mailed only if my SQL statement returns a result.
Then I'd use:
#!/bin/ksh
tmp=${TMPDIR:-/tmp}/results.$$
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
isql -U$DBO -S$DSQUERY -D$DBNAME -P$PASSWORD << END > $tmp
select * from 'Table'
go
END
if [ -s $tmp ]
then mailx -s "Hello" XYZ@gmail.com < $tmp || exit 1
fi
rm -f $tmp
trap 0
exit 0
This captures the results in a file. If the file is not empty (-s) then it sends the file as the body of an email. Please change the subject to something more meaningful. Also, are you sure it is a good idea to send corporate email to a Gmail account?
