UC4 AppWorx V8 - Send spool file as attachment - Unix

I have a Korn shell script that calls SQL*Plus to check and spool data into a .CSV file on UNIX.
The script works fine on UNIX: it creates the file and returns 0.
When launching the job from UC4 AppWorx, I want it to attach the spooled UNIX file to the notification sent by the job when it finishes.
I want this to work as follows:
1. I launch the job.
2. It checks the data; if data is found, it creates a file with the .CSV extension in the /tmp directory on UNIX.
3. When the job finishes, it sends me an email with the spooled .CSV file attached.
Is there any way to do this? How can I make it work?
Thanks.

You'll need to create a chain in AppWorx (the docs should be able to walk you through it). This chain will have one or more jobs.
First, you don't need a Korn shell script to call SQL*Plus; you can invoke SQL*Plus directly. Write the script as a .sql file (it can include SQL*Plus directives, SQL, and PL/SQL as needed). Set the job's program type to AWSQLP and point it at the .sql script that you have made available to AppWorx.
The SQL*Plus script can use logic to determine whether it should create a file. If it should, it can write files directly (though getting properly formatted .csv files out of it can be a pain).
Then attach a notification to the job, with the notification object set to send an email attachment. You'll have to use the "pattern" type and put in the full file path to the .csv. Substitution variables can be used if you want a new filename for each invocation.
Depending on your version, some of these options may be located slightly differently (we just upgraded last year; UC4 no longer owns the product). Click on the Help menu and go to the documentation entry... it's not the best in the world, but far from the worst.

First of all, thanks for answering.
I usually create a job, let's say SEM_CHECK_THINGS, with one prompt defined in UC4 that runs a query against the database to check whether the table test_table (for example) has data. To do this I use: select decode(count(*), 0, 'N', 'Y') from test_table;
This job also executes a simple Korn shell script on UNIX.
Script content:
echo "Job Name: $1"
echo "Job Control Flag: $2"
jobName=$1
jobFlag=$2
echo "Job ${jobName} started ..." >> $logFile
date >> $logFile
if [[ ${jobFlag} == "Y" ]]; then
echo "Job ${jobName} executed successfully with data found." >> $logFile
echo "Job ${jobName} executed successfully with data found."
exit 1
else
echo "Job ${jobName} finished with no data found." >> $logFile
echo "Job ${jobName} finished with no data found."
exit 0
fi
I usually force "ABORT" by using Exit 1 if data is found to request another Job that will execute an .SQL that will spool the data from the test_table.
whenever sqlerror exit sql.sqlcode
whenever sqlerror exit 1
prompt this is a test
set echo off
set trimspool on
set trimout off
set linesize 1500
set feedback on
set newpage none
SET HEADING OFF
set und off
set pagesize 10000
alter session set nls_date_format = 'dd-MON-yyyy HH24:MI:SS';
spool &1
SELECT 'PREV_RESULTSET;LAST_RESULTSET;NR_COUNT' FROM DUAL
UNION ALL
SELECT PREV_RESULTSET||';'||LAST_RESULTSET||';'||COUNT(1) NR_COUNT
FROM SEM_REPORT_PEDIDOS
GROUP BY PREV_RESULTSET, LAST_RESULTSET;
spool off
exit 1
By using spool &1 and by hard-coding the file testspool.csv in the "Other Output" option of that job's notification, I managed to receive an email with the content I want from that table.
But what I really want is to do this in one single job: run the validation, and if data is found, spool it and attach the .CSV file to the email notification sent by that job.
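A rough sketch of how that could look as a single Korn shell job follows. The connect string, credentials, and output path below are placeholders, and the job's notification would still point at /tmp/testspool.csv via the "pattern"/"Other Output" option; whether the notification behaves cleanly when no file was created depends on your AppWorx setup.
#!/bin/ksh
# Single-job sketch: check for data first, spool the CSV only if rows exist.
# Connect string, output path, and error handling are simplified assumptions.
outFile=/tmp/testspool.csv

rowCount=$(sqlplus -s user/password@db <<'EOF' | tr -d '[:space:]'
set heading off feedback off pagesize 0 verify off
select count(*) from sem_report_pedidos;
exit
EOF
)

if [[ ${rowCount} -gt 0 ]]; then
    sqlplus -s user/password@db <<EOF
set echo off trimspool on linesize 1500 pagesize 0 heading off feedback off
spool ${outFile}
SELECT 'PREV_RESULTSET;LAST_RESULTSET;NR_COUNT' FROM DUAL
UNION ALL
SELECT PREV_RESULTSET||';'||LAST_RESULTSET||';'||COUNT(1)
  FROM SEM_REPORT_PEDIDOS
 GROUP BY PREV_RESULTSET, LAST_RESULTSET;
spool off
exit
EOF
    echo "Data found, spooled to ${outFile}"
else
    echo "No data found, nothing to spool"
fi
exit 0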

Related

Checking network availability of PC and of specific file via CMD

I have a list of PC hostnames and I would like to simply check whether the PCs are online on the local network and whether a specific file exists on each PC.
I made a small "program" in CMD, but it's slow and takes too long to check even a few PCs on the network.
Example of the commands for the first PC (workstation):
::this first command will check if PC is online and it will save workstation's hostname to result.txt file for next used command.
wmic /FAILFAST:ON /user: "admin" /password:"123456" /node:"Workstation1.subdomain.domain" computersystem get "Name" | more >>result.txt
::this second command will check if specific file (for example: AcroRd32.exe) not exist and it will save result to result.txt if it is not exist. Problem is that the this part is executing too long if PC is offline.
if not exist "\\Workstation1.subdomain.domain\c$\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" echo File NOT EXIST! | more >>result.txt
The output in result.txt should look like this:
Workstation1.subdomain.domain
File NOT EXIST!
or
Workstation1.subdomain.domain
(empty line)
1. Is it possible to make the second command faster?
or
Is it possible to solve this another way in CMD?
Is CMD even suitable for this job?
Or is there another solution?
There is a simple way.
The first command, wmic /FAILFAST:ON /user: "admin" /password:"123456" /node:"Workstation1.subdomain.domain" computersystem get "Name", will exit with status code 0 if the network is up, the workstation is up, and your credentials are correct, and with a non-zero status if there is a problem.
So you just have to check the exit code:
::this first command will check if PC is online and it will save workstation's hostname to result.txt file for next used command.
set LOCALV_WORKSTATIONUP=0
( wmic /FAILFAST:ON /user: "admin" /password:"123456" /node:"Workstation1.subdomain.domain" computersystem get "Name" | more >>result.txt ) && set LOCALV_WORKSTATIONUP=1
IF "%LOCALV_WORKSTATIONUP%" == "1" (
::this second command will check if specific file (for example: AcroRd32.exe) not exist and it will save result to result.txt if it is not exist. Problem is that the this part is executing too long if PC is offline.
if not exist "\\Workstation1.subdomain.domain\c$\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" echo File NOT EXIST! | more >>result.txt
)
If you are using delayed expansion, take care to check !LOCALV_WORKSTATIONUP! instead of %LOCALV_WORKSTATIONUP%:
::this first command will check if PC is online and it will save workstation's hostname to result.txt file for next used command.
set LOCALV_WORKSTATIONUP=0
( wmic /FAILFAST:ON /user: "admin" /password:"123456" /node:"Workstation1.subdomain.domain" computersystem get "Name" | more >>result.txt ) && set LOCALV_WORKSTATIONUP=1
IF "!LOCALV_WORKSTATIONUP!" == "1" (
::this second command will check if specific file (for example: AcroRd32.exe) not exist and it will save result to result.txt if it is not exist. Problem is that the this part is executing too long if PC is offline.
if not exist "\\Workstation1.subdomain.domain\c$\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" echo File NOT EXIST! | more >>result.txt
)

Write to one output file from a few parallel LSF bsub jobs, avoiding writing at the same time

I have developed a program composed of two files:
An 'envelope bash file', which does a few things, writes to a log file, and at some point enters a for loop in which it submits one job at a time using bsub.
An 'internal bash file', which receives the name of the log file as input (in addition to other values necessary for its execution) and executes process X using the values it received from the 'envelope file'.
Once process X finishes, the 'internal script' writes to the log file that process X (with its specific serial number) has completed.
Since the for loop in the envelope file runs 10 times, at least 10 processes are executed in parallel, all submitted with bsub and given the SAME log-file name. The idea is that they all report to the same log file once they complete their execution of process X.
The general procedure works well: in each case process X is executed, and the log file accumulates all the completion notifications as required. However, in some instances the writing to the log file gets disturbed and output lines from two parallel runs run into each other.
I would like to lock the log file so that it receives text from only one parallel run at a time, to avoid cases where the text becomes mixed because two processes happen to write to the log file at exactly the same time.
Here is the part of my envelope file that calls bsub (reduced to the minimum necessary):
for ((i=1; i<=$batchesnumber; i++));
do
    bsub -J $SerialName -q normal "bash FetchFasta.bash $genome_fa ${SerialFileName}.bed $logfile"
done
Here is the part of my internal file that echo to the log-file:
(
    echo "~~~~~~~~~~~~~~~~~~"
    echo "^^^^^^^^^^^^^^^^^^"
    echo -n "Completed running "; bedtools -version
    echo "bedtools getfasta -s -fi $genome_fasta -bed $mySerialFile -fo ${mySerialFile%.*}.fa"
    echo "Run's completion time is: $timedate"
    echo -e "~~~~~~~~~~~~~~~~~~\n"
) >> $logfile
I would appreciate any useful solution!
There are a couple of ways I can think of going about this:
Have each job write its output to a different file (use $LSB_JOBID inside each job to name the file). Then use another "cleanup" job to concatenate all of the output into a single file. You can use job dependencies (bsub -w) to make sure the cleanup job runs after all the other jobs are done. (A sketch of this appears at the end of this answer.)
Implement a lock inside your "internal" job to make sure only one of them writes to the file at a time. This is a lot simpler than it might sound; one way to do it is to have each job try to create the same directory with mkdir before writing to the file, and then delete the directory after it's done. If a job fails to create the directory, it's because another job got there first and is currently writing to the file.
Here's a snippet illustrating #2 in bash:
# Try to get the lock every second
while ! mkdir lock &> /dev/null ; do
    sleep 1
done
# Got the lock, write to the logfile
echo blahblahblah >> $logfile
# Release the lock
rmdir lock
I should mention an important caveat here though: if one of your jobs dies while it's "holding the lock" (say someone sends it a kill signal at the wrong time) then it'll never remove the directory and all the other jobs won't be able to create it, so they'll just keep sleeping forever.
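For the first approach (per-job output files plus a cleanup job), a rough sketch of the envelope script might look like the following; the job-name pattern fetch_* and the per-job file names are assumptions, and it relies on your LSF version accepting job-name wildcards in -w:
# Each job writes to its own file named with $LSB_JOBID (expanded on the
# execution host); a dependent cleanup job concatenates everything afterwards.
for ((i=1; i<=$batchesnumber; i++))
do
    # The internal script is unchanged: it just receives a per-job log name
    # as its third argument instead of the shared $logfile.
    bsub -J "fetch_${i}" -q normal \
        "bash FetchFasta.bash $genome_fa ${SerialFileName}.bed out.\$LSB_JOBID.log"
done

# Runs only after every job named fetch_* has ended.
bsub -J cleanup -q normal -w 'ended("fetch_*")' \
    "cat out.*.log >> $logfile && rm -f out.*.log"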

Unix Script Exit Status History

There is an existing script that runs daily. Is it possible to track the script's return code after it completes, so I can tell whether its previous run finished successfully or not? I don't want to modify the existing script.
After the script has executed, its exit code is stored in the $? variable, which you can save to a file. For example:
/path/to/script.sh
echo $? >> /path/to/script.log
You probably want to save the date too:
/path/to/script.sh
result=$?
echo $(date) $result >> /path/to/script.log

shell script email notification upon success or failure

I'm writing a shell script that basically transfers data files through SFTP to a database server and then invokes a PL/SQL procedure which loads the data from those files (external tables) into internal database tables.
I've been doing some research on effective exception handling in shell scripts, and it appears the set -e option can be used to terminate a script with an error whenever any command in the script returns a non-zero exit code.
So, my plan is to have a script which contains all of the processing that needs to get done (SFTP, moving/deleting files, calling pl/sql procedure, etc...) and to include set -e at the top of the script. I also plan to redirect output to a log file in this script.
Then, I plan to have another script that calls the main processing script and then emails the log that gets produced with either a "Success" or "Failure" indicator in the subject of the email.
Are there any "gotchas" that any of you can foresee in this approach or does this seem reasonable?
Sounds reasonable.
One thing you could also do to make it a single command and fewer scripts:
someSFTPscript &> somelogfile.txt; if [ $? -eq 0 ]; then echo "Success"; else echo "Failure"; fi
someSFTPscript &> somelogfile.txt; redirects the output of the script to a log file.
if [ $? -eq 0 ]; then echo "Success"; else echo "Failure"; fi checks whether it succeeded (returned 0) or failed (any non-zero value). Simply replace the echo with your mail commands.
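For example, a minimal sketch (the script name, log path, and address are placeholders):
# Run the transfer, capture all output, then email the log with a
# Success/Failure subject depending on the exit status.
someSFTPscript &> somelogfile.txt
if [ $? -eq 0 ]; then
    mailx -s "SFTP load: Success" you@example.com < somelogfile.txt
else
    mailx -s "SFTP load: Failure" you@example.com < somelogfile.txt
fi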
Thanks for all of the feedback on this.
I ended up going with this "wrapper" shell script that calls the main processing script. A cron job is going to launch it daily in my particular case.
Comments are certainly welcome if this can be improved.
#!/bin/sh
################################################################################
# Author : Zack Macomber #
# Date : 02/22/2012 #
# Description: Calls main_process.sh and emails results of the process. #
# Also appends to master log file #
################################################################################
# Flag any errors that occur during processing
set -e
# Set newly created files to "rw" for everyone
umask 111
#############
# VARIABLES #
#############
EMAIL_RECIPIENTS=my_email#some_domain.com
MAIN_DIR=/scripts/
#############
# FUNCTIONS #
#############
send_email()
{
uuencode results.log results.log | \
mailx -s "DATA_LOAD $1 - consult attached log for details" $EMAIL_RECIPIENTS
}
################
# MAIN PROCESS #
################
cd $MAIN_DIR
sh main_process.sh > results.log && send_email SUCCESS || send_email FAILURE
cat results.log >> pub_data_load.log
exit 0

KSH: Block two processes from running at the same time

I have two processes that run at random times, and I want to prevent them from ever running at the same time because of a reader-writer problem. My idea is that whenever a process runs, it creates a LOCK file; both processes contain logic that checks whether the LOCK exists. If the LOCK exists, the process sleeps for a bit, wakes up, and checks again. Here is a small piece of it:
if [[ ! -f ${INPUT_DIR}/LOCK ]]
then
    # Create LOCK file
    cat /dev/null > ${INPUT_DIR}/LOCK
    retcode=${?}
    if [[ ${retcode} -ne 0 ]]
    then
        echo `date` "Error in creating LOCK file by processA.sh - Error code: " ${retcode} >> ${CORE_LOG}
        exit
    fi
    echo `date` "LOCK turns on by processA.sh" >> ${CORE_LOG}
    ...
    rm ${INPUT_DIR}/LOCK
fi
However, this does not QUITE stop the two processes from running at the same time. There are rare times when both processes get past the first IF check for the LOCK's existence (if both are invoked at the same time and no LOCK exists, it is very likely that both will pass the first IF statement) and both try to create a LOCK file, since cat /dev/null > ${INPUT_DIR}/LOCK will not generate an error even when LOCK already exists. Is there a solution to this?
For the main versions of Unix, the preferred solution is to use a lock directory; I would assume this is true for Linux as well, but I haven't had to test it recently.
Creating a directory is an atomic operation, and only one of the processes will succeed, assuming you are using a static name like /bin/mkdir /tmp/myProjWorkSpace/LOCK (don't use -p here: with -p, mkdir succeeds even if the directory already exists, which would defeat the lock). If you need information embedded in your lock, then you need a file, and you need separate subdirectories per process, possibly adding the process ID (.$$) to the directory name.
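Adapted to the snippet in the question, a rough sketch might look like this (INPUT_DIR and CORE_LOG come from the question; the 5-second retry and the EXIT trap, which needs ksh93 or bash, are assumptions):
# Acquire the lock by creating a directory; mkdir is atomic, so only one
# process can hold the lock at a time.
until mkdir "${INPUT_DIR}/LOCK" 2>/dev/null
do
    sleep 5        # the other process holds the lock; wait and try again
done
echo `date` "LOCK acquired by processA.sh" >> ${CORE_LOG}

# Release the lock even if the script exits early or is interrupted.
trap 'rmdir "${INPUT_DIR}/LOCK"' EXIT

# ... work that must not overlap with the other process goes here ...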
I hope this helps.
