Trigger a shell script on receiving an email - unix

How can we trigger a shell script on a Unix server through an email with a particular subject?

procmail allows you to act on incoming mails, including filtering and starting external commands.
Some useful links:
general procmail documentation: http://pm-doc.sourceforge.net/doc/
start a shell command as a procmail rule: http://porkmail.org/era/procmail/mini-faq.html#rtfm
Just in case the link goes down, here is the relevant content from the second link above:
Q: How can I run an arbitrary Perl or shell script on all or selected
incoming mail?
A: Install Procmail. Read the manual pages (there are several). Thank
you.
:0
* conditions, if any
| your-script-here
The conditions, in their simplest form, are regular expressions to
match against the header of each incoming mail message. Correction:
Even simpler, you can leave out the condition lines completely if you
want to do your action (in this case, run a shell script)
unconditionally.
More-complicated conditions can also be exit codes of other shell
scripts or programs, or tests against the full body of the message, or
against Procmail variables (Procmail's variables are also exported to
the environment of subprocesses, so they are essentially environment
variables. There are details about this later in this FAQ.)
Actions can also be to save the message to a folder (appended to a
Unix mailbox file, or written to a new file in a directory) or to
forward the message to one or more other addresses. Finally, the
action can be a nested block of more "recipes," as these
condition-action mappings are called in Procmail jargon, to try if the
outer condition is met. The procmailrc(5) manual page has the full
scoop.
Obviously, you are not restricted to Perl or shell scripts. Anything
you can run from a Unix command prompt can be run from Procmail, in
principle, although running interactive programs doesn't usually make
much sense.
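Applied to the original question (run a script only when the Subject matches a keyword), a minimal ~/.procmailrc could look like the sketch below; the keyword and script path are placeholders, and procmail must be acting as your local delivery agent (or be invoked from .forward):
# run the script only for mails whose Subject contains "TRIGGER"
:0
* ^Subject:.*TRIGGER
| /home/user/bin/handle-mail.sh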

More generally, but to my mind less useful than Wim's procmail suggestion: you can even just point your .forward at an executable with "|script.sh".
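For example, a ~/.forward file containing nothing but the following line (script path hypothetical) pipes every incoming message to the script:
"|/home/user/bin/script.sh"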

In theory, you could write a program that monitors/polls the incoming mail server and checks the subject line using the standard POP3 protocol; if the subject line contains a particular trigger keyword, it invokes the shell script. The approach would be roughly as follows (there may be an open-source solution already out there; a rough sketch in C follows the list):
Using sockets, connect to the incoming mail server by IP address and port (usually 110 for POP3), non-blocking so as not to seize up and chew up CPU time, within a thread that loops forever
List the emails using the POP3 protocol
Pull down the headers via the POP3 protocol and run a regexp on the subject line
If the regexp matches the subject line, issue a trigger, perhaps a system call, to invoke the shell script
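A very rough sketch of that idea in C is below (no endless polling loop or error recovery; the host name, credentials, trigger keyword, and script path are all placeholder assumptions, and the naive one-read-per-reply handling assumes small responses):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

/* Send one POP3 command (or NULL to just read) and collect one reply. */
static int pop3_exchange(int fd, const char *cmd, char *buf, size_t len)
{
    if (cmd != NULL && write(fd, cmd, strlen(cmd)) < 0)
        return -1;
    ssize_t n = read(fd, buf, len - 1);   /* naive: assumes the reply fits in one read */
    if (n <= 0)
        return -1;
    buf[n] = '\0';
    return 0;
}

int main(void)
{
    struct addrinfo hints = { 0 }, *res;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("pop.example.com", "110", &hints, &res) != 0)   /* hypothetical host */
        return 1;
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    char buf[8192];
    pop3_exchange(fd, NULL, buf, sizeof buf);                 /* +OK greeting */
    pop3_exchange(fd, "USER myuser\r\n", buf, sizeof buf);    /* placeholder credentials */
    pop3_exchange(fd, "PASS mypassword\r\n", buf, sizeof buf);
    pop3_exchange(fd, "TOP 1 0\r\n", buf, sizeof buf);        /* headers of message 1 only */

    /* crude "regexp": trigger when the headers contain a Subject line and the keyword */
    if (strstr(buf, "Subject:") != NULL && strstr(buf, "TRIGGER") != NULL)
        system("/home/user/bin/handle-mail.sh");              /* hypothetical script */

    pop3_exchange(fd, "QUIT\r\n", buf, sizeof buf);
    close(fd);
    freeaddrinfo(res);
    return 0;
}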

Related

Does a call to BPXBATCH from JCL use the priority of the batch job or is priority in OMVS independent?

I am calling a shell script that does some processing from JCL using BPXBATCH like this:
//STEP2 EXEC PGM=BPXBATCH,
// PARM='SH PATHTOSCRIPT.SH MYARGUMENT'
The JCL has the service class with the highest priority. However, the shell script enters a queue waiting for resources. Sometimes it runs quickly, and other times it waits a long time for resources. The priority of the JCL seems to be independent of that of the shell script. I read that using the "nice" command in Unix might increase the priority of the shell script.
I want to be sure first that the priority of a JCL job on z/OS doesn't affect the priority of the Unix process called from that JCL through BPXBATCH. I cannot find any documentation about it.
Short Answer
To answer your question first: BPXBATCH runs in one address space, and the shell runs in a second address space. Commands issued by the shell may run in the same address space as the shell, or may run in one or more additional address spaces.
The BPXBATCH address space has a service class, and the shell address space(s) have a service class, probably a different one. Each service class has its own performance goal, and this tells the system how to manage that work.
Detailed Answer
The z/OS workload manager (WLM) is responsible for assigning work to a service class when the new work is presented to it. Service classes specify performance goals and importance levels, not priorities. WLM manages all work in the system according to its performance goal, based on the importance of the goal.
There are a couple of (workload management) subsystems that may start new work. Examples of such subsystems are:
JES, which manages batch work, i.e. batch jobs.
TSO, which manages interactive TSO user work (TSO login).
OMVS, which manages forked, and non-locally spawned z/OS UNIX work.
STC, which manages started task workload.
This list is not complete; I listed only the subsystems that I need to answer the question.
When JES2/3 receives a job that shall run on the system, it presents some job attributes to WLM, and WLM assigns the job to a service class. It does so using WLM classification rules for subsystem type JES, and the attributes given.
Everything that runs in this job, i.e. in the job's address space, will be managed towards the performance goal of the service class assigned. This includes z/OS UNIX work that is run in this very address space, i.e. work that is not started via UNIX fork() or non-local spawn().
When a z/OS UNIX process starts a new process via fork(), or via non-local spawn(), this new work is handled by the WLM subsystem OMVS. The OMVS subsystem presents some attributes of the new process to WLM, and WLM assigns the process to a service class. It does so using WLM classification rules for subsystem type OMVS, and the attributes given. This kind of work always runs in a separate, new address space.
BPXBATCH starts the (first) UNIX command it is told via PARM=, or //STDPARM, as a new process using either fork(), or spawn(). The spawn() may be a local spawn(), or a non-local spawn(). Which one is done depends on many factors, too complex to explain here.
The important point here is that when running BPXBATCH with PARM='SH ...', the shell process will always run in a separate, new address space and will be classified via WLM subsystem OMVS.
The result is that BPXBATCH is running in one address space with its service class, and the shell is running in a second address space with its service class. The service classes may be the same, but usually they are different WLM definitions with different performance goals.
As a starter, have a look at z/OS MVS Planning: Workload Management
nice() on z/OS UNIX
nice() has no effect on z/OS UNIX unless the system has been set up to support it. There is a parameter PRIORITYGOAL(...) in the BPXPRMxx parmlib member to set up a list of up to 40 WLM service classes that will be used in conjunction with nice(). I have never heard of anyone having set this parameter.
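As a sketch only (the service class names are placeholders; the exact coding rules are in the Initialization and Tuning Reference mentioned below), the BPXPRMxx statement would look something like:
/* service classes to associate with nice()/setpriority() values */
PRIORITYGOAL(OMVSLOW,OMVSMED,OMVSHI)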
See z/OS MVS Initialization & Tuning Reference for details about the BPXPRMxx member.

How are stdin and stdout made unique to the process?

Stdin and stdout are single files that are shared by multiple processes to take in input from the users. So how does the OS make sure that only the input given to a particular program is visible in the stdin for that program?
Your assumption that stdin/stdout (while having the same logical name) are shared among all processes is wrong at best.
stdin/stdout are logical names for open files that are forwarded (or initialized) by the process that has started a given process. Actually, with the standard fork-and-exec pattern, the setup of those may already occur in the new process (after fork) before exec is called.
stdin/stdout usually are just inherited from the parent. So, yes, there exist groups of processes that share stdin and/or stdout for a given file node.
Also, as a file descriptor may be one side of a pipe, you need not have a file from a filesystem (or a device node) linked to any of the well-known standard channels (you should also include stderr in your considerations).
The normal way of setup is:
the parent (e.g. your shell) is calling fork
the forked process (child) is setting up environment, standard I/O channels and anything else.
the child then executes exec to overlay the process with the target image to be executed.
When setting up: it will either keep the existing channels or replace them with new ones, e.g. creating a pipe and linking the endpoints appropriately. (To be honest, creating the pipe needs to happen before the fork in that simplified description.)
This way, most processes have their own I/O channels.
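A minimal C sketch of that fork / set-up-stdio / exec sequence, assuming the parent wants to capture the child's stdout through a pipe (the executed command is an arbitrary example):

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int pfd[2];
    if (pipe(pfd) < 0) { perror("pipe"); return 1; }   /* pipe is created before the fork */

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                     /* child: set up its stdio, then exec */
        close(pfd[0]);                  /* child only writes into the pipe */
        dup2(pfd[1], STDOUT_FILENO);    /* the child's stdout now points at the pipe */
        close(pfd[1]);
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* only reached if exec fails */
        _exit(127);
    }

    close(pfd[1]);                      /* parent only reads from the pipe */
    char buf[4096];
    ssize_t n;
    while ((n = read(pfd[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(pfd[0]);
    waitpid(pid, NULL, 0);
    return 0;
}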
Nevertheless, multiple processes may write into a channel they are connected to (have a valid file descriptor to). When reading, each chunk of data (usually lines with terminals or blocks with files) is read by a single reader only. So if you have several (running) processes reading from a terminal as stdin, only one will read your typing, while the other(s) will not see this typing at all.

vxWorks kernel shell abilities

I have a car navigation system installed in my car and I figured out that it's running vxWorks 6.9.3.
What I'm trying to achieve is to change some hidden settings of the nav-system.
Small introduction: the nav system has the ability to connect to the internet via Bluetooth. I set up a small web server whose only purpose is to detect the client's IP address. I opened that web site from the head unit's browser and detected the head unit's IP address. Then I was able to scan its open network ports.
It turned out that port 23 is open, and I'm able to telnet to it.
It didn't require any password or login, and it reports operating system info: Windriver vxWorks 6.9.3.
I can run various commands here, inspect filesystem, etc.
But I don't know how I can change anything. I even found a way to transfer files between a USB key and the device.
I found that all the settings I want to change are stored in .sqlite files. Some of them are gzipped and have an .inf file with checksums. The checksum algorithm is proprietary, so I can't transfer the .sqlite files from the device to the USB key, change something, then gzip them and calculate a new checksum.
I think the OS can somehow interact with the .sqlite files in memory without un-gzipping them.
So, is there any way to open an sqlite shell on the device using the VxWorks kernel shell?
If yes, that would be perfect and enough to achieve anything I want.
If this can't be achieved, can somebody give me some advice about what possibilities I have from the VxWorks kernel shell?
The commands available on the VxWorks shell depend on the loaded applications and the kernel itself. From the shell you can call all "public functions" loaded by VxWorks. You enter the function call in a C-like syntax, and the shell parses the arguments, pushes them onto the stack, and jumps to the address of the function, just like a normal function call in C.
A helpful function to check whether a function exists is lkup "foo", which lists all functions containing "foo" in their name (case sensitive!). But it doesn't tell you anything about the expected parameters. If you do not pass all parameters to the function via the shell, the interpreter pushes some zeroes onto the stack before executing the function call. This may lead to very strange results and may even damage your system (depending on the function)...
If you're able to load a program, you may want to use the functions of symLib to iterate over all symbols of the VxWorks sysSymTbl.
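As a sketch only, assuming the classic symLib API is available in your kernel, a loaded module could look up a single symbol by name like this (symEach() can be used instead to walk the whole table):

#include <vxWorks.h>
#include <symLib.h>
#include <sysSymTbl.h>
#include <stdio.h>

/* Check whether a function is present in the system symbol table
   before trying to call it from the shell. */
void checkForSymbol(void)
{
    char      name[] = "sqlite3_open";   /* example only; use whatever lkup showed you */
    char     *value;
    SYM_TYPE  type;

    if (symFindByName(sysSymTbl, name, &value, &type) == OK)
        printf("%s found at %p\n", name, (void *)value);
    else
        printf("%s is not in the symbol table\n", name);
}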

How to render a remote ncurses console?

I want to write a remote console working like a telnet server. A user is able to use telnet to log into the server, then type some commands to do some work.
A good example of this is the console of a router OS. What's confusing me right now is this: I can accept the user's input, do something, then print some text back, but I want to use ncurses to give the console more features (such as command auto-completion, syntax coloring, ...), so how can I do that? Because the console is on the user's side, if the server calls ncurses APIs it'll just change things on the server...
Maybe this is a stupid question, but I'm really a newbie at this. Any suggestions are appreciated.
This is more difficult than you might think.
You need to understand how terminals work - they use special control sequences for e.g. moving the cursor or color output. This is described by a terminfo file which is terminal-specific. Ncurses translates API calls (e.g. move cursor to a certain position) to such control sequences using terminfo.
Since the terminal (nowadays xterm, gnome-terminal, screen, tmux, etc) is on the client side, you have to pass the type of terminal from the client to the server. That's why e.g. ssh passes this information from the ssh client to the server (try echo $TERM in your ssh session - it might be 'linux' if you are logged in via the console, or 'xterm', if you are using X and an xterm). Also, you better have the respective terminfo available on the server.
Another piece of the puzzle is pseudo terminals. As nowadays relatively few people use serial terminals, their semantics are emulated so that applications and libraries (e.g. curses and its friends) originally developed for serial consoles keep working. This is achieved via pseudo terminals - these are like pipes, a master and a slave device communicates, anything written on one side comes out on the other side. For a login process, getty, for example, can just use one side of a pty device and think it's a serial line - your server program must handle the other side of the pty, sending everything it gets from the pty to your client via the network.
Terminal emulators also use ptys, type tty into your terminal, and you'll get something like /dev/pts/9 if you're using a terminal emulator. On the other side of the pty it's usually your shell, communicating with your terminal emulator via the pty.
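To make that concrete, here is a rough server-side sketch using forkpty() (Linux, libutil; conn_fd is assumed to be an already-accepted client socket, and error handling is mostly omitted):

#include <pty.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/select.h>

/* Run a shell on the slave side of a pty and shuttle bytes between the
   master side and the network connection. */
void serve_shell(int conn_fd)
{
    int master;
    pid_t pid = forkpty(&master, NULL, NULL, NULL);
    if (pid < 0)
        return;
    if (pid == 0) {                               /* child: the pty slave is now its stdio */
        execlp("/bin/sh", "sh", "-l", (char *)NULL);
        _exit(127);
    }
    for (;;) {                                    /* parent: copy bytes in both directions */
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(master, &fds);
        FD_SET(conn_fd, &fds);
        int maxfd = master > conn_fd ? master : conn_fd;
        if (select(maxfd + 1, &fds, NULL, NULL, NULL) <= 0)
            break;
        char buf[4096];
        ssize_t n;
        if (FD_ISSET(master, &fds)) {             /* output from the shell -> client */
            if ((n = read(master, buf, sizeof buf)) <= 0) break;
            write(conn_fd, buf, n);
        }
        if (FD_ISSET(conn_fd, &fds)) {            /* client keystrokes -> the shell */
            if ((n = read(conn_fd, buf, sizeof buf)) <= 0) break;
            write(master, buf, n);
        }
    }
    close(master);
}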
Your client program can more or less just use standard input and standard output. If your terminal information is correct, the rest will be handled by your terminal emulator, just pass anything you receive from your server program to stdout, and send anything you read from stdin to your server program.
Hopefully I haven't left out any important detail. Good luck!
It is possible to have ncurses operate on streams other than stdin and stdout. Call newterm() instead of initscr() to set the input and output file handles for ncurses.
But you will need to know what sort of terminal is on the remote end of the connection (ssh and telnet both have mechanisms for communicating this to the server), and you will also want a fallback to a non-ncurses interface in case the remote end is not a supported terminal type (or if you can't determine the terminal type).
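A short sketch of the newterm() approach; conn_fd is assumed to be the already-connected client socket, and term_type the terminal name the client reported (e.g. "xterm"):

#include <curses.h>
#include <stdio.h>
#include <unistd.h>

/* Drive ncurses over a socket instead of the server's own terminal. */
void serve_client(int conn_fd, const char *term_type)
{
    FILE *in  = fdopen(conn_fd, "r");
    FILE *out = fdopen(dup(conn_fd), "w");        /* separate FILE for writing */

    SCREEN *scr = newterm(term_type, out, in);    /* instead of initscr() */
    if (scr == NULL)
        return;                                   /* unknown terminal: fall back to plain text */
    set_term(scr);

    mvprintw(0, 0, "hello from the server side");
    refresh();
    getch();

    endwin();
    delscreen(scr);
    fclose(in);
    fclose(out);
}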

ftp quote site list of available options - Execute commands

I was looking for a list of available options for the ftp "quote site" command, which allows doing a lot of stuff like executing commands on the target system. For example, the one below can be used to submit jobs on mainframes.
quote site filetype=jes
put filetoexecute.jcl
I know that there are similar options for the Unix environment as well. Is there a list of available options for this quote command for Unix and mainframe (also Windows, if available) environments?
Also, below is an extra question based on these.
Is there any way to execute CA-7 commands from this FTP? If not, I was looking into a REXX example which executes the CA-7 commands passed as arguments. But it is failing with a "CA-7 RECEIVER NOT FOUND" error.
PARSE UPPER ARG COMMAND
ADDRESS CA7 COMMAND
SAY 'RC=' RC
X=QUEUED()
SAY 'QUEUED() =' X
DO I=1 TO X
  PULL LINE
  LINE2=SUBSTR(LINE,2)
  SAY LINE2
END
The command below is also failing with the same error.
ADDRESS CA7 "'LQ,SEQ=JOB,JOB=*'"
I have checked the CA-7 manual and don't know how to make sure that the CA-7 environment is configured to execute the above commands.
Can you please help?
The IBM FTP server supports HELP SITE; this gives you all of the operands that the SITE command supports. You can issue STAT to get the current SITE values.
This is specific to the IBM FTP server. Each FTP server is different, and it may or may not implement the SITE command. I suggest looking at the documentation for your FTP server to find out whether it supports SITE and STAT, or similar.
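For example, from a standard FTP client you can pass those commands through verbatim with quote:
ftp> quote HELP SITE
ftp> quote STAT
The first lists the SITE operands the server accepts; the second shows the current status, which on the IBM server includes the current SITE settings.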
