How to transfer an XFB file using the BTOPUT command on a Unix server

We have one .sh file which contains all the configuration. It has something like this:
export MARK_REMOTE_NODE=(server name)
The requirement is that we have to send the same file to two different servers. Is it possible to transfer the same XFB file to different REMOTE_NODE values, i.e. different servers, in UNIX?
While searching I found that BTOPUT transfers one file at a time to one partner. So can anyone tell me how to transfer a file to two different servers?

XFB already has a hard job matching different operating systems and filesystems, with optional compression and a retry mechanism. You also need to decide what should happen when one transfer fails: only send the second when the first succeeds, fire-and-forget, always try to send both and trust your incident management to catch the errors raised by your monitoring, wait for the asynchronous transfer for a time that depends on the file size, and so on.
I wouldn't rely on the XFB options for this; just write a loop in your script that does exactly what you want. An additional advantage is that a migration to another communication tool will be easier.
while read -r targethost; do
    # You need a copy, since XFB will rename and delete the file
    cp outputfile "outputfile.${targethost}"
    my_send_xfb "${targethost}" "outputfile.${targethost}"
    # Optional: check the result of posting the file to the queue
    if [ $? -ne 0 ]; then
        echo "XFB not ready or configured for ${targethost}"
        # Perhaps break / send an alert / ...
    fi
done < myhosts
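The myhosts file the loop reads is just a plain list of target servers, one per line, for example (hypothetical names):
serverA
serverB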

Related

Change authorization level when initializing SAS batch job

I run SAS batch jobs on a UNIX server and usually run into the problem that, in batch, I cannot overwrite SAS datasets that were created by my user locally without changing the authorization level of each file in Windows. Is it possible to sign on with my user id and password when initializing the batch job so that I get full authorization (to my own files) in batch?
Another issue is that I don't have authorization to run UNIX commands using PIPE in a local remote session on the server, so I cannot terminate my own sessions. It is, on the other hand, possible to run PIPE in batch, but that only allows me to terminate batch jobs. So I also wonder whether it is possible to run a pipe command in batch using my id and password, since the batch user does not have authorization to cancel "local remote sessions" running under my user?
Example code for terminating a process:
%let processid = 6938710;
%let unixcmd = "kill &processid";
%put executing &unixcmd;
filename unixcmd pipe &unixcmd.;
/* the piped command only runs when the fileref is read */
data _null_; infile unixcmd; input; run;
There's a good and complete answer to your first point in the following SAS support page.
You can use the umask Unix command to specify the default file permission policy used for the permanent datasets created during a SAS session (batch or not).
If you are launching a Unix script which invokes a SAS batch session, you can put a umask command just before the sas invocation.
Otherwise you can adopt a more permanent solution by including the umask command in one of the places listed in the above SAS support article.
You are probably interested in something like:
umask 002
This will give all new datasets rw-rw-r-- permissions, so other users in your group (such as the batch user) can overwrite them.
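A minimal sketch of the wrapper-script approach (the script and file names below are placeholders, and the sas options shown are just a typical batch invocation; adjust to whatever you use today):
#!/bin/sh
# run_sas_batch.sh -- hypothetical wrapper around the existing batch invocation
# Files created by this SAS session will be group-writable (664)
umask 002
sas -sysin /home/me/programs/my_job.sas -log /home/me/logs/my_job.log -noterminal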

UNIX - Stopping a custom service

I created a client-server application and now I would like to deploy it.
During development I started the server in a terminal, and when I wanted to stop it I just typed "Ctrl-C".
Now I want to be able to start it in the background and stop it whenever I want by just typing:
/etc/init.d/my_service {start|stop}
I know how to write an init script, but the problem is how to actually stop the process.
I first thought to retrieve the PID with something like:
ps aux | grep "my_service"
Then I found a better idea, still based on the PID: storing it in a file so it can be retrieved when trying to stop the service.
That still felt dirty and unsafe, so I eventually thought about using sockets to let the "stop" process tell the running process to shut down.
I would like to know how this is usually done, or rather, what is the best way to do it?
I checked some of the files in init.d and some of them use PID files, but via a particular command, start-stop-daemon. I am a bit suspicious of this method, which seems unsafe to me.
If you have a utility like start-stop-daemon available, use it.
start-stop-daemon is flexible and can use 4 different methods to find the process ID of the running service. It uses this information (1) to avoid starting a second copy of the same service when starting, and (2) to determine which process ID to kill when stopping the service.
--pidfile: Check whether a process has created the file pid-file.
--exec: Check for processes that are instances of this executable.
--name: Check for processes with the name process-name.
--user: Check for processes owned by the user specified by username or uid.
The best one to use in general is probably --pidfile. The others are mainly intended to be used in case the service does not create a PID file. --exec has the disadvantage that you cannot distinguish between two different services implemented by the same program (i.e. two copies of the same service). This disadvantage would typically apply to --name also, and, additionally, --name has a chance of matching an unrelated process that happens to share the same name. --user might be useful if your service runs under a dedicated user ID which is used by nothing else. So use --pidfile if you can.
For extra safety, the options can be combined. For example, you can use --pidfile and --exec together. This way, you can identify the process using the PID file, but don't trust it if the PID found in the PID file belongs to a process that is using the wrong executable (it's a stale/invalid PID file).
I have used the option names provided by start-stop-daemon to discuss the different possibilities, but you need not use start-stop-daemon: the discussion applies just as well if you use another utility or do the matching manually.
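As a minimal sketch of what that looks like in an init script (the daemon path, PID file location, and service name are placeholders, and --background is only needed if your server does not daemonize itself):
#!/bin/sh
# /etc/init.d/my_service -- illustrative only; paths and names are placeholders
DAEMON=/usr/local/bin/my_service
PIDFILE=/var/run/my_service.pid

case "$1" in
  start)
    # --make-pidfile records the PID so the stop action can find it later;
    # --background detaches the process if the daemon stays in the foreground
    start-stop-daemon --start --background --make-pidfile \
        --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
  stop)
    # Combining --pidfile and --exec guards against a stale PID file
    start-stop-daemon --stop --pidfile "$PIDFILE" --exec "$DAEMON"
    rm -f "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac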

Sending emails asynchronously: spool, queue, and cronjob/daemon

I want to send emails asynchronously for faster and lighter HTTP responses, but I'm struggling with many new concepts.
For example, the documentation talks about spooling. It says I should spool to a file and then send the emails with a console command. But how should I be running that command? If I set a cron job to execute it every minute (the minimum available in cron), users will have to wait an average of 30 seconds for their emails to be sent (e.g. the registration email).
So I thought of using a queue instead. I'm already using RabbitMQBundle for image processing (eg, thumbnail creation). But I only use this one periodically, so it is consumed from within a cronjob.
Maybe I should create a daemon that is always waiting for new messages to arrive in the email queue and delivers them ASAP?
The solution is to send every email to a queue, and then consume that queue with a service. My service is very simple: it just takes items out of the queue, where each item is an array with from, to, body, etc., and sends that email. I'm using Thumper, which makes Rabbit easier to use: github.com/videlalvaro/Thumper . And I make sure the service is always up using 'sv' (from Runit): smarden.org/runit/sv.8.html . You can use any other service or daemon manager you like.
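To illustrate the "always up" part (the service directory and console command below are hypothetical, not from the answer): under runit a supervised consumer is just an executable run script that execs the worker, and runit restarts it whenever it exits.
#!/bin/sh
# /etc/service/mailer-consumer/run -- hypothetical runit run script
exec 2>&1
# Replace with however your queue consumer is actually started
exec php /var/www/app/console my:email-consumer
With that in place, sv status mailer-consumer and sv restart mailer-consumer manage the worker.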
I have the same problem as you had. How did you finally solve it?
For the moment I run a little script from the crontab that keeps itself running in a loop:
<?php
// Symfony's LockHandler prevents two copies of this script running at once
include('/var/www/vendor/symfony/symfony/src/Symfony/Component/Filesystem/LockHandler.php');
use Symfony\Component\Filesystem\LockHandler;

$lock = new LockHandler('mailer:loop');
if ($lock->lock()) {
    // Flush the spooled messages, then pause briefly
    system('cd /var/www && php app/console swiftmailer:spool:send');
    sleep(1);
    $lock->release();
    // Re-launch this script in the background so it effectively loops
    shell_exec('cd /var/www && php LoopMailer.php > /dev/null 2>/dev/null &');
}
It's not very clean but it does its job.
You need two services: one for spooling messages and another for sending instant emails. Check this
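For reference, the spool-plus-cron approach from the question usually boils down to a crontab entry along these lines (the paths and limits are placeholders; --message-limit and --time-limit are options of swiftmailer:spool:send, but check your bundle version):
# Hypothetical crontab entry: flush the Swift Mailer file spool every minute
* * * * * cd /var/www && php app/console swiftmailer:spool:send --message-limit=100 --time-limit=50 > /dev/null 2>&1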

How to check for existence of Unix System Services files

I'm running batch Java on an IBM mainframe under JZOS. The job creates 0 to 6 ".txt" outputs depending on what it finds in the database. I then need to convert those files from Unix to MVS (EBCDIC), and I'm using the OCOPY command running under IKJEFT01. However, when a particular output was not created, I get a JCL error and the job ends. I'd like to check for the presence or absence of each file and set a condition code to control whether the IKJEFT01 steps are executed, but I don't know what to use to access the Unix file pathnames.
I have resolved this issue by writing a COBOL program to check the converted MVS files and set return codes to control the execution of subsequent JCL steps. The completed job is now undergoing user acceptance testing. Perhaps it sounds like a kludge, but it does work and I'm happy to share this solution.
The simplest way to do this in JCL is to use BPXBATCH as follows:
//EXIST    EXEC PGM=BPXBATCH,
//         PARM='PGM /bin/cat /full/path/to/USS/file.txt'
//*
//         IF (EXIST.RC = 0) THEN
//* do whatever you need to
//         ENDIF
If the file exists, the step ends with CC 0 and the IF succeeds. If the file does not exist, you get a non-zero CC (256, I believe), and the IF fails.
Since there is no //STDOUT DD statement, there's no output written to JES.
The only drawback is that it is another job step, and if you have a lot of procs (like a compile/assemble job), you can run into the 255 step limit.

Any method for going through large log files?

// Java programmers: when I say method, I mean a 'way to do things'...
Hello All,
I'm writing a log miner script to monitor various log files at my company. It's written in Perl, though I have access to Python and, if I REALLY need to, C (though my company doesn't like binary files). It needs to go through the last 24 hours of logs, take each log code and check whether we should ignore it or email the appropriate people (me). The script would run as a cron job on Solaris servers. Here is what I had in mind (this is only pseudo-ish... and badly written pseudo):
main()
{
    $today = Get_Current_Date();
    $yesterday = Subtract_One_Day($today);
    `grep $yesterday '/path/to/log' > /tmp/log`;       # Get logs from the previous day
    `awk '{print $X}' /tmp/log > /tmp/log_codes`;      # Extract the log code field (field X)
    SubRoutine_to_Compare_Log_Codes('/tmp/log_codes');
}
Another thought was to load the log file into memory and read it there... that is all fine and dandy except for two small problems.
These servers are production servers and serve a couple million customers...
The log files average 3.3 GB (which covers about two days of logs)
So not only would grep take a while to go through each file, it would also use up CPU and memory that are needed elsewhere, and loading a 3.3 GB file into memory is not the wisest of ideas (at least IMHO). I had a crazy idea involving assembly code and memory locations, but I don't know SPARC assembly, so flush that idea.
Anyone have any suggestions?
Thanks for reading this far =)
Possible solutions: 1) have the system start a new log file every midnight -- this way you could mine the finite-size log file of the previous day at a reduced priority; and 2) modify the logging system so that it automatically extracts certain messages for further processing on the fly.
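A rough sketch of option 1 (the rotated-log path, naming scheme, and log-code field number are all assumptions, not something from the question):
#!/bin/sh
# Mine yesterday's rotated log at low priority.
# Assumes the logger rolls to /var/log/app/app.YYYY-MM-DD at midnight (hypothetical path).
YESTERDAY=`TZ=GMT+24 date +%Y-%m-%d`   # common (if imperfect) Solaris trick for "yesterday", since date -d is unavailable
LOG=/var/log/app/app.$YESTERDAY
# nice keeps the miner from competing with production work;
# awk pulls out the log-code field (assumed to be field 5) in a single pass
nice -n 19 awk '{print $5}' "$LOG" | sort | uniq -c > /tmp/log_codes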
