We have a BOX scheduled in Autosys. When the BOX is triggered at its scheduled time, not all of the PDFs generated by one of the steps get copied, yet the job does not fail either. When we put the BOX on HOLD and run it step by step, all outputs get copied.
A good first troubleshooting step would be to add a short sleep/delay between the generation of the files and the downstream jobs.
A better way might be to use a file trigger or file watcher that only lets the downstream steps proceed once the files are all there (you can trigger on the number of files or whatever stat is appropriate).
If your copy step is a simple copy command without any validation (like copy abc_file_*.pdf), it will happily copy whatever files it sees at that moment, even if there are fewer than you intend.
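For illustration, here is a minimal sketch of a validating copy step in Python; the pattern, destination, and expected count are assumptions, not anything from your setup:

import glob
import shutil
import sys
import time

SRC_PATTERN = "/data/out/abc_file_*.pdf"   # assumed source pattern
DEST_DIR = "/data/archive"                 # assumed destination directory
EXPECTED_COUNT = 12                        # assumed number of PDFs per run
TIMEOUT_SECS = 600                         # give up after 10 minutes
POLL_SECS = 15

deadline = time.time() + TIMEOUT_SECS
files = glob.glob(SRC_PATTERN)
while len(files) < EXPECTED_COUNT and time.time() < deadline:
    time.sleep(POLL_SECS)
    files = glob.glob(SRC_PATTERN)

if len(files) < EXPECTED_COUNT:
    sys.exit(1)  # fail the job rather than silently copy a partial set

for f in files:
    shutil.copy2(f, DEST_DIR)

A non-zero exit code makes the Autosys job fail visibly instead of succeeding on a partial copy.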
I have a server which runs multiple jar files at the same time.
Currently we just make a bat file that calls java -jar xxxx.jar, and a console window pops up on the screen, so we know which one to terminate when we'd like to turn one of them off.
But as we progress we'd prefer those programs to run in the background, hence javaw -jar xxxx.jar instead.
However, when we open Task Manager, all it shows is many javaw.exe processes, without telling us which jar file each one is associated with.
Is there any parameter we can specify when we start javaw, so there's some indication in Task Manager's process list?
There is an official Microsoft tool, Process Explorer (from the Sysinternals suite), that can do what you want: hovering over a javaw.exe process (or opening its properties) shows the full command line, including the jar name.
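A common companion trick, sketched here with made-up property and jar names, is to pass a dummy system property before -jar; it does nothing at runtime but makes each command line self-describing:

javaw -Dapp.name=order-service -jar order-service.jar
javaw -Dapp.name=billing-service -jar billing-service.jar

The JDK's own jps -lvm tool also lists running JVMs along with their jar names and arguments.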
I am new to Autosys, and I am looking for a way to achieve the reverse of file watching.
I am looking for a job similar to a file watcher, which keeps on running as long as the file is present, and will only pass once the file is not present. The dependent job should only run if the file is not present.
There are a few questions:
1) I am not sure if I can achieve this with a filewatcher.
2) Does a filewatcher job stop running after it finds the file?
3) Is there any way to negate the success condition for a filewatcher job?
Or if anyone can point me to some good, extensive documentation on FileWatcher, that would help too.
Thanks
You cannot achieve this with a filewatcher job alone.
A filewatcher job stops running and goes to the SUCCESS state as soon as it finds the file in the defined path. There is no way to negate its success state.
This is because it is assumed that such functionality can easily be implemented with scripts.
You can achieve what you want with a batch script (Windows) or a shell script (Unix/Linux). The Autosys job triggers a script that checks for the file's presence in the expected location, sleeps for some time (say 20 seconds), and checks again; it exits with code 0 if the file is finally gone, or with a non-zero code if the file still has not moved after a certain number of checks.
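A minimal sketch of such a check in Python; the watched path, interval, and retry count here are assumptions:

import os
import sys
import time

WATCHED_FILE = "/data/inbox/feed.dat"  # assumed path being watched
SLEEP_SECS = 20                        # interval between checks
MAX_CHECKS = 30                        # give up after ~10 minutes

for _ in range(MAX_CHECKS):
    if not os.path.exists(WATCHED_FILE):
        sys.exit(0)   # file is gone: report SUCCESS to Autosys
    time.sleep(SLEEP_SECS)

sys.exit(1)  # file never went away: report FAILURE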
You can make downstream jobs dependent on this Autosys job as per your requirement.
Let me know if more clarification is needed on this.
I have been trying to run a script automatically using the steps that I found online.
I am trying to run the following R script called AUTO.R
Here is what the script contains:
library(quantmod)
obs <- last(Ad(getSymbols("SPY", auto.assign=FALSE)))
saveRDS(obs, "SAMPLE.rds")
When I build the application it prints "Workflow completed".
I believe all is well until the time comes to run the script. The Calendar alarm pops up on my desktop, but nothing runs. After a few minutes, the folder where the .rds file should be saved still does not contain anything.
Two suggested changes:
Your Automator task should be more like just /usr/local/bin/Rscript --vanilla /Users/rimeallthetime/Desktop/AUTO.R
You should explicitly set the path in saveRDS; i.e. saveRDS(obs, "/Users/rimeallthetime/Desktop/SAMPLE.rds")
Honestly, though, you should at least make a ~/bin dir (i.e. a directory called bin under your home directory, so in your case /Users/rimeallthetime/bin) and put both the workflow and the R script in there. I'd also suggest creating another directory for output files rather than using the desktop.
UPDATE
I just let the calendar event run, and this is really a crude way to automate what you want to do. You'd be better off in the long run using launchd; that way it's fully automated and requires no human intervention at all (but you may need to adjust your script to send you a notification or "append" to the rds file).
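For reference, a minimal sketch of a launchd agent that runs the script daily; the label, paths, and run time are assumptions. It would be saved as something like ~/Library/LaunchAgents/com.example.auto-r.plist and loaded with launchctl load:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Assumed label; any reverse-DNS name works -->
    <key>Label</key>
    <string>com.example.auto-r</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/Rscript</string>
        <string>--vanilla</string>
        <string>/Users/rimeallthetime/bin/AUTO.R</string>
    </array>
    <!-- Run every day at 18:30; adjust to taste -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>18</integer>
        <key>Minute</key>
        <integer>30</integer>
    </dict>
</dict>
</plist>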
I'm using mmap to load a big file with READ-ONLY access.
A cron job is expected to overwrite this file once daily with updated content.
My question is: how would my executable re-mmap the updated file to get at the updated content?
Do I need to call mmap again? How would my executable know at what time the file was updated?
What are the usual recommended ways and options available, and what are their tradeoffs?
If the cron job just opens the file and overwrites the data in it, the new data should be immediately reflected in your mapped memory. If the cron job creates a new file, writes the data there, and then calls rename() to move the new file on top of the old one, you need to unmap and close the old file, then reopen and re-mmap it to get the new data. The rename() approach is often used to avoid data corruption in case of a power failure while rewriting the file.
As for how you get notified, there are several possibilities. The easiest might be to have the cron job just send a signal (e.g. SIGUSR1) to your process. You can then react to the signal and do your work. Otherwise, you could use inotify (on Linux) to monitor your file for writes.
Another option is to periodically poll the file's mtime to detect changes. Personally, I'd avoid that route though, as it seems rather hacky and inelegant.
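To make the signal-plus-remap approach concrete, here is a minimal sketch in Python; the file path and signal choice are assumptions, and the same unmap/reopen/re-mmap sequence applies in C:

import mmap
import os
import signal

DATA_FILE = "/var/data/big_file.dat"  # assumed path rewritten by the cron job

def map_file(path):
    # Open read-only and map the whole file.
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    mm = mmap.mmap(fd, size, prot=mmap.PROT_READ)
    return fd, mm

fd, mm = map_file(DATA_FILE)

def on_updated(signum, frame):
    # The cron job signals us after its rename(); drop the old
    # mapping and map the freshly renamed file.
    global fd, mm
    mm.close()
    os.close(fd)
    fd, mm = map_file(DATA_FILE)

signal.signal(signal.SIGUSR1, on_updated)

while True:
    signal.pause()  # a real program would do its work here

The cron job's last step would then be something like kill -USR1 <pid> aimed at this process.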
I'm creating a script to process files provided to us by our users. Everything happens within the same UNIX system (running on Solaris 10).
Right now our design is this:
1. User places file into upload directory
2. Script placed on cron to run every 10 minutes
3. Script looks for files in upload directory, processes them, deletes them immediately afterward
For historical/legacy reasons, #1 can't change. Also, deleting the file after processing is a requirement.
My primary concern is concurrency. It is very likely that the situation will arise where the analysis script runs while an input file is still being written to. In this case, data will be lost, and this is (obviously) unacceptable.
Since we have no control over the users' chosen means of placing the input file, we cannot require them to obtain a file lock. As I understand it, file locks on UNIX are advisory only, so a user must choose to adhere to them.
I am looking for advice on best practices for handling this problem. Thanks
Obviously all the best solutions involve the client providing some kind of trigger indicating that it has finished uploading. That could be a second file, an atomic move of the file to a processing directory after writing it to a stage directory, or a REST web service. I will assume you have no control over your clients and are unable or unwilling to change anything about them.
In that case, you still have a few options:
1. You can use a pretty simple heuristic: check the file size, wait 5 seconds, check the file size again. If it didn't change, it's probably safe to process (see the sketch after this list).
2. If you have super-user privileges, you can use lsof to determine whether anyone has the file open for writing.
3. If you have access to the thing that handles the upload (HTTP, FTP, a setuid script that copies files?), you can of course put triggers in there.
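A sketch of the size-stability heuristic in Python; the upload directory, wait time, and the process() hook are assumptions:

import os
import time

UPLOAD_DIR = "/export/upload"  # assumed upload directory
SETTLE_SECS = 5                # how long the size must stay unchanged

def process(path):
    pass  # placeholder for your existing processing logic

def is_stable(path, settle_secs=SETTLE_SECS):
    # Compare the file size before and after a short wait.
    before = os.stat(path).st_size
    time.sleep(settle_secs)
    return os.stat(path).st_size == before

for name in os.listdir(UPLOAD_DIR):
    path = os.path.join(UPLOAD_DIR, name)
    if not os.path.isfile(path) or not is_stable(path):
        continue  # probably still being written; catch it on the next run
    process(path)
    os.remove(path)  # deleting after processing is a requirement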