Autosys - re-run failed jobs in a sequence [closed]

There are 50 Autosys jobs that run in sequence.
If a job in the sequence fails, I am looking for a way to start the failed job manually.
This could easily be done by calling the sendevent command, but in our production environment we have to raise a WR for that.
So, how can I restart a failed job manually without the sendevent command?
A possible solution is to make each job dependent on a File Watcher job, but that way we would have to create a File Watcher for every job. Is there a better approach?

The only way you can manually start a job is with sendevent. But if you want to avoid issuing it yourself, there is a workaround, for which you need access to at least one of these:
Updating data in a database table.
Creating a file on Unix.
If you have that, you can create a job that runs every 5 or 10 minutes. This job kicks off a shell script, say startJobs.sh.
In the shell script, you read a file, say jobsToStart.txt, which contains the list of jobs to start, and call sendevent with each job name from the file.
Once this script is deployed to production, you just need to put a job name into jobsToStart.txt and the script will start that job the next time it runs.
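A minimal sketch of such a script, assuming one job name per line in jobsToStart.txt and the AutoSys client tools on the PATH (the script and file names are the illustrative ones from above; the path is a placeholder):

#!/bin/sh
# startJobs.sh - force-start every job listed in jobsToStart.txt,
# then clear the file so jobs are not started twice.
JOB_FILE=/path/to/jobsToStart.txt
[ -s "$JOB_FILE" ] || exit 0
while IFS= read -r job; do
    [ -n "$job" ] || continue
    # FORCE_STARTJOB starts the job regardless of its current state
    sendevent -E FORCE_STARTJOB -J "$job"
done < "$JOB_FILE"
: > "$JOB_FILE"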
Another way would be similar, but instead of putting the job names in a file, you put them in a database table; the shell script then reads that table to find the job names.
Personally, I would suggest setting n_retrys: 1 so that a failed job restarts automatically, and if it fails again, have the support team do it. There is a reason for the access restriction, and if you feel that you need to be able to do this on your own, you will have to present your case to the business to be given access.
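For reference, a hedged JIL sketch of that attribute (the job name, command, and machine are placeholders):

/* retry once automatically before support gets involved */
insert_job: seq_job_10
job_type: c
command: /opt/batch/step10.sh
machine: prodbox
n_retrys: 1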

Related

How to make sure that a process on Gunicorn3 runs continuously? [closed]

I have a Flask app deployed on an Ubuntu server. I use Nginx and Gunicorn3. I know from this StackOverflow post that one of the correct ways to have the app running continuously on the server is to use something like:
gunicorn3 app:application --workers 3 --bind=127.0.0.1:1001 --daemon
But to be completely safe, since there are many other processes running on that server, I would like to find a way to check whether this process is indeed running, and if it is not running (for whatever reason), to start it again.
In addition to that question, to make the app start at reboot, I use the following cronjob:
@reboot bash ~/restart_processes.sh
where the .sh file executes the command line given above for starting Gunicorn3. Is this good practice, or is there a better way to achieve the same result?
Thank you!
I always deploy it in production with supervisorctl + Nginx. Check this tutorial. You can simply start, restart, or stop the app with a single command.
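For concreteness, a minimal supervisor program definition in that spirit might look like the following; the command comes from the question, while the program name, directory, and log paths are assumptions:

[program:flaskapp]
; assumed paths - adjust to your server
command=gunicorn3 app:application --workers 3 --bind=127.0.0.1:1001
directory=/home/ubuntu/app
autostart=true
autorestart=true
stdout_logfile=/var/log/flaskapp.out.log
stderr_logfile=/var/log/flaskapp.err.log

Note there is no --daemon flag here: supervisord wants the process in the foreground, and with autorestart=true it restarts the process itself if it dies, replacing both the @reboot cronjob and any hand-rolled check; sudo supervisorctl restart flaskapp then restarts it on demand. If supervisor is not an option, a cron-driven sketch like this (run every minute) covers the check-and-restart requirement from the question:

#!/bin/sh
# check_gunicorn.sh - restart gunicorn3 if it is not running
if ! pgrep -f "gunicorn3 app:application" > /dev/null; then
    gunicorn3 app:application --workers 3 --bind=127.0.0.1:1001 --daemon
fi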

Git "Pull" from within RStudio [closed]

I am setting up an R project in RStudio (server version, if it makes a difference). I cloned the whole project from my GitHub account using git clone.
Problem:
I would now like to add a script that performs a git pull every time the user runs it in RStudio.
The idea is that the user always has the most recent versions of all files in their project.
Goal:
This seems easy enough to do using the graphical user interface (click the Git tab, then Pull), but I could not figure out how to do it in written code, something like:
github_pull(branch)
That is, it should replicate the git pull command I run in the terminal, but from within the R script, thus avoiding a switch to the terminal. Is this possible?
The devtools package has a command of that name, but it seems to do something different. I could not find anything on here or in the RStudio help either -- any pointers are much appreciated!
Solution
Based on @Mir Henglin's spot-on solution below, here is what worked for me:
system("git pull")
However, this only worked because I had initially cloned my repository using the SSH link (rather than HTTPS), as described here.
See ?system and ?shell. These functions allow one to run shell commands from within R. I imagine you could call git pull pretty easily using them.
EDIT: Here is an example:
system('pwd')
/Users/mirhenglin/projects/R/
And
system('git pull --help')

How to set up / configure R for a dev, stage, and production environment on one server? [closed]

A colleague needs to set up dev, stage, and production environments on our server, and he is asking what this means for how we run our R code. Frankly, I do not know. My intuitive solution would be to have three different servers, so the installed R packages do not conflict; but for now, the environments have to share the same server. I am not sure how to achieve this. How can we run several versions of a package side by side? For example, using different .libPaths to be able to host different packages side by side?
What is the proper way to set this up?
PS. I hope I have expressed myself clearly enough, as I do not have any experience with this stuff.
Every GNU program allows you to set a prefix for its installation (and more, e.g. a suffix or prefix appended to the executable name).
We use that in the 'how to build R-devel' script I posted years ago and still use, directly and e.g. in the Dockerfile scripts for Rocker.
This generalizes easily. Use different configurations (with/without (memory) profiling, UBSAN, ...) and/or versions to your heart's content, place them next to each other in /opt/R or /usr/local/lib/R or ..., and just use them, as every R installation has its own separate file tree. One easy way to access them separately is via $PATH; another is to have front-end scripts (or shell aliases) R-prod, R-qa, R-dev, and so on, as sketched below.
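A hedged sketch of that layout, assuming builds from an R source tree (the prefixes and wrapper name are illustrative):

# two side-by-side installations, each under its own prefix
./configure --prefix=/opt/R/prod && make && make install
make distclean
./configure --prefix=/opt/R/dev --enable-memory-profiling && make && make install

# front-end wrapper, saved e.g. as /usr/local/bin/R-prod:
#!/bin/sh
exec /opt/R/prod/bin/R "$@"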
You will have to think through whether you want a common .libPaths() (for, say, shared dependencies) or whether you want to reinstall all libraries for each environment. The latter is the default.

AWS EC2 RStudio Server Error Occurred During Transmission [closed]

After over a month, I have managed to piece together how to set up an AWS EC2 server. It has been very hard to upload files, as there are very conservative size limits when this is done via the upload button in RStudio Server. The error message when this is attempted is "Unexpected empty response from server".
I am not unique in this respect, e.g. Trouble Uploading Large Files to RStudio using Louis Aslett's AMI on EC2.
I have managed to use the following commands through PuTTY, and this has allowed me to upload files via either FileZilla or WinSCP.
sudo chown -R ubuntu /home/rstudio
sudo chmod -R 755 /home/rstudio
Once I use these commands and log out, I can no longer access RStudio on the instance in future logins. I can log back in to my instance via my browser, but I get the error message:
Error Occurred During Transmission
Everything is fine except that once I use PuTTY I lose browser access to my instances.
I think this is because the command changes ownership or something similar. Should I be using a different command?
If I don't use these commands, I cannot connect between FileZilla/WinSCP and the instance.
If anyone is thinking of posting a comment that this should be closed as a hardware issue: I don't have a problem with hardware. I am interested in the correct commands.
Thank you :)
OK, so eventually I realised what was going on here. The default home directory size for AWS is less than 8-10 GB regardless of the size of your instance, and as I was trying to upload into the home directory, there was not enough room. An experienced Linux user would not have fallen into this trap, but hopefully other Windows users new to this who come across this problem will see this. If you upload into a different drive on the instance, the problem can be solved. As the Louis Aslett RStudio AMI is based in this 8-10 GB space, you will have to set your working directory outside it, i.e. outside the home directory; this is not intuitively apparent from the RStudio Server interface. While this is an advanced forum and this is a rookie error, I am hoping no one deletes this question, as I spent months on this and I think someone else will too.
Don't change the rights of /home/rstudio unless you know what you are doing; it may cause unexpected issues (and it actually does cause issues in your case). Instead, copy the files with FileZilla or WinSCP to a temporary location (say /tmp), then ssh into your instance with PuTTY and move the files into the rstudio directory with sudo (e.g. sudo mv /tmp/myfile /home/rstudio).
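A hedged end-to-end sketch of that workflow (the key file, address, and file names are placeholders):

# from the local machine: copy to /tmp, where no special rights are needed
scp -i mykey.pem bigdata.csv ubuntu@<your address>:/tmp/
# then on the instance (via PuTTY/ssh): move the file into place and hand
# ownership to the rstudio user instead of loosening permissions
sudo mv /tmp/bigdata.csv /home/rstudio/
sudo chown rstudio:rstudio /home/rstudio/bigdata.csv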

How do I access data from my personal computer on my AMI instance running RStudio server [closed]

I have recently set up RStudio on an AMI EC2 instance using the process generously laid out by Louis Aslett on his website. But in an embarrassing turn of events, I can't access the data I need because it resides on my personal computer. I am new to cloud computing and have zero working knowledge of Linux, but I do know SQL and R well. Any help or suggestions would be greatly appreciated.
Have you tried the "Upload" button in the "Files" pane of RStudio?
Use scp in a terminal.
To put files onto your remote server:
Example: if the files are located locally in ~/mylocalfolder and you want to put them in /home/rstudio/mydata, you would execute in a terminal:
scp ~/mylocalfolder/*.csv ubuntu@<your address>:/home/rstudio/mydata/
Note that if you want to access them as a different user, e.g. rstudio, you need to change the owner of the files using chown.
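For example, run on the instance (path taken from the example above):

# make the rstudio user the owner of the uploaded files
sudo chown -R rstudio:rstudio /home/rstudio/mydata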
To grab data from your remote server:
Example: if the files are located in /home/rstudio/mydata and you want to put them locally in ~/mylocalfolder, you would use:
scp ubuntu@<your address>:/home/rstudio/mydata/*.Rda ~/mylocalfolder
I use the RStudio AMI all the time, and what works for me is to use Dropbox. I can't remember exactly how I did it, but I think I started a shell from within RStudio and installed Dropbox from the command line.
This link has a little more info:
http://www.louisaslett.com/RStudio_AMI/#comment-1041983219
