AutoSys API - list jobs between RunEndTime

I need to list jobs between specific RunEndTime values. I tried the following but didn't get anything back from the server (there are jobs with RunEndTimes later than 2010):
curl -u user:pass 'https://autosys-apiserver:9999/AEWS/job-run-info?filter=runEndTime>2010-02-25T10:52' -k
The following works as expected:
curl -u user:pass 'https://autosys-apiserver:9999/AEWS/job-run-info?filter=runEndTime==2010-02-25T10:52' -k
The other question is: how do I run a query to list all jobs between two RunEndTimes?
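One thing worth checking, offered only as a guess since I haven't verified it against the AEWS API: the > character is not valid unencoded in a URL query string, so the server may be discarding that filter while the == form happens to get through. Letting curl URL-encode the filter, and combining two conditions for a range, could look like the sketch below; the AND separator and the range syntax are assumptions, so check the AutoSys Web Services documentation for the exact filter grammar.
# Sketch only: percent-encode the comparison operator so it survives URL parsing (%3E is '>').
curl -u user:pass -k 'https://autosys-apiserver:9999/AEWS/job-run-info?filter=runEndTime%3E2010-02-25T10:52'

# For a range, one guess is to AND two conditions in the same filter;
# -G --data-urlencode lets curl do the encoding for you.
curl -u user:pass -k -G 'https://autosys-apiserver:9999/AEWS/job-run-info' \
  --data-urlencode 'filter=runEndTime>2010-02-25T10:52;runEndTime<2010-03-25T10:52'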

Related

AWS Code Deploy - Script at specified location: scripts/validate_service.sh failed with exit code 1

My deployments fail on the last step, ValidateService, with the error message:
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems.
The events log is empty ("No lines are selected").
My validate_service.sh contains:
#!/bin/bash
# verify we can access our webpage successfully
curl -v --silent localhost:80 2>&1 | grep Welcome
Can someone advise what I should change?
The script's return value is what matters. Yours looks good to me. I just added a couple of seconds of wait until the application starts up.
If your script pipes commands together, you should also add set -o pipefail so the whole pipeline fails when any one of its commands fails.
Check out my script:
#!/bin/bash
sleep 5
curl http://localhost:3009 | grep Welcome
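Putting both suggestions together, a hedged version of the hook script could look like this; the port and the "Welcome" marker come from the scripts above, while the sleep duration and the --fail flag are my own additions:
#!/bin/bash
# Fail the whole pipeline if curl fails, not only if grep fails.
set -o pipefail

# Give the application a few seconds to finish starting before probing it.
sleep 5

# --fail makes curl return non-zero on HTTP errors; a non-zero exit here
# marks the ValidateService lifecycle event as failed.
curl --silent --fail http://localhost:80 | grep Welcome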

Getting output of curl command in a variable

I need to fetch data out of a cloud platform. The export process has two steps:
First, make a POST call with the username/password details. This returns XML output with a jobid in the response.
Fetch the jobid from the first response, concatenate it into a new URL, and then make a GET call (execute curl again) using this new URL; the data then comes back in a JSON response.
What I did:
I am able to make the first API call and get the jobid. Next, I concatenated this jobid to build the new URL and saved the complete curl statement in a variable (let's call it cmd_second_api_call). This variable contains the complete curl statement that I need to execute.
So I did out=$($cmd_second_api_call), since I want to execute the second curl statement and store its output in a variable.
Problem:
When I execute out=$($cmd_second_api_call), the out variable ends up empty. I verified that echoing $cmd_second_api_call prints the curl command correctly, and if I run that printed command directly at my prompt I do see the output. What am I missing here? How do I get the curl output into a variable?
Thanks!
r=$(curl -k -u user:password static_url -d <data I need to pass>)
jobid=$(echo $r | sed -n 's:.*<jobid>\(.*\)<\/jobid>.*:\1:p')
second_url="abc.com/${jobid}/result/ --get -d output=json"
cmd_second_api_call="curl -u user:password -k ${second_url}"
out=$($cmd_second_api_call)
echo $out
Putting a command in a variable or using variables without quotes can be dangerous.
I suggested:
out=$(curl -u user:password -k ${second_url})
# or
out=$(curl -u user:password -k abc.com/${jobid}/result/ --get -d output=json)
# and
echo "$out"
Somehow this helped, together with a sleep 5 between the two curl calls. You wouldn't expect a lag at the remote server between returning a valid jobid and enabling the interface for that jobid, but perhaps it is some kind of "defence" against unauthorized calls with random jobids.
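For completeness: if the command really has to be built up dynamically, storing it in a bash array instead of a flat string avoids the quoting and word-splitting problems mentioned above. A minimal sketch, reusing the hypothetical URL pieces from the question:
# Each array element stays a single argument, so quoting is preserved.
cmd_second_api_call=(curl -u user:password -k "abc.com/${jobid}/result/" --get -d output=json)

# "${array[@]}" expands element-by-element instead of re-splitting a string.
out=$("${cmd_second_api_call[@]}")
echo "$out"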

Get job status in CDSW

I have some R and Python scripts in CDSW (Cloudera Data Science Workbench). I created a shell script to run them with curl -v -XPOST.
How do I get the status of a job from the CDSW API?
Hi, it's been a while since this question was posted, but hopefully the answer can still be useful to someone :)
After you run:
curl -v -XPOST http://cdsw.example.com/api/v1/projects/<$USERNAME>/<$PROJECT_NAME>/jobs/<$JOB_ID>/start --user "API_KEY:" --header "Content-type: application/json"
You should be able to see in the output a URL that looks like this:
http://cdsw.example.com/api/v1/projects/<$USERNAME>/<$PROJECT_NAME>/dashboards/<$ID>
You can then use that URL to retrieve the job status, for example by piping the response through jq (or without jq, so you can also see the status in the output along with the other fields returned):
curl -v http://cdsw.example.com/api/v1/projects/<$USERNAME>/<$PROJECT_NAME>/dashboards/<$ID> --user "API_KEY:" | jq '.status'
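If you want a script to wait until the job finishes, a small polling loop over that same dashboards endpoint is one option. This is only a sketch built on the URLs above; the set of terminal status values (succeeded, failed, stopped) is an assumption, so check what your CDSW version actually returns:
#!/bin/bash
# Poll the dashboard URL returned by the start call until the job reaches a final state.
STATUS_URL="http://cdsw.example.com/api/v1/projects/<$USERNAME>/<$PROJECT_NAME>/dashboards/<$ID>"

while true; do
  status=$(curl -s "$STATUS_URL" --user "API_KEY:" | jq -r '.status')
  echo "current status: $status"
  case "$status" in
    succeeded|failed|stopped) break ;;   # terminal states are an assumption here
  esac
  sleep 10
done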

containerized nginx log rotation with logrotate

Nginx doesn't have native log rotation, so an external tool, such as logrotate, is required. Nginx presents a challenge in that the logs have to be reopened post rotation. You can send a USR1 signal to it if the pid is available in /var/run.
But when running in a docker container, the pid file is missing in /var/run (and the pid actually belongs to the host, since it is technically a host process).
If you don't reopen the logs, nginx doesn't log anything at all, though it continues to function otherwise as web server, reverse proxy, etc.
You can get the process id from the Pid attribute using docker inspect and use kill -USR1 {pid} to have nginx reopen the logs.
Here's the /etc/logrotate.d/nginx file I created:
/var/log/nginx/access.log
{
size 2M
rotate 10
missingok
notifempty
compress
delaycompress
postrotate
docker inspect -f '{{ .State.Pid }}' nginx | xargs kill -USR1
endscript
}
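To check the configuration without waiting for the 2M threshold, you can dry-run and then force a rotation, and confirm that nginx reopened its log (this assumes the container publishes port 80 on the host):
# Dry run: show what logrotate would do without touching any files.
logrotate -d /etc/logrotate.d/nginx

# Force a rotation, then confirm a fresh access.log is being written to.
logrotate -f /etc/logrotate.d/nginx
curl -s http://localhost/ > /dev/null
tail -n 1 /var/log/nginx/access.log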
If you want to run logrotate in a dedicated container (e.g. to rotate both the nginx logs and Rails' file log) rather than on the host machine, here's how I did it. The trickiest part by far was, as above, getting the reload signals to nginx, Rails, etc., so that they would create and log to fresh log files post-rotation.
Summary:
put all the logs on a single shared volume
export the docker socket to the logrotate container (see the docker run sketch after this list)
build a logrotate image with logrotate, cron, curl, and jq
build logrotate.conf with postrotate calls using docker exec API as detailed below
schedule logrotate using cron in the container
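A sketch of how such a logrotate container might be started; the volume name, mount points, and image name are placeholders of mine, not from the setup above:
# app_logs is the single shared volume that nginx, Rails, etc. also write their logs to;
# mounting docker.sock hands the Docker API to the container (insecure, as noted below).
docker run -d --name logrotate \
  -v app_logs:/var/log/shared \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-logrotate-image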
The hard part:
To get nginx (et cetera) to reload and thus connect to fresh log files, I sent exec commands to the other containers using Docker's API via the socket. It expects a POST with the command in JSON format, to which it responds with an exec instance ID. You then need to explicitly start that instance.
An example postrotate section from my logrotate.conf file:
postrotate
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
http:/v1.41/containers/hofg_nginx_1/exec \
| jq -r '.Id'`
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
http:/v1.41/exec/"$exec_id"/start
endscript
Commentary on the hard part:
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
This is the first of two calls to curl, saving the result into a variable to use in the second. Also, don't forget to (insecurely) mount the socket into the container: '/var/run/docker.sock:/var/run/docker.sock'.
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
Docker's API docs say the command can be a string or array of strings, but it only worked for me as an array of strings. I used the nginx command line tool, but something like 'kill -SIGUSR1 $(cat /var/run/nginx.pid)' would probably work too.
http:/v1.41/containers/hofg_nginx_1/exec \
I hard-coded the container name; if you're dealing with something more complicated, you're probably also using a fancier logging service.
| jq -r '.Id'`
The response is JSON-formatted; I used jq to extract the id (excuse me, 'Id') to use next.
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
The Detach: true is probably not necessary; it was just a placeholder for POST data that was handy while debugging.
http:/v1.41/exec/"$exec_id"/start
Making use of the exec instance ID returned by the first curl to actually run the command.
I'm sure it will evolve (say with error handling), but this should be a good starting point.
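One piece the walkthrough doesn't show is the scheduling item from the summary list; a minimal way to wire that up inside the container (the interval is an arbitrary choice of mine) is:
# Install a cron entry that runs logrotate every 15 minutes; crond then needs
# to run as the container's foreground process.
echo '*/15 * * * * /usr/sbin/logrotate /etc/logrotate.conf' | crontab -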

Salt-API doesn't include minions that did not return

I have installed Salt 2016.3.2 on several machines and also configured salt-api to send commands using REST calls. It works fine, but I have a minor problem.
When I call a function from an execution module (say test.ping), I can see the IDs of disconnected minions in the CLI output, but the same command through salt-api doesn't include the IDs of the minions that failed to return.
For example:
salt '*' test.ping
linux1:
True
win1:
Minion did not return. [Not connected]
curl -b cookies.in -sSk http://localhost:8000 -H 'Accept: application/json' -d client=local -d tgt='*' -d fun='test.ping'
{"return" : [{"linux1" : true}]}
How can I see all nodes in the REST result regardless of success or failure (including "win1" in the above example)?
Thanks.
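A workaround sketch, not a confirmed answer: the result of a client=local call only contains minions that responded, but you can ask the master separately which minions are up or down via the manage.status runner and use that to spot the missing IDs. Whether this matches your Salt version's behaviour is an assumption worth verifying:
# Normal execution: only minions that answered appear in the result.
curl -b cookies.in -sSk http://localhost:8000 -H 'Accept: application/json' \
  -d client=local -d tgt='*' -d fun='test.ping'

# manage.status reports minions as "up" or "down", so win1 would show up under "down".
curl -b cookies.in -sSk http://localhost:8000 -H 'Accept: application/json' \
  -d client=runner -d fun='manage.status'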
