Salt-api doesn't include unsuccessful queries - salt-stack

I have installed salt 2016.3.2 on several machines and also configured salt-api to send commands using REST calls. It works fine, but I have a minor problem.
When I call a function from an execution module (say test.ping), I can see the disconnected minions' IDs in the CLI output, but the same command through salt-api doesn't include the failed minions' IDs.
For example:
salt '*' test.ping
linux1:
    True
win1:
    Minion did not return. [Not connected]
curl -b cookies.in -sSk http://localhost:8000 -H 'Accept: application/json' -d client=local -d tgt='*' -d fun='test.ping'
{"return" : [{"linux1" : true}]}
How can I see all nodes, regardless of success or failure, in the REST result (including "win1" in the example above)?
Thanks.
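One possible workaround (my sketch, not part of the original question): ask the master which minions are down via the manage.status runner and merge that with the job result. This assumes your salt-api external auth setup allows the runner client; cookies.in is the session from the example above.
curl -b cookies.in -sSk http://localhost:8000 \
    -H 'Accept: application/json' \
    -d client=runner -d fun='manage.status'
An illustrative response would be {"return": [{"up": ["linux1"], "down": ["win1"]}]}.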

Related

AutoSys API - list jobs between RunEndTime

I need to list jobs between specific runEndTime values. I tried the following but didn't receive anything from the server (there are jobs with runEndTime after 2010):
curl -u user:pass 'https://autosys-apiserver:9999/AEWS/job-run-info?filter=runEndTime>2010-02-25T10:52' -k
The following works as expected:
curl -u user:pass 'https://autosys-apiserver:9999/AEWS/job-run-info?filter=runEndTime==2010-02-25T10:52' -k
The other question is how to run a query to list all jobs between two runEndTime values.
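One thing worth checking (my guess, not confirmed by anything in the question): a raw > is not a valid character in a URL query string, and some servers silently drop or reject the clause. Percent-encoding it may help:
curl -u user:pass 'https://autosys-apiserver:9999/AEWS/job-run-info?filter=runEndTime%3E2010-02-25T10:52' -k
For a range, if the AEWS filter grammar allows combining two clauses (an assumption on my part), encoding a runEndTime>=start clause and a runEndTime<=end clause the same way would be the place to start.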

Can I have a conflict in Riak when using a vclock obtained by a GET?

I want to know if I can have a conflict in this scenario:
#!/usr/bin/env bash
# First PUT: create the object (no vclock yet, since the key is new)
curl -XPUT -d '{"bar":"baz"}' \
    -H "Content-Type: application/json" \
    http://127.0.0.1:8098/riak/obj/1
# GET the headers and capture the object's current vclock
response=$(curl -sI http://127.0.0.1:8098/riak/obj/1 | grep 'X-Riak-Vclock:' | egrep -o ' .*$')
# Second PUT: update the object, sending the vclock back to Riak
curl -v -XPUT -d '{"bar":"foo"}' \
    -H "Content-Type: application/json" \
    -H "X-Riak-Vclock: $response" \
    http://127.0.0.1:8098/riak/obj/1
In other words:
First, there is no object for key 1, and I PUT the {"bar":"baz"} value via the HTTP API.
Then, I read the value with a GET and store the vclock in a variable.
Finally, I PUT a new value, {"bar":"foo"}, for key 1.
Is there a case where I can still end up with {"bar":"baz"} for key 1? If Riak has a conflict, will it be resolved with the vclock?
Thanks!
It depends on how your Riak database is configured, either globally or through changes to the default configuration of the bucket you're using. If you keep the default config, your second PUT (with the vclock) might:
- fail, if someone updated the key behind your back (rare) and the vclock data you have is already obsolete. You then need to re-read the value and update it again; best is to have a retry mechanism (see the sketch below).
- fail, if the write consistency constraints you have are too strict and too many nodes are down (rare). Usually the default read and write configs are sane.
- succeed, if the vclock data is still valid for this key (most of the time).
In case it succeeds, it might be that the network topology was in a split-brain situation. In that case, Riak will resolve the issue itself using the vclock data.
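The retry mechanism mentioned above could look roughly like this (a minimal sketch of mine, assuming the bucket is configured so a stale vclock makes the PUT fail, and that 204 is the success status; the key and URL are taken from the question):
#!/usr/bin/env bash
for attempt in 1 2 3; do
    # Re-read the key to pick up the current vclock
    vclock=$(curl -sI http://127.0.0.1:8098/riak/obj/1 \
        | grep -i 'X-Riak-Vclock:' | awk '{print $2}' | tr -d '\r')
    # Retry the conditional update with the fresh vclock
    status=$(curl -s -o /dev/null -w '%{http_code}' -XPUT \
        -H "Content-Type: application/json" \
        -H "X-Riak-Vclock: $vclock" \
        -d '{"bar":"foo"}' \
        http://127.0.0.1:8098/riak/obj/1)
    [ "$status" = "204" ] && break   # write accepted; stop retrying
    sleep 1                          # back off before the next attempt
done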

Nginx Plus API back end server drain mode

I am working with the Nginx Plus API, trying to put the backend servers into drain mode. While doing this via curl, I get the error below:
"method":"PATCH","error":{"status":415,"text":"json error","code":"JsonError"},
The format I followed:
.\curl.exe -u username -X PATCH -d '{"drain":true}' baseserverurl
You have to escape the inner double quotes and use: -X PATCH -d '{\"drain\":true}'
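Putting it together, the full command would look like this (hedged: baseserverurl stands in for the real upstream-server endpoint, and the Content-Type header is my addition, which some servers require before they parse the body as JSON):
.\curl.exe -u username -H "Content-Type: application/json" -X PATCH -d '{\"drain\":true}' baseserverurl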

containerized nginx log rotation with logrotate

Nginx doesn't have native log rotation, so an external tool such as logrotate is required. Nginx presents a challenge in that the logs have to be reopened after rotation. You can send a USR1 signal to it if the pid is available in /var/run.
But when running in a Docker container, the pid file is missing from /var/run (and the pid actually belongs to the host, since it is technically a host process).
If you don't reopen the logs, nginx doesn't log anything at all, though it otherwise continues to function as a web server, reverse proxy, etc.
You can get the process id from the Pid attribute using docker inspect and use kill -USR1 {pid} to have nginx reopen the logs.
Here's the /etc/logrotate.d/nginx file I created:
/var/log/nginx/access.log
{
    size 2M
    rotate 10
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        docker inspect -f '{{ .State.Pid }}' nginx | xargs kill -USR1
    endscript
}
If you want to run logrotate in a dedicated container (e.g. to rotate both nginx's logs and Rails' file log) rather than on the host machine, here's how I did it. The trickiest part by far was, as above, getting the reload signals to nginx, Rails, etc., so that they would create and log to fresh logfiles post-rotation.
Summary:
- put all the logs on a single shared volume
- expose the Docker socket to the logrotate container (see the docker run sketch after this list)
- build a logrotate image with logrotate, cron, curl, and jq
- build logrotate.conf with postrotate calls that use the Docker exec API, as detailed below
- schedule logrotate using cron in the container
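To make the first two points concrete, here is a hypothetical docker run for the logrotate container (the image, volume, and container names are illustrative, not from the original answer):
docker run -d --name logrotate \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v app_logs:/var/log/app \
    my-logrotate-image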
The hard part:
To get nginx (et cetera) to reload and thus connect to fresh log files, I sent exec commands to the other containers using Docker's API via the socket. It expects a POST with the command in JSON format, to which it responds with an exec instance ID. You then need to explicitly run that instance.
An example postrotate section from my logrotate.conf file:
postrotate
    exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
        -H "Content-Type: application/json" \
        -d '{"cmd": ["nginx", "-s", "reopen"]}' \
        http:/v1.41/containers/hofg_nginx_1/exec \
        | jq -r '.Id'`
    curl -X POST --unix-socket /var/run/docker.sock \
        -H "Content-Type: application/json" \
        -d '{"Detach": true}' \
        http:/v1.41/exec/"$exec_id"/start
endscript
Commentary on the hard part:
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
This is the first of two calls to curl, saving the result into a variable to use in the second. Also, don't forget to (insecurely) mount the socket into the container: '/var/run/docker.sock:/var/run/docker.sock'
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
Docker's API docs say the command can be a string or array of strings, but it only worked for me as an array of strings. I used the nginx command line tool, but something like 'kill -SIGUSR1 $(cat /var/run/nginx.pid)' would probably work too.
http:/v1.41/containers/hofg_nginx_1/exec \
I hard-coded the container name; if you're dealing with something more complicated, you're probably also using a fancier logging service.
| jq -r '.Id'`
The response is JSON-formatted; I used jq to extract the id (excuse me, 'Id') to use next.
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
The Detach: true is probably not necessary; it's just a placeholder for POST data that was handy while debugging.
http:/v1.41/exec/"$exec_id"/start
Making use of the exec instance ID returned by the first curl to actually run the command.
I'm sure it will evolve (say with error handling), but this should be a good starting point.
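For instance, a first error-handling step (my addition, not from the original) could be to abort the postrotate script when the exec-create call returns no usable ID:
if [ -z "$exec_id" ] || [ "$exec_id" = "null" ]; then
    # jq prints "null" when the .Id field is missing from the response
    echo "failed to create exec instance for nginx reload" >&2
    exit 1
fi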

What do -v and -k mean in curl?

I read this whole page
http://conqueringthecommandline.com/book/curl#cha-3_footnote-1
and I didn't see any -v or -k options for cURL
I have this curl request:
curl -v -k --user "bla/test#bla.com:BlaBla" \
"theUrlToTheServer" | xmllint --format - > something.xml
I started by trying to understand what -v and -k mean, but I couldn't. Can you help, please?
-k, --insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered "insecure" fail unless -k, --insecure is used.
See this online resource for further details: https://curl.haxx.se/docs/sslcerts.html
-v, --verbose
This makes curl print everything it does while executing: request and response headers, connection details, and so on (the output goes to stderr).
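For example, to watch the full exchange against a host with a self-signed certificate (the hostname is just a placeholder):
curl -v -k https://self-signed.example/ -o /dev/null
Here -v shows the handshake and headers on stderr, while -k lets the transfer proceed even though the certificate can't be verified.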
