Nginx Plus API backend server drain mode - nginx

I am working with the NGINX Plus API, trying to put the backend servers into drain mode. When I do this via curl, I get the error below:
"method":"PATCH","error":{"status":415,"text":"json error","code":"JsonError"},
The format I followed:
.\curl.exe -u username -X PATCH -d '{"drain":true}' baseserverurl

When calling from PowerShell, the inner double quotes get stripped before curl.exe sees them, so the server receives invalid JSON (hence the 415 JsonError). You have to escape them: -X PATCH -d '{\"drain\":true}'
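Put together, a full command might look like this (a sketch only: the host, port, API version, upstream name, and server ID are hypothetical placeholders, since the question elides the real URL; the NGINX Plus API path has the form /api/<version>/http/upstreams/<name>/servers/<id>):
.\curl.exe -u username -X PATCH -d '{\"drain\":true}' `
    http://nginx-host:8080/api/9/http/upstreams/backend/servers/0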

Related

containerized nginx log rotation with logrotate

Nginx doesn't have native log rotation, so an external tool, such as logrotate, is required. Nginx presents a challenge in that the logs have to be reopened post rotation. You can send a USR1 signal to it if the pid is available in /var/run.
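On a plain host that is a one-liner (assuming the default pid file location):
kill -USR1 $(cat /var/run/nginx.pid)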
But when nginx runs in a Docker container, the pid file is missing from /var/run inside the container (and the pid actually belongs to the host, since it is technically a host process).
If you don't reopen the logs, nginx doesn't log anything at all, though it otherwise continues to function as a web server, reverse proxy, etc.
You can get the process id from the Pid attribute using docker inspect and use kill -USR1 {pid} to have nginx reopen the logs.
Here's the /etc/logrotate.d/nginx file I created:
/var/log/nginx/access.log
{
    size 2M
    rotate 10
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        docker inspect -f '{{ .State.Pid }}' nginx | xargs kill -USR1
    endscript
}
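You can sanity-check the file with logrotate's debug mode, which parses the config and reports what it would do without actually rotating anything:
logrotate -d /etc/logrotate.d/nginx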
If you want to run logrotate in a dedicated container (e.g. to rotate both the nginx logs and Rails' file log) rather than on the host machine, here's how I did it. The trickiest part by far was, as above, getting the reload signals to nginx, Rails, etc., so that they would create and log to fresh logfiles post-rotation.
Summary:
put all the logs on a single shared volume
export docker socket to the logrotate container
build a logrotate image with logrotate, cron, curl, and jq (a sketch follows this list)
build logrotate.conf with postrotate calls using docker exec API as detailed below
schedule logrotate using cron in the container
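A minimal Dockerfile sketch for such an image, assuming Alpine (the package names, paths, and hourly schedule are illustrative, not from the original setup):
FROM alpine:3.19
# logrotate does the rotating; curl and jq talk to the Docker API in postrotate
RUN apk add --no-cache logrotate curl jq
COPY logrotate.conf /etc/logrotate.conf
# BusyBox cron: run logrotate hourly (adjust the schedule to taste)
RUN echo '0 * * * * /usr/sbin/logrotate /etc/logrotate.conf' > /etc/crontabs/root
CMD ["crond", "-f", "-l", "2"]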
The hard part:
To get nginx (et cetera) to reload and thus connect to fresh log files, I sent exec commands to the other containers using Docker's API via the socket. It expects a POST with the command in JSON format, to which it responds with an exec instance ID. You then need to explicitly start that instance.
An example postrotate section from my logrotate.conf file:
postrotate
    # 1) create an exec instance in the nginx container (returns its Id)
    exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
        -H "Content-Type: application/json" \
        -d '{"cmd": ["nginx", "-s", "reopen"]}' \
        http://localhost/v1.41/containers/hofg_nginx_1/exec \
        | jq -r '.Id'`
    # 2) start the exec instance to actually run 'nginx -s reopen'
    curl -X POST --unix-socket /var/run/docker.sock \
        -H "Content-Type: application/json" \
        -d '{"Detach": true}' \
        http://localhost/v1.41/exec/"$exec_id"/start
endscript
Commentary on the hard part:
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
This is the first of two calls to curl, saving the result into a variable for use in the second. Also, don't forget to (insecurely) mount the socket into the container: '/var/run/docker.sock:/var/run/docker.sock'
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
Docker's API docs say the command can be a string or an array of strings, but it only worked for me as an array of strings. I used the nginx command-line tool, but something like 'kill -SIGUSR1 $(cat /var/run/nginx.pid)' would probably work too.
http://localhost/v1.41/containers/hofg_nginx_1/exec \
I hard-coded the container name; if you're dealing with something more complicated, you're probably also using a fancier logging service.
| jq -r '.Id'`
The response is JSON-formatted; I used jq to extract the ID (excuse me, 'Id') to use next.
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
The 'Detach': true is probably not necessary; it was just a placeholder for POST data that was handy while debugging.
http://localhost/v1.41/exec/"$exec_id"/start
Making use of the exec instance ID returned by the first curl to actually run the command.
I'm sure it will evolve (say with error handling), but this should be a good starting point.

How can I resolve the error "certificate subject name does not match target host name"?

curl -X GET --header 'Accept: application/json' --header 'Authorization: Bearer 90d2c018-73d1-324b-b121-a162cf870ac0' 'https://172.17.0.1:8243/V1.0.2/stock/getNA?name=te'
The terminal printed:
curl: (51) SSL: certificate subject name (localhost) does not match target host name '172.17.0.1'
However, after I changed "172.17.0.1" to "localhost", it worked and I got the result.
Why? Is there a wrong configuration somewhere?
Meanwhile, there isn't any log information in the file http_access.log.
When the SSL handshake happens, the client verifies the server certificate. As part of that verification, the client tries to match the Common Name (CN) of the certificate against the hostname in the URL; if they differ, hostname verification fails. In your case the certificate has localhost as its CN, so invoking the API by IP address fails. When you create the certificate, the CN value can be a single hostname, multiple hostnames, or a wildcard hostname.
For more details, see:
Fixing Hostname Verification
What is the SSL Certificate Common Name?
The CN of the default WSO2 certificate is localhost. Therefore you have to use localhost as the hostname when you send requests. Otherwise, the hostname verification fails.
If you want to use any other hostname, you should generate a certificate with that hostname, as Jena has mentioned.
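For illustration, here is one way to generate a self-signed certificate that covers both localhost and the IP address (a sketch: it assumes OpenSSL 1.1.1+ for -addext, and the file names are placeholders; note that modern clients match against the Subject Alternative Name rather than the CN):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt \
    -subj "/CN=localhost" \
    -addext "subjectAltName=DNS:localhost,IP:172.17.0.1"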
I actually had this problem and found a fix:
I was requesting a URI like 'http://some.example', but the variable for HTTPS was set to '1'
I had this problem when trying to pull from a Git repository after I'd added a new SSH key and my Git repository moved.
In the fray, Git's idea of the host's CN got confused. The solution for me was to delete the local directory and re-clone it via SSH. As the other users hinted, you can't change the CN of a website's certificate, so you'll have to fix whatever setting on your computer holds the wrong CN, or avoid HTTPS altogether (and use SSH, as I did).
As others have hinted, this is failing because TLS negotiation checks that the certificate matches the hostname in the URL.
What's new is that curl now supports this scenario via the --connect-to option. So, if your curl is sufficiently new (7.49.0 or later), this should work:
curl -X GET 'https://localhost/V1.0.2/stock/getNA?name=te' \
--header 'Authorization: Bearer 90d2c018-73d1-324b-b121-a162cf870ac0' \
--header 'Accept: application/json' \
--connect-to localhost:443:172.17.0.1:8243
Credit: https://stackoverflow.com/a/50279590/1662031
Similarly, you may be able to leverage curl's --resolve option:
curl -X GET 'https://localhost:8243/V1.0.2/stock/getNA?name=te' \
--header 'Authorization: Bearer 90d2c018-73d1-324b-b121-a162cf870ac0' \
--header 'Accept: application/json' \
--resolve localhost:8243:172.17.0.1

what do -v and -k mean in curl?

I read this whole page
http://conqueringthecommandline.com/book/curl#cha-3_footnote-1
and I didn't see any -v or -k options for cURL
I have this curl request:
curl -v -k --user "bla/test#bla.com:BlaBla" \
"theUrlToTheServer" | xmllint --format - > something.xml
I started by trying to understand what -v and -k mean, but I couldn't figure them out. Can you help, please?
-k, --insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered "insecure" fail unless -k, --insecure is used.
See this online resource for further details: https://curl.haxx.se/docs/sslcerts.html
-v, --verbose
That means curl prints details of everything it does while executing: connection setup, the request headers it sends, and the response headers it receives.
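For example (an illustrative command; self-signed.badssl.com is a public test host with a deliberately self-signed certificate), plain curl fails certificate verification, while adding -k lets the transfer proceed and -v shows the handshake details:
curl -v -k https://self-signed.badssl.com/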

How do you upload a file to a Milton WebDAV server using curl?

When I try to curl using the -T option, I get an empty reply:
$ curl --digest -u me:pwd -H "Content-Type:application/xml" -T test.xml http://localhost:8085/
curl: (52) Empty reply from server
Anyone know the incantation? The server works fine when connecting to it from the WebDAV client built into MacOSX.
By default curl sends Expect: 100-continue for uploads, but unfortunately Java web containers don't play nicely with the Expect header. The simplest answer is to instruct curl not to send that header:
curl --digest -u a2 -H "Content-Type:application/xml" -H "Expect:" -T TestPBE-workspace.xml http://localhost:8080/users/a2/files2/
The better solution is to make Expect: 100-continue work, but from the research I've done, that appears to depend on which web container you're using.
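A possible middle ground on newer curl versions (7.47.0 and later) is to keep the header but cap how long curl waits for the "100 Continue" response before sending the body anyway; whether that helps with a container that mishandles the header is untested here:
curl --digest -u me:pwd -H "Content-Type:application/xml" \
    --expect100-timeout 1 -T test.xml http://localhost:8085/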

how to access GSA API

I have a Google Search Appliance and am trying to access the API to download the Event Logs. I am using curl through cygwin on the Windows 7 command line.
I am able to get an authentication token using
curl -X POST -d Email=username -d Passwd=password "http://ip.ad.dr.ess:8443/accounts/ClientLogin"
My problem is that when I attempt to retrieve the event logs themselves:
curl -k -X GET -d query=User -H "Content-type: application/atom+xml" -H "Authorization: GoogleLogin Auth=e73265ce254f7c4afbcbee1743a56e81" "http://10.29.5.5:8000/feeds/logs/eventLog"
curl says that it cannot reach the host:
curl: (7) couldn't connect to host
Any help in getting this to work is greatly appreciated.
Check whether you can telnet to the IP address. If you get a timeout there, you should check firewalls, etc.
C:\> telnet 10.29.5.5 8000
This looks like a more generic network problem than a problem with either the GSA or Curl.
The problem turned out to be the configuration of my GSA, which is set to require communication over HTTPS. Because all GSAs default to HTTP traffic on port 8000 and HTTPS traffic on port 8443, my problem was solved by sending the following:
curl -k -X GET -d query=User -H "Content-type: application/atom+xml" -H "Authorization: GoogleLogin Auth=e73265ce254f7c4afbcbee1743a56e81" "https://ip.ad.dr.ess:8443/feeds/logs/eventLog"
