What do -v and -k mean in curl?

I read this whole page
http://conqueringthecommandline.com/book/curl#cha-3_footnote-1
and I didn't see any -v or -k options for cURL.
I have this curl request:
curl -v -k --user "bla/test#bla.com:BlaBla" \
"theUrlToTheServer" | xmllint --format - > something.xml
I started by trying to understand what -v and -k mean, but I couldn't work them out. Can you help, please?

-k, --insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered "insecure" fail unless -k, --insecure is used.
See this online resource for further details: https://curl.haxx.se/docs/sslcerts.html
-v, --verbose
Makes curl print detailed information about what it is doing while executing: the request and response headers, TLS handshake details, and so on. Mostly useful for debugging.
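Putting both together, a minimal sketch (the URL and credentials are placeholders): -v prints the handshake and the request/response headers to stderr, while -k lets the transfer proceed even when the server's certificate can't be verified against the CA bundle:
# Hypothetical host with a self-signed certificate.
# -v: show handshake and headers on stderr (useful for debugging).
# -k: skip certificate verification -- insecure, so only for testing.
curl -v -k --user "user:password" "https://self-signed.example.com/data" -o something.xml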

Related

Nagios Alert returns "NRPE: Unable to read output" Command: check_service!httpd

I have installed Nagios on Redhat with the following configurations:
/usr/local/nagios/etc/static/commands.cfg
define command {
command_name check_service
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_service -a $ARG1$
}
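For context, the alerts below come from a service that invokes this command; a hypothetical service definition (host and service names are illustrative) would pass httpd as $ARG1$ via the ! separator:
define service {
    use                 generic-service
    host_name           proxy-dev
    service_description Apache Running
    check_command       check_service!httpd
}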
When I try to run it manually with the following syntax, I get an error:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_service -a check_http
NRPE: Unable to read output
Running the plugin directly, without NRPE, works:
/usr/local/nagios/libexec/check_http -H 10.111.55.92
HTTP OK: HTTP/1.1 200 OK - 4298 bytes in 0.024 second response time |time=0.024462s;;;0.000000 size=4298B;;;0
I am consistently getting Nagios Email notifications:
HOST: Proxy (Dev) i-01aa24242424d7
IP: 10.111.55.92
Service: Apache Running
Service State: UNKNOWN
Attempts: 3/3
Duration: 0d 9h 28m 49s
Command: check_service!httpd
More Details:
NRPE: Unable to read output
Not sure how I can use NRPE with check_service to check httpd.
Running check_nrpe with check_http as an argument (and no -c command) just displays the version of the installed NRPE:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -a check_http
NRPE v3.2.1
/usr/local/nagios/etc/nrpe.cfg
command[check_users]=/usr/local/nagios/libexec/check_users -w 10 -c 15
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20
command[check_root_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
command[check_zombie_procs]=/usr/local/nagios/libexec/check_procs -w 10 -c 15 -s Z
command[check_total_procs]=/usr/local/nagios/libexec/check_procs -w 500 -c 750
command[check_ping]=/usr/local/nagios/libexec/check_ping $ARG1$
command[check_http]=/usr/local/nagios/libexec/check_http
# LINUX DEFAULT
command[check_service]=/bin/sudo -n /bin/systemctl status -l $ARG1$
# GLUSTER CHECKS
command[check_glusterdata]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /gluster
# GITLAB CHECKS
command[gitlab_ctl]=/bin/sudo -n /bin/gitlab-ctl status $ARG1$
command[gitlab_rake]=/bin/sudo -n /bin/gitlab-rake gitlab:check
command[check_gitlabdata]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /var/opt/gitlab
# OPENSHIFT CHECKS
command[check_openshift_pods]=/usr/local/nagios/libexec/check_pods
File: /usr/local/nagios/etc/nagios.cfg
cfg_dir=/usr/local/nagios/etc/static
You seem to be confusing two plugins. check_service will just check that a service is running locally. Try calling it like this:
/usr/local/nagios/libexec/check_nrpe -H 10.111.55.92 -c check_service -a httpd
I'd hesitate to use the check_service command you have in there, though. Giving NRPE access to run systemctl with sudo seems dangerous to me.
check_http is an http client. It will actually connect to an http server and check a given URI. It can check status codes and do all sorts of things.
It looks like in your nrpe.cfg you didn't include any arguments to check_http. Called like that, it will just print its help message; I don't think it will check the local machine.
Note that when you call check_http manually above, you supply -H. That -H is not passed through automatically; you need to provide arguments to your check_http command in nrpe.cfg.
Change the line:
command[check_http]=/usr/local/nagios/libexec/check_http
To something like:
command[check_http]=/usr/local/nagios/libexec/check_http -H 127.0.0.1
And it should work better, assuming your httpd is listening on localhost.
You probably don't want to call check_http via NRPE like this, though. Let your Nagios server run check_http against the remote machine directly.
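For example, a sketch of a server-side check (the command name is illustrative), defined on the Nagios server rather than in nrpe.cfg, mirroring the manual check_http test above:
# On the Nagios server: check the remote web server directly.
define command {
    command_name    check_http_remote
    command_line    $USER1$/check_http -H $HOSTADDRESS$
}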

Is it possible to execute a bash script with root permission from NGINX and get the output?

I'm running Nginx under Openresty build so Lua scripting is enabled. I want to create a URI location (which will be secured with SSL +authentication in addition to IP whitelisting) which allows webhooks calls from authorized sources to execute bash scripts on the server using root permission. e.g.
https://someserver.com/secured/exec?script=script.sh&param1=uno&param2=dos
NGINX would use the 'script' and 'param#' GET request arguments to execute "script.sh uno dos" in a shell, capturing the script's output and its exit code (if that's possible).
I understand the security implications of running NGINX as root and running arbitrary commands but as mentioned access to the URI would be secured.
Is this possible via native NGINX modules or maybe Lua scripting? Any sample code to get me started?
Thank you.
There is another possible solution that doesn't need extra nginx Lua plugins: socat. You start socat listening on port 8080, and on every connection it executes a bash script:
socat TCP4-LISTEN:8080,reuseaddr,fork EXEC:./test.sh
test.sh
#!/bin/bash
recv() {
    # Echo each received line to stderr for debugging.
    echo "< $*" >&2
}
read line
line=${line%%$'\r'}
recv "$line"
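# Split the request line into method, URI, and protocol version.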
read -r REQUEST_METHOD REQUEST_URI REQUEST_HTTP_VERSION <<<"$line"
declare -a REQUEST_HEADERS
while read -r line; do
    line=${line%%$'\r'}
    recv "$line"
    # If we've reached the end of the headers, break.
    [ -z "$line" ] && break
    REQUEST_HEADERS+=("$line")
done
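# Turn the query string (key=value&key=value) into shell variables.
# WARNING: eval on raw client input executes arbitrary shell code, so this
# is only acceptable behind the trusted, authenticated endpoint described above.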
eval $(echo $REQUEST_URI | awk -F? '{print $2}' | awk -F'&' '{for (i=1;i<=NF;i++) print $i}')
cat <<END1
HTTP/1.1 200 OK
Content-Type: text/plain
REQUEST_METHOD=$REQUEST_METHOD
REQUEST_URI=$REQUEST_URI
REQUEST_HTTP_VERSION=$REQUEST_HTTP_VERSION
REQUEST_HEADERS=$REQUEST_HEADERS
script=$script
param1=$param1
param2=$param2
END1
And a test with curl looks like this:
$ curl "localhost:8080/exec?script=test2.sh&param1=abc&param2=def"
REQUEST_METHOD=GET
REQUEST_URI=/exec?script=test2.sh&param1=abc&param2=def
REQUEST_HTTP_VERSION=HTTP/1.1
REQUEST_HEADERS=Host: localhost:8080
script=test2.sh
param1=abc
param2=def
So you can easily use this for a proxy_pass in nginx.
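For instance, a minimal location block (a sketch; the port and path follow the example above) would be:
location /secured/exec {
    # Hand the request, query string included, to the socat listener.
    proxy_pass http://127.0.0.1:8080;
}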
If you want to see a complete HTTP server written in bash on top of socat, have a look at https://github.com/avleen/bashttpd/blob/master/bashttpd

containerized nginx log rotation with logrotate

Nginx doesn't have native log rotation, so an external tool such as logrotate is required. Nginx presents a challenge in that the logs have to be reopened after rotation; you can send it a USR1 signal if the pid is available in /var/run.
But when nginx runs in a Docker container, the pid file is missing from /var/run (and the pid actually belongs to the host, since it is technically a host process).
If you don't reopen the logs, nginx doesn't log anything at all, though it otherwise continues to function as a web server, reverse proxy, and so on.
You can get the process id from the Pid attribute using docker inspect and use kill -USR1 {pid} to have nginx reopen the logs.
Here's the /etc/logrotate.d/nginx file I created:
/var/log/nginx/access.log
{
    size 2M
    rotate 10
    missingok
    notifempty
    compress
    delaycompress
    postrotate
        docker inspect -f '{{ .State.Pid }}' nginx | xargs kill -USR1
    endscript
}
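A dry run with logrotate's debug flag is a quick way to verify the file before relying on it:
logrotate -d /etc/logrotate.d/nginx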
If you want to run logrotate in a dedicated container (e.g. to rotate both nginx logs and Rails' file log) rather than on the host machine, here's how I did it. The trickiest part by far was, as above, getting the reload signals to nginx, Rails, etc. so that they would create and log to fresh logfiles post-rotation.
Summary:
put all the logs on a single shared volume
export docker socket to the logrotate container
build a logrotate image with logrotate, cron, curl, and jq
build logrotate.conf with postrotate calls using docker exec API as detailed below
schedule logrotate using cron in the container
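A sketch of the wiring, where the image name and the volume name are assumptions:
# Run the logrotate container with the Docker socket and shared log volume mounted:
docker run -d --name logrotate \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v logs:/var/log/shared \
  my-logrotate-image
# Inside the image, a crontab entry schedules the rotation, e.g. hourly:
# 0 * * * * /usr/sbin/logrotate /etc/logrotate.conf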
The hard part:
To get nginx (et cetera) to reload and thus connect to fresh log files, I sent exec commands to the other containers using Docker's API via the socket. It expects a POST with the command in JSON format, to which it responds with an exec instance ID. You then need to explicitly start that instance.
An example postrotate section from my logrotate.conf file:
postrotate
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"cmd": ["nginx", "-s", "reopen"]}' \
  http://localhost/v1.41/containers/hofg_nginx_1/exec \
  | jq -r '.Id'`
curl -X POST --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"Detach": true}' \
  http://localhost/v1.41/exec/"$exec_id"/start
endscript
Commentary on the hard part:
exec_id=`curl -X POST --unix-socket /var/run/docker.sock \
This is the first of two calls to curl, saving the result into a variable for use in the second. Also, don't forget to (insecurely) mount the socket into the container: '/var/run/docker.sock:/var/run/docker.sock'
-H "Content-Type: application/json" \
-d '{"cmd": ["nginx", "-s", "reopen"]}' \
Docker's API docs say the command can be a string or array of strings, but it only worked for me as an array of strings. I used the nginx command line tool, but something like 'kill -SIGUSR1 $(cat /var/run/nginx.pid)' would probably work too.
http://localhost/v1.41/containers/hofg_nginx_1/exec \
I hard-coded the container name; if you're dealing with something more complicated, you're probably also using a fancier logging service.
| jq -r '.Id'`
The response is JSON-formatted; I used jq to extract the id (excuse me, 'Id') to use next.
curl -X POST --unix-socket /var/run/docker.sock \
-H "Content-Type: application/json" \
-d '{"Detach": true}' \
The Detach: true is probably not necessary; it's just a placeholder for POST data that was handy while debugging.
http://localhost/v1.41/exec/"$exec_id"/start
Making use of the exec instance ID returned by the first curl to actually run the command.
I'm sure it will evolve (say with error handling), but this should be a good starting point.

How do you upload a file to a Milton WebDAV server using curl?

When I try to curl using the -T option, I get an empty reply:
$ curl --digest -u me:pwd -H "Content-Type:application/xml" -T test.xml http://localhost:8085/
curl: (52) Empty reply from server
Anyone know the incantation? The server works fine when connecting to it from the WebDAV client built into MacOSX.
By default curl sends an Expect: 100-continue header when uploading, but unfortunately Java web containers don't play nicely with the Expect header. The simplest answer is to instruct curl not to send it:
curl --digest -u a2 -H "Content-Type:application/xml" -H "Expect:" -T TestPBE-workspace.xml http://localhost:8080/users/a2/files2/
The better solution would be to make Expect: 100-continue work, but from the research I've done, that appears to depend on which web container you're using.
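To confirm the header is actually suppressed, curl's verbose output lists every request header it sends (credentials and URL are the placeholders from the question):
# Sent headers are prefixed with "> " in -v output; with -H "Expect:" no
# Expect line should appear before the upload starts.
curl -v --digest -u me:pwd -H "Expect:" -T test.xml http://localhost:8085/ 2>&1 | grep '^> '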

How do you access the GSA API?

I have a Google Search Appliance and am trying to access the API to download the Event Logs. I am using curl through cygwin on the Windows 7 command line.
I am able to get an authentication token using
curl -X POST -d Email=username -d Passwd=password "http://ip.ad.dr.ess:8443/accounts/ClientLogin"
My problem is that when I attempt to retrieve the event logs themselves:
curl -k -X GET -d query=User -H "Content-type: application/atom+xml" -H "Authorization: GoogleLogin Auth=e73265ce254f7c4afbcbee1743a56e81" "http://10.29.5.5:8000/feeds/logs/eventLog"
curl says that it cannot reach the host:
curl: (7) couldn't connect to host
Any help in getting this to work is greatly appreciated.
Check if you can telnet to the IP address. If you get a timeout there, you should check firewalls etc.
C:\> telnet 10.29.5.5 8000
This looks like a more generic network problem than a problem with either the GSA or Curl.
The problem turned out to be the configuration of my GSA. The GSA is configured to require communication over HTTPS. Because all GSAs default to HTTP traffic on port 8000 and HTTPS traffic on port 8443, my problem was solved by sending the following:
curl -k -X GET -d query=User -H "Content-type: application/atom+xml" -H "Authorization: GoogleLogin Auth=e73265ce254f7c4afbcbee1743a56e81" "https://ip.ad.dr.ess:8443/feeds/logs/eventLog"
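For scripting this, the ClientLogin response consists of SID=, LSID=, and Auth= lines, so the token can be extracted instead of pasted by hand; a sketch using the placeholder address and credentials from above:
# Grab the value after "Auth=" from the ClientLogin response.
auth=$(curl -s -k -X POST -d Email=username -d Passwd=password \
  "https://ip.ad.dr.ess:8443/accounts/ClientLogin" | grep '^Auth=' | cut -d= -f2)
curl -k -X GET -d query=User -H "Content-type: application/atom+xml" \
  -H "Authorization: GoogleLogin Auth=$auth" \
  "https://ip.ad.dr.ess:8443/feeds/logs/eventLog"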
