How can I run a command like shutdown when my Nginx receives an HTTP request?
I read about writing modules for this, but I find it overkill; I just need to shut down the computer when an HTTP request is answered.
There is an impressively creative solution implementing a kind of shell webhook that I found recently on GitHub (all credit goes to the author).
It consists of three parts.
First, a systemd unit that runs the socat utility listening on the loopback interface (do not expose the port to the outside world in any case!):
[Unit]
Description=socat-based cmd api
After=network.target
[Service]
User=nobody
Group=nobody
ExecStart=/usr/bin/socat UDP-LISTEN:50333,fork,bind=127.0.0.1 EXEC:/opt/bin/shell-webhook
Restart=always
[Install]
WantedBy=multi-user.target
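To try this out, the unit can be installed the usual way; a minimal sketch, assuming it is saved as /etc/systemd/system/cmd-api.service (the filename is an assumption):
# hypothetical unit file name
systemctl daemon-reload
systemctl enable --now cmd-api.service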
Second, a shell script to be placed somewhere on disk (the above unit assumes it lives at /opt/bin/shell-webhook):
#!/bin/bash
LOG=/var/log/shell-webhook.log
declare -a DATA
# With "nohostname" the syslog datagram looks like
# "<PRI>Mon DD HH:MM:SS tag: message", so the command is word 4.
read -a DATA
cmd=${DATA[4]}
case ${cmd} in
    reboot)
        echo "Executing reboot" >> ${LOG}
        shutdown -r now
        ;;
    shutdown)
        echo "Executing shutdown" >> ${LOG}
        shutdown -P now
        ;;
    *)
        echo "Unknown command ${cmd}" >> ${LOG}
        ;;
esac
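To sanity-check the field indexing without the service running, you can pipe the script a line shaped like nginx's syslog output (the timestamp is hypothetical; <190> is the priority value for facility local7, severity info). Using an unknown command keeps the test harmless:
# should append "Unknown command ping" to the log
echo '<190>Oct 10 12:00:00 nginx: ping' | /opt/bin/shell-webhook
cat /var/log/shell-webhook.log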
Third, the relevant nginx configuration (consider restricting access to this location with some kind of access control: allow/deny rules, basic auth, etc.):
log_format shellwebhook '$cmd\n';
server {
    ...
    location ~* ^/cmd/(\w+)$ {
        set $cmd $1;
        return 200 "$cmd";
        access_log syslog:server=127.0.0.1:50333,facility=local7,tag=nginx,severity=info,nohostname shellwebhook;
    }
}
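Once all three parts are running, an end-to-end test could look like the following (assuming the server block listens on localhost port 80; again, an unknown command keeps the test harmless). It should return the command name in the response body and leave an "Unknown command" trace in the webhook log:
curl http://127.0.0.1/cmd/ping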
Related
I am having trouble with nginx 1.10.3 and the rtmp module's exec_xxx commands. My nginx.conf contains the following:
rtmp {
    ...
    server {
        ...
        application dash { # creates an rtmp application
            exec_options on;
            exec_pull /bin/bash /usr/local/nginx/conf/ping.sh pull;
            exec_push /bin/bash /usr/local/nginx/conf/ping.sh push;
            exec_static /bin/bash /usr/local/nginx/conf/ping.sh static;
            exec_publish /bin/bash /usr/local/nginx/conf/ping.sh publish;
            ...
I can read the rtmp DASH video externally that I publish internally from localhost, so I know the conf file is working. I can also verify the directives are active with 'sudo -i nginx -T | grep exec_'. But ping.sh (shown below) is not being executed.
#!/bin/bash
touch ./test.txt
/bin/echo "got message 1=$1 2=$2 3=$3"
/bin/echo "got message 1=$1 2=$2 3=$3" >>/usr/local/nginx/conf/exec_log.txt
ping.sh works when executed manually from ~/nginx/. How can I tell why exec_pull, exec_static and the other 'exec_xxx' commands are not working?
I'm running Nginx as an OpenResty build, so Lua scripting is enabled. I want to create a URI location (which will be secured with SSL + authentication in addition to IP whitelisting) that allows webhook calls from authorized sources to execute bash scripts on the server with root permission, e.g.
https://someserver.com/secured/exec?script=script.sh&param1=uno&param2=dos
NGINX would use the 'script' and 'param#' GET request arguments to execute "script.sh uno dos" in a shell, capturing the script output and the bash return code (if that's possible).
I understand the security implications of running NGINX as root and running arbitrary commands but as mentioned access to the URI would be secured.
Is this possible via native NGINX modules or maybe Lua scripting? Any sample code to get me started?
Thank you.
There is another possible solution which doesn't need extra nginx Lua plugins: socat. You start socat on port 8080, and it executes a bash script on every connection:
socat TCP4-LISTEN:8080,reuseaddr,fork EXEC:./test.sh
test.sh
#!/bin/bash
recv() {
    # print the received line to stderr, for debugging
    echo "< $@" >&2
}
read line
line=${line%%$'\r'}
recv "$line"
read -r REQUEST_METHOD REQUEST_URI REQUEST_HTTP_VERSION <<<"$line"
declare -a REQUEST_HEADERS
while read -r line; do
    line=${line%%$'\r'}
    recv "$line"
    # If we've reached the end of the headers, break.
    [ -z "$line" ] && break
    REQUEST_HEADERS+=("$line")
done
# crude query-string parser: evaluates "script=...&param1=..." as shell
# assignments (dangerous with untrusted input; the hook must be access-controlled)
eval $(echo $REQUEST_URI | awk -F? '{print $2}' | awk -F'&' '{for (i=1;i<=NF;i++) print $i}')
cat <<END1
HTTP/1.1 200 OK
Content-Type: text/plain

REQUEST_METHOD=$REQUEST_METHOD
REQUEST_URI=$REQUEST_URI
REQUEST_HTTP_VERSION=$REQUEST_HTTP_VERSION
REQUEST_HEADERS=$REQUEST_HEADERS
script=$script
param1=$param1
param2=$param2
END1
And a test with curl looks like this:
$ curl "localhost:8080/exec?script=test2.sh¶m1=abc¶m2=def"
REQUEST_METHOD=GET
REQUEST_URI=/exec?script=test2.sh¶m1=abc¶m2=def
REQUEST_HTTP_VERSION=HTTP/1.1
REQUEST_HEADERS=Host: localhost:8080
script=test2.sh
param1=abc
param2=def
So you can easily use this for a proxy_pass in nginx.
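A minimal sketch of such a location block (the /hooks/ path and the allow/deny rules are placeholders to be adapted):
location /hooks/ {
    # restrict access; only trusted clients should reach the hook
    allow 127.0.0.1;
    deny all;
    proxy_pass http://127.0.0.1:8080/;
}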
If you want to see a complete HTTP server written in bash using socat, have a look at https://github.com/avleen/bashttpd/blob/master/bashttpd
I have an application on an isolated machine. It writes its logs to /var/log/app/log.txt, for example. However, I want it to write its logs to the journald daemon. I can't change the way the application is run, because it is encapsulated.
I mean I cannot do something like app | systemd-cat
1) Am I right that all services started with systemd write logs to journald?
2) If so, will the children of a process started by systemd also write their logs to journald?
3) Is there any way to tell journald to take logs from a specific file?
4) If not, are there any workarounds?
Warning: this is not tested.
You could bind-mount /dev/stdout onto the log file in ExecStartPre.
Example:
ExecStartPre=/usr/bin/mount --bind /dev/stdout /var/log/app/log.txt
Or symlink /dev/stdout to the log file in ExecStartPre.
Example:
ExecStartPre=/usr/bin/ln -s /dev/stdout /var/log/app/log.txt
4) I can only try to help with a workaround:
MY_LOG_FILE=/var/log/app/log.txt
# Create a FIFO PIPE
PIPE=/tmp/my_fifo_pipe
mkfifo $PIPE
MY_IDENTIFIER="my_app_name" # just a label for later searching in journalctl
# Start logging to journal
systemd-cat -t $MY_IDENTIFIER -p info < $PIPE &
exec 3>$PIPE
tail -f $MY_LOG_FILE > $PIPE &
exec 3>&- #closing file descriptor 3 closes the fifo
This is the basic idea, you should now think about timings, when it's needed to have this started and when to be stopped.
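Once the pipe is wired up, the forwarded lines can be checked with journalctl, filtering on the identifier that was passed to systemd-cat:
journalctl -t my_app_name -f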
The idea is simple: I need to send a signal from one container to another to restart nginx.
Is connecting to the nginx container from the first one over ssh a good solution?
Do you have other recommended ways for this?
I don't recommend installing ssh: Docker containers are not virtual machines, and you should respect the microservices architecture to benefit from the many advantages it provides.
In order to send a signal from one container to another, you can use the Docker API.
First, you need to share /var/run/docker.sock with the required containers:
docker run -d --name control -v /var/run/docker.sock:/var/run/docker.sock <Control Container>
To send a signal to a container named nginx you can do the following:
echo -e "POST /containers/nginx/kill?signal=HUP HTTP/1.0\r\n" | \
nc -U /var/run/docker.sock
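If curl is available in the container (version 7.40 or newer supports --unix-socket), the same API call can be made without hand-crafting the HTTP request:
curl --unix-socket /var/run/docker.sock -X POST "http://localhost/containers/nginx/kill?signal=HUP"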
Another option is using a custom image with a custom script that checks the nginx config files and sends a reload signal when their hash changes. This way, each time you change the config, nginx will reload automatically, or you can still reload it manually with commands. These kinds of scripts are common among Kubernetes users. The following is an example:
nginx "$#"
oldcksum=`cksum /etc/nginx/conf.d/default.conf`
inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \
/etc/nginx/conf.d/ | while read date time; do
newcksum=`cksum /etc/nginx/conf.d/default.conf`
if [ "$newcksum" != "$oldcksum" ]; then
echo "At ${time} on ${date}, config file update detected."
oldcksum=$newcksum
nginx -s reload
fi
done
Don't forget to install the inotify-tools package, which provides inotifywait.
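A minimal sketch of how such a watcher image could be built, assuming the script above is saved as reload-watch.sh next to the Dockerfile (the filename and base image are assumptions):
FROM nginx:alpine
# inotify-tools provides inotifywait; bash is added for the script
RUN apk add --no-cache inotify-tools bash
COPY reload-watch.sh /usr/local/bin/reload-watch.sh
RUN chmod +x /usr/local/bin/reload-watch.sh
CMD ["/usr/local/bin/reload-watch.sh"]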
I'm using ucarp over linux bonding for high availability and automatic failover of two servers.
Here are the commands I used on each server for starting ucarp:
Server 1:
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.229 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
Server 2:
ucarp -i bond0 -v 2 -p secret -a 10.110.0.243 -s 10.110.0.242 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh -b 1 -k 1 -r 2 -z
and the content of the scripts:
vip-up.sh:
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr add "$2"/24 dev "$1"
vip-down.sh:
#!/bin/sh
exec 2> /dev/null
/sbin/ip addr del "$2"/24 dev "$1"
Everything works well and the servers switch from one to another correctly when the master becomes unavailable.
The problem occurs when I unplug both servers from the switch for too long (approximately 30 minutes). While they are unplugged they both think they are master,
and when I plug them back in, the one with the lowest IP address tries to stay master by sending gratuitous ARPs. The other one switches to backup as expected, but I'm unable to access the master through its virtual IP.
If I unplug the master, the second server goes from backup to master and is accessible through its virtual IP.
My guess is that the switch "forgets" about my servers when they are disconnected for too long, and when I reconnect them, a backup-to-master transition is needed to correctly update the switch's ARP cache, even though the gratuitous ARPs sent by the master should do the job. Note that restarting ucarp on the master does fix the problem, but I would need to restart it each time it is disconnected for too long...
Any idea why it does not work as I expected, and how I could solve the problem?
Thanks.