Command runs in bash but not from nginx conf exec - nginx

So I just built a media server using NGINX and NGINX-RTMP by arut, using this Docker image here. The server works great: I can stream RTMP and view it in HLS.
What I want to do now is take a screenshot periodically; I found an npm module that can do the thumbnail generation.
The strange thing is, when I try to execute it from nginx.conf, it doesn't work. Running the command manually from the Docker container's bash works smoothly.
This is my nginx.conf; you can download it here.

I forgot to add env PATH inside the nginx.conf file, so nginx didn't recognize the command.
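For illustration, a minimal sketch of what that fix looks like in an RTMP config; the application name, paths, and the ffmpeg thumbnail command here are assumptions, not the poster's actual file:

# main context: pass PATH through to the worker processes,
# so commands launched by exec can be found
env PATH;

rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # grab one frame from each published stream (assumed command and paths)
            exec ffmpeg -i rtmp://localhost/live/$name -vframes 1 -f image2 /tmp/thumbs/$name.png;
        }
    }
}

Alternatively, using the command's absolute path in exec avoids relying on PATH at all.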

Related

Nginx reload shell command not working within Lua script

I have a dockerised Nginx server created from the OpenResty base image. When a particular endpoint is called, it needs to update the nginx config dynamically. For the changes to take effect, I am trying to reload nginx right after changing the config.
Within the container I am able to reload the nginx server using /usr/local/openresty/nginx/sbin/nginx -s reload
When I try to do the same within Lua as below, it doesn't throw any error, but the config changes aren't reflected:
os.execute("/usr/local/openresty/nginx/sbin/nginx -s reload ")
You can skip calling nginx altogether and just send a HUP signal to the master process using LuaJIT's FFI:
local process = require 'ngx.process'  -- from lua-resty-core
local ffi = require 'ffi'
ffi.cdef 'int kill(int pid, int sig);'
-- signal 1 is SIGHUP, which tells the nginx master to reload its configuration
ffi.C.kill(process.get_master_pid(), 1)
However, this doesn't fix the permissions problem.
One idea that could work is (a sketch follows this list):
Set up a named pipe with mkfifo and make it so your nginx user can write to it.
Enable the privileged agent worker process.
Set the privileged worker up to listen for input on the named pipe (for example using the ngx.pipe module to open cat and wait for input) and send a HUP signal to the master process.
Change your os.execute code to instead write some line of text into the named pipe, to have the privileged agent reload the server.
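A minimal sketch of that setup, assuming the fifo lives at /tmp/nginx-reload.fifo and that lua-resty-core's ngx.process and ngx.pipe modules are available (all names and paths here are illustrative):

-- init_by_lua_block: request the privileged agent before the workers fork
local process = require 'ngx.process'
local ok, err = process.enable_privileged_agent()
if not ok then
    ngx.log(ngx.ERR, 'enable_privileged_agent failed: ', err)
end

-- init_worker_by_lua_block: only the privileged agent watches the fifo
local process = require 'ngx.process'
if process.type() == 'privileged agent' then
    ngx.timer.at(0, function()
        local pipe = require 'ngx.pipe'
        local ffi = require 'ffi'
        ffi.cdef 'int kill(int pid, int sig);'
        while true do
            -- cat blocks until another process writes to the fifo and closes it
            local proc = pipe.spawn({ 'cat', '/tmp/nginx-reload.fifo' })
            proc:wait()
            ffi.C.kill(process.get_master_pid(), 1)  -- SIGHUP = reload
        end
    end)
end

The endpoint handler then replaces its os.execute call with a plain write of any line into /tmp/nginx-reload.fifo.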
EDIT: If you dislike the cat hack, you might want to have a look at https://github.com/slact/ngx_lua_ipc
It might be possible to use IPC to keep the whole thing self-contained within a single nginx server instance, without any file access.
The nginx -s reload command runs with the nginx worker process's privileges, but you need to be root in order to execute it. You can try making a dedicated script for this (let's assume its name is /usr/local/openresty/nginx/sbin/reload-nginx.sh):
#!/bin/sh
/usr/local/openresty/nginx/sbin/nginx -s reload
Set the owner of this script to the nginx process user (let's assume its name is nginx), make it executable, and set the setuid bit on it:
chown nginx /usr/local/openresty/nginx/sbin/reload-nginx.sh
chmod +x /usr/local/openresty/nginx/sbin/reload-nginx.sh
chmod u+s /usr/local/openresty/nginx/sbin/reload-nginx.sh
and try to execute this script from your lua code:
os.execute("/usr/local/openresty/nginx/sbin/nginx-reload.sh")

Website available even with no NGINX processes running

I have pulled the latest code from my repo onto my web server. I try to restart nginx, but this doesn't do anything.
So I try the command sudo nginx -s stop and get the response that it failed because there is no such file or directory "run/nginx.pid".
Trying to run the command ps aux | grep nginx gives me the response: unsupported option (BSD syntax); it actually comes out as ps aux > grep nginx in the DigitalOcean console.
Basically it seems that even though there are apparently no nginx processes running (although the command to check isn't working), my website is still running and using the old code. Is there a way for me to check more definitively on the running processes?
Thanks if you can help.
Try sudo netstat -plunt to check if there's any nginx process running. See if there's anything running on port 80 or 443, and then look at the corresponding program name. You might have another server running, possibly Apache, since it ships by default with many distributions, which may be why nginx failed to start.
Another reason why it won't start might be a faulty config. Go to /etc/nginx/ and double-check that it's correct. You can also run sudo nginx -t to ensure that the config syntax is valid.
Alternatively, just check your nginx access log to see if it's actually serving any requests. You can also check the error log to see why it might fail to start. These reside in /var/log/nginx by default; otherwise check your nginx.conf for a custom log path.
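Put together, those checks look something like this (log paths are the defaults; adjust if your nginx.conf overrides them):

# see which program is bound to the web ports
sudo netstat -plunt | grep -E ':80|:443'
# validate the nginx configuration
sudo nginx -t
# watch for live requests and startup errors
sudo tail -f /var/log/nginx/access.log /var/log/nginx/error.log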

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. From my local machine I am able to access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up, I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like this:
extra_hosts:
- 'my-server-address.com:123.45.123.45'
and then click Redeploy in Docker Cloud so that the changes take effect.
Optimization tip
ignore as many folders as possible to speed up sending the build context to the Docker daemon
add a .dockerignore file (a sample follows this list)
typically you want to ignore folders like node_modules, bower_modules, and tmp
in my case the tmp folder contained about 1.3 GB of small files, so ignoring it sped up the process significantly
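For reference, a minimal .dockerignore along those lines; the entries are typical examples, so list whatever is large and not needed in the image:

# .dockerignore, placed next to the Dockerfile
node_modules
bower_modules
tmp
.git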

Docker shows inconsistent behaviour when creating container from image

I am developing a web application which depends on a Moodle system, as it uses Moodle's web services. For my automated tests, I wanted to use Docker to provide a preconfigured Moodle application on all my machines. Therefore I created a Docker image, which I import from a .tar.gz file.
However, creating a new container instance from this image behaves inconsistently. Sometimes the container boots up correctly and everything works fine; sometimes the container starts but the Moodle website is not reachable. If I connect my bash to the container using docker exec -it <container> bash, I see that Apache is running. The error logs do not show any entries which might be related to this issue.
If I kill the container instance and boot it up again, everything works as expected (sometimes this step has to be repeated multiple times). Do you have any idea what could be the reason for this strange behaviour? Has anyone experienced similar issues?
Docker is running on Ubuntu 14.04. The problem appears on several machines. The script which imports the image and starts the container looks like this:
#!/usr/bin/env bash
docker rm -f moodle                # remove any previous container
docker load < my-moodle.tar.gz     # import the image from the archive
docker run -d -p 8080:80 -p 8443:443 -p 3306:3306 --name moodle moodle-image
Thanks in advance!
Successful container startup depends on your container entrypoint and external resources (if the entrypoint has external dependencies). What is the entrypoint? Does it depend on external resources?
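For reference, two quick ways to investigate that (image and container names as in the script above):

# show the entrypoint and command baked into the image
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' moodle-image
# follow the container output during startup to catch races,
# e.g. Apache coming up before the database is ready
docker logs -f moodle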

Gitlab: Problems running Unicorn, Resque with Passenger/Nginx

I have installed GitLab on a brand-new Ubuntu (10.04) and it is working almost correctly. GitLab is reachable over HTTP, and I can push/pull data via git to the server. There is one thing missing, though: the activity feed is not updating. So I thought there was something wrong with the git hooks. I followed GitLab's installation process completely, except that I'd like to use Passenger with Nginx in order to deploy multiple apps.
I ran sudo -u gitlab -H bundle exec rake gitlab:env:info RAILS_ENV=production to see if everything was set up correctly, but it said Redis is not running. ps aux says redis-server is up, so it is not the git hooks. The GitLab docs say to restart the gitlab service to solve that problem. In this case I get an error, which I think is the problem I need to solve:
$ sudo /etc/init.d/gitlab restart
Error, unicorn not running!
My question is, how can I get around this problem? How can I run Unicorn? I thought the gitlab service would start it. Am I not using Nginx? Before I start reinstalling the whole thing, this time without Passenger, I thought I'd ask the question here first.
As mentioned by the OP pabera, nginx and mysql must be started for the other components of GitLab (Redis, Unicorn, and now Sidekiq) to run properly.
The official /etc/init.d/gitlab is here.
I have my own version of gitlabd (here), because I manage sidekiq in my own script, and I don't need to run the script as root.
You can see the run order for all the services in this script (a plain-command equivalent follows the list):
ssh
Apache and/or NGiNX
mysql
redis
GitLab (which will start unicorn and sidekiq)
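For illustration, the same order as plain init commands; the service names are assumptions and must match the init scripts on your distribution:

sudo service ssh start
sudo service nginx start           # and/or apache2
sudo service mysql start
sudo service redis-server start
sudo service gitlab start          # starts unicorn and sidekiq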
Kind of a poke in the dark...
In the GitLab installation.md README it states:
"
Start your GitLab instance:
sudo service gitlab start
# or
sudo /etc/init.d/gitlab restart
"
I did the first AND the second and got this exact error. However, I skipped the "or" part and continued with the Nginx commands, and it seems to work.
Hope this helps!
