I have nginx compiled with the rtmp module. I start nginx and run the following to start streaming my webcam which works:
ffmpeg -i /dev/video0 -f flv rtmp://localhost/live/test
I then try to use the control module to start recording:
curl "http://localhost:8080/control/record/start?app=application&name=test&rec=rec1"
But the recording event doesn't seem to trigger.
Here is a simplified version of my nginx.conf file:
rtmp {
    server {
        # ... more code here
        recorder rec1 {
            record all manual;
            record_suffix all.flv;
            record_path /tmp/rec;
            record_unique on;
        }
    }
}
http {
    server {
        listen 8080;
        server_name localhost;
        location /control {
            rtmp_control all;
        }
    }
    # ... more code here
}
Note: I checked that both ports 8080 and 1935 are open with nmap.
Note/Update:
I noticed that if I change app= to live, I get an actual error message:
<html>
<head><title>500 Internal Server Error</title></head>
<body bgcolor="white">
<center><h1>500 Internal Server Error</h1></center>
<hr><center>nginx/1.11.1</center>
</body>
</html>
As opposed to the command with the actual app name, which returns nothing. This tells me it is working to some extent, but I still don't end up with a recording.
I also tried switching name=test to name=live; neither causes an error response.
Complete nginx.conf file.
Update #2:
I'm watching /var/log/nginx/error.log while I use the curl command above. Every time I use it, the following is logged:
client 127.0.0.1 closed keepalive connection
I ended up fixing this. The problem was twofold:
1. application wasn't my application name; it was live.
2. the rec1 block wasn't inside the live application block; it was an unrelated block of code for hls. (See the sketch below.)
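Roughly, the corrected layout looks like the following sketch. The live on; directive is an assumption based on a typical rtmp live-streaming setup; it wasn't shown in the simplified config above:
rtmp {
    server {
        # ... more code here
        application live {
            live on;   # assumed; enables live streaming for this application
            recorder rec1 {
                record all manual;
                record_suffix all.flv;
                record_path /tmp/rec;
                record_unique on;
            }
        }
    }
}
With the recorder inside the live application, the control request from above matches it:
curl "http://localhost:8080/control/record/start?app=live&name=test&rec=rec1"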
I am looking at options to add client-side certificate authentication with a fingerprint whitelist to a local site, and have successfully configured nginx to operate in the intended manner. My configuration is as follows:
# Client Certificate Whitelisting
map $ssl_client_fingerprint $reject {
    default 1;
    <ALLOWED_FINGERPRINT_1> 0;
    <ALLOWED_FINGERPRINT_2> 0;
    ...
    <ALLOWED_FINGERPRINT_N> 0;
}
server {
    ...
    ssl_client_certificate /etc/pki/tls/certs/Private-CA-bundle.pem;
    ssl_verify_client on;
    ...
    if ($reject) { return 403; }
    ...
}
However, I would like to store the fingerprint list in a separate text file, rather than manipulating the nginx configuration file directly each time. Is this possible?
As a bonus, it would be great if I could modify the contents of the text file and have them take effect without reloading nginx. It is acceptable for removals to still require a service restart or other manual session teardown procedure.
---- EDIT ----
Based on the accepted answer, I was able to get this working.
The updated configuration file is:
# Client Certificate Whitelisting
map $ssl_client_fingerprint $reject {
    default 1;
    include /etc/nginx/cert-whitelist;
}
I was able to add a new certificate and apply the changes without a full service restart.
### Attempt connection with client certificate; returns 403 Forbidden
[root]# cat /run/nginx.pid
5606
[root]# echo "${FINGERPRINT} 0;" >> /etc/nginx/cert-whitelist
[root]# kill -1 $(cat /run/nginx.pid)
[root]# cat /run/nginx.pid
5606
### Attempt connection with client certificate; success
The map directive has the ability to source a correctly formatted file; see the ngx_http_map_module documentation (http://nginx.org/en/docs/http/ngx_http_map_module.html#map) for details.
You can use SIGHUP to re-read the configuration file without restarting nginx; see the control documentation (http://nginx.org/en/docs/control.html) for details.
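For reference, the included file simply contains entries in the same format as the map block itself, as the shell transcript above suggests; a minimal sketch with placeholder fingerprints:
# /etc/nginx/cert-whitelist
# one "<fingerprint> <value>;" pair per line, matching the map syntax above
<ALLOWED_FINGERPRINT_1> 0;
<ALLOWED_FINGERPRINT_2> 0;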
I am trying to display a series of short mp4s on my Roku. The idea is to serve them from nginx running on a local host. The Roku hangs on "retrieving" the video. I have used Wireshark to watch the requests coming from the Roku, and they continuously repeat, yet nginx does not respond, nor does it log the requests in access.log or error.log.
I believe the Roku scripts are correct, as I can confirm that the playlist is received by the video node, the screen is focused, and the request for the video is made on port 80. I can request the URL through a browser and the mp4 plays in the browser when requested with http:, or in a player when requested with rtmp:.
This is the simple nginx configuration in conf.d:
server {
    listen 80 default_server;

    location / {                       # the '/' matches all requests
        root /myNginx/sites;           # the request URI would be appended to the root
        index 01_default_test.html;    # the index directive provides a default file or list of files to look for
    }

    location /videos/ {                # for testing use 'http://10.0.0.13/videos/sample.mp4'
        mp4;                           # activates the http_mp4 module for streaming the video
        root /VODs;                    # allows adding 'videos' to the uri and the file name
    }
}
I appended this to the nginx.conf file:
rtmp {
    server {
        listen 1935;
        chunk_size 4000;

        # Video on demand
        application VOD {              # rtmp://10.0.0.13/VOD/sample03.mp4
            play /VOD/videos/;
        }
    }
}
Not sure where to go from here. Does anyone know why nginx seems to be ignoring the requests? I am using Ubuntu, and the firewall is currently inactive.
I am trying to run the latest version of nginx with the following configuration, but I get nginx: [emerg] invalid parameter "route=bloomberg" in /etc/nginx/nginx.conf:13
docker run --rm -ti -v root_to_local_nginx_directory:/etc/nginx:ro -p 3080:80 --name=mynginx --entrypoint nginx nginx
# nginx.conf file inside root_to_local_nginx_directory
http {
    map $cookie_route $route_from_cookie {
        ~.(?P<version>\w+)$ $route;
    }

    split_clients "${remote_addr}" $random_route {
        50% server bloomberg.com route=bloomberg;
        *   server yahoo.com route=yahoo;
    }

    upstream backend {
        zone backend 64k;
        server bloomberg.com route=bloomberg;
        server yahoo.com route=yahoo;
        sticky route $route_from_cookie $randomroute;
    }

    server {
        # ...
        listen 80;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://backend;
        }
    }
}
Why is this? According to the documentation, this should be correct: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream.
The route=string parameter of the server directive within the upstream context is a commercial feature: it is only available in NGINX Plus, not in open-source nginx. (If you look more closely at the documentation, you'll notice it's grouped with the other parameters under a separate "available as part of our commercial subscription" subsection.)
Additionally, you're trying to use similar "server" parameters within the split_clients context as if they were directives interpreted by nginx, even though the values in that block are just string literals. It's unclear whether that part is responsible for any of the errors, but even if not, it's bad style to introduce such confusion into your configuration.
References:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
http://nginx.org/en/docs/http/ngx_http_split_clients_module.html#split_clients
https://www.nginx.com/products/nginx/
The reason you are seeing the error is that the split_clients module does not support the route parameter. Instead, you can do something along these lines:
upstream bloomberg {
    server bloomberg.com route=bloomberg;
}

upstream yahoo {
    server yahoo.com route=yahoo;
}

split_clients "${remote_addr}" $random_route {
    50% bloomberg;
    *   yahoo;
}
I can't get nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never fetch the remote endpoint again until the backend invalidates the cache. I have two containers with memcached v1.4.35 and one with nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}

upstream remote {
    server api.example.com:443;
    keepalive 16;
}

server {
    listen 80;

    location / {
        set $memcached_key "$uri?$args";
        memcached_pass http_memcached;
        error_page 404 502 504 = @remote;
    }

    location @remote {
        internal;
        proxy_pass https://remote;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
I tried setting the memcached upstream incorrectly, but then I just get HTTP 499 along with warnings:
*3 upstream server temporarily disabled while connecting to upstream
It seems that with the described configuration nginx can reach memcached successfully but can't write to or read from it. I can write and read to memcached with telnet successfully.
Can you help me please?
My guesses on what's going on with your configuration
1. 499 codes
HTTP 499 is nginx's custom status code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120).
We can easily reproduce it. Just run
nc -k -l 172.17.0.6 11211
and curl your resource; curl will hang for a while. Then press Ctrl+C and you'll see this code in your access logs.
2. upstream server temporarily disabled while connecting to upstream
It means nginx didn't manage to reach your memcached server and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll constantly see this in your error logs (I see it every time with error_log ... info).
Since you see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to hold.
Consider explicitly setting memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind), and use the -b option of telnet to make sure you're testing the memcached servers for availability from the same source address that nginx uses.
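A minimal sketch of what that could look like; the 172.17.0.1 bridge address is purely illustrative and assumes it is the source address you want nginx (and your telnet test) to use:
location / {
    set $memcached_key "$uri?$args";
    memcached_bind 172.17.0.1;      # hypothetical local address to originate connections from
    memcached_pass http_memcached;
}
# matching availability test from the shell:
# telnet -b 172.17.0.1 172.17.0.6 11211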
3. nginx can reach memcached successfully but can't write or read from it
Nginx can only read from memcached via its built-in module
(http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
The ngx_http_memcached_module module is used to obtain responses from
a memcached server. The key is set in the $memcached_key variable. A
response should be put in memcached in advance by means external to
nginx.
4. overall architecture
It's not fully clear from your question how the overall schema is supposed to work.
nginx's upstream uses weighted round-robin by default.
That means each request will be sent to just one of your memcached servers, picked in turn.
You can change that by setting memcached_next_upstream not_found, so a missing key is treated as an error and the other servers are tried as well. That's probably OK for a farm of 2 servers, but it's unlikely to be what you want for 20 servers.
The same is ordinarily the case for memcached client libraries: they pick a server out of the pool according to some hashing scheme, so your key ends up on only one server in the pool. (See the sketch after this list.)
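If you want nginx's choice of memcached server to line up with that kind of hashing, one option (a sketch, assuming the open-source hash directive of the upstream module, not something from the original setup) is to hash on the same expression used for $memcached_key:
upstream http_memcached {
    # pick the server by consistent hashing on the cache key,
    # so a given key always maps to the same memcached instance
    hash "$uri?$args" consistent;
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}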
5. what to do
I managed to set up a similar configuration in 10 minutes on my local box, and it works as expected. To make debugging easier, I'd get rid of the docker containers to avoid overcomplicating the networking, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening in your case.
6. working solution
nginx config:
(https and http/1.1 are not used here, but it doesn't matter)
upstream http_memcached {
    server 127.0.0.1:11211;
    server 127.0.0.1:11212;
}

upstream remote {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name server.lan;

    access_log /var/log/nginx/server.access.log;
    error_log  /var/log/nginx/server.error.log info;

    location / {
        set $memcached_key "$uri?$args";
        memcached_next_upstream not_found;
        memcached_pass http_memcached;
        error_page 404 = @remote;
    }

    location @remote {
        internal;
        access_log /var/log/nginx/server.fallback.access.log;
        proxy_pass http://remote;
        proxy_set_header Connection "";
    }
}
server.py:
this is my dummy server (python):
from random import randint
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (just need to install flask first)
FLASK_APP=server.py flask run -p 8080
filling in my first memcached server:
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
checking (note that we get a result every time, although we stored data only in the first server):
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
this one is not in the cache so we'll get a response from server.py
$ curl http://server.lan/?q=1 && echo
Hello: 32337
Whole picture: the two memcached windows were running
memcached -p 11211 -U 0 -vv
and
memcached -p 11212 -U 0 -vv
Say I'm accessing www.mywebsite.com.
This website fetches the following asset:
http://www.mywebsite.com/styles/app.css
I want to access the website exactly as I normally would, with one exception:
Whenever my browser makes a request to /styles/app.css, instead of fetching it from http://www.mywebsite.com, I want to fetch it from http://localhost:3000/mywebsite/.
So instead it should be fetching:
http://localhost:3000/mywebsite/styles/app.css
Is this possible with nginx?
I tried to do it using the following server config:
{
...
server {
    listen 80;
    server_name mywebsite.com;

    location /styles/ {
        proxy_pass http://localhost:3000/mywebsite/styles/;
    }
}
But even after restarting nginx (sudo nginx -s quit, sudo nginx), nothing seems to have changed.
When I browse to www.mywebsite.com/styles/app.css, I still get the same old app.css being retrieved from the server, rather than my local one.