I am trying to set up an nginx reverse proxy for my domain and a few of its subdomains. The subdomains work perfectly, but I keep getting ERR_NAME_NOT_RESOLVED on the top-level domain.
Except for the server_name and the proxy_pass port, the nginx config is identical between the top-level domain and its subdomains.
nginx config:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:5500;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
DNS settings:
This is more likely to be a DNS issue than an Nginx one, but I don't understand why the subdomains work and the top-level one doesn't.
@AlexeyTen's comment about restarting my browser gave me an idea which ended up fixing my issue.
Basically, I use the Acrylic DNS proxy on my development computer to handle .local domains for development. Most people use the hosts file to add local domains, but I find that process tedious; having worked with hundreds of local domains over the years, I use this proxy instead because it accepts wildcard domains, which means I never have to touch the hosts file.
However, in this instance, my local DNS proxy seemed to have a corrupted cache entry for my top-level domain. I purged the cache and restarted the proxy, and that fixed everything. I don't know exactly why this happened, but it's good to know that it can, so it will be the first place I look if something similar happens in the future.
Thank you to @AlexeyTen for making me think outside the box. While it wasn't the browser's DNS cache, that comment made me realize that perhaps there was nothing wrong with the DNS settings on my server, and that the problem was on my local computer instead.
I want to proxy https requests to a certain domain to another address:
server {
    server_name site1;
    listen 443;
    ssl on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://172.17.0.1:44110;
    }
}
Nginx complains with:
nginx: [emerg] no "ssl_certificate" is defined for the "ssl" directive in /etc/nginx/nginx.conf:33
The point is that the certificate is actually on the proxied server.
How can I tell nginx not to terminate the SSL layer and simply proxy the traffic to the configured upstream?
What I am looking for is something similar to this, but with server_name support.
You probably need to add:
proxy_ssl_session_reuse off;
see here
What you want to do is not possible in NGINX, as far as I know. I had actually written out an answer before realizing it duplicated the link you provided to another Stack Overflow answer. Consider what you are asking: in effect, for NGINX to man-in-the-middle the communication between the client browser and your origin. You don't really want this to be possible, as it would make SSL/TLS quite useless.
You will either need to do what the linked StackOverflow answer does with the stream module, or you will need to move the certificate to be hosted by NGINX.
Cloudflare has created "Keyless" SSL, which allows the private key material to be hosted elsewhere, but only the origin side of it is open source. You would have to modify NGINX to implement the proxy side of the protocol, though perhaps someone has already done that. This is likely overkill for your needs.
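For the stream-module route mentioned above, the ssl_preread module can add something close to server_name support: it reads the SNI name from the ClientHello without decrypting anything, so the certificate stays on the proxied server. A sketch, assuming nginx is built with ngx_stream_ssl_preread_module (the hostnames and the default backend are placeholders):

```nginx
stream {
    # Route by the client's SNI name without terminating TLS.
    map $ssl_preread_server_name $upstream {
        site1.example.com  172.17.0.1:44110;
        default            127.0.0.1:8443;
    }

    server {
        listen 443;
        ssl_preread on;          # extract the SNI name from the ClientHello
        proxy_pass $upstream;    # pass the raw TLS bytes through untouched
    }
}
```

This is as close as passthrough can get to name-based routing: the selection is driven by the client's SNI, and the TLS session itself is never opened.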
I have a problem with a particular nginx setup. The scenario is this: applications need to access a CouchDB service via an nginx proxy. nginx needs to set an authorization header in order to get access to the backend. The problem is that the backend endpoint's DNS record changes sometimes, and that causes my services to stop working until I reload nginx.
I'm trying to set up the upstream as a variable, but when I do, authorization stops working and the backend returns 403. When I just use the upstream directive, it works fine. The upstream variable has the correct value, and there are no errors in the logs.
The config snippet below:
set $backend url.to.backend;

location / {
    proxy_pass https://$backend/api;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_set_header Host url.to.backend;
    proxy_set_header Authorization "Basic <authorization_gibberish>";
    proxy_temp_path /mnt/nginx_proxy;
}
Any help will be appreciated.
Unless you have the commercial version, nginx caches the resolution of an upstream (proxy_pass is basically a one-server upstream), so the only way to re-resolve it is to restart nginx or reload the configuration. This is assuming the changing DNS record is the issue.
From the upstream module documentation:
Additionally, the following parameters are available as part of our commercial subscription:
...
resolve - monitors changes of the IP addresses that correspond to a domain name of the server, and automatically modifies the upstream configuration without the need of restarting nginx (1.5.12)
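With the open-source version, the usual workaround is the variable trick from the question combined with a resolver directive: when proxy_pass contains a variable, nginx resolves the name at request time through the configured resolver, honoring the record's TTL (or the valid= override). A sketch; the resolver address is an assumption, so substitute one your host trusts:

```nginx
resolver 127.0.0.53 valid=30s;   # re-resolve the backend name at most every 30s

set $backend url.to.backend;

location / {
    proxy_pass https://$backend/api;
    # Keep setting Host and Authorization explicitly; with a variable
    # proxy_pass the backend may otherwise see an unexpected Host value,
    # which could explain a 403 from a name-based virtual host.
    proxy_set_header Host url.to.backend;
    proxy_set_header Authorization "Basic <authorization_gibberish>";
}
```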
I deployed a Meteor app to a DigitalOcean droplet and mapped it to a domain. I'm pretty new to server management, so I followed a guide to set up a reverse proxy with nginx pointing to the correct port (the Meteor app is served on port 3000).
I created a file called trackburnr.com in /etc/nginx/sites-available with this content:
server {
    listen 80;
    server_name trackburnr.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Then I started/restarted the nginx service.
Now, here's the catch: if I navigate to trackburnr.com:3000, it always works, so I'm confident my droplet and the DNS record on the domain are fine.
If I navigate to trackburnr.com, it seems to work fine, but if I refresh the page after a few minutes or open it in another browser, it returns the "page not found" page from my internet provider.
If I restart the service, it usually works for another few minutes and then stops working again.
There are several guides about this as it's a popular setup for deploying meteor apps, but they all use this same approach.
Following another answer on here, I tried setting proxy_pass as a variable beforehand and passing that, but with no success.
Has anyone encountered similar issues?
I think I figured it out. My domain provider had a DNS redirect set up which redirected trackburnr.com to www.trackburnr.com. Obviously that subdomain wasn't mapped in nginx.
I reversed the redirect so that www redirected to the non-www version, and that seemed to do the trick.
After that I was getting 400 Bad Request errors. I attribute this to the Google Analytics code in my header, which made the cookies too big. I fixed this by adding large_client_header_buffers 4 16k; to the server block in my nginx conf file. More info about that here
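Another way to avoid the www/non-www mismatch entirely is to have nginx answer for both names. A sketch combining that with the header-buffer fix; the www alias is my addition, the rest mirrors the config above:

```nginx
server {
    listen 80;
    server_name trackburnr.com www.trackburnr.com;

    # Large cookies (e.g. from Google Analytics) can overflow the default
    # header buffers and trigger 400 Bad Request; enlarge them.
    large_client_header_buffers 4 16k;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```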
I had a question regarding proxy_pass. A lot of tutorials show a configuration like this in some form, with a port identified:
location / {
    proxy_pass http://x.x.x.100:80;
    proxy_set_header X-Real-IP $remote_addr;
}
Can someone explain to me why the port needs to be used? Does it need to be a specific number, or is it even necessary?
The explicitly specified port is:
not necessary if you're reverse proxying to something on the default http (80) or https (443) port
necessary if you're reverse proxying to something running on any non-default port (common when your application server and web server are on the same host)
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass shows examples without the port number
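To make the equivalence concrete, these forms behave as follows (the addresses are placeholders):

```nginx
location / {
    proxy_pass http://x.x.x.100;        # no port: defaults to 80 for http://
    # proxy_pass http://x.x.x.100:80;   # identical to the line above
    # proxy_pass http://x.x.x.100:8080; # a non-default port must be explicit
    # proxy_pass https://x.x.x.100;     # no port: defaults to 443 for https://
}
```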
I'm using nginx on OpenWRT to reverse-proxy a motion-jpeg feed from an IP camera, but I'm experiencing lag of up to 10-15 seconds, even at quite low frame sizes and rates. With the OpenWRT device removed from the path, the camera can be accessed with no lag at all.
Because of the length of the delay (and the fact that it grows with time), this looks like some kind of buffering/caching issue. I have already set proxy_buffering off, but is there something else I should be watching out for?
Thanks.
I installed mjpg-streamer on an Arduino Yun, and then in my router's settings set up port forwarding, whitelisted to my web server only.
Here is my Nginx config which lives in the sites-enabled directory.
server {
    listen 80;
    server_name cam.example.com;

    error_log /var/log/nginx/error.cam.log;
    access_log /var/log/nginx/access.cam.log;

    location / {
        set $pp_d http://99.99.99.99:9999/stream_simple.html;

        if ( $args = 'action=stream' ) {
            set $pp_d http://99.99.99.99:9999/$is_args$args;
        }
        if ( $args = 'action=snapshot' ) {
            set $pp_d http://99.99.99.99:9999/$is_args$args;
        }

        proxy_pass $pp_d;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Request-Start $msec;
    }
}
I never got this working to my satisfaction with nginx. Depending on your specific needs, two solutions which may be adequate:
if you can tolerate the stream being on a different port, pass it through using the port forwarding feature of OpenWRT's built-in firewall.
use the reverse-proxy capabilities of tinyproxy. The default package has the reverse-proxy capabilities disabled by a flag, so you need to be comfortable checking out and building it yourself. This method is definitely more fiddly, but does also work.
I'd still be interested to hear of anyone who gets this working with nginx.
I have Nginx on Openwrt BB (wndr3800) reverse-proxying to a dlink 932LB1 ip cam, and it's working nicely. No significant lag, even before I disabled proxy_buffering. If I have a lot of stuff going over the network, the video can get choppy, but no more than it does with a straight-to-camera link from the browser (or from any of my ip cam apps). So... it is possible.
Nginx was the way to go for me. I tried tinyproxy & lighttpd for the reverse proxying, but each has missing features on OpenWrt. Both tinyproxy and lighttpd require custom compilation for the full reverse proxy features, and (AFAIK) lighttpd will not accept FQDNs in the proxy directive.
Here's what I have going:
Basic or digest auth on public facing Nginx provides site-wide access control.
I proxy my CGI scripts (shell, haserl, etc) to Openwrt's uhttpd.
Tightly controlled reverse-proxy to the camera mjpeg & jpeg API; no other camera functions are exposed to the public.
Camera basic-auth handled by Nginx (proxy_set_header), so no backend authorization code is exposed to the public.
Relatively small footprint (no perl, apache, ruby, etc).
I would include my nginx.conf here, except there's nothing unusual about it... just the bare bones proxy stuff. You might try tcpdump or wireshark to see what's cluttering your LAN, if traffic is indeed your culprit.
But it sounds like something about your router is the cause of the delay. Maybe the hardware just can't handle the CPU/traffic load, or something else in your OpenWrt setup is hogging the highway. Is your video smooth but delayed? Or are you seeing seriously choppy video? The lengthening delay you mention does sound like a buffer/cache thing, but I don't know what would be doing that.
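For comparison, a bare-bones mjpeg proxy block might look like this; the camera address is a placeholder, and proxy_buffering off is the key directive for a never-ending stream:

```nginx
location /cam/ {
    proxy_pass http://192.168.1.50:8080/;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;            # flush each frame to the client immediately
    proxy_request_buffering off;    # don't buffer the (tiny) request either
    proxy_read_timeout 1h;          # mjpeg connections stay open a long time
}
```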