Problem:
Nginx doesn't route traffic based on the rule I have defined in a separate config file, and just displays the default 404 response.
Context:
I have a small middleware application written in Go that provides a simple response to GET requests. The application is deployed on port 8080:
$ curl localhost:8080
ok
I wish to write an Nginx configuration that routes calls from /api to localhost:8080, which would allow me to do the following:
$ curl localhost/api
ok
To achieve this, I have written the following config:
/etc/nginx/sites-available/custom-nginx-rules
server {
    listen 80;

    location /api {
        proxy_pass http://localhost:8080;
    }
}
I have also created a symlink in /etc/nginx/sites-enabled/ for the above file:
$ ls -l /etc/nginx/sites-enabled
total 0
lrwxrwxrwx 1 root root 34 Jan 19 16:42 default -> /etc/nginx/sites-available/default
lrwxrwxrwx 1 root root 32 Feb 20 14:56 custom-nginx-rules -> /etc/nginx/sites-available/custom-nginx-rules
The rest of the setup is vanilla Nginx, nothing is changed.
Despite this simple setup, I get a 404 when making the following call:
$ curl localhost/api
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.10.3</center>
</body>
</html>
Other info: the following nginx packages are installed on my system (running on a Raspberry Pi):
$ dpkg -l | grep nginx
ii libnginx-mod-http-auth-pam 1.10.3-1+deb9u1 armhf PAM authentication module for Nginx
ii libnginx-mod-http-dav-ext 1.10.3-1+deb9u1 armhf WebDAV missing commands support for Nginx
ii libnginx-mod-http-echo 1.10.3-1+deb9u1 armhf Bring echo and more shell style goodies to Nginx
ii libnginx-mod-http-geoip 1.10.3-1+deb9u1 armhf GeoIP HTTP module for Nginx
ii libnginx-mod-http-image-filter 1.10.3-1+deb9u1 armhf HTTP image filter module for Nginx
ii libnginx-mod-http-subs-filter 1.10.3-1+deb9u1 armhf Substitution filter module for Nginx
ii libnginx-mod-http-upstream-fair 1.10.3-1+deb9u1 armhf Nginx Upstream Fair Proxy Load Balancer
ii libnginx-mod-http-xslt-filter 1.10.3-1+deb9u1 armhf XSLT Transformation module for Nginx
ii libnginx-mod-mail 1.10.3-1+deb9u1 armhf Mail module for Nginx
ii libnginx-mod-stream 1.10.3-1+deb9u1 armhf Stream module for Nginx
ii nginx 1.10.3-1+deb9u1 all small, powerful, scalable web/proxy server
ii nginx-common 1.10.3-1+deb9u1 all small, powerful, scalable web/proxy server - common files
ii nginx-full 1.10.3-1+deb9u1 armhf nginx web/proxy server (standard version)
I also require that this setup is independent of any host or server names. It should do the routing regardless of host.
Without seeing your full configuration, it seems likely that the default nginx server block is accepting the request rather than yours. You can try to fix this by changing the listen line to:
listen 80 default_server;
You can also confirm that this is the case by adding a server_name and curling using that:
server_name api.example.com;
Then:
curl -H "Host: api.example.com" http://localhost/api
If that works, the issue is definitely the default_server handling.
From the NGINX docs on server selection:
In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. In the configuration above, the default server is the first one — which is nginx’s standard default behaviour. It can also be set explicitly which server should be default, with the default_server parameter in the listen directive:
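Putting this together, a minimal sketch of /etc/nginx/sites-available/custom-nginx-rules (assuming nothing else on port 80 claims default_server):
server {
    listen 80 default_server;

    location /api {
        # forward /api requests to the Go app on port 8080
        proxy_pass http://localhost:8080;
    }
}
Only one server block per listen port may carry the default_server flag, so you may need to remove it from (or unlink) the stock default site first, then run sudo nginx -t && sudo systemctl reload nginx.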
Related
I have an NGINX configuration that forwards HTTP to HTTPS. It works fine on my local system but fails on an AWS EC2 instance.
Here's the only configuration I have added to NGINX, the rest is left intact:
server {
listen 58080;
server_name localhost;
location / {
proxy_pass https://acme.com;
}
}
When serving the content (a single web page UI app) from my laptop, everything works as expected. When trying to serve the content from AWS, I keep seeing errors in the dev-tools console as below:
GET https://ec2-*****.compute.amazonaws.com:58080/assets/index.5c112cd8.css net::ERR_SSL_PROTOCOL_ERROR
GET https://ec2-*****.compute.amazonaws.com:58080/assets/index.5660fafb.js net::ERR_SSL_PROTOCOL_ERROR
GET https://ec2-*****.compute.amazonaws.com:58080/favicon.ico net::ERR_SSL_PROTOCOL_ERROR
Something rewrites the protocol from HTTP to HTTPS. I tried copying these links and changing the protocol to HTTP, and that works for those links specifically.
I tried adding additional directives such as proxy_set_header Host $proxy_host; and/or changing server_name to _; or to the specific host of my EC2 instance, but still got the same result.
Environment on both systems:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
$ nginx -v
nginx version: nginx/1.22.0
I will add a self-signed certificate to NGINX next, which I believe will resolve my issues, but I am curious why it works on my localhost while on AWS it does not.
I'm using the Gitea version control system in a Docker environment. The Gitea image used is the rootless type.
The http port mapping is "8084:3000" and the ssh port mapping is "2224:2222".
I generated the keys on my Linux host and added the generated public key to my Gitea account.
1. Test environment
Then I created the ssh config file (nano /home/campos/.ssh/config):
Host localhost
    HostName localhost
    User git
    Port 2224
    IdentityFile ~/.ssh/id_rsa
After finishing the settings I created the myRepo repository and cloned it.
To perform the clone, I changed the url from ssh://git@localhost:2224/campos/myRepo.git to git@localhost:/campos/myRepo.git
To clone the repository I typed: git clone git@localhost:/campos/myRepo.git
This worked perfectly!
2. Production environment
However, when defining a reverse proxy and a domain name, it was not possible to clone the repository.
Before performing the clone, I changed the ssh configuration file:
Host gitea.domain.com
    HostName gitea.domain.com
    User git
    Port 2224
    IdentityFile ~/.ssh/id_rsa
Then I tried to clone the repository again:
git clone git@gitea.domain.com:/campos/myRepo.git
A connection refused message was shown:
Cloning into 'myRepo'...
ssh: connect to host gitea.domain.com port 2224: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I understand the message appears because, by default, the proxy doesn't handle ssh requests.
Searching a bit, several links say to use the "stream" module in Nginx.
But I didn't understand at first how to do this configuration. I need to keep accessing the proxy server itself on port 22, and redirect port 2224 of the proxy to port 2224 of the docker host.
The gitea.conf configuration file i use is as follows:
server {
    listen 443 ssl http2;
    server_name gitea.domain.com;

    # SSL
    ssl_certificate /etc/nginx/ssl/mycert_bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/mycert.key;

    # logging
    access_log /var/log/nginx/gitea.access.log;
    error_log /var/log/nginx/gitea.error.log warn;

    # reverse proxy
    location / {
        proxy_pass http://192.168.10.2:8084;
        include myconfig/proxy.conf;
    }
}

# HTTP redirect
server {
    listen 80;
    server_name gitea.domain.com;
    return 301 https://gitea.domain.com$request_uri;
}
3. Redirection in Nginx
I spent several hours trying to understand how to configure Nginx's "stream" feature. Below is what I did.
At the end of the nginx.conf file I added:
stream {
    include /etc/nginx/conf.d/stream;
}
In the stream file in conf.d, I added the content below:
upstream ssh-gitea {
    server 10.0.200.39:2224;
}

server {
    listen 2224;
    proxy_pass ssh-gitea;
}
I tested the Nginx configuration and restarted the service:
nginx -t && systemctl restart nginx.service
I checked whether ports 80, 443, 22 and 2224 were open on the proxy server:
ss -tulpn
This configuration made it possible to perform the ssh clone of a repository with a domain name.
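As a quick sanity check, an ssh test such as the one below should reach the Gitea container through the stream proxy (hostname as configured above; Gitea normally answers with a greeting rather than a shell):
$ ssh -T -p 2224 git@gitea.domain.com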
4. Clone with ssh correctly
After all these settings, I realized it is possible to use the original url ssh://git@gitea.domain.com:2224/campos/myRepo.git in the clone.
When typing the command git clone ssh://git@gitea.domain.com:2224/campos/myRepo.git, it is not necessary to define the config file in ssh.
This link helped me:
https://discourse.gitea.io/t/password-is-required-to-clone-repository-using-ssh/5006/2
The steps above explain my solution, so I'm marking this question as solved.
I have a VPS running Ubuntu + Nginx. There's an old website I'm no longer using, so I followed these steps (based on these instructions, and these to remove the SSL):
cd /etc/nginx/sites-enabled
sudo rm oldwebsite.com
cd ../sites-available
sudo rm oldwebsite.com
Next, I figured I could also delete the relevant files in /var/www/
cd /var/www/
sudo rm -r oldwebsite.com
Now when I try to access www.oldwebsite.com, I still get the same website, just without HTTPS anymore. I've checked /etc/nginx/sites-available/default for any remaining references to that website, but as far as I know, I've erased all traces of its existence from my server.
Was this the incorrect way to delete an old website?
If it helps, my old website was set up to use a reverse proxy to direct to my Express app. It was set up as a server block according to this guide.
First of all, if you don't want the site to be accessible anymore, delete the host's A record in your DNS. With this, the DNS query will not point to any server's IP address.
Based on your comment: if it's showing the Apache default page, your DNS points to the IP address of a webserver running httpd. So let me draft a couple of steps for how I would do it (as somebody who has moved 10K+ sites from and to NGINX).
1. DNS is key
Check the current DNS settings for your domain. Do a quick lookup using tools like host or dig.
$# host nginx.org
nginx.org has address 52.58.199.22
Great, now we know the public IPv4 address of our webserver. (We are not talking about load balancers or anything else in between; we assume the webserver is directly connected to the internet.)
2. WebServer configuration
On your server, make sure nginx is installed and listening, for example on port 80.
$# netstat -tulpn | grep "LISTEN"
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 25674/nginx: master
Great. We have NGINX listening on port 80. Let's make sure we can send requests:
$# curl -v http://YOURDOMAIN
* About to connect() to localhost port 80 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.19.5
< Date: Sun, 14 Feb 2021 08:47:56 GMT
< Content-Type: text/plain
< Content-Length: 10
< Connection: keep-alive
<
localhost
* Connection #0 to host localhost left intact
So if you got a response, that means your NGINX is up and running, listening on port 80, and there is no firewall (ufw, firewalld, iptables, security groups...) blocking you from reaching the server.
NOTICE: Make sure your firewall setup is done right. Let me know if you need more information on that depending on your systems architecture.
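For example, with ufw on Ubuntu the check and the rules could look like this (just a sketch; adapt it to whichever firewall your system actually uses):
$ sudo ufw status
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp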
NGINX Configuration
Let's say your website should just print out a string saying "We will be here shortly!"
Depending on your OS, the configuration directory for custom nginx files can differ. Check the default /etc/nginx/nginx.conf file and look at the include path in the http context. This should be something like: include conf.d/*.conf or sites-enabled/*.conf. Create a conf-file in one of those directories.
server {
    listen 80;
    server_name YOURDOMAIN.com;

    location / {
        # sets the Content-Type for the returned string
        default_type text/plain;
        return 200 "We will be here shortly!\n";
    }
}
With this simple setup you have a webserver up and running, though not showing anything special. If you want to serve a little HTML file instead, feel free to do so and use root and index in your nginx configuration, as sketched below.
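A minimal sketch of that variant, assuming your files live in /var/www/YOURDOMAIN (a placeholder path):
server {
    listen 80;
    server_name YOURDOMAIN.com;

    # serve static files from this directory instead of a fixed string
    root /var/www/YOURDOMAIN;
    index index.html;
}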
After deleting the files, run sudo systemctl restart nginx so the change takes effect.
I would like to set up nginx to route requests to different servers depending on which domain they point to.
The nginx server environment is below.
CentOS Linux release 7.3.1611 (Core)
nginx 1.11.8
* configured with the --with-stream parameter; built & installed from source.
What I have in mind:
server1.testdomain.com ssh request -> (global IP) *nginx server -> (local IP) 192.168.1.101 server
server2.testdomain.com ssh request -> (global IP) *nginx server -> (local IP) 192.168.1.102 server
The nginx server is the same server with the same global IP in both cases.
nginx.conf is ...
stream {
    error_log /usr/local/nginx/logs/stream.log info;

    upstream server1 {
        server 192.168.1.101:22;
    }
    upstream server2 {
        server 192.168.1.102:22;
    }

    server {
        listen 22 server1.testdomain.com;
        proxy_pass server1;
    }
    server {
        listen 22 server2.testdomain.com;
        proxy_pass server2;
    }
}
But...
nginx: [emerg] the invalid "server1.testdomain.com" parameter in ...
This error occurred. It seems impossible to use something like listen 22 server1.testdomain.com.
And when I tried writing "server_name" inside the "server" block:
nginx: [emerg] "server_name" directive is not allowed here in ...
So "server_name" is not permitted in a stream "server" block.
How do I write the config file to route requests for different domains to different servers?
If you have any ideas or information, please let me know.
It's not possible with nginx, because the stream module is an L4 balancer while the SSH protocol works at L5/7.
In fact, it's not possible at all, because the ssh negotiation does not include the destination host name, so there is nothing for a proxy to route on.
You can do what you want only by using two different IPs or two different ports. In both cases nginx can forward the connection, but iptables would be a better fit here.
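For illustration, a minimal sketch of the two-port variant in nginx's stream context (ports 2201 and 2202 are arbitrary placeholders):
stream {
    # ssh on port 2201 goes to server1, port 2202 to server2
    server {
        listen 2201;
        proxy_pass 192.168.1.101:22;
    }
    server {
        listen 2202;
        proxy_pass 192.168.1.102:22;
    }
}
The iptables equivalent would be one DNAT rule per port, e.g. iptables -t nat -A PREROUTING -p tcp --dport 2201 -j DNAT --to-destination 192.168.1.101:22.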
I installed GitLab CE on a dedicated Ubuntu 14.04 server edition with the Omnibus package.
Now I would like to install three other virtual hosts next to GitLab.
Two are node.js web applications launched by a non-root user running on two distinct ports > 1024; the third is a PHP web application that needs a web server to be launched from.
There are:
a private bower registry running on 8081 (node.js)
a private npm registry running on 8082 (node.js)
a private composer registry (PHP)
But Omnibus listens on 80 and doesn't seem to use either Apache2 or Nginx, thus I can't use them to serve my PHP app and reverse-proxy my two other node apps.
What serving mechanism does GitLab Omnibus use to listen on 80?
How should I create the three other virtual hosts to be able to provide the following vHosts?
gitlab.mycompany.com (:80) -- already in use
bower.mycompany.com (:80)
npm.mycompany.com (:80)
packagist.mycompany.com (:80)
About these:
But Omnibus listen 80 and doesn't seem to use neither Apache2 or Nginx [, thus ...].
and @stdob's comment:
Did omnibus not use nginx as a web server ??? –
to which I responded:
I guess not, because the nginx package isn't installed in the system ...
In fact
From the GitLab official docs:
By default, omnibus-gitlab installs GitLab with bundled Nginx.
So yes!
The Omnibus package actually uses Nginx!
But it is bundled, which explains why it doesn't need to be installed as a dependency on the host OS.
Thus YES! Nginx can, and should, be used to serve my PHP app and reverse-proxy my two other node apps.
Then now
Omnibus-gitlab allows webserver access through the user gitlab-www, which resides
in the group with the same name. To allow an external webserver access to
GitLab, the external webserver user needs to be added to the gitlab-www group.
To use another web server like Apache or an existing Nginx installation you will have to do
the following steps:
Disable bundled Nginx by specifying in /etc/gitlab/gitlab.rb
nginx['enable'] = false
# For GitLab CI, use the following:
ci_nginx['enable'] = false
Check the username of the non-bundled web-server user. By default, omnibus-gitlab has no default setting for external webserver user.
You have to specify the external webserver user username in the configuration!
Let's say for example that webserver user is www-data.
In /etc/gitlab/gitlab.rb set
web_server['external_users'] = ['www-data']
This setting is an array so you can specify more than one user to be added to gitlab-www group.
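For example, to add two webserver users at once (the user names are illustrative):
web_server['external_users'] = ['www-data', 'nginx']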
Run sudo gitlab-ctl reconfigure for the change to take effect.
Setting the NGINX listen address or addresses
By default NGINX will accept incoming connections on all local IPv4 addresses.
You can change the list of addresses in /etc/gitlab/gitlab.rb.
nginx['listen_addresses'] = ["0.0.0.0", "[::]"] # listen on all IPv4 and IPv6 addresses
For GitLab CI, use the ci_nginx['listen_addresses'] setting.
Setting the NGINX listen port
By default NGINX will listen on the port specified in external_url or
implicitly use the right port (80 for HTTP, 443 for HTTPS). If you are running
GitLab behind a reverse proxy, you may want to override the listen port to
something else. For example, to use port 8080:
nginx['listen_port'] = 8080
Similarly, for GitLab CI:
ci_nginx['listen_port'] = 8081
Supporting proxied SSL
By default NGINX will auto-detect whether to use SSL if external_url
contains https://. If you are running GitLab behind a reverse proxy, you
may wish to keep the external_url as an HTTPS address but communicate with
the GitLab NGINX internally over HTTP. To do this, you can disable HTTPS using
the listen_https option:
nginx['listen_https'] = false
Similarly, for GitLab CI:
ci_nginx['listen_https'] = false
Note that you may need to configure your reverse proxy to forward certain
headers (e.g. Host, X-Forwarded-Ssl, X-Forwarded-For, X-Forwarded-Port) to GitLab.
You may see improper redirections or errors (e.g. "422 Unprocessable Entity",
"Can't verify CSRF token authenticity") if you forget this step. For more
information, see:
What's the de facto standard for a Reverse Proxy to tell the backend SSL is used?
https://wiki.apache.org/couchdb/Nginx_As_a_Reverse_Proxy
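On the external proxy, that header forwarding might look like the following sketch (the exact values depend on where TLS terminates):
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
proxy_set_header X-Forwarded-Port 443;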
To go further you can follow the official docs at https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#using-a-non-bundled-web-server
Configuring our gitlab virtual host
Installing Phusion Passenger
We need to install ruby globally in the OS (gitlab runs in omnibus with a bundled ruby):
$ sudo apt-get update
$ sudo apt-get install ruby
$ sudo gem install passenger
Recompile nginx with the passenger module
Unlike Apache2, for example, nginx cannot load binary modules on the fly; it must be recompiled for each new module you want to add.
The Phusion Passenger developer team worked hard to provide, so to speak, "a passenger-bundled nginx version": nginx binaries compiled with the passenger module.
So, let's use it:
requirement: we need to open our TCP port 11371 (the APT key port).
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7
$ sudo apt-get install apt-transport-https ca-certificates
creating passenger.list
$ sudo nano /etc/apt/sources.list.d/passenger.list
with these lines:
# Ubuntu 14.04
deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main
use the right repo for your ubuntu version. For Ubuntu 15.04 for example:
deb https://oss-binaries.phusionpassenger.com/apt/passenger vivid main
Edit permissions:
$ sudo chown root: /etc/apt/sources.list.d/passenger.list
$ sudo chmod 600 /etc/apt/sources.list.d/passenger.list
Updating package list:
$ sudo apt-get update
Allow it in unattended-upgrades:
$ sudo nano /etc/apt/apt.conf.d/50unattended-upgrades
Find or create this config block on top of the file:
// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
// you may have some instructions here
};
Add the following:
// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
// you may have some instructions here
// To check "Origin:" and "Suite:", you could use e.g.:
// grep "Origin\|Suite" /var/lib/apt/lists/oss-binaries.phusionpassenger.com*
"Phusion:stable";
};
Now (re)install nginx-extras and passenger:
$ sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak_"$(date +%Y-%m-%d_%H:%M)"
$ sudo apt-get install nginx-extras passenger
configure it
Uncomment the passenger_root and passenger_ruby directives in the /etc/nginx/nginx.conf file:
$ sudo nano /etc/nginx/nginx.conf
... to obtain something like:
##
# Phusion Passenger config
##
# Uncomment it if you installed passenger or passenger-enterprise
##
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/passenger_free_ruby;
create the nginx site configuration (the virtual host conf)
$ nano /etc/nginx/sites-available/gitlab.conf
server {
    listen *:80;
    server_name gitlab.mycompany.com;
    server_tokens off;
    root /opt/gitlab/embedded/service/gitlab-rails/public;
    client_max_body_size 250m;

    access_log /var/log/gitlab/nginx/gitlab_access.log;
    error_log /var/log/gitlab/nginx/gitlab_error.log;

    # Ensure Passenger uses the bundled Ruby version
    passenger_ruby /opt/gitlab/embedded/bin/ruby;

    # Correct the $PATH variable to include packaged executables
    passenger_env_var PATH "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/usr/local/bin:/usr/bin:/bin";

    # Make sure Passenger runs as the correct user and group to
    # prevent permission issues
    passenger_user git;
    passenger_group git;

    # Enable Passenger & keep at least one instance running at all times
    passenger_enabled on;
    passenger_min_instances 1;

    error_page 502 /502.html;
}
Now we can enable it:
$ sudo ln -s /etc/nginx/sites-available/gitlab.conf /etc/nginx/sites-enabled/
There is no a2ensite equivalent that comes natively with nginx, so we use ln; but if you want one, there is a project on github:
nginx_ensite:
nginx_ensite and nginx_dissite for quick virtual host enabling and disabling
This is a shell (Bash) script that replicates for nginx the Debian a2ensite and a2dissite for enabling and disabling sites as virtual hosts in Apache 2.2/2.4.
It's done :-). Finally, restart nginx:
$ sudo service nginx restart
With this new configuration, you are able to run other virtual hosts next to gitlab to serve what you want.
Just create new configs in /etc/nginx/sites-available.
In my case, this is what I ended up running and serving on the same host:
gitlab.mycompany.com - the awesome git platform written in ruby
ci.mycompany.com - the gitlab continuous integration server written in ruby
npm.mycompany.com - a private npm registry written in node.js
bower.mycompany.com - a private bower registry written in node.js
packagist.mycompany.com - a private packagist for composer registry written in php
For example, to serve npm.mycompany.com :
Create a directory for logs:
$ sudo mkdir -p /var/log/private-npm/nginx/
And fill a new vhost config file:
$ sudo nano /etc/nginx/sites-available/npm.conf
With this config
server {
    listen *:80;
    server_name npm.mycompany.com;
    client_max_body_size 5m;

    access_log /var/log/private-npm/nginx/npm_access.log;
    error_log /var/log/private-npm/nginx/npm_error.log;

    location / {
        proxy_pass http://localhost:8082;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Then enable it and restart it:
$ sudo ln -s /etc/nginx/sites-available/npm.conf /etc/nginx/sites-enabled/
$ sudo service nginx restart
As I would not like to change the nginx server used by gitlab (which has some other integrations), the safest way would be the solution below.
Also, as per
GitLab: Nginx => Inserting custom settings into the NGINX config
edit the /etc/gitlab/gitlab.rb of your gitlab:
nano /etc/gitlab/gitlab.rb
and scroll to nginx['custom_nginx_config'], then modify it as below; make sure to uncomment it:
# Example: include a directory to scan for additional config files
nginx['custom_nginx_config'] = "include /etc/nginx/conf.d/*.conf;"
create the new config dir:
mkdir -p /etc/nginx/conf.d/
nano /etc/nginx/conf.d/new_app.conf
and add content to your new config
# my new app config : /etc/nginx/conf.d/new_app.conf
# set location of new app
upstream new_app {
    server localhost:1234; # wherever it might be
}

# set the new app server
server {
    listen *:80;
    server_name new_app.mycompany.com;
    server_tokens off;

    access_log /var/log/new_app_access.log;
    error_log /var/log/new_app_error.log;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    location / { proxy_pass http://new_app; }
}
and reconfigure gitlab to get the new settings inserted
gitlab-ctl reconfigure
to restart nginx
gitlab-ctl restart nginx
to check nginx error log:
tail -f /var/log/gitlab/nginx/error.log