Not able to change NGINX port - nginx

The problem is that I can't access the server remotely unless I use port 80, and I want to use a different port.
Here's the NGINX configuration I'm using. This will work on port 80. However, if I change
listen 80;
to
listen 6000;
it does not work when accessed from outside the local machine.
In other words, curl 127.0.0.1:6000 works on the machine itself. However, visiting 184.169.100.100:6000 externally does not work. (Pretending that's my public IP address.) It gives me a "site can't be reached" error in Chrome.
I've checked the security settings to make sure port 6000 is open. It's an AWS EC2 instance.
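For completeness, these are the kinds of commands (run on the instance itself) that confirm nginx is actually bound to the new port and answering locally, so the problem can be isolated to the network path:

```shell
sudo ss -tlnp | grep ':6000'     # is nginx listening on 6000?
curl -I http://127.0.0.1:6000    # does it answer locally?
```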
Optional note to put things in context: overall, I'm trying to set up two different servers on one machine, each accessible from a different port and each running its own set of python workers. As a first step, I just want to make sure I can change the port by which a server is accessed; so far I can't even do that and still access it externally.
ubuntu#ip-172-31-9-113:/etc/nginx/conf.d$ cat flask.conf
upstream gunicorn_server {
    server localhost:8080 fail_timeout=0;
}

server {
    listen 80;
    server_name 184.169.100.100;

    root /home/ubuntu/www;
    client_max_body_size 4G;
    keepalive_timeout 5;
    proxy_read_timeout 900;

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        # pass to the upstream gunicorn server mentioned above
        proxy_pass http://gunicorn_server;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
Any help is appreciated.

This happens because of the firewall.
On Ubuntu, if ufw is enabled:
sudo ufw allow 6000
On CentOS, you could disable SELinux, but that is not recommended; instead, look up the SELinux configuration for allowing a port (typically semanage port -a -t http_port_t -p tcp 6000).
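On Ubuntu it is worth checking the firewall state before changing it; these are the standard ufw commands for that (run with root privileges on the server):

```shell
sudo ufw status verbose    # shows whether ufw is active and which rules exist
sudo ufw allow 6000/tcp    # opens 6000 for TCP only, slightly tighter than 'allow 6000'
```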

Is port 6000 open?
Check the inbound port rules.
Then run sudo ufw allow 6000 and check whether you can access it on port 6000 as well.
(This question is old, but I'm still answering because this answer may help someone facing the same issue.)

Try these commands:
firewall-cmd --zone=public --add-port=6000/tcp --permanent
firewall-cmd --reload
The reload is needed because rules added with --permanent only take effect after one.

Answering my own question:
I tried using port 6001 instead of 6000, and then it worked; I could access it from outside. I couldn't say why at the time, but the likely explanation is that Chrome refuses to connect on a list of "unsafe" ports, and 6000 (the X11 port) is on that list, which would produce exactly the "site can't be reached" error I saw.

Related

Nginx 1.18 (on Ubuntu 20.04) proxy_pass not working

I'm new to Nginx proxy_pass. I want to map the app1 subdomain to the /app1 path while the URL stays on the subdomain. I've tried again and again with no luck, and I've been stuck on this for two days.
server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://example.com/app1$request_uri;
        proxy_set_header Host app1.example.com;
    }
}
The output is a 502 Bad Gateway. But when I access http://example.com/app1 manually, it shows fine; it just can't be reached through proxy_pass.
I've tried many suggestions from blogs on the internet, but nothing has solved it.
Thanks in advance.

502 error bad gateway with nginx on a server behind corporate proxy

I'm trying to install a custom service on one of our corporate servers (the kind of server that is not connected to the internet; all traffic passes through a corporate proxy).
This proxy has been set up with the classic export http_proxy=blablabla in the .bashrc file, among other things.
Now the interesting part: I'm trying to configure nginx to redirect all traffic from server_name to the local URL localhost:3000.
Here is the basic configuration I use. Nothing too tricky.
server {
    listen 443 ssl;
    ssl_certificate /crt.crt;
    ssl_certificate_key /key.key;
    server_name x.y.z;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
When I try to access the server_name from my browser, I get a 502 error (an nginx error, so the request does hit my server).
When I try the local URL with curl --noproxy '*' https://localhost:3000 from the server, it works. (I have to pass the --noproxy '*' flag because of the export http_proxy=blablabla setup in .bashrc; without it, the localhost request is sent to our distant proxy and the request fails.)
My guess is that this has to be related to the corporate proxy configuration, but I might be mistaken.
Do you have any insights that you could share with me about this?
Thanks a lot !
PS: the issue is not related to any kind of SSL configuration; that part works great.
PS2: I'm not a sysadmin, and all these issues are confusing.
PS3: the server I'm working on is RHEL 7.9.
It had nothing to do with the proxy. I found my solution here:
https://stackoverflow.com/a/24830777/4991067
Thanks anyway.

nginx using custom port

I have installed LEMP on Ubuntu. I am using port 4738 as the nginx listen port. Everything works fine, and I can access the page as 123.123.123.123:4738.
I want to get rid of the port in the URL. How do I do that? I have read many answers on SO and tried them, but they didn't work for me. Here is an example:
port_in_redirect off;
location / {
    proxy_pass http://123.123.123.123:4738;
}
and
proxy_redirect http://123.123.123.123 http://123.123.123.123:4738;
port_in_redirect off;
If you're not using the default port for the protocol, you have to have the port in the URL. You have to listen on port 80 if you expect to not have the port in your HTTP URL.
Configure your server to listen on port 80.
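If nginx itself is the listener, the change is just the listen directive; a minimal sketch (the IP is the placeholder from the question, and the rest of the existing server block stays as it is):

```nginx
server {
    listen 80;          # default HTTP port, so the URL needs no :port suffix
    server_name 123.123.123.123;
    # ...rest of the existing server block unchanged...
}
```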

Nginx as Reverse Proxy for Docker VHosts

I'm currently trying to build my own webserver/service and wanted to set things up like this:
Wordpress for the main "blog"
Gitlab for my git repositories
Owncloud for my data storage
I've been using Docker to get a nice little gitlab running, which works perfectly fine, mapped to port :81 on my webserver with my domain.
What annoys me a bit is that Docker images are always bound to a specific port number and thus not really easy to remember, so I'd love to do something like this:
git.mydomain.com for gitlab
mydomain.com (no subdomain) for my blog
owncloud.mydomain.com for owncloud
As far as I understood, I need a reverse proxy for this, which I decided to use nginx for. So I set things up like this:
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name mydomain.com;

        location / {
            proxy_pass http://localhost:84;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

    server {
        listen 80;
        server_name git.mydomain.com;

        location / {
            proxy_pass http://localhost:81;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
This way, I have git.mydomain.com up and running flawlessly, but my wordpress just shows me a blank webpage. My DNS is set up like this:
Host   Type    MX   Destination
*      A            IP
#      A            IP
www    CNAME        #
Am I just too stupid, or what's going on here?
I know your question is more specifically about your Nginx proxy configuration, but I thought it would be useful to give you this link which details how to set up an Nginx docker container that automagically deploys configurations for reverse-proxying those docker containers. In other words, you run the reverse proxy and then your other containers, and the Nginx container will route traffic to the others based on hostname.
Basically, you pull the proxy container and run it with a few parameters set in the docker run command, and then you bring up the other containers which you want proxied. Once you've got docker installed and pulled the nginx-proxy image, the specific commands I use to start the proxy:
docker run -d --name="nginx-proxy" --restart="always" -p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock jwilder/nginx-proxy
And now the proxy is running. You can verify by pointing a browser at your address, which should return an Nginx 502 or 503 error. You'll get the errors because nothing is yet listening. To start up other containers, it's super easy, like this:
docker run -d --name="example.com" --restart="always" \
-e "VIRTUAL_HOST=example.com" w3b1x/mywebcontainer
That -e "VIRTUAL_HOST=example.com" is all it takes to get your Nginx proxy routing traffic to the container you're starting.
I've been using this particular method since I started with Docker and it's really handy for exactly this kind of situation. The article I linked gives you step-by-step instructions and all the information you'll need. If you need more information (specifically about implementing SSL in this setup), you can check out the git repository for this software.
Your nginx config looks sane; however, you are hitting localhost:xx, which is wrong. It should be either gatewayip:xx or, better, target_private_ip:80.
An easy way to deal with this is to start your containers with --link and to "inject" the ip via a shell script: have the "original" nginx config with a placeholder instead of the ip, then sed -i with the value from the environment.
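A sketch of that placeholder idea, assuming the template lives at a made-up path, the placeholder is called UPSTREAM_IP, and the IP is a stand-in value (in a real setup it would come from the linked container, e.g. via docker inspect):

```shell
# Write a template with a placeholder where the container IP should go.
cat > /tmp/site.conf.tmpl <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://UPSTREAM_IP:80;
    }
}
EOF

TARGET_IP=172.17.0.2   # stand-in for the container's private IP
# Substitute the placeholder and produce the config nginx will actually load.
sed "s/UPSTREAM_IP/$TARGET_IP/" /tmp/site.conf.tmpl > /tmp/site.conf
grep proxy_pass /tmp/site.conf   # shows the substituted proxy_pass line
```

After this, the generated /tmp/site.conf contains the real IP and can be dropped into the nginx container before a reload.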

Installed gitlab, but only nginx welcome page shows

I installed gitlab using its installation guide. Everything was OK, but when I open localhost:80 in the browser, all I see is the message Welcome to nginx!. I can't find any log file with any errors in it.
I am running Ubuntu in VirtualBox. My /etc/nginx/sites-enabled/gitlab config file reads:
# GITLAB
# Maintainer: @randx
# App Version: 3.0

upstream gitlab {
    server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen 192.168.1.1:80;         # e.g., listen 192.168.1.1:80;
    server_name aridev-VirtualBox; # e.g., server_name source.example.com;
    root /home/gitlab/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from the defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300;    # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}
The nginx documentation says:
Server names are defined using the server_name directive and determine which server block is used for a given request.
That means in your case that you have to enter aridev-VirtualBox in your browser instead of localhost.
To get this working, you have to add aridev-VirtualBox to your local hosts file and point it to the IP of your VirtualBox machine. That would look something like this:
192.168.1.1   aridev-VirtualBox
I removed /etc/nginx/sites-enabled/default to get rid of that problem.
Try both following orkoden's advice of removing the default site from /etc/nginx/sites-enabled/ and commenting out your listen line, since the implied default there should be sufficient.
Also, make sure that when you make changes to these configurations, shut down both the gitlab and nginx services and start them in the order of gitlab first, followed by nginx.
Your configuration file, /etc/nginx/sites-enabled/gitlab, is right.
Maybe your gitlab symlink is wrong. For example:
ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/gitlab
Afterwards, please check that the content of default matches the content of your /etc/nginx/sites-enabled/gitlab.
I changed this line:
proxy_pass http://gitlab;
to this:
proxy_pass http://localhost:3000;
(3000 is the port of my unicorn server.)
Moreover, I did a chown root:nginx on the conf file, and it works now.
Old topic, but this can happen when there is a previously installed nginx.
$ gitlab-ctl reconfigure
or a restart will not complain, but the previous nginx instance may actually be running instead of the one bundled with gitlab.
This just happened to me.
Shut down and disable the old nginx instance and run again:
$ gitlab-ctl reconfigure
