Nginx and django-socketio gives "Address already in use" error

I am trying to set up django-socketio with uwsgi and nginx, and when I run
sudo uwsgi --ini uwsgi.ini
I get an error saying "Address already in use".
I think I know what the problem is: when I run sudo uwsgi --ini uwsgi.ini, it creates a SocketIOServer on port 80, and since nginx is also running and listening on port 80, the two conflict. But I don't know how to solve it.
Could someone help?
My wsgi.py file looks like:
import os
PORT = 80
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
from socketio import SocketIOServer
print 'Listening on port %s and on port 843 (flash policy server)' % PORT
SocketIOServer(('', PORT), application, resource="socket.io").serve_forever()
And my nginx file looks like:
upstream django {
    server unix:///tmp/uwsgi.sock;
}
server {
    listen 80;
    charset utf-8;
    error_log /home/ubuntu/nginxerror.log;
    location /static {
        alias /home/ubuntu/project/static;
    }
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params;
    }
}

I was looking at django-socketio recently, and I remember I only let socketio listen on port 843.
Any reason why you need to listen on both 80 and 843?
During development, you could try opening port 843 and see if that solves your problem.

Instead of creating a SocketIOServer in your wsgi file, use the built-in runserver_socketio management command and start it on port 9000 using supervisor, then have nginx proxy any requests for /socket.io/ to port 9000.
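A minimal sketch of that nginx proxy rule, added inside the existing server { listen 80; ... } block (the port comes from this answer; the Upgrade/Connection headers are my assumption, needed if the client uses the WebSocket transport):
location /socket.io/ {
    proxy_pass http://127.0.0.1:9000;
    proxy_http_version 1.1;
    # pass WebSocket upgrade headers through to the socket.io process
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
With this, nginx keeps serving the Django app on port 80 via uwsgi, while socket.io traffic goes to the separate process on port 9000, so nothing else tries to bind port 80.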

Related

HTTP proxy_pass to HTTPS works on localhost but not on AWS EC2

I have an NGINX configuration that forwards HTTP to HTTPS. It works fine on my local system but fails on an AWS EC2 instance.
Here's the only configuration I have added to NGINX; the rest is left intact:
server {
    listen 58080;
    server_name localhost;
    location / {
        proxy_pass https://acme.com;
    }
}
When serving the content (a single web page UI app) from my laptop everything works as expected. When trying to serve the content from AWS I keep seeing errors like the ones below in the dev-tools console:
GET https://ec2-*****.compute.amazonaws.com:58080/assets/index.5c112cd8.css net::ERR_SSL_PROTOCOL_ERROR
GET https://ec2-*****.compute.amazonaws.com:58080/assets/index.5660fafb.js net::ERR_SSL_PROTOCOL_ERROR
GET https://ec2-*****.compute.amazonaws.com:58080/favicon.ico net::ERR_SSL_PROTOCOL_ERROR
Something rewrites the protocol from HTTP to HTTPS. If I copy and paste one of these links and change the protocol to HTTP, that link works.
I did try adding additional directives such as proxy_set_header Host $proxy_host; and/or changing server_name to _ or to the specific host of my EC2 instance, but still got the same result.
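For concreteness, a sketch of the variant described above (this combination is my reconstruction from the prose, not a config from the question):
server {
    listen 58080;
    server_name _;
    location / {
        proxy_pass https://acme.com;
        # note: Host $proxy_host is already nginx's default for proxy_pass,
        # which may be why adding it explicitly changed nothing
        proxy_set_header Host $proxy_host;
    }
}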
Both systems env:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
$ nginx -v
nginx version: nginx/1.22.0
I will definitely add a self-signed certificate to NGINX next, which I believe will resolve my issues, but I am curious why it works on localhost while on AWS it does not.
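For what it's worth, the dev-tools output shows the browser speaking HTTPS to port 58080 while nginx listens for plain HTTP there; that mismatch is exactly what ERR_SSL_PROTOCOL_ERROR indicates. If the page really is loaded over HTTPS on AWS, the planned TLS termination might look like the following sketch (the certificate paths are assumptions):
server {
    listen 58080 ssl;
    server_name _;
    # self-signed pair, paths assumed for the example
    ssl_certificate /etc/nginx/ssl/selfsigned.crt;
    ssl_certificate_key /etc/nginx/ssl/selfsigned.key;
    location / {
        proxy_pass https://acme.com;
    }
}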

How to redirect ssh requests in Nginx?

I'm using the gitea versioning system in a docker environment. The gitea image used is the rootless variant.
The http port mapping is "8084:3000" and the ssh port mapping is "2224:2222".
I generated the keys on my Linux host and added the generated public key to my Gitea account.
1. Test environment
Later I created the ssh config file /home/campos/.ssh/config (using nano):
Host localhost
    HostName localhost
    User git
    Port 2224
    IdentityFile ~/.ssh/id_rsa
After finishing the settings, I created the repository myRepo and cloned it.
To perform the clone, I changed the url from ssh://git@localhost:2224/campos/myRepo.git to git@localhost:/campos/myRepo.git
To clone the repository I typed: git clone git@localhost:/campos/myRepo.git
This worked perfectly!
2. Production environment
However, when defining a reverse proxy and a domain name, it was not possible to clone the repository.
Before performing the clone, I changed the ssh configuration file:
Host gitea.domain.com
    HostName gitea.domain.com
    User git
    Port 2224
    IdentityFile ~/.ssh/id_rsa
Then I tried to clone the repository again:
git clone git@gitea.domain.com:/campos/myRepo.git
A connection refused message was shown:
Cloning into 'myRepo'...
ssh: connect to host gitea.domain.com port 2224: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I understand the message appears because, by default, the proxy doesn't handle ssh requests.
Searching a bit, some links say to use "stream" in Nginx.
But I still don't understand how to do this configuration. I need to continue accessing my proxy server on port 22 and redirect port 2224 of the proxy to port 2224 of the docker host.
The gitea.conf configuration file I use is as follows:
server {
    listen 443 ssl http2;
    server_name gitea.domain.com;

    # SSL
    ssl_certificate /etc/nginx/ssl/mycert_bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/mycert.key;

    # logging
    access_log /var/log/nginx/gitea.access.log;
    error_log /var/log/nginx/gitea.error.log warn;

    # reverse proxy
    location / {
        proxy_pass http://192.168.10.2:8084;
        include myconfig/proxy.conf;
    }
}

# HTTP redirect
server {
    listen 80;
    server_name gitea.domain.com;
    return 301 https://gitea.domain.com$request_uri;
}
3. Redirection in Nginx
I spent several hours trying to understand how to configure Nginx's "stream" feature. Below is what I did.
At the end of the nginx.conf file I added:
stream {
    include /etc/nginx/conf.d/stream;
}
In the stream file in conf.d, I added the content below:
upstream ssh-gitea {
    server 10.0.200.39:2224;
}
server {
    listen 2224;
    proxy_pass ssh-gitea;
}
I tested the nginx configuration and restarted the service:
nginx -t && systemctl restart nginx.service
Then I checked that ports 80, 443, 22 and 2224 were open on the proxy server:
ss -tulpn
This configuration made it possible to perform the ssh clone of a repository with a domain name.
4. Clone with ssh correctly
After all the settings I made, I understood that it is possible to use the original url ssh://git@gitea.domain.com:2224/campos/myRepo.git in the clone.
When typing the command git clone ssh://git@gitea.domain.com:2224/campos/myRepo.git, it is not necessary to define the config file in ssh.
This link helped me:
https://discourse.gitea.io/t/password-is-required-to-clone-repository-using-ssh/5006/2
In the sections above I explained my solution, so I'm marking this question as solved.

How to set up nginx to route requests to different servers based on the requested domain?

I would like to set up nginx to route requests to different servers based on the requested domain.
The nginx server environment is below.
CentOS Linux release 7.3.1611 (Core)
nginx 1.11.8
* built and installed from source, configured with the --with-stream parameter.
My intended setup is:
server1.testdomain.com ssh request -> (global IP) nginx server -> (local IP) 192.168.1.101 server
server2.testdomain.com ssh request -> (global IP) nginx server -> (local IP) 192.168.1.102 server
Both domains point to the same global IP, i.e. the same nginx server.
nginx.conf is:
stream {
    error_log /usr/local/nginx/logs/stream.log info;
    upstream server1 {
        server 192.168.1.101:22;
    }
    upstream server2 {
        server 192.168.1.102:22;
    }
    server {
        listen 22 server1.testdomain.com;
        proxy_pass server1;
    }
    server {
        listen 22 server2.testdomain.com;
        proxy_pass server2;
    }
}
But...
nginx: [emerg] the invalid "server1.testdomain.com" parameter in ...
error occurred. It seems it's impossible to write something like listen 22 server1.testdomain.com.
I also tried writing "server_name" inside the "server" block:
nginx: [emerg] "server_name" directive is not allowed here in ...
so "server_name" is not permitted in a stream "server" block.
How do I write the config file so that requests for different domains are routed to different servers?
If you have any ideas or information, could you share them?
It's not possible with nginx, because the stream module is an L4 balancer, while the SSH protocol works at L5/7.
It's not possible at all, in fact, because the ssh negotiation does not include the destination host name.
You can do what you want only by using two different IPs or two different ports. In both cases nginx can forward the connection, but it is much better to use iptables for this.
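For the two-ports approach, a sketch of the stream configuration might look like this (ports 2201 and 2202 are invented for the example; the backend IPs come from the question):
stream {
    server {
        listen 2201;                  # forwarded to server1
        proxy_pass 192.168.1.101:22;
    }
    server {
        listen 2202;                  # forwarded to server2
        proxy_pass 192.168.1.102:22;
    }
}
Clients would then pick the backend by port, e.g. ssh -p 2201 user@nginx-host for the first server; the requested domain name no longer matters.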

Nginx sites available config not working for port 80 only

I have set up nginx on my server. It worked fine for port 5000.
Now I want to set up a different server block to listen on port 80.
So I have this config, the same as the first server:
server {
    # location: /etc/nginx/sites-available/backoffice
    # enabled by linking it into sites-enabled:
    # sudo ln -s /etc/nginx/sites-available/backoffice /etc/nginx/sites-enabled
    listen 80;
    server_name backofficeX;
    location / {
        include proxy_params;
        proxy_pass http://unix:/tmp/backoffice_gunicorn.sock;
    }
}
It doesn't work and I get the generic 'Welcome to nginx!' message.
The thing is, it's not working just for port 80.
When I try port 5008, 81, etc. it works fine. What am I missing for port 80?
I tailed the error log and the access log:
tail -f /var/log/nginx/error.log
but since there are no errors, nothing comes up there.
Don't get mad at me, but I have to ask: isn't any other service running on port 80, like apache?
Maybe you should use a port scanner to discover active ports.
Open your root config and ensure that error_log is set to the info level, which will log everything:
error_log /var/log/nginx/error.log info;
Reload your configuration using nginx -s reload, then look at the tail of the error log:
tail -n 100 /var/log/nginx/error.log
It should give you pointers about what's going on.
Apache often runs on port 80, which might be the reason NGINX is not working.
Turns out what was listening on port 80 was nginx itself!
So I edited the default nginx site at /etc/nginx/sites-available/default:
server {
    listen 4008; # changed 80 -> 4008 (the exact port is not important)
    ...
}
With the default site moved off port 80, the backoffice server block above takes over after a reload.

Restricting direct access to port, but allow port forwarding in Nginx

I'm trying to restrict direct access to elasticsearch on port 9200, but allow Nginx to proxy pass to it.
This is my config at the moment:
server {
    listen 80;
    return 301;
}
server {
    listen *:5001;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /var/data/nginx-elastic/.htpasswd;
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
    }
}
This almost works as I want it to. I can access my server on port 5001 to hit elasticsearch and must enter credentials as expected.
However, I'm still able to hit :9200 directly and avoid the HTTP authentication, which defeats the point. How can I prevent access to this port without restricting nginx? I've tried this:
server {
    listen *:9200;
    return 404;
}
But I get:
nginx: [emerg] bind() to 0.0.0.0:9200 failed (98: Address already in use)
as it conflicts with elasticsearch.
There must be a way to do this! But I can't think of it.
EDIT:
I've edited based on a comment and summarised the question:
I want to lock down <serverip>:9200, and basically only allow access through port 5001 (which is behind HTTP auth). Port 5001 should proxy to 127.0.0.1:9200 so that elasticsearch is accessible only through 5001. All other access should 404 (or 301, etc.).
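One way to keep the 404 idea from the question while avoiding the bind() conflict is to give each listener a specific address instead of the wildcard; this is a sketch, with 203.0.113.10 as a placeholder for the public IP, and it assumes elasticsearch is bound to 127.0.0.1 as in the answer below:
server {
    listen 203.0.113.10:9200;  # public interface only; placeholder address
    return 404;                # outside access gets a 404 instead of reaching ES
}
Two sockets on the same port but different specific addresses don't collide, so nginx can own the public side of 9200 while elasticsearch keeps the loopback side.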
Add this in your ES config to ensure it only binds to localhost:
network.host: 127.0.0.1
http.host: 127.0.0.1
Then ES is only accessible from localhost and not from the world.
Make sure this is really the case with the tools of your OS, e.g. on unix:
$ netstat -an | grep -i 9200
tcp4 0 0 127.0.0.1.9200 *.* LISTEN
In any case I would lock down the machine using the OS firewall to really only allow the ports you want, rather than relying on proper binding alone. Why is this important? Because ES also runs its cluster communication on another port (9300), and evildoers might just connect there.
