I've been looking at possible ways to run a Minecraft server behind an NGINX reverse proxy with an IP/location/to/server (e.g. 127.0.0.1/minecraft-server) style connection, but the only information I've found is to either use an SRV DNS record or to use a stream proxy (with no further information about that option, or no NGINX location config provided).
I need to use an NGINX reverse proxy because ports 80 and 443 will be the only ports open externally via our provider (only HTTP/S servers are allowed, the ports can't be used for anything else, and the connection will be managed by administrators), and I don't have a domain. I can get one, and an SSL certificate, if that's all that's needed to make this work.
I know Minecraft runs over a TCP or UDP connection, and that's part of the reason this is not an easy task, but since this is the only way I can possibly have future external access to my self-hosted Minecraft server, I need a way to run the connection through an HTTP reverse proxy.
Is there any way to do this through NGINX, or NGINX plus other software?
Thank you in advance.
Related
Is it possible to set up a reverse proxy that would allow a database client to use an SSL connection on port 443 and redirect it to port 1521? I suspect it would not work. Can someone explain why or why not?
I'm assuming an Oracle database, based on port 1521.
There is no problem setting up an Nginx TCP (L4) proxy for any TCP backend. See https://www.nginx.com/blog/tcp-load-balancing-udp-load-balancing-nginx-tips-tricks/ for an example configuration.
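For reference, a minimal stream proxy block looks roughly like this; it's only a sketch, the backend address 10.0.0.5 is a placeholder of mine, and note that stream blocks go at the top level of nginx.conf, outside http {}:
stream {
    # Sketch: raw TCP pass-through to an Oracle listener; 10.0.0.5 is a placeholder backend.
    server {
        listen 1521;
        proxy_pass 10.0.0.5:1521;
    }
}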
When it comes to terminating SSL (L5) and sending the decrypted data to a TCP backend, it's also technically possible with ngx_stream_ssl_module, but I have never tested it, and from what I can read, people have trouble setting this up for PostgreSQL:
http://nginx.org/en/docs/stream/ngx_stream_ssl_module.html
Can nginx do TCP load balance with SSL termination
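If you did want nginx to terminate SSL on 443 and forward the decrypted stream, the shape would be roughly the following (untested, as said above; the certificate paths and backend address are placeholders):
stream {
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/proxy.crt;   # placeholder cert/key paths
        ssl_certificate_key /etc/nginx/ssl/proxy.key;
        proxy_pass 10.0.0.5:1521;                       # decrypted traffic forwarded as plain TCP
    }
}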
I have never seen Nginx set up as a proxy for databases. Instead, connection poolers (e.g. pgbouncer for PostgreSQL) are often used, not only for pooling but also as an SSL offloading service. They are in fact L7 proxies for databases.
The Oracle equivalent of pgbouncer seems to be Oracle Connection Manager, and it supports SSL, so I'd strongly recommend using it instead of Nginx or any other general-purpose reverse proxy server:
https://docs.oracle.com/en/database/oracle/oracle-database/18/netag/configuring-oracle-connection-manager.html#GUID-AF8A511E-9AE6-4F4D-8E58-F28BC53F64E4
Suppose I want to use a combination of NGINX (or probably another server, since it doesn't proxy HTTP/2 requests) and Hypercorn. As both can handle SSL certificate files, I wonder which is best suited to handle this for an HTTPS request. It is important to me that Hypercorn can listen on port 443, and I'm not sure it can do that without specifying the certfile and keyfile parameters.
Well, that depends on what you want to do.
The simplest solution is to configure both to use SSL.
Nginx will receive the request, decrypt it, process it, and send it to Hypercorn on port 443 as an HTTPS client. Hypercorn will see the request as coming from any normal HTTPS client.
If your goal is security: go with both.
If your goal is just to not have Hypercorn exposed directly, you can configure it to not use SSL.
Nginx supports proxying requests to an HTTPS upstream by default, so I think that's the best solution. However, you might need to set HTTP headers so that Hypercorn correctly understands who the client is, by playing with X-Forwarded-For, X-Forwarded-Host, and any other headers Hypercorn might need.
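A rough sketch of that nginx side, assuming Hypercorn listens with TLS on 127.0.0.1:8443 (the upstream address, domain, and certificate paths are placeholders of mine, not taken from the question):
server {
    listen 443 ssl;
    server_name example.com;                        # placeholder domain

    ssl_certificate     /etc/nginx/ssl/site.crt;    # nginx-facing certificate (placeholder paths)
    ssl_certificate_key /etc/nginx/ssl/site.key;

    location / {
        proxy_pass https://127.0.0.1:8443;          # assumed Hypercorn address (HTTPS upstream)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}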
Good day,
I want to connect to my Kafka server from the internet. Kafka is installed on a virtual server, and all servers are hidden behind nginx.
I updated the Kafka settings (server.properties).
Added: listeners=PLAINTEXT://:9092
I can connect to the Kafka server from the local network via the IP address 10.0.0.1:9092, but I am unable to connect from the internet by domain name.
Response from kafka: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic test-topic not present in metadata after 60000 ms.
Nginx: [26/Nov/2019:12:38:25 +0100] "\x00\x00\x00\x14\x00\x12\x00\x02\x00\x00\x00\x00\x00" 400 166 "-" "-" "request_time=1.535" "upstream_response_time=-" "upstream_connect_time=-" "upstream_header_time=-"
nginx conf:
server {
    listen 9092;
    server_name site.name;

    # Max request size
    client_max_body_size 20m;

    location / {
        proxy_pass http://10.0.0.1:9092;
    }
}
Does anyone know what the problem is?
Kafka doesn't use the HTTP protocol for communication, so it can't be fronted by an HTTP reverse proxy.
You'll have to use nginx stream definition blocks for TCP proxying (I've not tried this personally):
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
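A minimal sketch of such a stream block, using the addresses from your question (note it goes at the top level of nginx.conf, outside http {}, so it cannot sit next to your existing server block):
stream {
    # Sketch: raw TCP pass-through to the Kafka broker instead of an HTTP proxy_pass.
    server {
        listen 9092;
        proxy_pass 10.0.0.1:9092;
    }
}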
"unable connect from internet by domain name."
Sounds like an issue with your advertised.listeners configuration. Note that there is no clean way to "hide" Kafka behind a proxy, since your clients are required to communicate directly with each broker individually (which defeats the purpose of having Nginx, unless you want one Nginx server, or one open port, per broker), and it would also require Kafka to know that it needs to "advertise" the proxy rather than its own address.
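As a sketch of the kind of change that implies, server.properties would need something along these lines; the external hostname here just reuses the server_name from your nginx config as a placeholder, and must be whatever address clients can actually reach:
# placeholder hostname: clients must be able to reach this address through the proxy
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://site.name:9092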
If you really want to expose Kafka to the public internet, you should be using SSL/SASL listeners, not PLAINTEXT.
If you want to use HTTP, you can install the Kafka REST Proxy and put Nginx in front of that. Your clients would then use HTTP rather than the standard Kafka libraries.
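In that setup nginx proxies plain HTTP; a sketch, assuming the REST Proxy runs on its usual default port 8082 on the Kafka host (that port and address are assumptions, not something from the question):
server {
    listen 80;
    server_name site.name;

    location / {
        proxy_pass http://10.0.0.1:8082;   # assumed Kafka REST Proxy address
        proxy_set_header Host $host;
    }
}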
I am running RethinkDB on a server that sits behind Cloudflare. I cannot make a connection to my server when using my hostname. (I can access other things on my server, so I am almost sure the problem doesn't lie there.)
I am also able to connect using my IP directly (without adding a cert to the .connect function of the RethinkDB client).
I do use Nginx, and my client is written in Java.
What I have tried:
Using custom ports (that were said to be open on Cloudflare)
Proxying a location to a certain port
Using a cert (RethinkDB client side)
I couldn't find any information about RethinkDB behind Cloudflare, so I am open to any suggestions.
If I need to post more information, please ask; I'm not sure what I should share...
This is not a RethinkDB issue. Cloudflare only supports specific ports, as spelled out here:
https://support.cloudflare.com/hc/en-us/articles/200169156-Which-ports-will-Cloudflare-work-with-
80
8080
8880
2052
2082
2086
2095
And all traffic must be HTTP sessions. RethinkDB communicates via raw TCP sockets. You cannot run it behind Cloudflare.
There's a server at a customer site that runs nginx, a salt master daemon from SaltStack, and a secret web app that does secret things.
Considerations:
In this scenario, there's only one IP, only one server, and multiple DNS records available;
I have nginx running on port 80;
And the salt master running on port 6453;
domain.example.com binds to that IP, exposing my nginx port 80, and points to the secret web app;
otherdomain.example.com binds to the same IP, also exposing my nginx port 80, and is the one I want to use to proxy the salt port.
The customer has a machine in another location that needs to connect to the salt master, and its internet connection is provided by a secret organization that only allows connections to port 80, with no negotiation possible.
My question:
Is it possible to use nginx to redirect otherdomain.example.com port 80 to port 6453? I tried the following:
server {
    listen 80;
    server_name otherdomain.example.com;
    proxy_pass 127.0.0.1:6453;
}
But that doesn't work as expected. Is it possible? Is there some way to do this using nginx?
The error I got from the log was:
"proxy_pass" directive is not allowed here
proxy_pass needs to be specified within a location context, and is fundamentally a Web Thing. It only comes into play after the web headers are sent and interpreted.
Things like what you're trying to accomplish are commonly done using HAProxy in tcp mode, although there is a tcp proxy module that also does similar things.
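For completeness, the nginx equivalent of HAProxy's tcp mode (presumably the stream module) looks roughly like this; it is only a sketch, and it assumes a dedicated port, which is exactly what you don't have here:
stream {
    # Sketch: raw TCP forwarding to the salt port; the listen port is a placeholder,
    # and this cannot share port 80 with the http server, since there is no Host header to route on.
    server {
        listen 8080;
        proxy_pass 127.0.0.1:6453;
    }
}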
However, I don't think you're going to be very successful, as ZMQ does not participate in the protocol (HTTP Host: headers) that easily allows you to tell the web requests apart from the non-web requests that come in on the same port.
My recommendation is to either find some way to use another port for this, use a second IP address, or write a tricky TCP proxy that will identify incoming HTTP and/or ZMQ connections and transparently forward them to the correct local port.