I need to access a webserver in a private network that has no direct access from outside. Opening router ports etc. is not an option.
I'm trying to solve this with a Raspberry Pi in that network that I can manage via upswift.io.
Among other things, upswift allows temporary remote access to a given port over URLs like
http://d-4307-5481-nc7nflrh26s.forwarding.upswift.io:56947/
This maps to a port that I can define.
With this, I can access a VNC server on the Pi, start a browser there, and reach the webserver I need.
But I'm hoping for a more elegant way, where I can access the site from my local browser and the Pi does not need to run a desktop.
As far as I can tell, this can be done with a reverse proxy like nginx.
I found a lot of tutorials on it, but I'm stuck at one point:
I can install nginx and access its default index page from my local browser through the temporary upswift.io URL, but I can't get it to work as a reverse proxy.
I think my conf needs to look like this:
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://192.x.x.2;
    }
}
Where example.com would be the name or IP under which the device is accessed.
Now, this would not work for me, as that name is dynamic.
So I wonder if there's a way to configure nginx so that it does not need that name. I would expect that to be possible, since the default webserver config works without it too. Are reverse proxies different in that regard?
Or is there a better way than a reverse proxy to do what I want?
You could try defining it as the default server block:
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://192.x.x.2;
    }
}
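If the backend application cares about the original request details, you can also forward a few headers; a minimal sketch using the same backend address as above (the proxy_set_header lines are standard nginx directives, whether your backend needs them is an assumption):
server {
    listen 80 default_server;
    server_name _;

    location / {
        proxy_pass http://192.x.x.2;
        # Pass the original host and client address on to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}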
I'm using the config below in nginx to proxy an RDP connection:
server {
    listen 80;
    server_name domain.com;

    location / {
        proxy_pass http://192.168.0.100:3389;
    }
}
but the connection doesn't go through. My guess is that the problem is http in proxy_pass. Googling "Nginx RDP" didn't yield much.
Does anyone know if it's possible and, if yes, how?
Well, actually you are right that the http is the problem, but not exactly the one in your code block. Let's explain it a bit:
In your nginx.conf file you have something similar to this:
http {
    ...
    ...
    ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
So everything you write in your conf files is inside this http block/scope. But RDP is not HTTP; it is a different protocol.
The only workaround I know of for nginx to handle this is to work at the TCP level.
So in your nginx.conf, outside the http block, you have to declare a stream block like this:
stream {
    # ...
    server {
        listen 80;
        proxy_pass 192.168.0.100:3389;
    }
}
With the above configuration you are just proxying your backend at the TCP layer, with a cost of course: as you may notice the server_name directive is missing, since you can't use it in the stream scope, and you lose all the logging functionality that comes with the http level.
For more info on this topic check the docs
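Note that the stream module is not always compiled in or loaded by default. On packaged builds you may need to load it explicitly near the top of nginx.conf; the exact module path below is an assumption and varies by distribution:
# at the top of nginx.conf, outside any block
load_module modules/ngx_stream_module.so;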
For anyone who is looking to load balance RDP connections using Nginx, here is what I did:
Configure nginx as you normally would, to reroute HTTP(S) traffic to your desired server (a sketch follows below).
On that server, install myrtille (it needs IIS and .Net 4.5) and you'll be able to RDP into your server from a browser!
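A minimal sketch of the nginx part, assuming the Windows server running myrtille/IIS is reachable at 192.168.0.100 and rdp.example.com is just a placeholder hostname:
server {
    listen 80;
    server_name rdp.example.com;          # placeholder hostname

    location / {
        proxy_pass http://192.168.0.100;  # the host running myrtille/IIS (assumption)
        proxy_set_header Host $host;
        # myrtille's HTML5 client uses websockets, so allow the connection upgrade
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}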
I have a server with CentOS, and on it I will have at least 4 Golang applications running; each of them is a different site that I should be able to access in the browser via domains/subdomains as follows:
dev00.mysite.com
dev01.mysite.com
dev02.mysite.com
dev03.mysite.com
So I need to configure some kind of software that redirects the requests to the correct Golang process. Every site will be running on a different port, so for example if someone calls dev00.mysite.com I should be able to send that request to the process of the dev00 site (this is for development purposes, not production). So here I'm starting to believe that I need Nginx or Caddy, as I read, but I have no experience with either of them.
Can someone confirm that this is the way to fix that problem? And where can I find some example configuration of either of those servers redirecting to Golang applications?
And if, in the future, I have a lot (really a lot) of domains running on the same server, which of those servers is better? Which one handles high load better?
Yes, Nginx can solve your problem:
Start a web server using Go's standard library, or use Caddy.
Redirect requests to the Go application using Nginx.
Example Nginx configuration:
server {
    listen 80;
    server_name dev00.mysite.com;
    ...
    location / {
        proxy_pass http://localhost:8000;
        ...
    }
}

server {
    listen 80;
    server_name dev01.mysite.com;
    ...
    location / {
        proxy_pass http://localhost:8001;
        ...
    }
}
In Nginx, can one somehow block or allow access from certain ports, in a location? Looking at the allow & deny docs it seems to me that they cannot be used for this purpose. Right? But is there no other way to do this?
Background:
In an Nginx virtual host, I'm allowing only a certain IP to publish websocket events:
server {
    listen 80;

    location /websocket/publish {
        allow 192.168.0.123;
        deny all;
    }
}
However, soon the IP address of the appserver will be unknown, because everything will run inside Docker and I think I'll have no idea which IP a certain container will have.
So I'm thinking I could do this instead:
server {
    listen 80;
    listen 81;

    location /websocket/publish {
        # Let the appserver publish via port 81.
        allow :81; # <–– "invalid parameter" error
        # Block everything else, so browsers cannot publish via port 80.
        deny all;
    }

    ... other locations, accessible via port 80
And then have the firewall block traffic to port 81 from the outside world. But allow :81 doesn't work. Is there no other way? Or am I on the wrong track; are there better ways to do all this?
(As far as I've understood from the docs of the websocket Nginx plugin I use (namely Nchan), I cannot add the /websocket/publish endpoint in another server { } block that listens on port 81 only. Edit: Turns out I can just use different server blocks, because Nchan apparently ignores in which server block I place the config stuff, see: https://github.com/slact/nchan/issues/157. So I did that, and it works fine for me now. However, it would still be interesting to know whether Nginx supports blocking a port in a location { ... }.)
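For reference, the separate-server-block layout I ended up with looks roughly like this (a sketch; the ports and the publish location are the ones from the question, the actual Nchan directives are omitted):
# Public server: reachable from the outside world, no publish endpoint.
server {
    listen 80;
    # ... subscribe locations and other locations ...
}

# Internal server: port 81 is blocked from the outside by the firewall,
# so only the appserver / Docker network can reach it.
server {
    listen 81;

    location /websocket/publish {
        # ... Nchan publisher config ...
    }
}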
I think I finally grasped how Docker works, so I am getting ready for the next step: cramming a whole bunch of unrelated applications into a single server with a single public IP. Say, for example, that I have a number of legacy Apache2-VHost-based web sites, so the best I could figure was to run a LAMP container to replicate the current situation and improve later. For argument's sake, here is what I have: a container at 172.17.0.2:80 that serves
http://www.foo.com
http://blog.foo.com
http://www.bar.com
Quite straightforward: publishing port 80 lets me correctly access all those sites. Next, I have two services that I need to run, so I built two containers
service-a -> 172.17.0.3:3000
service-b -> 172.17.0.4:5000
and all is good, I can privately access those services from my Docker host. The trouble comes when I want to publicly restrict access to service-a through service-a.bar.com:80 only, and to service-b through www.foo.com:5000 only. After a lot of reading, it would seem that I have to create a dreadful artefact called a proxy, or reverse proxy, to make things more confusing. I have no idea what I'm doing, so I dove nose-first into nginx -- which I had never used before -- because someone told me it's better than Apache at dealing with lots of small tasks and requests -- not that I would know how to turn Apache into a proxy, mind you. Anyway, nginx sounded perfect for a thing that has to take a request and pass it on to another server, so I started reading docs and I produced the following (in addition to the correctly working vhosts):
upstream service-a-bar-com-80 {
    server 172.17.0.3:3000;
}

server {
    server_name service-a.bar.com;
    listen 80;

    location / {
        proxy_pass http://service-a-bar-com-80;
        proxy_redirect off;
    }
}

upstream www-foo-com-5000 {
    server 172.17.0.4:5000;
}

server {
    server_name www.foo.com;
    listen 5000;

    location / {
        proxy_pass http://www-foo-com-5000;
        proxy_redirect off;
    }
}
This somewhat works, until I access http://blog.bar.com:5000, which brings up service-b. So, my question is: what am I doing wrong?
nginx (like Apache) always has a default server for a given IP+port combination. You only have one server listening on port 5000, so it is your de facto default server for services on port 5000.
So blog.bar.com (which I presume resolves to the same IP address as www.foo.com) will use the default server for port 5000.
If you want to prevent that server block from being the default server for port 5000, set up another server block using the same port and mark it with the default_server keyword, as follows:
server {
    listen 5000 default_server;
    root /var/empty;
}
You can use a number of techniques to render the server inaccessible.
See this document for more.
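One such technique (a sketch): instead of serving an empty root, return nginx's non-standard code 444, which makes nginx close the connection without sending a response:
server {
    listen 5000 default_server;
    # 444 is nginx-specific: close the connection without sending any response
    return 444;
}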
I'd like to stop NGINX from logging my own IP address in my access.log. Is this possible? I can easily do it in Apache, but I haven't been able to find anything like this for NGINX.
This should really be on serverfault so I'll vote for a move.
But I can help a little here.
Short version: no, you can't.
Long version: you can hack around it by using different backends, where you log one and don't log the other, or by creating an extra server on a different port. But there isn't really a clean way of filtering an IP address out of the logs.
You can, however, filter by URL; perhaps that is an option for you?
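For example, logging can be switched off per location, so a URL-based filter could look like this (a sketch; /some-path is just a placeholder for whatever URL you want to keep out of the log):
location /some-path {
    access_log off;
    # ... the same proxy_pass / root directives as the rest of the site ...
}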
You could create a virtual host that will log only your accesses, while the main log will log the rest. In this case you would access the new virtual host from your machine.
server {
    listen 80;
    server_name domain.com www.domain.com;
    access_log logs/domain.access.log;
}
Then you create a second one:
server {
    listen 80;
    server_name me.domain.com;
    access_log logs/me.domain.access.log;
}
Or remove that last access_log line if you don't want your own accesses logged at all.
This way your accesses won't mix with the external accesses.
You have to add me.domain.com in DNS or in your /etc/hosts, with the same IP as the main domain.
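For example, on the machine you browse from, the /etc/hosts entry could look like this (203.0.113.10 is a placeholder for the server's real public IP):
# /etc/hosts on your own machine
203.0.113.10    me.domain.com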