Facing an issue with HAProxy / nginx

I need to set up a reverse proxy server that distributes traffic to backend servers based on the incoming Host header.
I opted for HAProxy, but after setting everything up I realized that HAProxy reads the configuration only once, when the service starts, and keeps using the backend IP address it resolved then unless it is reloaded/restarted.
This is a problem for me because if a backend server reboots it gets a different IP address, and I have no control over which address it gets.
I am thinking of moving to nginx, but before I go through the whole setup I would like to know whether nginx has the same issue or not.
Meaning: if I have specified the backend server's hostname in the configuration file and the corresponding IP address changes, will nginx refresh its DNS cache and pick up the new address?
(When the backend server changes IP, the hosts file on the proxy server is updated automatically.)

Yes, nginx will do the job. See the 'resolve' parameter of the upstream server directive here:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
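For illustration, a minimal sketch of the open-source workaround (the hostnames, ports and resolver address are placeholders I picked): putting the backend name in a variable makes nginx re-resolve it according to the resolver's TTL instead of caching it once at startup. Note that nginx's resolver directive queries a DNS server directly and does not read /etc/hosts, so this assumes something like a local dnsmasq serving your automatically updated hosts file; the 'resolve' parameter linked above achieves the same without the variable trick, but has historically required the commercial nginx subscription.

http {
    # Hypothetical local DNS (e.g. dnsmasq) serving the updated hosts file.
    resolver 127.0.0.1 valid=10s;

    server {
        listen 80;
        server_name app.example.com;   # routing is done on the incoming Host header

        location / {
            # Because proxy_pass uses a variable, nginx resolves
            # backend.internal at request time, not only at startup.
            set $backend "http://backend.internal:8080";
            proxy_pass $backend;
            proxy_set_header Host $host;
        }
    }
}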

Related

Get the real IP with OpenLiteSpeed as web server, with a reverse proxy in front of our OpenLiteSpeed web servers

My issue is that I can't get the real client IP address when I am using the OpenLiteSpeed web server with a reverse proxy in front of my OpenLiteSpeed servers.
We have the SSL termination on the OpenLiteSpeed web servers and NOT on the proxy server.
The proxy is only going to forward the request to the correct server, nothing else. We have multiple servers.
At this point we are only able to get the reverse proxy's IP address and not the client IP address.
We have tried this with HAProxy and are now trying it with nginx as the reverse proxy.
I have read that it won't work with HAProxy, but nginx is a bit more flexible, I think.
I have set 'Use Client IP in Header' to Yes on the OpenLiteSpeed servers:
My first question is:
Is this possible, or doesn't the OpenLiteSpeed server support this at all?
Ref: https://clients.javapipe.com/knowledgebase/135/Real-Visitor-IPs-With-Website-DDoS-Protection.html
It says this is built into LiteSpeed.
My second question is:
Do you know if this has been done successfully with HAProxy, nginx or Squid?
My third question is:
Does anyone have a config that works for either HAProxy, nginx or Squid?
Preferred: nginx or HAProxy.
A big thanks in advance to anyone who can answer these questions.
I can confirm that it doesn't work with OLS; the LiteSpeed team has confirmed it, though they might add support for the PROXY protocol in the future. We are now syncing the SSL certificate from the web server to the proxy, making it secure all the way.
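For what it's worth, a rough sketch of the nginx side if the backend did speak the PROXY protocol (which, per the above, OLS does not at the time of writing); the backend hostname is a placeholder. Because TLS is terminated on the backend, the proxy only sees encrypted bytes and cannot inject an X-Forwarded-For header, so the PROXY protocol is the usual way to carry the client IP at layer 4.

stream {
    server {
        listen 443;
        proxy_pass backend.internal:443;   # hypothetical OLS backend
        proxy_protocol on;                 # prepend the original client IP to the TCP stream
    }
}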

Map DNS entry to specific port

Let's say I have this DNS entry: mysite.sample. I am developing, and have a copy of my website running locally at http://localhost:8080. I want this website to be reachable using the (fake) DNS name http://mysite.sample, without being forced to remember which port the site is running on. I can set up /etc/hosts and nginx to do the proxying for that, but... is there an easier way?
Can I somehow set up a simple DNS entry using /etc/hosts and/or dnsmasq where a non-standard port (something other than :80/:443) is also specified, without the need for extra nginx configuration?
Or phrased more simply: is it possible to provide port mappings for DNS entries in /etc/hosts or dnsmasq?
DNS has nothing to do with the TCP port. DNS is there to resolve names (e.g. mysite.sample) into IP addresses, a bit like a phone book.
So it's a clear "no". However, there is another solution, and I'll try to explain it.
When you enter http://mysite.sample:8080 in your browser's URL bar, your client (e.g. the browser) first tries to resolve mysite.sample (via OS calls) to an IP address. This is where DNS kicks in, as DNS is your name resolver. Once that has happened, DNS's job is finished and the browser continues.
This is where the "magic" in HTTP happens. The browser connects to the resolved IP address on the desired port (by default 80 for http and 443 for https), waits for the connection to be accepted and then sends the following headers:
GET <resource> HTTP/1.1
Host: mysite.sample:8080
Now the server reads those headers and acts accordingly. Most modern web servers have something called "virtual hosts" (e.g. Apache) or "server blocks"/"sites" (e.g. nginx). You can configure multiple vhosts/sites, one for each domain. The web server will then serve the site matching the requested host (which is retrieved by the browser from the URL bar and passed to the server via the Host HTTP header). This is pure HTTP and has nothing to do with TCP.
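To make that concrete, here is a minimal nginx sketch (the domains and roots are made up): two server blocks on the same IP and port, selected purely by the Host header sent by the browser.

server {
    listen 80;
    server_name site-a.example;
    root /var/www/site-a;   # served when the request says "Host: site-a.example"
}

server {
    listen 80;
    server_name site-b.example;
    root /var/www/site-b;   # served when the request says "Host: site-b.example"
}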
If you can't change the port of your origin service (in your case 8080), you might want to set up a new web server in front of your service. This is also called a reverse proxy. I recommend reading the NGINX Reverse Proxy docs, but you can also use Apache or any other modern web server.
For nginx, just set up a new server block and proxy it to your service:
server {
    server_name mysite.sample;
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
There is a mechanism in DNS for discovering the ports that a service uses; it is called a Service record (SRV) and has the form
_service._proto.name. TTL class SRV priority weight port target.
However, to make use of this record you would need an application that looks up that record prior to making the call. As Dominique has said, this is not the way HTTP works.
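Purely for illustration, a hypothetical SRV record for the dev site in the question could look like the line below, and dnsmasq can serve such a record with its srv-host option (the names and values here are made up):

_http._tcp.mysite.sample. 300 IN SRV 10 5 8080 mysite.sample.

# equivalent dnsmasq configuration line
srv-host=_http._tcp.mysite.sample,mysite.sample,8080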
I have written a previous answer that explains some of the background to this, and why such a lookup isn't part of the HTTP standard (the article discusses WebSockets, but the underlying discussion was about adding this to the HTTP protocol directly).
Edited to add -
There was actually a draft IETF document exploring an official way to do this, but it never made it past draft stage.
This document specifies a new URI scheme called http+srv which uses a DNS SRV lookup to locate a HTTP server.
There is a specific SO answer here which points to an interesting post here.

Can nginx support URL based server blocks (VirtualHosts)?

Note: “VirtualHost” is an Apache term. NGINX does not have virtual hosts; it has “server blocks”. (https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/)
I know about IP-based and name-based server blocks, but is it possible to have URL-based server blocks? In other words, I want http://example.com/foo and http://example.com/bar to be served from completely independent roots. This would be a trivial problem to solve with name-based server blocks if the names were different (http://example1.com and http://example2.com), but since the names are the same (example.com) and only the path part of the URL is different... can nginx support separate server blocks for these kinds of URLs?
See https://nginx.org/en/docs/http/request_processing.html
It seems the only criteria available for choosing a server block are: IP address, port and host; if none of those match, the default server is used. So, given that the name and port are the same in both cases, the only possible solution is to put a proxy server in front of nginx and have the proxy server distribute to the backend nginx servers using a different IP or port.
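A rough sketch of that front-proxy idea (the ports and addresses are placeholders): the front nginx splits on the path prefix and proxies each prefix to a separate backend, each of which can then be a completely independent server block with its own root.

server {
    listen 80;
    server_name example.com;

    location /foo/ {
        proxy_pass http://127.0.0.1:8081/;   # backend serving the /foo site
    }

    location /bar/ {
        proxy_pass http://127.0.0.1:8082/;   # backend serving the /bar site
    }
}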

Why do I need to configure an extra port for WebSphere Application Server?

We use an Apache HTTP server with a WebSphere Application Server 8.5.
Requests to the HTTP server work on the default port 80.
I have configured port 2021 in httpd.conf and in the default host in WebSphere, and everything works. The only 'problem' I have is that we need the port info in the URL.
http://oursite/index.html works
http://oursite/myApp.jsp doesn't work
When I add the port number to the request, it works.
I understand that this extra port is needed to tell the HTTP server that the request should be forwarded to WebSphere. But customers are complaining that the port we use is blocked by their firewall, and some customers refuse to open this port to give access.
Now I have tried adding port 80 to the WebSphere config (default host) and this seems to work.
Is it really necessary to configure an additional port?
*:80 is in the "default_host" virtual host by default. Whatever host and port your clients use to address the proxy must be present in the virtual host that your application is deployed to; otherwise the request won't be handled by the WAS plug-in.
It sounds like someone removed that *:80 alias from the default host, mistakenly thinking it only needed to be there if the application server explicitly listened on port 80. That is misguided.
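For reference, the host alias list of default_host on a stock install looks roughly like the table below (admin console: Environment > Virtual hosts > default_host > Host aliases); the exact entries vary by version, but the point above is that *:80 should be present if clients reach the site without an explicit port.

Host name    Port
*            80
*            443
*            9080
*            9443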

Block HTTP_X_FORWARDED_FOR by iptables

I have 2 servers:
- front
- backend
The front server is a proxy to the backend.
I have no access to the front server.
So...
How can I block an IP address on the backend if I only see the real IP in the X-Forwarded-For header?
I think you cannot do it with iptables: by the time packets reach the backend their source address is the front server's, and the real client IP only exists inside the HTTP payload.
You can:
- Block the IP on the front server; this is the way you should do it.
- It is also probably a good idea to firewall the backend server from all IP addresses except the front server and perhaps your administrative IPs.
- Use the web server on the backend to block based on the header. On Apache this can be done with .htaccess; a sketch for an nginx backend follows below.
- Alternatively, on the backend, use your scripting language (PHP or whatever) to show a blank page based on the header.
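If the backend web server happens to be nginx, here is a minimal sketch of the header-based approach (the proxy and client addresses are placeholders, and the realip module must be available, which it is in most distribution packages):

server {
    listen 8080;

    set_real_ip_from 10.0.0.1;          # trust X-Forwarded-For only when it comes from the front proxy
    real_ip_header   X-Forwarded-For;   # rewrite the client address from that header

    location / {
        deny 203.0.113.42;              # now matches the real client IP
        allow all;
        # ... normal backend handling ...
    }
}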
