What is "Reverse Proxy" and "Load Balancing" in Nginx / Web server terms? - nginx

These are two phrases I hear very often, mainly associated with Nginx. Can someone give me a layman's definition?

Definitions are often difficult to understand. I guess you just need some explanation of their use cases.
A short explanation is: load balancing is one of the functions of a reverse proxy, and a reverse proxy is one kind of software that can perform load balancing.
A longer explanation is given below.
For example, suppose a service of your company has customers in the UK and Germany. Because the policy is different for these two countries, your company runs two web servers, uk.myservice.com for the UK and de.myservice.com for Germany, each with different business logic. In addition, your company wants there to be only one unified endpoint, myservice.com, for the service. In this case, you need to set up a reverse proxy as the unified endpoint. The proxy accepts requests for myservice.com and rewrites incoming requests so that requests from the UK (determined by source IP) go to uk.myservice.com and requests from Germany go to de.myservice.com. From the point of view of a client in the UK, it never knows that the response was actually generated by uk.myservice.com.
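A minimal nginx sketch of that setup could look like the following; the IP range is invented, and a real deployment would use the GeoIP module or an accurate list of per-country ranges:

    # Map client IPs to a backend hostname (the range below is a made-up example).
    geo $country_backend {
        default        de.myservice.com;
        203.0.113.0/24 uk.myservice.com;   # pretend this range is "the UK"
    }

    server {
        listen      80;
        server_name myservice.com;

        location / {
            resolver   8.8.8.8;                  # required: proxy_pass uses a variable
            proxy_pass http://$country_backend;  # the client only ever sees myservice.com
            proxy_set_header Host $host;
        }
    }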
In this case, the load of request traffic to the service is balanced across uk.myservice.com and de.myservice.com as a side effect. So we normally don't say it is being used as a load balancer; we just call it a reverse proxy.
But let's say your company uses the same policy for all countries and has two servers, a.myservice.com and b.myservice.com, only because the workload is too heavy for one server machine. In this case, we normally call the reverse proxy a load balancer, to emphasize the reason it is being used.

Here is the basic definition:
A reverse proxy is a proxy host that receives requests from a client and forwards them to one of the servers behind it. Nginx and Apache httpd are commonly used as reverse proxies. These sit in the administrative network of the web server that serves the request.
This is in contrast with a (forward) proxy, which sits in front of a client and sends requests to a web server on the client's behalf. As an example, your corporate network address translator is a forward proxy. These sit in the administrative network of the client from which the request originates.
Load balancing is a function performed by reverse proxies. The client requests are received by a load balancer, which sends each request to one of the nodes (hosts) in the server pool, in an attempt to balance the load across the various nodes.
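As a rough sketch, a load-balancing reverse proxy in nginx can be as small as this (the backend addresses are hypothetical):

    # Pool of identical backends; nginx round-robins by default.
    upstream app_pool {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
        # least_conn; or ip_hash; could replace round-robin here
    }

    server {
        listen      80;
        server_name myservice.com;

        location / {
            proxy_pass http://app_pool;   # each request goes to one node of the pool
        }
    }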

I see both of them as functionality of an HTTP/web server.
A load balancer's job is to distribute the workload between server nodes in a way that makes the best use of them.
A reverse proxy is an interface to the external world, forwarding requests to a server node (even when we have a single node).
Its other use cases are caching of static content, compression, etc., as sketched below.
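A hedged sketch of those two extras in nginx (the paths, cache zone name, and backend address are all examples):

    # Cache storage on disk plus a shared-memory zone for cache keys.
    proxy_cache_path /var/cache/nginx keys_zone=static_cache:10m max_size=1g;

    server {
        listen 80;

        gzip       on;                                # compress responses
        gzip_types text/css application/javascript;

        location /static/ {
            proxy_cache       static_cache;
            proxy_cache_valid 200 10m;                # keep good responses for 10 minutes
            proxy_pass        http://127.0.0.1:8080;  # hypothetical app server
        }
    }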

Related

Docker, nginx and several sites on one server

I have a server with nginx and one working app. I want to add several apps to this server. I would like to clarify a few things for myself.
What is the difference between load balancer and reverse proxy?
In which situations should I use the first, and in which situations should I use the second?
What should I use if my sites are static, and what if not static?
And additionally it would be a big plus to hear about containers in the context of several sites for nginx
Differences between load balancer and reverse proxy
A reverse proxy accepts a request from a client, forwards it to a server that can fulfill it, and returns the server’s response to the client.
A load balancer distributes incoming client requests among a group of servers, in each case returning the response from the selected server to the appropriate client.
Taken from the nginx docs
TL;DR:
Reverse proxying is about routing requests to the correct server, e.g. using the domain name.
Load balancing is about distributing load across multiple instances.
What should I use if my sites are static, and what if not static?
You can combine an HTTP reverse proxy + load balancing with both static and non-static web apps, so it depends. A typical split is sketched below.
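A hedged sketch of that split (the domain, paths, and backend port are invented): static files are served straight from disk, and anything else is handed to the app.

    server {
        listen      80;
        server_name example.com;
        root        /var/www/example;

        location / {
            try_files $uri @app;              # serve the file if it exists on disk...
        }

        location @app {
            proxy_pass http://127.0.0.1:8080; # ...otherwise proxy to the app
        }
    }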
And additionally it would be a big plus to hear about containers in the context of several sites for nginx
I recommend one nginx container per app/site plus a dynamic reverse proxy, traefik in particular (http://traefik.io).
You need a reverse proxy to route the incoming traffic to the proper application, taking into account the content of the original request (and any rules you may define).
Once the target application is determined, you will need to load-balance across its instances in order to distribute the work among them.
Both tasks can be done by software like classic nginx, apache, haproxy, etc., or by tools designed for the microservices world, like fabio, traefik and others. A combined sketch follows.
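As an illustration of both tasks in one nginx config (the domains and ports are made up): each site gets its own server block (reverse proxying), and the busy site is backed by a pool of two containers (load balancing).

    upstream site_a {
        server 127.0.0.1:8001;   # e.g. two containers for the busy app
        server 127.0.0.1:8002;
    }

    server {
        listen      80;
        server_name a.example.com;
        location / { proxy_pass http://site_a; }          # proxy + balance
    }

    server {
        listen      80;
        server_name b.example.com;
        location / { proxy_pass http://127.0.0.1:9001; }  # proxy only, single node
    }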

Forward HTTP traffic with Erlang

I want to write a "smart" load balancer, with "smart" I mean that it should route different request on different servers and ports based on the URL.
As example somehost.com/server1 should go to the server1 while somehost.com/server3 should go to the server3.
However I don't want my load balancer to establish the connection with the client, make the requests to the backend server, and the return to the client.
The load balancer should be as much transparent as possible.
The request should arrive to the backend servers and then return immediately to the client, without the need to go through the load balancer.
How is this achievable ? Are there example in erlang ?
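For comparison, the URL-based routing half of this is straightforward in an ordinary reverse proxy such as nginx (the backend names below are invented), but note that such a proxy does relay the response back through itself; the fully transparent return path asked about here is normally done at layer 4, e.g. with LVS Direct Routing as discussed further down.

    server {
        listen      80;
        server_name somehost.com;

        # The trailing slash in proxy_pass strips the matched /serverN/ prefix.
        location /server1/ { proxy_pass http://backend1.internal:8001/; }
        location /server3/ { proxy_pass http://backend3.internal:8003/; }
    }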

Will a request to api.myapp.com be slower than a request to api-myapp.herokuapp.com when hosted on heroku?

I'm trying to understand the best way to handle SOA on heroku. I've got it into my head that making requests to custom domains will somehow be slower, or would all requests go "out" via the internet?
On previous projects which were SOA in nature we had dedicated hosting, so we could make requests like http://blogs/ (obviously on the internal network). I'm wondering if heroku treats *.herokuapp.com requests as "internal"... Or is it clever enough to know that myapp.com is actually myapp.herokuapp.com and route locally? Or am I missing the point completely, and in fact all requests are "external"?
What you are asking about is general knowledge of how internet requests work.
Whenever your application makes a request to, let's say, example.com, the domain name is first translated into an IP address using so-called DNS servers.
So this is how it works: no matter whether you request myapp.com or myapp.herokuapp.com, you always request information from a specific IP address, and the domain name you requested is passed as part of the request headers (the Host header).
The server that receives this request tries to find that domain name in its internal records and handles the request accordingly.
So the conclusion is that no matter whether you use myapp.com or myapp.herokuapp.com, the speed of the request will always be the same.
PS: As heroku will load-balance your requests between different instances of your running app, the speed here will depend on several factors: how quickly your application responds, how many instances you have running and the average load per instance, and how loaded the load balancer is at the moment. But it surely will not depend on which domain name you use.

Proxy server basics

I'm learning about network programming, specifically proxy servers. I've created a very rudimentary proxy server on my mobile phone. However, I think there are some proxy server basics I don't know that would help me create a more robust proxy server.
What I've done so far: a server on my mobile device listens for requests from my laptop. When the server receives a request like www.google.com, the web page contents are fetched and returned to the client on the laptop. The client then opens the page contents in a desktop browser.
I think the sending/receiving of requests can happen at a lower OSI layer (perhaps transport). How can I create a more robust proxy server? (one that just sends and receives bytes and doesn't care/know about HTTP)
A proxy server runs at the same layer as the protocol being proxied. It seems you are talking about an HTTP proxy. HTTP runs over TCP, and so does an HTTP proxy.
Define 'more robust'. What have you done so far?
An HTTP proxy server is a pretty simple thing, unless it has elaborate logging, caching, etc. The basis of it is (1) something to recognize and act on the GET/POST/PUT/CONNECT etc. commands and (2) thereafter just copying bytes in both directions simultaneously.
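A proxy that "just sends and receives bytes" without knowing about HTTP is essentially a plain TCP (layer 4) relay. As a point of reference, here is roughly what that looks like in nginx's stream module (the address is an example); a hand-written proxy does the same things: accept a connection, connect out, and copy bytes both ways.

    stream {
        server {
            listen     1080;            # accept any TCP client here
            proxy_pass 192.0.2.10:80;   # relay bytes verbatim, never parsing HTTP
        }
    }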

How do you load balance TCP traffic?

I'm trying to determine how to load balance TCP traffic. I understand how HTTP load balancing works because it is a simple Request / Response architecture. However, I'm unsure of how you load balance TCP traffic when your servers are trying to write data to other clients. I've attached an image of the work flow for a simple TCP chat server where we want to balance traffic across N application servers. Are there any load balancers out there that can do what I'm trying to do, or do I need to research a different topic? Thanks.
Firstly, your diagram assumes that the load balancer is acting as a (TCP) proxy, which is not always the case. Often Direct Routing (or Direct Server Return) is used, or Destination NAT is performed. In both cases the connection between backend server and the client is direct. So in this case it is essentially the TCP handshake that is distributed amongst backend servers. See the following for more info:
http://www.linuxvirtualserver.org/VS-DRouting.html
http://www.linuxvirtualserver.org/VS-NAT.html
Obviously TCP proxies do exist (HAProxy being one), in which case the proxy manages both sides of the connection, so your app would need to be able to identify the client by the incoming IP/port (which would happen to be from the proxy rather than the client). The proxy will handle getting the messages back to the client.
Either way, it comes down to application design as I would imagine the tricky bit is having a common session store (a database of some kind, or key=>value store such as Redis), so that when your app server says "I need to send a message to Frank" it can determine which backend server Frank is connected to (from DB), and signal that server to send it the message. You reduce the problem of connections (from the same client) moving around different backend servers by having persistent connections (all load balancers can do this), or by using something intrinsically persistent like a websocket.
This is probably a vast oversimplification as I have no experience with chat software. Obviously DB servers themselves can be distributed amongst several machines, for fault-tolerance and load balancing.
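As a concrete illustration of the persistence point, nginx's stream module can load-balance raw TCP and pin each client to one backend by hashing its address (the ports and addresses below are made up):

    stream {
        upstream chat_backends {
            hash $remote_addr consistent;   # same client IP -> same backend
            server 10.0.0.1:5000;
            server 10.0.0.2:5000;
        }
        server {
            listen     5000;
            proxy_pass chat_backends;
        }
    }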
