How do I separate traffic by domain? - networking

Background
I currently have multiple low-power servers running on my LAN. They all run different services, but some of them are similar. I also own 3 domains and have many sub-domains for those domains.
So, what's your problem?
Well, as I said before, some of my services are REALLY similar, and run on the same port (I have an Owncloud server on one, and my website is hosted on another). This means that if I want owncloud.mydomain.com to go to my Owncloud server, and www.mydomain.com to go to my web server, I have a little bit of an issue. Both sub-domains just go to my house and the services use the same port. I can't really separate the traffic per subdomain.
Edit: It also needs to be able to direct many types of traffic, like SSH, HTTPS, and FTP.
Possible Solutions
I've thought about just running the different services on different ports, but that would not be optimal AT ALL. It means it's weird to look at, people will have a harder time using any of my services, and it will generally be something I do not like.
I've thought about running similar services on the same server, but these are some pretty dinky servers. I'd rather not have to do anything like that at all. Also, since the servers are a little old, it's nice to know that if one of them dies, at least I'll have my other services. I don't think this option is good at all.
Best possible solution: I've heard that there's a service with the exact functionality I'm looking for called haproxy. My only issue with this is that I don't know how to use this service, and I especially don't know how to get the use I want out of it.
My final question
I would love to get haproxy working, I just need to know how to set it up the way I need. If anyone has a link to a tutorial on how to do what I want specifically (I've already found out how to get haproxy working, just not the way I want), then I would be really grateful. I would look for this myself, but I already have, and I don't even know what to search for. Can anyone help me out?
Thank you

Make your own config file, say haproxy.cfg, containing something like the following:

defaults
    mode http

frontend my_web_frontend
    bind 0.0.0.0:80
    timeout client 86400000
    acl is_owncloud hdr_end(host) -i owncloud.mydomain.com
    acl is_webserver hdr_end(host) -i www.mydomain.com
    use_backend owncloud if is_owncloud
    use_backend webserver if is_webserver

backend owncloud
    balance source
    option forwardfor
    option httpclose
    timeout queue 500000
    timeout server 500000
    timeout connect 500000
    server server1 10.0.0.25:5000 weight 1 maxconn 1024 check inter 10000

backend webserver
    balance source
    option forwardfor
    option httpclose
    timeout queue 500000
    timeout server 500000
    timeout connect 500000
    server server1 10.0.0.30:80 weight 1 maxconn 1024 check inter 10000
And then run haproxy on one of your servers.
./haproxy -f ~/haproxy.cfg
Point all your domains and subdomains to this machine. They'll route according to the config.
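Regarding the edit about other protocols: the Host-header ACLs above only work for plain HTTP. As a hedged sketch (untested, reusing the example addresses from the config above), HTTPS can be routed by domain without terminating TLS by adding a TCP-mode frontend that inspects the SNI name from the TLS handshake. SSH and FTP carry no hostname at all, so they can only be separated by port or IP, not by domain.

frontend my_tls_frontend
    bind 0.0.0.0:443
    mode tcp
    # wait briefly for the TLS ClientHello so the SNI name is readable
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_sni -m found }
    use_backend owncloud_tls if { req_ssl_sni -i owncloud.mydomain.com }
    use_backend webserver_tls if { req_ssl_sni -i www.mydomain.com }

backend owncloud_tls
    mode tcp
    server server1 10.0.0.25:443

backend webserver_tls
    mode tcp
    server server1 10.0.0.30:443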

You only need one IP address, but you need to configure the virtual hosts correctly. This link provides step-by-step details for Ubuntu virtual host configuration. This is the easiest way, and arguably the cheapest, if you insist on using your personal network.
https://www.digitalocean.com/community/articles/how-to-set-up-nginx-virtual-hosts-server-blocks-on-ubuntu-12-04-lts--3
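Roughly, the guide has you create one server block per hostname, all listening on the same IP. A minimal sketch for this question's setup (paths and LAN addresses are examples, not from the guide):

server {
    listen 80;
    server_name www.mydomain.com;
    root /var/www/mydomain.com/html;   # serve the website locally
}

server {
    listen 80;
    server_name owncloud.mydomain.com;
    location / {
        proxy_pass http://10.0.0.25:5000;   # hand off to the Owncloud box on the LAN
    }
}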

Related

Nginx Reverse Proxy With Alternating Live Backend Services

I have different versions of a backend service, and would like nginx to be like a "traffic cop", sending users ONLY to the currently online live backend service. Is there a simple way to do this without changing the nginx config each time I want to redirect users to a different backend service?
In this example, I want to shut down the live backend service and direct users to the test backend service. Then, vice-versa. I'm calling it a logical "traffic cop" which knows which backend service to direct users to.
I don't think adding all backend services to the proxy_pass using upstream load balancing will work. I think load balancing would not give me what I'm looking for.
I also do not want user root to update the /etc/hosts file on the machine, because of security and collision concerns with multiple programs editing /etc/hosts simultaneously.
I'm thinking of doing proxy_pass http://live-backend.localhost in nginx and using a local DNS server to manage the internal IP for live-backend.localhost, which I can re-point to another backend IP at any time. However, would nginx actually query the DNS server on every request, or does it resolve once and then cache the IP forever?
Am I over-thinking this? Is there an easy way to do this within nginx?
You can use the backup parameter to the server directive so that the test server will only be used when the live one is down.
NGINX queries DNS on startup and caches it, so you'd still have to reload it to update.
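A minimal sketch of that arrangement (addresses and ports are hypothetical):

upstream app {
    server 10.0.0.10:8080;            # live backend
    server 10.0.0.11:8080 backup;     # test backend, used only if the live one is down
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}

Swapping which backend is "live" then comes down to moving the backup keyword and reloading nginx.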

Achieve a "more than 10 connections, pass to next server" setup with NGINX or other

Idea
Gradually use a few small-scale dedicated servers in combination with an expensive cloud platform, where - on little traffic - the dedicated servers should be filled up first before the cloud kicks in. Hedging against occasional traffic spikes.
nginx
Is there an easy way (without nginx plus) to achieve a "waterfall-like" setup, where the small servers are filled first, up to a maximum number of concurrent connections (or better, current bandwidth), before the cloud platform sees any traffic?
Nginx Config, Libraries, Tools?
Thanks
You can use the nginx upstream module.
If you want total control, set up your cloud servers with the backup parameter, so that they won't be used until your primary servers fail. Then use custom monitoring scripts to determine when those cloud servers should kick in, change the nginx config, and remove the backup keyword from them. Also monitor the conditions under which you want to stop using the cloud servers, and alter the nginx config again.
A simpler solution (but without fine-tuning like avoiding spikes) is to use the max_conns=number parameter. Nginx should start to use the backup server once all the others have reached their max number of connections (I didn't test it).
NOTE: the max_conns parameter was only available in paid nginx between v1.5.9 and v1.11.5, so with those versions the only solution is your own monitoring plus reloading the nginx config when the upstream servers need to change. Thanks to Mickaël Le Baillif's comment for pointing out that this parameter is now available to all.
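A sketch of that simpler variant (hostnames and the limit of 10 are placeholders, and as noted above, the fallback behaviour deserves testing):

upstream waterfall {
    server dedicated1.example.com max_conns=10;
    server dedicated2.example.com max_conns=10;
    server cloud1.example.com backup;   # should only see traffic once the others are full or down
}

server {
    listen 80;
    location / {
        proxy_pass http://waterfall;
    }
}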

nginx behind load balancers

I've found that Instagram shares their technology implementation with other developers through their blog. They have some great solutions for the problems they run into. One of those solutions is an Elastic Load Balancer on Amazon with 3 nginx instances behind it. What is the task of those nginx servers? What is the task of the Elastic Load Balancer, and what is the relation between them?
Disclaimer: I am no expert on this in any way and am in the process of learning about the AWS ecosystem myself.
The ELB (Elastic Load Balancer) has no functionality on its own except receiving requests and routing them to the right server. The servers behind it can run Nginx, IIS, Apache, lighttpd, you name it.
I will give you a real use case.
I had one Nginx server running one WordPress blog. This server was, like I said, powered by Nginx serving static content and "upstreaming" .php requests to php-fpm running on the same server. Everything was going fine until one day the blog was featured on a TV show. I had a ton of users and the server could not keep up with that much traffic.
My first reaction was to just use the AMI (Amazon Machine Image) to spin up a copy of my server on a more powerful instance like m1.heavy. The problem was I knew traffic would keep increasing over the next couple of days. Soon I would have to spin up an even more powerful machine, which would mean more downtime and trouble.
Instead, I launched an ELB (Elastic Load Balancer) and updated my DNS to point website traffic to the ELB instead of directly to the server. The user doesn't know the server IP or anything; he only sees the ELB, and everything else goes on inside Amazon's cloud.
The ELB decides which server the traffic goes to. You can have an ELB with only one server behind it (if your traffic is low at the moment), or hundreds. Servers can be created and added to the server array (server group) at any time, or you can configure auto scaling to spawn new servers and add them to the ELB server group using the Amazon command line, all automatically.
Amazon CloudWatch (another product and important part of the AWS ecosystem) is always watching your servers' health and decides which server that user will be routed to. It also knows when all the servers are becoming too loaded and is the agent that gives the order to spawn another server (using your AMI). When the servers are not under heavy load anymore they are automatically destroyed (or stopped, I don't recall).
This way I was able to serve all users at all times, and when the load was light, I would have ELB and only one Nginx server. When the load was high I would let it decide how many servers I need (according to server load). Minimal downtime. Of course, you can set limits to how many servers you can afford at the same time and stuff like that so you don’t get billed over what you can pay.
You see, the Instagram guys said the following: "we used to run 2 Nginx machines and DNS Round-Robin between them". This is inefficient IMO compared to ELB. DNS Round Robin is DNS routing each request to a different server: the first goes to server one, the second goes to server two, and so on.
ELB actually watches the servers' HEALTH (CPU usage, network usage) and decides where traffic goes based on that. Do you see the difference?
And they say: "The downside of this approach is the time it takes for DNS to update in case one of the machines needs to get decommissioned."
DNS Round Robin is a form of load balancing. But if one server goes kaput and you need to update DNS to remove this server from the server group, you will have downtime (DNS takes time to propagate to the whole world). Some users will get routed to this bad server. With ELB this is automatic - if the server is in bad health it does not receive any more traffic - unless of course the whole group of servers is in bad health and you do not have any kind of auto-scaling setup.
And now the guys at Instagram: "Recently, we moved to using Amazon’s Elastic Load Balancer, with 3 NGINX instances behind it that can be swapped in and out (and are automatically taken out of rotation if they fail a health check).".
The scenario I illustrated is fictional. It is actually more complex than that but nothing that cannot be solved. For instance, if users upload pictures to your application, how can you keep consistency between all the machines on the server group? You would need to store the images on an external service like Amazon s3. On another post on Instagram engineering – “The photos themselves go straight to Amazon S3, which currently stores several terabytes of photo data for us.”. If they have 3 Nginx servers on the load balancer and all servers serve HTML pages on which the links for images point to S3, you will have no problem. If the image is stored locally on the instance – no way to do it.
All servers on the ELB would also need an external database. For that Amazon has RDS - all machines can point to the same database and data consistency would be guaranteed.
RDS also offers a "Read replica" - that is RDS's way of load balancing. I don't know much about that at this time, sorry.
Try and read this: http://awsadvent.tumblr.com/post/38043683444/using-elb-and-auto-scaling
Can you please point the blog entry out?
Load balancers balance load. They monitor the web servers' health (response time etc.) and distribute the load between the web servers. On more complex implementations it is possible to have new servers spawn automatically if there is a traffic spike. Of course you need to make sure there is consistency between the servers. They CAN share the same databases, for instance.
So I believe the load balancer gets hit and decides to which server it will route the traffic according to server health.
.
Nginx is a Web server that is extremely good at serving a lot of static content for simultaneous users.
Requests for dynamic pages can be offloaded to a different server using CGI, or the same servers that run nginx can also run php-fpm.
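As a rough illustration of that second option (root path and socket location are hypothetical):

server {
    listen 80;
    root /var/www/blog;
    index index.php index.html;

    # nginx serves static files itself
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # .php requests are handed to php-fpm on the same machine
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}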
.
A lot of possibilities. I am on my cell phone right now; tomorrow I can write a little more.
Best regards.
I am aware that I am late to the party, but I think the use of NGINX instances behind the ELB in the Instagram blog post is to provide a highly available load balancer, as described here.
NGINX instances do not seem to be used as web servers in the blogpost.
For that role they mention:
Next up comes the application servers that handle our requests. We run Django on Amazon High-CPU Extra-Large machines
So ELB is used just as a replacement for their older solution with DNS Round-Robin between NGINX instances that was not providing high availability.

Redirecting http traffic to another server temporarily

Assume you have one box (dedicated server) that's on 24/7 and several other boxes that are user machines with unused bandwidth. Assume you want to host several web pages. How can the dedicated server redirect HTTP traffic to the user machines? It is desirable that the address field in the web browser still displays the right address, and not an IP. I.e., I don't want to redirect to another web page, I want to tell the web browser that it should request the same web page from a different server. I have been browsing through the 3xx codes, and I don't think they are made for anything like this.
It should work somewhat along these lines:
1. Dedicated server is online all the time.
2. User machine starts and tells the dedicated server that it's online.
(several other user machines can do similarly)
3. Web browser looks up domain name and finds out that it points to dedicated server.
4. Web browser requests page.
5. Dedicated server tells web browser to repeat request to user machine
Is it possible to use some kind of redirect, and preferably tell the browser to keep sending further requests to the user machine? The user machine can close down at almost any point in time, but it is assumed that the user machine will wait for ongoing transactions to finish; no closing the server program in the middle of a GET or something.
What you want is called a Proxy server or load balancer that would sit in front of your web server.
The web browser would always talk to the load balancer, and the load balancer would forward the request to one of several back-end servers. No redirect is needed on the client side, as the client always thinks it is just talking to the load balancer.
ETA:
Looking at your various comments and re-reading the question, I think I misunderstood what you wanted to do. I was thinking that all the machines serving content would be on the same network, but now I see that you are looking for something more like a p2p web server setup.
If that's the case, using DNS and HTTP 30x redirects would probably be what you need. It would probably look something like this:
Your "master" server would serve as an entry point for the app, and would have a well known host name, e.g. "www.myapp.com".
Whenever a new "user" machine came online, it would register itself with the master server, and the master server would create or update a DNS entry for that user machine, e.g. "user123.myapp.com".
If a request came to the master server for a given page, e.g. "www.myapp.com/index.htm", it would do a 302 redirect to one of the user machines based on whatever DNS entry it had created for that machine - e.g. redirect them to "user123.myapp.com/index.htm".
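If the master server itself ran nginx, the redirect step might look something like this sketch (the target would really be picked dynamically from whatever machines have registered; here it is hard-coded for illustration):

server {
    listen 80;
    server_name www.myapp.com;

    # send the visitor to one of the registered user machines,
    # preserving the path they asked for
    return 302 http://user123.myapp.com$request_uri;
}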
Some problems I see with this approach:
First, once a user gets redirected to a user machine, if that machine went offline it would seem like the app was dead. You could avoid this by having all the links on every page point specifically to "www.myapp.com" instead of using relative links, but then every single request has to be routed through the "master server", which would be relatively inefficient.
You could potentially solve this by changing the DNS entry for a user machine when it goes offline to point back to the master server, but that wouldn't work without an extremely short TTL.
Another issue you'll have is tracking sessions. You probably wouldn't be able to use sessions very effectively with this setup without a shared session state server of some sort accessible by all the user machines. Although cookies should still work.
In networking, load balancing is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server).
and more interesting stuff in here
Apart from load balancing, you will need to set up a more or less similar environment on the "user machines".
This sounds like 1 part proxy, 1 part load balancer, and about 100 parts disaster.
If I had to guess, I'd say you're trying to build some type of relatively anonymous torrent... But I may be wrong. If I'm right, HTTP is entirely the wrong protocol for something like this.
You could use DNS. Off the top of my head, you could set up a hostname for each machine that is going to serve users:
www IN A xxx.xxx.xxx.xxx # ip address of machine 1
www IN A xxx.xxx.xxx.xxx # ip address of machine 2
www IN A xxx.xxx.xxx.xxx # ip address of machine 3
Then as others come online, you could add them to the DNS entries:
www IN A xxx.xxx.xxx.xxx # ip address of machine 4
The only problem is you'll have to lower the time-to-live (TTL) for each record (I think the default is 86400 seconds - 1 day).
If a machine goes down, you'll have to remove the DNS entry, though I do think this is the least intensive way of adding capacity to any website. Jeff Atwood has more info here: is round robin dns good enough?

Website currently being viewed

I have 50 machines on a LAN and each of them has internet access. Can a program be developed using VC++ which will tell which websites are being opened by users on each machine?
You can easily accomplish this by writing an application which captures outbound packets on port 80 (and the associated DNS information). The problem is that this application must run on every client computer you want to trace. The easier method, as stated by others, is to take advantage of your network architecture and tunnel all traffic through a central proxy, which can record the same information.
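As a rough command-line illustration of that first capture approach (the interface name is an example), before committing to writing the VC++ version:

# print the Host header of outbound HTTP requests on eth0
tcpdump -i eth0 -A 'tcp dst port 80' | grep --line-buffered "Host:"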
There are many, many enterprise tools suited to just this task in the latter instance.
Route your internet traffic through a centralized proxy and monitor the traffic from the proxy, say using Fiddler or something else. In case proxying is not possible, use Fiddler to generate data at a known location and then collate it at required intervals.
Install a firewall, if you don't already have one, and use it to log connections.
