I am using the meteorhacks:cluster package to load balance my application. https://github.com/meteorhacks/cluster
I am confused about how to set up DNS entries with this package.
It seems like for each server you should provide a local env variable called CLUSTER_BALANCE_URL, which is the DNS entry for that specific server. This makes sense as I can point a DNS entry at a single server.
But what about the ROOT_URL that is set on both servers? That needs to be the shared DNS entry that the user goes to. When I set up that DNS entry, which server do I point it to?
The DNS entries that you have pointing to your CLUSTER_BALANCE_URLs will take care of the DDP balancing.
You can have your DNS point the ROOT_URL to any server IP. There won't be a conflict. The ROOT_URL IP(s) will be the one(s) to take care of static load balancing.
https://github.com/meteorhacks/cluster#dns--ssl
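Putting it together, a minimal sketch of the environment on two app servers (the hostnames are placeholders for your own DNS entries):

    # Server 1
    export ROOT_URL=https://www.example.com            # shared entry users visit; point its DNS at any one server
    export CLUSTER_BALANCE_URL=https://s1.example.com  # DNS entry pointing at this server only

    # Server 2
    export ROOT_URL=https://www.example.com
    export CLUSTER_BALANCE_URL=https://s2.example.com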
Related
Context
I have a prod DNS name, www.example.com, and a test website DNS name, www.test-example.com. Long story short, I can spin up a self-hosted BIND DNS server on my local network that introduces a CNAME mapping www.example.com --> www.test-example.com. After that, I can access the test site through the prod URL.
The CNAME held in my local BIND DNS server is found before any records held externally.
Question
How does the DNS protocol know to find (and query) my local BIND DNS server first before checking any external NS?
Researching articles on DNS resolution ('DNS order of resolution'), I can see that local hosts files are checked before external records. However, I can't see what happens when hosting a local NS.
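For context, a minimal sketch of the setup described above, assuming the clients are configured (via /etc/resolv.conf, or DHCP handing out the resolver) to use the local BIND server; the addresses are placeholders:

    ; hypothetical fragment of the local BIND zone for example.com
    www.example.com.   IN  CNAME  www.test-example.com.

    # /etc/resolv.conf on a client: the stub resolver tries these in order
    nameserver 192.168.1.10   # local BIND server, queried first
    nameserver 8.8.8.8        # external resolver, used only if the local one doesn't respond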
I have different versions of a backend service, and would like nginx to be like a "traffic cop", sending users ONLY to the currently online live backend service. Is there a simple way to do this without changing the nginx config each time I want to redirect users to a different backend service?
In this example, I want to shut down the live backend service and direct users to the test backend service. Then, vice-versa. I'm calling it a logical "traffic cop" which knows which backend service to direct users to.
I don't think adding all backend services to the proxy_pass via upstream load balancing will work; load balancing would not give me what I'm looking for.
I also do not want user root to update the /etc/hosts file on the machine, because of security and collision concerns with multiple programs editing /etc/hosts simultaneously.
I'm thinking of doing proxy_pass http://live-backend.localhost in nginx and using a local DNS server to manage the internal IP for live-backend.localhost, which I can change (re-point to another backend IP) at any time. However, would nginx actually query the DNS server on every request, or does it resolve the name once and then cache the IP forever?
Am I over-thinking this? Is there an easy way to do this within nginx?
You can use the backup parameter to the server directive so that the test server will only be used when the live one is down.
NGINX queries DNS on startup and caches it, so you'd still have to reload it to update.
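A minimal sketch of that approach, assuming the live and test backends listen on hypothetical local ports 8080 and 8081:

    upstream backend {
        server 127.0.0.1:8080;           # live backend
        server 127.0.0.1:8081 backup;    # test backend, used only while the live one is down
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
        }
    }

When you shut down the live backend, nginx's passive health checks mark it as failed and send traffic to the backup; once the live one is back up, traffic returns to it automatically.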
I have a website that is growing to the point that my dedicated server can't handle it, and the lag is a real problem. So I decided to try cloud hosting, and for this purpose I will use nginx as a load balancer.
Question 1.
If I configure the main webserver where domain.tld is located as a load balancer like the example here, will I be able to use the same server for all my other domains (right now I use a normal nginx config to maintain 10 small websites on the same webserver), or will its main role be ONLY to balance and redirect the traffic?
Question 2.
Shall I put a copy of the files on the mirror servers?
Example: my website is in the http_web folder, where it communicates with the MySQL server. How are the requests handled? What happens when the balancing server redirects the client to server1?
Question 3.
I plan to start with this structure:
Load balancer (dedicated server) + MySQL -> HTTP server1, server2, and more on demand (3, 4, 5, ...). Is that OK?
This diagram should help with how you should set it up. Forgive me if I missed the point of your question. The green is what is public and the red is your private internal network. Your load balancer, which you want to be nginx, has two networks connected to it: your external public network and your internal network. The LB should handle all the IPs coming from your clients; nginx then delegates each client to one of the webapps through the private internal network. I hope this helps.
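On question 1, the same nginx can do both: the server block that proxies to the upstream group can sit alongside your existing server blocks for the other domains. A minimal sketch, with hypothetical private IPs:

    upstream app_servers {
        server 10.0.0.11;    # http server1
        server 10.0.0.12;    # http server2; add more on demand
    }

    # the balanced site
    server {
        listen 80;
        server_name domain.tld;
        location / {
            proxy_pass http://app_servers;
        }
    }

    # the other small sites keep their normal config on the same nginx
    server {
        listen 80;
        server_name other-site.tld;
        root /var/www/other-site;
    }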
When firing up multiple new EC2 instances, how do I make these new machines automatically accessible publicly on my domain ****.example.com?
So if I fire up two instances that would normally have public DNS names of
ec2-12-34-56.compute-1.amazonaws.com and ec2-12-34-57.compute-1.amazonaws.com,
I want them to instead be ec2-12-34-56.example.com and ec2-12-34-57.example.com.
Is there a way to use a VPC and Route53 or do I need to run my own DNS server?
Let's say you want to do this in the easiest way. You don't need a VPC.
First we need to set up an Elastic IP address. This is going to be the connection point between the Route53 DNS service (which you should absolutely use) and the instance. Go into the EC2 menu of the management console, click Elastic IPs and click create. Create it in EC2-Classic (the option will pop up). Remember this IP.
Now go into Route53. Create a hosted zone for your domain. Go into this zone and create a record set for staging.example.com (or whatever your prefix is). Leave it as an A record (the default) and put the Elastic IP in the textbox.
Note you now need to go into your registrar login (e.g. goDaddy) and replace the nameservers with the ones shown on the NS record. They will look like:
ns-1776.awsdns-30.co.uk.
ns-123.awsdns-15.com.
ns-814.awsdns-37.net.
ns-1500.awsdns-59.org.
and you will be able to see them once you create a hosted zone.
Once you've done this, it will direct all requests for that name to the Elastic IP address. But the IP isn't associated with anything yet. Once you have created an instance, go back into the Elastic IP menu and associate the IP with the instance. Now all requests to that domain will go to that instance; to change it, just re-associate. Make sure your security groups allow all traffic (or at least HTTP) or it will seem like it doesn't work.
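For reference, the same steps can be scripted with the AWS CLI; a minimal sketch, where the zone ID, IP, and instance ID are placeholders:

    # allocate an Elastic IP
    aws ec2 allocate-address

    # create/update the A record in the hosted zone
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z123EXAMPLE \
        --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":
          {"Name":"staging.example.com","Type":"A","TTL":300,
           "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'

    # associate the Elastic IP with the instance
    aws ec2 associate-address --instance-id i-0123456789abcdef0 --public-ip 203.0.113.10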
This is not good cloud architecture, but it will get the job done. Better cloud architecture would be making the route point to a load balancer, and attaching the instance to the load balancer. This should all be done in a VPC. It may not be worth your time if you are doing development.
I want to host a new subdomain, like blog.somesite.com, on an EC2 instance (ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com).
I have the DNS settings on a 3rd-party host (like GoDaddy), where the site IP address is the raw IP of the EC2 server (e.g. xxx.xxx.xx.xx) and not
ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com.
If I do an mxtoolbox DNS lookup on blog.myapp.com, the A record seems to have properly propagated. Do I need a CNAME record instead of an A record?
If I try to access blog.myapp.com in a browser, the connection just hangs forever. If I access myapp.com, it has always worked fine.
On my EC2 box I'm running nginx; does something need to be configured in nginx too?
Sorry about the newbieness - still learning.
Thank you!
To start with, you should assign an Elastic IP to your instance. Public IP addresses change if the instance is ever stopped. With an Elastic IP, you can re-associate the same IP address to the instance if you need to stop it.
If you are setting up a DNS record for the apex, it needs to be an A record (the apex is your domain with no subdomain).
For the domain blog.yourdomain.com you can set up either an A or CNAME record.
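A minimal sketch of what those records could look like (the values are placeholders):

    ; the apex must be an A record pointing at the Elastic IP
    myapp.com.        IN  A      203.0.113.10

    ; the subdomain can be an A record, or a CNAME to the EC2 public DNS name
    blog.myapp.com.   IN  CNAME  ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com.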
You will likely need to configure a server block in nginx so it responds to requests for your domain name.
You will also need to make sure port 80 is open on your security group, and system firewall if your OS has one configured.
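For example, a minimal sketch of an nginx server block for the subdomain (the root path is a placeholder; you may be proxying to an app instead):

    server {
        listen 80;
        server_name blog.myapp.com;

        root /var/www/blog;    # placeholder; or proxy_pass to your backend app
        index index.html;
    }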