How to connect an Amazon EC2 instance to an external domain registrar - WordPress

Hello (I'm posting this here because I apparently can't post on Amazon's forums). I'm new to Amazon AWS and I'm trying to figure out how to make a WordPress site on EC2 (is this the best choice?), and I just want to point my domain, registered at hover.com, at it.
I'm used to setting up sites and pointing domain names on other hosts, but AWS is different, so I need some help.
For starters, I keep being directed to Route 53. That's another AWS service, and it sounds optional; is it really optional? I ran the cost estimator for Route 53 with 1 hosted zone and 1000 hits a month, and it estimated around $400. That can't be right; I must be missing something. You're telling me that for 1000 hits on a particular domain name I'm going to have to pay $400?
Where's the name servers?
So to give a little history: I set up a WordPress site on an EC2 instance, created an Elastic IP, and attached it to that WP instance. I see a public IP address, great. The public IP address works fine, but I still need to hook up the domain name I have registered.
I put the public IP address in the DNS settings at Hover, but I need name servers, and the only way I was able to get name servers was through Route 53, which, again, is it really needed?
All in all, is this even the right route? Should I be using S3 instead of EC2? I'm on the free tier for a year, but I'd like to stick with AWS; I don't want to have to change to another host a year from now. I know Elastic IPs are another cost; is it an astronomical one? Do I even need to set up an Elastic IP? I'd assume so. Lots of questions... any help would be much appreciated. Thanks.

Amazon Route 53 costs pennies to use in most cases, so your estimate is way, way off.
You need to either tell your registrar that Route 53 will handle your DNS (by getting the four name servers from Route 53 when you set up the hosted zone and entering them on your domain at your registrar), or, if your registrar can do DNS for you, skip Route 53 entirely. Most registrars will do your DNS for free; even so, I prefer to pay the few dollars a month to have it all under one roof, so to speak.
In either case, once you know who is going to handle your DNS, you need to create at minimum a single 'A' record that points your domain name to your EC2 instance's IP. There are lots of variations on this, but at its simplest, that is what needs to happen.
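As a concrete sketch: the record at your DNS host would look something like the following, and you can verify it from any machine with dig (example.com and 203.0.113.10 are placeholders; use your own domain and Elastic IP):
# Hypothetical 'A' record as most DNS admin screens present it:
#   Type: A    Hostname: example.com.    Value: 203.0.113.10    TTL: 300
# After it propagates, confirm the domain resolves to your Elastic IP:
dig +short A example.com
# expected output: 203.0.113.10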

Your math is wrong:
I ran the cost estimator for Route 53 with 1 hosted zone and 1000 hits a month, and it estimated around $400.
Route53 pricing for zones: $0.50 per hosted zone / month for the first 25 hosted zones
So, 50 cents.
Route 53 pricing for traffic: $0.400 per million queries – first 1 Billion queries / month
So, 0.04 cents.
That's a total of 50.04 cents for a month. It'll take over 60 years to add up to $400 for Route53 DNS at that volume.
I'm having trouble tracking the rest of your question.

I created an Elastic IP on AWS and bound it to the EC2 instance. I used the public IP on my registrar's admin screen and left their existing name servers (Hover's) in the name server area.

EC2 wouldn't be my first choice if you don't expect high loads. For what you'll pay for EC2 (starting from around $10 a month) you can get space on shared hosting and even your own domain for a whole year. Moreover, you won't have to deal with DNS, because almost all hosting providers handle it for free.

Just to notify others: the OP's math was wrong because the AWS calculator counts queries in millions per month. He likely entered "1000 million queries per month", bringing the estimate to $400.

Related

How do I create an IP whitelist to avoid false positives?

To avoid false positives, how can we create a whitelist of IPs or IP ranges? I tried to create an IP whitelist by resolving the IPs of the whitelisted domains. Do you guys have any ideas?
The question is not completely clear to me. I don't understand exactly why you need an IP whitelist, but as far as I know it's better to have an IP blocklist/blacklist than a whitelist.
It might be the case that the IP address w.x.y.z is clean today, and somehow someone hacks the server tomorrow and serves malicious content from it. The IP is not clean anymore!
A daily-updated IP blocklist is better, since there are lots of services out there that serve such lists (for different types of abuse, like spam, malware, and phishing) and you can pull them on a daily basis.
If you have access to enterprise firewall/proxy logs or PCAP data, you can extract the traffic from that environment, do DNS resolution to get the IPs, sort the output from most hits to fewest, then grab the top N, as they will probably be commonly used hosts like Google, YouTube, Facebook, etc.
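A minimal shell sketch of that workflow, assuming a proxy log with the destination hostname in column 7 (the log name, field position, and cutoff of 100 are all placeholders to adapt):
# Count hits per hostname and keep the 100 most-visited as whitelist candidates.
awk '{print $7}' proxy.log | sort | uniq -c | sort -rn | head -n 100 > top_hosts.txt
# Resolve each candidate hostname to its current IPs; keep only the A records.
while read -r count host; do
  dig +short A "$host"
done < top_hosts.txt | grep -E '^[0-9.]+$' | sort -u > whitelist_ips.txt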
The problem with this approach is that reputation is fleeting: I've seen malware on Google Drive, Dropbox, Discord, Onedrive, Pastebin and also Github. Reputation is only as good as the hosting company is to remove malware from their sites. Some are fast to take down malware after reports, some are not.
You can also use statistical ranking data like Alexa to resolve FQDNs into IPs; just be aware that ranking does not equate to morality or acceptable-use policy, as there are plenty of torrent and porn sites listed on Alexa that you may not want flying under the radar on your corporate network.

Is there an IP range assigned to a certain country?

We are in a business where we need to block visitors from certain areas or countries. We want to show a 403 error page when a visitor comes from those areas.
What we can do now is, on every request, get the visitor's IP address and look up the country for that IP using a third-party service like Telize or ipapi.co, and show the error page if it's from a blocked country.
But the problem is that this check runs for every visitor, and if we make a curl call on every request it will definitely slow down our website.
Is there any way to get the country name from an IP address without a third-party service, a curl request, or anything else that would slow down our website?
We are using PHP and the Symfony 3 framework on a VPS, and speed and performance are very important for us, in case that helps.
At the moment we want to block visitors from Cameroon; is there an IP range assigned to Cameroon?
You can use the MaxMind GeoIP library for PHP.
The idea is that you download a database (which is just a file) containing geographical information for all the IPs in the world. Since the database sits on your server and you call it through the library, there is no network round trip to slow you down; getting the country code for an IP this way is so fast the performance impact is negligible.
The database is updated regularly, so you can periodically re-download it to stay up to date. You can get details about the downloadable databases here.
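To sanity-check what the lookup returns, you can also query the same downloaded database from the command line with mmdblookup, the CLI that ships with libmaxminddb (the database path and the 8.8.8.8 test address are just examples):
# Look up the country code for an IP straight from the .mmdb file:
mmdblookup --file /usr/share/GeoIP/GeoLite2-Country.mmdb --ip 8.8.8.8 country iso_code
# prints something like:  "US" <utf8_string>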
You can also generate an htaccess deny file for Cameroon's IP ranges at https://www.ip2location.com/free/visitor-blocker and block them at the .htaccess level, which will be much faster.

How do I set up global load balancing using Digital Ocean DNS and Nginx?

UPDATE: See the answer I've provided below for the solution I eventually got set up on AWS.
I'm currently experimenting with methods to implement a global load-balancing layer for my app servers on Digital Ocean, and there are a few pieces I've yet to put together.
The Goal
Offer highly-available service to my users by routing all connections to the closest 'cluster' of servers in SFO, NYC, LON, and eventually Singapore.
Additionally, I would eventually like to automate the maintenance of this by writing a daemon that can monitor, scale, and heal any of the servers on the system. Or I'll combine various services to achieve the same automation goals. First I need to figure out how to do it manually.
The Stack
Ubuntu 14.04
Nginx 1.4.6
node.js
MongoDB from Compose.io (formerly MongoHQ)
Global Domain Breakdown
Once I rig everything up, my domain would look something like this:
**GLOBAL**
global-balancing-1.myapp.com
global-balancing-2.myapp.com
global-balancing-3.myapp.com
**NYC**
nyc-load-balancing-1.myapp.com
nyc-load-balancing-2.myapp.com
nyc-load-balancing-3.myapp.com
nyc-app-1.myapp.com
nyc-app-2.myapp.com
nyc-app-3.myapp.com
nyc-api-1.myapp.com
nyc-api-2.myapp.com
nyc-api-3.myapp.com
**SFO**
sfo-load-balancing-1.myapp.com
sfo-load-balancing-2.myapp.com
sfo-load-balancing-3.myapp.com
sfo-app-1.myapp.com
sfo-app-2.myapp.com
sfo-app-3.myapp.com
sfo-api-1.myapp.com
sfo-api-2.myapp.com
sfo-api-3.myapp.com
**LON**
lon-load-balancing-1.myapp.com
lon-load-balancing-2.myapp.com
lon-load-balancing-3.myapp.com
lon-app-1.myapp.com
lon-app-2.myapp.com
lon-app-3.myapp.com
lon-api-1.myapp.com
lon-api-2.myapp.com
lon-api-3.myapp.com
And then if there's any strain on any given layer, in any given region, I can just spin up a new droplet to help out: nyc-app-4.myapp.com, lon-load-balancing-5.myapp.com, etc…
Current Working Methodology
A (minimum) trio of global-balancing servers receive all traffic. These servers are "DNS Round-Robin" balanced as illustrated in this (frankly confusing) article: How To Configure DNS Round-Robin Load Balancing.
Using the Nginx GeoIP Module and MaxMind GeoIP Data, the origin of any given request is determined down to the $geoip_city_continent_code.
The global-balancing layer then routes the request to the least connected server on the load-balancing layer of the appropriate cluster: nyc-load-balancing-1, sfo-load-balancing-3, lon-load-balancing-2, etc. This layer is also a (minimum) trio of droplets.
The regional load-balancing layer then routes the request to the least connected server in the app or api layer: nyc-app-2, sfo-api-1, lon-api-3, etc.
The details of the Nginx kung fu can be found in this tutorial: Villiage Idiot: Setting up Nginx with GSLB/Reverse Proxy on AWS. More general info about Nginx load-balancing is available here and here.
Questions
Where do I put the global-balancing servers?
It strikes me as odd that I would either put them all in one place or spread that layer out around the globe. Say, for instance, I put them all in NYC. Then someone from France hits my domain: the request would go from France to NYC, and then be routed back to LON. Or, if I put one of each in SFO, NYC, and LON, isn't it still possible that a user from Toronto (Parkdale, represent) could send a request that ends up going to LON, only to be routed back to NYC?
Do subsequent requests get routed to the same IP?
As in, if a user from Toronto sends a request that the global-balancing layer determines should go to NYC, does the next request from that origin go directly to NYC, or is it still the luck of the draw which global-balancing server it hits (NYC, in this case)?
What about sessions?
I've configured Nginx to use the ip_hash directive so it will direct a given user to the same app or api endpoint (a Node process, in my case), but how will global balancing affect this, if at all?
Any DNS Examples?
I'm not exactly a DNS expert (I'm currently trying to figure out why my CNAME records aren't resolving), but I'm a quick study when provided with a solid example. Has anyone gone through this process before who can provide a sample of what the DNS records look like for a successful setup?
What about SSL/TLS?
Would I need a certificate for every server, or just for the three global-balancing servers since that's the only public-facing gateway?
If you read this whole thing then reward yourself with a cupcake. Thanks in advance for any help.
The Goal: Offer highly-available service to my users by routing all connections to the closest 'cluster' of servers in SFO, NYC, LON, and eventually Singapore.
The global-balancing layer then routes the request to the least connected server...
If I'm reading your configuration correctly, you're actually proxying from your global balancers to the balancers at each region. This does not meet your goal of routing users to the nearest region.
There are three ways that I know of to get what you're looking for:
30x Redirect: Your global balancers receive the HTTP request and then redirect it to a server group in or near the region they think the request is coming from, based on IP address. This sounds like what you were trying to set up. This method has side effects for some applications, and it also increases the time it takes a user to get data, since you're adding a ton of overhead. It only makes sense if the resources you're redirecting to are very large, and the local regional cluster can serve them much more efficiently.
Anycast (taking advantage of BGP routing): This is what the big players like Akamai use for their CDNs. Basically, there are multiple servers out on the internet with the exact same routable IP address. Suppose I have servers in several regions, and they all have the IP address 192.0.2.1. If I'm in the US and try to connect to 192.0.2.1, and someone in Europe tries to connect to 192.0.2.1, it's likely we'll each be routed to the nearest server. This uses the internet's own routing to find the best path (based on network conditions) for the traffic. Unfortunately, you can't just use this method: you need your own AS number and physical hardware. If you find a VPS provider that lets you have a chunk of their anycast block, let me know!
Geo-DNS: Some DNS providers offer a service often marketed as "Geo-DNS". They have a bunch of DNS servers hosted on anycast addresses that can route traffic to your nearest servers: if a client queries a European DNS server, it gets back the addresses of your European region servers rather than ones in other regions. There are many variations on Geo-DNS services; some simply maintain a geo-IP database and return the server for the region they think is closest, like the redirect method but at the DNS level, before the HTTP request is ever made. This is usually the best option for price and ease of use.
Do subsequent requests get routed to the same IP?
Many load balancers have a "stickiness" option that says requests from the same network address should be routed to the same end server (provided that end server is still up and running).
What about sessions?
This is exactly why you would want that stickiness. When it comes to session data, you are going to have to find a way to keep all your servers up-to-date. Realistically, this isn't always guaranteed. How you handle it depends on your application. Can you keep a Redis instance or whatever out there for all your servers to reliably hit from around the world? Do you really need that session data in every region? Or can you have your main application servers dealing with session data in one location?
Any DNS Examples?
Post separate questions for these. Everyone's "successful setup" looks different.
What about SSL/TLS?
If you're proxying data, only your global balancers need to handle HTTPS. If you're redirecting, then all the servers need to handle it.
A Working Solution
I've had a wild ride over the past few months figuring out this whole Global-HA setup. Tonnes of fun, and I've finally settled on a rig that works very well and is nothing like the one outlined in the question above.
I still plan on writing this up in tutorial form, but time is scarce as I head into the final sprint to get my app launched early next year, so here's a quick outline of the working rig I ended up with.
Overview
I ended up moving my entire deployment to AWS. I love Digital Ocean, but the frank reality is that AWS is light years ahead of them (and everyone, really) when it comes to the services offered under one roof. My monthly expenses went up slightly, but once I was done tweaking and streamlining I ended up with a solution that costs about $75/month per region for the most basic deployment (2 instances behind an ELB). And a new region can be spun up and deployed within about 30 minutes.
Global Balancing
I quickly found out (thanks to @Brad's answer above) that trying to spin up my own global-balancing DNS layer is insane. It was a hell of a lot of fun figuring out how a layer like this works, but short of getting on a plane and scraping my knuckles installing millions of dollars' worth of equipment around the world, it was not going to be possible to roll my own.
When I finally figured out what I was looking for, I found my new best friend: AWS Route 53. It offers a robust DNS network with 50-odd nodes globally and the ability to do some really cool routing tricks, like location-based routing, latency-based routing (which is kinda awesome), and AWS Alias records that 'automagically' route traffic to the other AWS services you'll be using (like ELB for load balancing).
I ended up using latency-based routing that directs the global traffic to the closest regional Elastic Load Balancer, which has an Auto-Scaling Group attached to it in any given region.
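For the curious, creating one of those latency records through the AWS CLI looks roughly like the sketch below; you repeat it once per region with a different SetIdentifier/Region. The hosted zone ID, ELB DNS name, and the ELB's alias hosted-zone ID are all placeholders to replace with your own values:
# Hypothetical latency-based alias record pointing myapp.com at the us-east-1 ELB.
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "myapp.com.",
      "Type": "A",
      "SetIdentifier": "us-east-1",
      "Region": "us-east-1",
      "AliasTarget": {
        "HostedZoneId": "Z35EXAMPLE",
        "DNSName": "myapp-elb-123456.us-east-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": true
      }
    }
  }]
}'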
I'll leave it up to you to do your homework on the other providers: www.f5.com, www.dyn.com, www.akamai.com, www.dnsmadeeasy.com. Depending on your needs, there may be a better solution for you, but this works very well for me.
Content Delivery Network
Route 53 integrates with AWS CloudFront very nicely. I set up an S3 bucket that I'm using to store all the static media files my users upload, and I've configured a CloudFront distribution to source from my media.myapp.com S3 bucket. There are other CDN providers, so do your shopping, but CloudFront gets pretty good reviews and it's a snap to set up.
Load Balancing & SSL Termination
I'm currently using AWS Elastic Load Balancer to balance the load across my application instances, which live in an Auto-Scaling Group. The request is first received by the ELB, at which point SSL is terminated and the request is passed through to an instance in the Auto-Scaling Group.
NOTE: One giant caveat with ELB is that, somewhat ironically, it doesn't handle massive spikes very well. It can take up to 15 minutes for an ELB to trigger a scale-up event for itself, creating 500s/timeouts in the meantime. A steady, constant increase in traffic is supposedly handled quite well, but if you get hit with a spike it can fail you. If you know you're going to get hit, you can 'call ahead' and AWS will warm up your ELB for you, which is pretty ridiculous and an anti-pattern to the essence of AWS, but I imagine they're either working on it or ignoring it because it's not really that big of a problem. You can always spin up your own HAProxy or Nginx load-balancing layer if ELB doesn't work for you.
Auto-Scaling Group
Each region has an ASG which is programmed to scale when the load passes a certain metric:
IF CPU > 90% FOR 5 MINUTES: SCALEUP
IF CPU < 70% FOR 5 MINUTES: SCALEDN
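Wired up with the AWS CLI, the scale-up half looks roughly like this (the group name, alarm name, and policy ARN are placeholders; the scale-down rule is the mirror image with -1, a 70 threshold, and LessThanThreshold):
# Hypothetical simple-scaling policy: add one instance when triggered.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name myapp-asg \
    --policy-name scale-up \
    --scaling-adjustment 1 \
    --adjustment-type ChangeInCapacity
# CloudWatch alarm that fires the policy: CPU > 90% for one 5-minute period.
# Use the PolicyARN printed by the previous command as the alarm action.
aws cloudwatch put-metric-alarm \
    --alarm-name myapp-cpu-high \
    --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average \
    --dimensions Name=AutoScalingGroupName,Value=myapp-asg \
    --period 300 --evaluation-periods 1 \
    --threshold 90 --comparison-operator GreaterThanThreshold \
    --alarm-actions <PolicyARN-from-above>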
I haven't yet put the ELB/ASG combo through its paces. That's a little way down my To-Do list, but I do know that there are many others using this setup and it doesn't seem to have any major performance issues.
The config for an Auto-Scaling Group is a little convoluted in my opinion. It's actually a three-step process:
Create an AMI configured to your liking.
Create a Launch Configuration that uses the AMI you've created.
Create an Auto-Scaling Group that uses the Launch Configuration you've created to determine what AMI and instance type to launch for any given SCALEUP event.
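Steps 2 and 3 map onto the CLI roughly as follows (the AMI ID, subnet ID, ELB name, instance type, and sizes are placeholders):
# Hypothetical launch configuration built from your baked AMI, with the
# bootstrap script passed in as User Data (see the next paragraph).
aws autoscaling create-launch-configuration \
    --launch-configuration-name myapp-lc \
    --image-id ami-0abc1234 --instance-type t2.micro \
    --user-data file://bootstrap.sh
# Hypothetical Auto-Scaling Group wired to the launch configuration and the ELB.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name myapp-asg \
    --launch-configuration-name myapp-lc \
    --min-size 2 --max-size 6 \
    --vpc-zone-identifier subnet-0abc1234 \
    --load-balancer-names myapp-elb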
To handle config and app deployment when any instance launches, you use the "User Data" field to input a script that runs once when the instance first boots. This is possibly the worst nomenclature in the history of time; how "User Data" describes a startup script, only the author knows. Anyhow, that's where you stick the script that handles all your apt-gets, mkdirs, git clones, etc.
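A minimal sketch of such a User Data script, assuming an Ubuntu AMI (the package list and repo URL are placeholders):
#!/bin/bash
# Hypothetical bootstrap script passed as User Data; runs once at first boot.
apt-get update && apt-get install -y git nginx nodejs npm
mkdir -p /srv/myapp
git clone https://github.com/example/myapp.git /srv/myapp   # placeholder repo
cd /srv/myapp && npm install
service nginx restart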
Instances & Internal Balancing
I've also added an additional 'internal balancing layer' using Nginx that lets me 'flat-pack' all my Node.js apps (app.myapp.com, api.myapp.com, mobile.myapp.com, www.myapp.com, etc.myapp.com) onto every instance. When an instance receives a request passed to it from the ELB, Nginx routes the request to the correct Node.js port for the application in question. Sort of like a poor man's containerization. This has the added benefit that any time one of my apps needs to talk to another (like when app. needs to send a request to api.), it happens via localhost:XXXX rather than going out across the AWS network, or the internet itself.
This setup also maximizes usage of my resources by eliminating idle infrastructure when an app layer it hosts happens to be receiving light traffic. It also obviates the need for an ELB/ASG combo for every app, saving more cash.
There are no gotchas or caveats that I've run into using this sort of setup, but there is one work-around that needs to be in place with regard to health checks (see below).
There's also a nice benefit in that all instances have an IAM role which means that your AWS creds are 'baked in' to each instance upon birth and accessible via your ENV vars. And AWS 'automagically' rotates your creds for you. Very secure, very cool.
Health Checks
If you go the route of the above setup, flat-packing all your apps on one box and running an internal load balancer, then you need to create a little utility to handle the ELB health checks. What I did was create an additional app called ping.myapp.com, and then I configured my ELB health checks to hit the port that my ping app runs on, like so:
Ping Protocol: HTTP
Ping Port: XXXX
Ping Path: /ping
This sends all health checks to my little ping helper, which in turn hits localhost:XXXX/ping on every app residing on the instance. If they all return a 200 response, my ping app returns a 200 response to the ELB health check, and the instance gets to live for another 30 seconds.
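The aggregate logic of that little ping app, sketched in shell (the real helper is just another Node service; the ports here are placeholders for wherever your apps listen):
# Hypothetical check: every app on the box must answer 200 on its own /ping,
# otherwise the instance reports unhealthy to the ELB.
for port in 3000 3001 3002; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:${port}/ping")
  if [ "$code" != "200" ]; then
    echo "port ${port} unhealthy (${code})"
    exit 1
  fi
done
echo "all apps healthy"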
NOTE: Do not use the Auto-Scaling health checks if you're using an ELB; use the ELB health checks. It's kinda confusing: I thought they were the same thing, but they're not. You have the option to enable one or the other. Go with ELB.
The Data Layer
One thing that is glaringly absent from my setup is the data layer. I use Compose.io as my managed data-layer provider, and I deploy on AWS, so I get very low latency between my app layers and my data layer. I've done some preliminary investigation of how I would roll my data layer out globally and found that it's very complex (and very expensive), so I've kicked it down my list as a problem that doesn't yet need to be solved. The worst case is that I'll be running my data layer in US-East only and beefing up the hardware. This isn't the worst thing in the world, since my API is strictly JSON data on the wire, so the average response is relatively tiny. But I can see this becoming a bottleneck at very large, global scale, if I ever get there. If anyone has any input on this layer, I'd love to hear what you have to say.
Ta-Da!
Global High Availability On A Beer Budget. Only took me 6 months to figure it out.
Love to hear any input or ideas from anyone that happens to read this.
You can use anycast for your web service for free on Cloudflare's free plan.
Digital Ocean now supports load balancing of servers itself. It is extremely easy to set up and works great! It saves you having to add unnecessary components such as nginx (if you only want it for load balancing).
We were having issues with SSL file uploads through nginx on a Digital Ocean server; since the Digital Ocean update, we have removed nginx and now use Digital Ocean's load-balancing feature, and it works just as we need it to!

Is there a list of IPs available that I can block?

Yesterday I set up some software that tracks all HTTP requests across our network of websites. After analyzing the first day of traffic, we found nearly a dozen IPs that were flat-out harvesting our data. It's pretty obvious when one IP browses 300 pages in the space of an hour, lol. I did a reverse lookup on these, and the majority were from Singapore, China, etc., so they weren't search-engine bots.
Does anyone know a service or website that maintains a list of bad IPs that should be blocked?
Yes, there is such a list, but it's dynamic, so there's no download for it; you query it via DNS.
Have a look at the http:BL service from projecthoneypot.org:
http://www.projecthoneypot.org/httpbl_api.php
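Queries go over DNS: you prepend your http:BL access key to the reversed octets of the IP you want to check and look up an A record under dnsbl.httpbl.org. A sketch with dig (abcdefghijkl is a placeholder key, and 203.0.113.7 is a documentation address; see the API page above for the exact response encoding):
# Check 203.0.113.7: reverse its octets and prepend your access key.
dig +short A abcdefghijkl.7.113.0.203.dnsbl.httpbl.org
# No answer means the IP is not listed; an answer like 127.3.5.1 encodes
# days since last activity, a threat score, and the visitor type in its octets.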

Unable to investigate DNS poisoning between China and US

I am interested to know how DNS requests to political sites differ between countries.
I want to know how I can send a DNS query to a remote computer, let's say in China, and then compare the results to the US. The goal of the experiment is to get hands-on experience with the concept of DNS poisoning; my lectures feel too theoretical.
How can I compare DNS requests between China and the US, so that I can investigate DNS poisoning?
This depends a bit on how the queries are being altered. If the server is giving different results based on your locality, then asking it directly will not be of any use. If your queries are being poisoned by a caching server in between, these methods might help.
If you have shell accounts in different parts of the world you can perform a simple test.
I'm using dig, which is available on most *nix systems. If you're running Windows, you might want to search for an alternative in this list of DNS tools.
To find the responsible DNS servers:
dig ns domain-in-question.com @the.dns.server.you.want.to.use
To get the IP address for a hostname:
dig a host.domain-in-question.com @the.dns.server.you.want.to.use
(You can skip the @... part to run with your current default server.)
I recommend trying both of these from different parts of the world to see whether the server itself is giving different results or the caching servers along the way are being poisoned.
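For example, you can diff the answers a resolver inside China and a US resolver give for the same name (114.114.114.114 is a well-known public resolver in China, 8.8.8.8 is Google's; the domain is a placeholder):
# Ask a Chinese resolver and a US resolver for the same A record and compare.
dig +short A some-site.example @114.114.114.114
dig +short A some-site.example @8.8.8.8
# Consistently different or clearly bogus answers from one side hint at tampering.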
Also, searching for 'how to poison dns' gave me a number of practical results.
You can just use nslookup (the server command lets you specify the DNS server to ask)
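For instance (8.8.8.8 is just an example resolver):
# Non-interactive form: the name first, then the server to query.
nslookup example.com 8.8.8.8
# Or interactively: run nslookup, then 'server 8.8.8.8', then type the name.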
Try this web tool:
http://www.kloth.net/services/dig.php
As for learning about DNS poisoning: every computer has settings for which DNS server to trust, and so on. If one server in the chain is compromised, every computer downstream will receive bad information.
If the remote servers are correctly configured, they won't let you interrogate them.
Any recursive resolver should be configured to provide answers only to the clients it's intended to serve.
