Applying Domain Name to EC2 Instance - nginx

I want to host a new subdomain, like blog.somesite.com, on an EC2 instance (ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com).
I have the DNS settings on a 3rd-party host (like GoDaddy). The site IP address in that record is the EC2 server's public IP, e.g. xxx.xxx.xx.xx, not ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com.
If I do an MXToolbox DNS lookup for blog.myapp.com, the A record seems to have propagated properly. Do I need a CNAME record instead of an A record?
If I try to access blog.myapp.com in a browser, the connection just hangs and never completes. Accessing myapp.com has always worked fine.
On my EC2 box I'm running nginx; does something need to be configured in nginx too?
Sorry about the newbieness - still learning.
Thank you!

To start with, you should assign an Elastic IP to your instance. The public IP address will change if the instance is ever stopped; with an Elastic IP, you can re-associate the same address with the instance after stopping it.
If you are setting up a DNS record for the apex, it needs to be an A record (the apex is your domain with no subdomain).
For blog.yourdomain.com you can set up either an A record or a CNAME record.
You will likely need to configure a server block (virtual host) in nginx so it responds to requests for your domain name.
You will also need to make sure port 80 is open in your security group, and in the system firewall if your OS has one configured.
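A minimal server block along those lines could look like the following sketch (the domain is the one from the question; the root path and location block are placeholders, not taken from the original setup):

server {
    listen 80;
    server_name blog.myapp.com;    # the subdomain pointed at the instance
    root /var/www/blog;            # placeholder path to the site's files

    location / {
        try_files $uri $uri/ =404;
    }
}

After reloading nginx (nginx -s reload), testing with curl -H "Host: blog.myapp.com" http://<server-ip>/ helps separate nginx configuration problems from DNS problems.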

Related

DNS order of resolution

Context
I have a production DNS name, www.example.com, and a test website DNS name, www.test-example.com. Long story short, I can spin up a self-hosted BIND DNS server on my local network that introduces a CNAME mapping www.example.com --> www.test-example.com, after which I can access the test site through the production URL.
The CNAME held in my local BIND DNS server is found before any records held externally.
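For concreteness, the override described above could look roughly like this on the local BIND server (a sketch; the file paths, the ns/admin names and the SOA values are assumptions, not from the question):

// named.conf fragment on the local BIND server
zone "example.com" {
    type master;
    file "/etc/bind/db.example.com";
};

; /etc/bind/db.example.com
$TTL 300
@    IN SOA    ns.example.com. admin.example.com. ( 1 3600 600 86400 300 )
     IN NS     ns.example.com.
ns   IN A      192.168.1.10              ; the local BIND server (placeholder)
www  IN CNAME  www.test-example.com.     ; prod name answered with the test site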
Question
How does the DNS protocol know to find (and query) my local BIND DNS server first, before checking any external NS?
Researching articles on DNS resolution ('DNS order of resolution'), I can see that local hosts files are checked before external sources. However, I can't see what happens when hosting a local NS.

Access Multiple Web Sites Hosted on single server on local network from workstations

I am trying to set up a secondary web site hosted on our local domain controller running IIS 8.
I already have one site working successfully throughout our network, the default site.
I have successfully got the second one to work on localhost (the domain controller, Server 2012 R2), but I can't seem to access it from any of the other workstations on our network.
I added the new site.
Set the binding to IP address: 192.168.1.1, Port: 80, Host Name: dyo.mysite.com.
I have modified C:\Windows\system32\drivers\etc\hosts to show 192.168.1.1 dyo.mysite.com, and I have added an alias to the forward lookup zone in the DNS Manager (Name: byo.mysite.com, FQDN: byo.mysite.com.mydc.com, Target Host: 192.168.1.1).
I can't seem to access the site from any of the network workstations. I have tried many combinations of addresses: http://byo.mysite.com, 192.168.1.1/byo.mysite.com, \mydc\byo.mysite.com, etc.
I would imagine that I am probably missing something simple. I just don't know what it is.
Any insight would be greatly appreciated.
To get your server accessible from other workstations, you have to ensure:
Your IIS site can be accessed directly via its IP address.
The client workstations are using your DNS server.
The client workstations are not bypassing your DNS server via a .pac proxy.
So first, can you access the website via its IP address, by disabling the default website and binding your site to an unassigned IP (or 192.168.1.1) with an empty host name?
If you want to access the website via byo.mysite.com, then you shouldn't set an FQDN like byo.mysite.com.mydc.com, because the web browser will never treat byo.mysite.com as an alias of that name; it considers it a different server. That's why, with the FQDN set that way, you can get it to work by accessing http://dyo or byo.mysite.com.mydc.com, but it fails with byo.mysite.com.
How to set DNS correctly
To get it to work, create a new primary Forward Lookup Zone named mysite.com. Then create a new Host (A) record mapping your machine name, e.g. dc.mysite.com, to 192.168.1.1. Then create an Alias (CNAME) called www pointing to that A record; the FQDN will then be www.mysite.com.
Finally, bind your IIS site to that name, and accessing the website should work.
PS: Please make sure your other workstation is not using a proxy.
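In zone-file notation, the records described above amount to something like this (a sketch; dc and 192.168.1.1 are the example values from the answer):

; forward lookup zone mysite.com
dc    IN A      192.168.1.1
www   IN CNAME  dc.mysite.com.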

IIS 10 Site Bindings wildcard development machine

I have successfully set up IIS on my local development machine (dev branch, set up as localdev.me), but when I went to set up another branch (hotfix, set up as localhotfix.me) I ran into issues. The issues are due to the way the site is set up: the subdomain of the URL is used to determine which database to connect to, so going to host.localdev.me will connect to the host database. So in IIS I have the following settings for the bindings of the site.
Type   Host Name        Port   IP Address
http   localdev.me      80     *
http   *.localdev.me    80     *
I can ping localdev.me with any subdomain and I get the loopback address as expected. When I then set up the hotfix branch (exactly the same as dev, but with the following bindings) I get name-not-resolved errors.
Type   Host Name          Port   IP Address
http   localhotfix.me     80     *
http   *.localhotfix.me   80     *
Is there a reason the first setup would work and not the second? What is perhaps even stranger: if I stop IIS, I can still ping subdomains of localdev.me and get the loopback address.
I could always get it working by manually specifying the host name in my Windows hosts file, but I would rather not do that, as I would need to go in and edit the file every time we add a new subdomain.
EDIT: These are the specific errors I am getting.
ping localhotfix.me
Ping request could not find host localhotfix.me. Please check the name and try again.
EDIT2: I have a solution that works fairly well. It requires Acrylic DNS Proxy and installation of the Microsoft Loopback Adapter. I set the loopback adapter to a valid IP address, set the system's DNS server to 127.0.0.1, then edited the AcrylicHosts file to contain a wildcard entry for each domain. Once I did all of this I was able to ping localhotfix.me along with *.localhotfix.me. I believe the reason localdev.me worked is that it is a valid registered domain: the name would resolve, at which point IIS was able to take over. That's really just an educated guess, but it kind of makes sense as to why it worked for one and not the other.
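For reference, the wildcard entries in the AcrylicHosts file would look something like this (a sketch based on Acrylic's hosts syntax; the domains are the ones from the question):

127.0.0.1 localdev.me *.localdev.me
127.0.0.1 localhotfix.me *.localhotfix.me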
The reason *.localdev.me works without a hosts file is because the public DNS for that domain resolves to 127.0.0.1 as long as it is not localdev.me or www.localdev.me. You can check this using nslookup *.localdev.me (replace the asterisk with anything except www) while your hosts file is empty. On the other hand, *.localhotfix.me is not registered in public DNS at all, which is why you'd need a hosts file entry for those.

AWS automatically route EC2 instances to domain

When firing up multiple new EC2 instances, how do I make these new machines automatically accessible publicly on my domain ****.example.com?
So if I fire up two instances that would normally have a public DNS of
ec2-12-34-56.compute-1.amazonaws.com and ec2-12-34-57.compute-1.amazonaws.com
instead be ec2-12-34-56.example.com and ec2-12-34-57.example.com
Is there a way to use a VPC and Route53 or do I need to run my own DNS server?
Let's say you want to do this the easiest way. You don't need a VPC.
First we need to set up an Elastic IP address. This is going to be the connection point between the Route53 DNS service (which you should absolutely use) and the instance. Go into the EC2 section of the management console, open Elastic IPs and allocate a new address. Create it in EC2-Classic (the option will pop up). Remember this IP.
Now go into Route53. Create a hosted zone for your domain. Go into this zone and create a record set for staging.example.com (or whatever your prefix is). Leave it as an A record (the default) and put the Elastic IP in the value box.
Note that you now need to go into your registrar account (e.g. GoDaddy) and replace the nameservers with the ones shown in the NS record. They will look like:
ns-1776.awsdns-30.co.uk.
ns-123.awsdns-15.com.
ns-814.awsdns-37.net.
ns-1500.awsdns-59.org
and you will be able to see them once you create a hosted zone.
Once you've done this, DNS will send all requests for that name to the Elastic IP address, but the address isn't associated with anything yet. Once you have created an instance, go back into the Elastic IP menu and associate the address with the instance. Now all requests to that domain will go to that instance; to change the target, just re-associate the address. Make sure your security groups allow the traffic (or at least HTTP) or it will seem like it doesn't work.
This is not good cloud architecture, but it will get the job done. Better cloud architecture would be pointing the record at a load balancer and attaching the instances to the load balancer. This should all be done in a VPC. It may not be worth your time if you are just doing development.
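If you prefer to script these steps, here is a rough sketch with the AWS CLI, assuming it is installed and configured (the instance ID, hosted zone ID and IP below are placeholders; for a VPC address you would use --allocation-id instead of --public-ip):

aws ec2 allocate-address
aws ec2 associate-address --instance-id i-0123456789abcdef0 --public-ip 203.0.113.10
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"staging.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'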

Nginx: two virtual hosts, one with a domain name and one on localhost

On my Nginx I've got two hosts.
One with the values
server_name www.mydomain.com;
root /var/www/production/myFirstWebSite;
and the other with
server_name localhost;
root /var/www/development/mySecondWebSite;
In my domain registrar account I configured the DNS with two A records:
www IN A myIP
IN A myIP
This is cool; I can reach my first website with www.mydomain.com or mydomain.com.
Now the problem is how to reach my second website, which is in development and for which I haven't bought a domain name. And myIP/development/mySecondWebSite no longer works...
I think the problem comes from the DNS entries, but I'm not sure.
Do you have any ideas?
Thanks in advance.
There are a couple of ways I can think of to access the localhost one.
Creating a subdomain instead of localhost
This is the one I'd recommend most: try something like server_name localhost.mydomain.com (see the sketch after this list).
If you need further security, you could make it allow only a certain IP or a range of IPs.
Play with your hosts file
In this specific case I would not recommend this, because you'd be messing with localhost itself, which might break other things on your machine; if it were any other name, I'd say it's fine.
Use an ssh tunnel to the server
In this method you create a dynamic forwarding port on your SSH connection and set your browser to pass all traffic through the tunnel, which goes to the server and is handled from there; if you browse to localhost, for example, it is as if you were browsing localhost on the server itself. But since this involves a browser setting, you need to remember to disable it after you disconnect the SSH connection, otherwise the browser will return an error saying that the proxy server is refusing the connection.
Using a local Nginx as a proxy
This one I just came up with right now, and I can't say whether it would work or not; the three before it I've used before and I know they work.
You'd set a certain domain name that your local nginx would capture and then proxy to the remote server, but rewrite the Host header to localhost so that it matches the localhost server block on the remote machine. If this one works, it would not need any setting to be turned on and off every time.
Out of all of these, I'd recommend the first one (if it's an option), then try the last one if you don't want to keep turning things on and off before and after each session.
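For the first option, a minimal sketch of the development server block (the subdomain follows the suggestion above, the root path is the one from the question, and the allowed IP range is a placeholder):

server {
    listen 80;
    server_name localhost.mydomain.com;    # development subdomain instead of plain localhost
    root /var/www/development/mySecondWebSite;

    allow 203.0.113.0/24;    # placeholder: your own IP or range
    deny all;
}

You would also add an A record for that subdomain at your registrar, pointing at the same myIP as the other records.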
