Access an ELB from inside the VPC using a domain name and subdomain, without going out to the public internet

How can I access a subdomain from inside the VPC, going through the ELB?
Let's say I have three EC2 instances: two of them run an API endpoint behind the load balancer, and the third one calls their API.
I have a domain in Route 53 with a subdomain for the API endpoint, which I am trying to reach from within the VPC. Is this possible?
What I am trying to do: EC2-3 -> api.mysite.com (staying inside the VPC) -> load balancer -> target group -> either instance 1 or 2.

Create a private hosted zone, associate it with the VPC that the resources are deployed in, and then create an alias record that points to the ELB.
That private zone will be resolvable without leaving the network; just ensure that your ELB is an internal load balancer.
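A minimal AWS CLI sketch of those steps, assuming a VPC vpc-0abc123 in us-east-1 and an internal ALB named my-internal-alb (all IDs and names are placeholders):

# Create a private hosted zone for mysite.com and associate it with the VPC
aws route53 create-hosted-zone \
  --name mysite.com \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0abc123 \
  --caller-reference private-zone-$(date +%s)

# Look up the internal load balancer's DNS name and canonical hosted zone ID
aws elbv2 describe-load-balancers --names my-internal-alb \
  --query 'LoadBalancers[0].[DNSName,CanonicalHostedZoneId]'

# Alias record: api.mysite.com -> the internal load balancer
aws route53 change-resource-record-sets \
  --hosted-zone-id <private-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.mysite.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<elb-canonical-hosted-zone-id>",
          "DNSName": "<internal-elb-dns-name>",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'

After that, EC2-3 can call api.mysite.com and both the lookup and the traffic stay inside the VPC.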

Related

Reusing custom domain between GCP and Firebase

I am planning to host 2 webapps using Firebase Hosting: example.com and dev.example.com. For corresponding APIs, I have 2 projects on GCP (using managed instance groups and a load balancer) with custom domains: api.example.com and dev-api.example.com.
Is it possible to have a setup where subdomains of the custom domain example.com can be split across Firebase and the GCP load balancer? I thought this was a popular setup but can't find any documentation or how-to around it. I am using Google Domains as the domain provider for example.com and Google-managed SSL certificates as well. All the projects belong to one account.
Assuming that you are using a classic HTTPS load balancer with your GCP project, you can link your Firebase Hosting to your LB as an additional backend through an internet network endpoint group (NEG), so all of them can be reached through the same load balancer IP.
To do this:
Edit the current load balancer and go to Backend configuration.
Create a backend service and, under Backend type, select Internet network endpoint group.
Under Backends > New Backend, create an internet network endpoint group. This will take you to Network endpoint groups under Compute Engine.
Under New network endpoint > Add through, you may select IP and port or Fully qualified domain name and port. Supply the correct FQDN or IP of your Firebase Hosting and the port it is listening on, then click Create.
Finish creating the backend service using the internet network endpoint group that you created as the backend type.
Under Host and Path rules, click +Add Host and Path Rule and fill out the Host field with the domain of your Firebase Hosting. For Path, just put /*. Then select the internet network endpoint group that you created as the backend.
I am also assuming that your Google-managed certificate is deployed on the load balancer. If that is the case, you may provision another Google-managed SSL certificate and include all four domains:
example.com
dev.example.com
api.example.com
dev-api.example.com
Once done, you may create A records with the load balancer's IP address for each domain. This is to ensure that the requests are forwarded to the correct backend, as opposed to just creating CNAMEs, which would always forward the request to the root domain (example.com) and not to the intended backend. The LB should be able to forward requests based on the domain being accessed.
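For reference, a hedged gcloud sketch of the same console steps (the NEG, backend, URL map, and certificate names, as well as the Firebase FQDN, are placeholders):

# Global internet NEG pointing at the Firebase Hosting FQDN
gcloud compute network-endpoint-groups create firebase-neg \
  --global --network-endpoint-type=internet-fqdn-port
gcloud compute network-endpoint-groups update firebase-neg \
  --global --add-endpoint="fqdn=example-app.web.app,port=443"

# Backend service that uses the internet NEG
gcloud compute backend-services create firebase-backend --global --protocol=HTTPS
gcloud compute backend-services add-backend firebase-backend \
  --global --network-endpoint-group=firebase-neg --global-network-endpoint-group

# Route the Firebase-hosted domains to that backend on the existing URL map
gcloud compute url-maps add-path-matcher my-lb-url-map \
  --path-matcher-name=firebase \
  --default-service=firebase-backend \
  --new-hosts=example.com,dev.example.com

# Managed certificate covering all four domains
gcloud compute ssl-certificates create lb-cert --global \
  --domains=example.com,dev.example.com,api.example.com,dev-api.example.com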

ECS Nginx network setup

I have 3 containers on ECS: web, api and nginx. Basically nginx is proxying traffic to web and api containers:
# nginx proxies to the web and api containers; these names are resolved
# once when nginx starts, which is why redeploys break the upstreams
upstream web {
    server web-container:3000;
}
upstream api {
    server api-container:3001;
}
But every time I redeploy web or api they get new IPs, so I need to redeploy nginx afterwards in order to make it pick up the new IPs.
Is there a way to avoid this, so I could just update, say, the api service and the nginx service would automatically proxy to the correct IP address?
I assume these containers belong to 3 different task definitions and ultimately 3 different tasks (or, better, 3 different services).
If that is the setup, then you want to use service discovery for this. This only works with ECS services, and the idea is that you create 3 distinct services, each with 1+ tasks in it. You give each service a name (e.g. nginx, web, api) and each container in them will be able to resolve the other containers by their FQDN (e.g. api.local). When your container in the nginx service tries to connect to api.local, service discovery will resolve that name to the IP of one of the tasks in the ECS service api.
If you want to see an example of how this is set up, you can look at this demo app and particularly at this CloudFormation template.
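A rough AWS CLI sketch of wiring up service discovery (Cloud Map) for the api service; the namespace, cluster, subnet, security group, and ARNs below are placeholders:

# Private DNS namespace (e.g. "local") in the cluster's VPC
aws servicediscovery create-private-dns-namespace --name local --vpc vpc-0abc123

# Discovery service that manages A records for api.local
aws servicediscovery create-service \
  --name api \
  --namespace-id ns-xxxxxxxxxxxxxxxx \
  --dns-config 'DnsRecords=[{Type=A,TTL=10}]'

# ECS service registered against that discovery service
aws ecs create-service \
  --cluster my-cluster \
  --service-name api \
  --task-definition api:1 \
  --desired-count 2 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc123],securityGroups=[sg-0abc123]}' \
  --service-registries 'registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-xxxxxxxx'

One caveat: plain nginx still resolves upstream hostnames once at startup, so either reload nginx when tasks change or use a resolver directive with a variable so api.local is re-resolved at request time.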

Connect GCP Wordpress CTD to existing load balancer?

I have already setup my domain using the Google Cloud Platform, including a Load Balancer with SSL protection. Everything works on this end.
How do I connect a Marketplace Wordpress click-to-deploy creation to this existing load balancer?
If the Marketplace solution is a single VM, go to the instance groups menu in GCE, select unmanaged groups, create a group, and add the VM to it.
Then go back to the load balancer and add a backend. It will ask you what to use as a backend: endpoint (no), bucket (no), or instance group.
Go for the instance group.
Mind that the LB will only work if an attached health check detects the VM as healthy (usually you want to check for HTTP on the listening port).
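A hedged gcloud equivalent of those console steps (the VM, group, health check, and backend names, plus the zone, are placeholders):

# Put the click-to-deploy VM into an unmanaged instance group
gcloud compute instance-groups unmanaged create wordpress-group --zone=us-central1-a
gcloud compute instance-groups unmanaged add-instances wordpress-group \
  --zone=us-central1-a --instances=wordpress-1-vm
gcloud compute instance-groups unmanaged set-named-ports wordpress-group \
  --zone=us-central1-a --named-ports=http:80

# HTTP health check on the listening port, plus a backend service for the LB
gcloud compute health-checks create http wordpress-hc --port=80 --request-path=/
gcloud compute backend-services create wordpress-backend \
  --global --protocol=HTTP --port-name=http --health-checks=wordpress-hc
gcloud compute backend-services add-backend wordpress-backend \
  --global --instance-group=wordpress-group --instance-group-zone=us-central1-a

The new backend service then gets attached to the existing load balancer's URL map (host and path rules), the same as in the console flow.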

Azure VM DNS - What to specify as 'NS' and 'A' record

I have created a virtual machine in Azure with a static IP. I specified a DNS name called somevm in my resource group, so Azure automatically made it somevm.southeastasia.cloudapp.azure.com.
Then I added an endpoint on public port 80, so the website can be accessed via localhost, via http://<ipaddress>/ (outside of the VM), and via http://somevm.southeastasia.cloudapp.azure.com.
On my existing hosting provider, I see the following DNS entries.
NS ns1.myhost.arvixevps.com
NS ns2.myhost.arvixevps.com
A <ipaddress>
* A <ipaddress>
www A <ipaddress>
mail A <ipaddress>
mail2 A <ipaddress>
MX [10], mail.mywebsite.com
MX [21], mail2.mywebsite.com
TXT globalsign-domain-verification=SKFHKSJHDLKUERIJKDCFJLKF_234KJFDJK
I want to migrate my website from Existing hosting to the Azure VM.
What do I need to enter in the NS and A records? Do I also need to add a CNAME? If yes, what will it be, and how do I find it for my Azure VM?
Everything you need to configure your domain for an Azure virtual machine is here, including the difference between CNAME and A records and what you need to enter on your domain registrar's site. The reason I added the link for the Cloud Service is that virtual machines created in classic mode (if you created that kind of VM) reside in a specific entity called a cloud service.
If you created a new (Resource Manager) Azure VM, then you could go with the Azure DNS service or that approach.
So, a CNAME is basically an alias for the Azure domain (you may see the first domain, somevm.southeastasia.cloudapp.azure.com), while an A record maps the domain to the IP address of the resource instead of another domain. The choice depends on your scenario. The simplest way is to configure a CNAME, because if you configure an A record you will need a static IP (otherwise, given the dynamic nature of the cloud, it may change in the future and you will need to update it on your domain registrar's site).
Some useful links for understanding how things are working here:
https://azure.microsoft.com/en-in/documentation/articles/dns-domain-delegation/
https://azure.microsoft.com/en-in/documentation/articles/dns-operations-recordsets/
http://blogs.msdn.com/b/cloud_solution_architect/archive/2015/05/05/creating-azure-vms-with-arm-powershell-cmdlets.aspx
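If you delegate the domain to Azure DNS, a minimal az CLI sketch looks roughly like this (the resource group, zone, and IP are placeholders; the classic portal flow differs, but the record types are the same):

# Create the DNS zone, then point the registrar's NS records at the name servers it returns
az network dns zone create -g my-rg -n mywebsite.com

# A record for the root domain pointing at the VM's static public IP
az network dns record-set a add-record -g my-rg -z mywebsite.com -n "@" -a 13.76.0.10

# CNAME for www pointing at the Azure-assigned DNS name
az network dns record-set cname set-record -g my-rg -z mywebsite.com \
  -n www -c somevm.southeastasia.cloudapp.azure.com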

AWS automatically route EC2 instances to domain

When firing up multiple new EC2 instances, how do I make these new machines automatically accessible publicly on my domain ****.example.com?
So if I fire up two instances that would normally have a public DNS of
ec2-12-34-56.compute-1.amazonaws.com and ec2-12-34-57.compute-1.amazonaws.com
instead be ec2-12-34-56.example.com and ec2-12-34-57.example.com
Is there a way to use a VPC and Route53 or do I need to run my own DNS server?
Let's say you want to do this the easiest way. You don't need a VPC.
First we need to set up an Elastic IP address. This is going to be the connection point between the Route 53 DNS service (which you should absolutely use) and the instance. Go into the EC2 menu of the Management Console, click Elastic IPs, and click Allocate. Create it in EC2-Classic (the option will pop up). Remember this IP.
Now go into Route 53. Create a hosted zone for your domain. Go into this zone and create a record set for staging.example.com (or whatever your prefix is). Leave it as an A record (the default) and put the Elastic IP in the text box.
Note you now need to go into your registrar's control panel (e.g. GoDaddy) and replace the nameservers with the ones shown on the NS record. They will look like:
ns-1776.awsdns-30.co.uk.
ns-123.awsdns-15.com.
ns-814.awsdns-37.net.
ns-1500.awsdns-59.org
and you will be able to see them once you create a hosted zone.
Once you've done this, the domain will resolve to that IP address. But it isn't associated with anything yet. Once you have created an instance, go back into the Elastic IP menu and associate the address with the instance. Now all requests to that domain will go to that instance. To change the target, just re-associate the address. Make sure your security groups allow all traffic (or at least HTTP) or it will seem like it doesn't work.
This is not good cloud architecture, but it will get the job done. Better cloud architecture would be making the record point to a load balancer and attaching the instances to the load balancer. This should all be done in a VPC. It may not be worth your time if you are doing development.
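A hedged AWS CLI version of the same flow (the instance ID, allocation ID, zone ID, and domain are placeholders; note that EC2-Classic has since been retired, so in a current account the address is allocated for use in a VPC):

# Allocate an Elastic IP and associate it with the instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0abc123 --allocation-id eipalloc-0abc123

# Create the hosted zone, then update the registrar's name servers with the NS values it returns
aws route53 create-hosted-zone --name example.com --caller-reference setup-$(date +%s)

# A record: staging.example.com -> the Elastic IP
aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "staging.example.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{"Value": "<elastic-ip>"}]
    }
  }]
}'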
