I have an ELB (in EC2-Classic) running, and one of my clients wants to hardcode an IP into his firewall rule to access our site.
I know that ELB doesn't provide a static IP, but is there a way to set up an instance (just for them) that they could hit, to be used as a gateway to our API?
(I was thinking of using HAProxy on OpsWorks, but it points directly to my instances, and I need something that points to my ELB, because SSL termination happens at that level.)
Any recommendation would be very helpful.
I assume you are running one or more instances behind your ELB.
You should be able to assign an Elastic IP to one of those instances. Note that in EC2-Classic, an EIP is disassociated when the instance stops, so it will need to be reattached when you restart your instance.
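If you script the recovery, re-associating the address is a single API call. Here is a minimal boto3 sketch, assuming EC2-Classic semantics (the EIP is addressed by its public IP rather than an allocation ID); the instance ID, region, and address are placeholders:

```python
import boto3

# Hypothetical values -- substitute your own instance ID and Elastic IP.
INSTANCE_ID = "i-0123456789abcdef0"
ELASTIC_IP = "203.0.113.10"

ec2 = boto3.client("ec2", region_name="us-east-1")

# In EC2-Classic, an EIP is identified by its public IP; in a VPC you
# would pass AllocationId instead of PublicIp.
ec2.associate_address(InstanceId=INSTANCE_ID, PublicIp=ELASTIC_IP)
```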
I followed this tutorial to set up two EC2 instances: "12. Creation of two EC2 instances and how to establish ping communication" (YouTube).
The only difference is that I used a Linux image.
I set up a simple Python HTTP server on one machine (on port 8000), but I cannot access it from my other machine; whenever I curl, the request just hangs. (It might eventually time out, but I wasn't patient enough to witness that.)
The workaround, I figured out, was to add a port rule via the security group. I don't like this option, since it means that port (on the machine that hosts the web server) can be accessed via the internet.
I was looking for an experience similar to what people usually have at home with their routers; machines connected to the same home router can reach out to other machines on any port (provided the destination machine has some service hosted on that port).
What is the solution to achieve something like this when working with ec2?
The instance is open to the internet because you are allowing access from '0.0.0.0/0' (anywhere) in the inbound rule of the security group.
If you want communication to be allowed only between the instances and not from the public internet, you can achieve that by assigning the same security group to both instances and modifying the inbound rule in the security group to allow all traffic (or ICMP traffic) sourced from the security group itself.
You can read more about it here:
AWS Reference
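As a concrete illustration, here is a boto3 sketch that adds a self-referencing ingress rule; the security group ID is a placeholder you would replace with the group shared by both instances:

```python
import boto3

# Hypothetical security group ID -- use the group attached to both instances.
GROUP_ID = "sg-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow all traffic ("-1") whose *source* is the security group itself, so
# only members of the group can reach each other; nothing is opened to
# 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId=GROUP_ID,
    IpPermissions=[
        {
            "IpProtocol": "-1",
            "UserIdGroupPairs": [{"GroupId": GROUP_ID}],
        }
    ],
)
```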
I want to assign a domain name to an internal OpenStack floating IP, to access the instance over the internet.
I found that you can set dnsmasq_dns_servers = 1.1.1.1 and configure dhcp_agent.ini accordingly, which seems to be a step in the right direction, but I couldn't find a way to assign a domain name to an OpenStack instance (via Horizon or the CLI).
The dnsmasq server managed by the DHCP agent is used to implement DHCP in subnets where DHCP is enabled. It does not resolve hostnames. If you want to be able to resolve hostnames internally, you could look into running a DNS server in your subnet or maintaining a hosts file on each instance that needs to communicate with the instance.
You could look at Designate. That is the DNS as a Service component of OpenStack. It is also possible to integrate Designate with an external service to manage external DNS.
See SysEleven's How to set up DNS for a Server/Website.
It walks you through the process of:
Creating the zone,
adding the DNS record, and finally
making the zone authoritative in global DNS.
It assumes you can use the OpenStack CLI, but there's also documentation on doing the same thing with Terraform, which I'd recommend as it fully automates the entire infrastructure with infrastructure as code (IaC).
It should apply to any OpenStack provider.
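If you prefer to script those steps, the same flow can be driven from Python with python-designateclient. A rough sketch, assuming Keystone v3 credentials and placeholder names (verify the client calls against your cloud's Designate version):

```python
from keystoneauth1 import session
from keystoneauth1.identity import v3
from designateclient.v2 import client

# Hypothetical credentials -- fill in your cloud's auth details.
auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_id="default",
    project_domain_id="default",
)
designate = client.Client(session=session.Session(auth=auth))

# Create the zone (note the trailing dot on the zone name).
zone = designate.zones.create("example.com.", email="admin@example.com")

# Add an A record pointing at the instance's floating IP.
designate.recordsets.create(
    zone["id"], "www.example.com.", "A", ["203.0.113.42"]
)
```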
Task:
Create a UDP Load Balancer with Failover at Amazon for EC2 Instances.
Problems:
Based on the explanation below, I have the following problems:
AWS EC2 Doesn't have a Public DNS Name that works for both IPv4 and IPv6 traffic.
Unable to reassign the current IPv6 address to a new instance in another availability zone.
Explanation:
By failover, I mean that if the instance goes down for whatever reason, a new one is spun up to replace it. If the availability zone it is in goes down, a new instance is spun up in another availability zone to replace it.
With Elastic IP Addresses I am able to re-assign the existing Elastic IP Address to the new instance, regardless of its availability zone.
With IPv6 Addresses, I am unable to reassign the existing IPv6 Address if the new instance is created in a different availability zone, because it is not in the same subnet. By availability zone, I am referring to Amazon's Availability Zones, such as us-west-2a, us-west-2b, us-west-2c, etc.
The only way I know to resolve this is to update the host record at my registrar (GoDaddy, in my case) with the new IPv6 address. GoDaddy has an API, and I believe I can update my host record programmatically. However, GoDaddy has a minimum TTL of 600 seconds, which means my server could be unreachable by IPv6 traffic for 10 minutes or more, depending on propagation.
Amazon has an amazing load balancer system if I am just doing normal TCP traffic; this problem would be nonexistent if that were the case. Since I need to load balance UDP traffic, I'm running into this problem. AWS ELB (Elastic Load Balancing) provides me with a CNAME that I can point all of my TCP traffic to, so I don't need to worry about handling IPv4 and IPv6 traffic separately. I can just point the CNAME directly at the DNS name that Amazon provides with the ELB.
Amazon also provides a public DNS name for EC2 instances, but it resolves only to IPv4. So that would work for my IPv4 traffic but not my IPv6 traffic.
The only option I can think of is to set up a software-based load balancer, in my case NGINX on an EC2 instance, and point the domain to the NGINX load balancer's IPv4 and IPv6 addresses. Then, when a zone crashes, I spin up a new AWS EC2 instance in another zone and use GoDaddy's API to update the IPv6 address to the new instance's IPv6 address.
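For what it's worth, the GoDaddy record update can be a single authenticated PUT. A rough Python sketch, assuming their v1 records endpoint and an API key/secret pair; treat the exact path and payload as assumptions to verify against GoDaddy's current API docs:

```python
import requests

# Hypothetical values -- substitute your domain, record name, and credentials.
DOMAIN = "example.com"
RECORD_NAME = "@"          # zone apex; use a subdomain label otherwise
NEW_IPV6 = "2001:db8::1"
API_KEY, API_SECRET = "key", "secret"

# Replace the AAAA record for RECORD_NAME with the new address.
resp = requests.put(
    f"https://api.godaddy.com/v1/domains/{DOMAIN}/records/AAAA/{RECORD_NAME}",
    headers={"Authorization": f"sso-key {API_KEY}:{API_SECRET}"},
    json=[{"data": NEW_IPV6, "ttl": 600}],  # GoDaddy's minimum TTL is 600s
)
resp.raise_for_status()
```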
Request
Does anyone know how to assign a CNAME to an EC2 Instance without an AWS ELB? The instance would need to be able to receive both IPv4 and IPv6 traffic at the CNAME.
The only way I can think of doing it will cause downtime due to propagation issues with DNS changes at my domain registrar.
I've been looking at the Route 53 options in Amazon and it appears to have the same propagation delays.
I've thought about setting up my own DNS server for the domain. Then, if the IP address changes, I could potentially change the DNS entry faster than with GoDaddy. But DNS propagation is going to be a problem with any DNS change.
[EDIT after thinking about my answer]
One item that I did not mention is that Route 53 supports simple load balancing and failover. Since you will need two systems in my answer below, just spin up two EC2 instances for your service, round-robin load balance with Route 53, and add a failover record. Create a CloudWatch alarm so that when one of your instances fails, you know to replace it manually. This will give you a "poor man's" load balancer for UDP.
Configuring DNS Failover
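To make the failover record concrete, here is a boto3 sketch of a PRIMARY failover record tied to a health check; the hosted zone ID, health check ID, record name, and address are placeholders:

```python
import boto3

# Hypothetical identifiers -- replace with your hosted zone and health check.
HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"
HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"

route53 = boto3.client("route53")

# PRIMARY failover record: Route 53 serves this answer while the health
# check passes, and fails over to the matching SECONDARY record otherwise.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "udp.example.com.",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "HealthCheckId": HEALTH_CHECK_ID,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ]
    },
)
```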
[END of EDIT]
First, I would move from GoDaddy DNS to Route 53. I have no experience with programming GoDaddy DNS entries, but Route 53's API is excellent.
GoDaddy does not support zone apex CNAME records (example.com). You would need to use IPv4 A records and IPv6 AAAA records. This should not be a problem. I would use AWS Elastic IPs so that, when launching the new instance, at least the IPv4 DNS entries would not require a change and the associated DNS delays.
I would not set up my own DNS server; I would switch to Route 53 first. When you mention propagation delays, you mean TTL. You can change the TTL to be short. Route 53 supports a 1-second TTL, but most DNS clients will ignore short TTL values, so you will have little to no control over this. Short TTLs also mean more DNS requests.
AWS does not offer UDP load balancing, but there are third-party products and services that do, and they run on AWS. If your service is critical or revenue-producing, use a well-tested solution.
I would not try to reinvent the wheel. However, sometimes this is fun to do so to better understand how real systems work.
STEP 1: You will need to design a strategy to detect that your instance has failed. You will need to duplicate the health check that a load balancer performs and then trigger an action.
STEP 2: You will need to write code that can update Route 53 (GoDaddy) DNS entries.
STEP 3: You will need to write code that can launch an EC2 instance and to terminate the old instance.
STEP 4: You will need to detect the new addresses for the new instance and update Route 53 (GoDaddy).
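Here is a hedged boto3 sketch of steps 2 and 4, assuming a Route 53 hosted zone and a freshly launched replacement instance; step 1's health check and step 3's launch call are elided, and all identifiers are placeholders:

```python
import boto3

# Hypothetical identifiers -- replace with your own.
HOSTED_ZONE_ID = "Z0123456789ABCDEFGHIJ"
NEW_INSTANCE_ID = "i-0123456789abcdef0"
RECORD_NAME = "udp.example.com."

ec2 = boto3.client("ec2", region_name="us-west-2")
route53 = boto3.client("route53")

# Step 4: discover the new instance's IPv6 address.
reservations = ec2.describe_instances(InstanceIds=[NEW_INSTANCE_ID])
instance = reservations["Reservations"][0]["Instances"][0]
ipv6 = instance["NetworkInterfaces"][0]["Ipv6Addresses"][0]["Ipv6Address"]

# Step 2: point the AAAA record at the replacement instance.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "AAAA",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ipv6}],
                },
            }
        ]
    },
)
```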
The above steps will require a dedicated, always-on computer with a highly reliable internet connection. I would use EC2 for the monitoring system; a t2.micro is probably fine.
However, look at the amount of time that it will take you to develop and test this new system. Go back and rethink your strategy.
I have a dedicated server in a UK datacenter running an nginx web server. nginx acts as a front-end server that directs users to other instances I have on AWS, located in America. Which IP address would the client see: the nginx front-end server's (the desired result), or would the client still learn the IP addresses of the instances or servers located in America?
PS: nginx acts as a load balancer here.
Typically, users connect to "www.yoursite.com", and that gets looked up in DNS.
Assuming there is only one DNS entry (corresponding to your nginx frontend), then as far as those users are concerned, they are only talking to that one machine.
Sometimes people use round-robin DNS, where multiple machines correspond to a given host name.
Presumably you would know if you were doing this, though (:
You can confirm this by tracing your traffic when connecting. Maybe use Wireshark?
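Another quick check is to look at what the hostname actually resolves to. A small Python sketch using only the standard library (the hostname is a placeholder):

```python
import socket

# Hypothetical hostname -- use your own frontend's name.
HOST = "www.yoursite.com"

# Collect every distinct address the name resolves to; more than one
# address usually indicates round-robin DNS.
addresses = {
    info[4][0]
    for info in socket.getaddrinfo(HOST, 80, proto=socket.IPPROTO_TCP)
}
print(addresses)
```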
I have the following scenario:
Elastic beanstalk with N instances
ELB for load balancing to the Elastic Beanstalk instances
External datacenter with IP Filtering
Since I can't filter by name (FQDN) and I can't filter by a single IP either, is there a way to make all the requests that come from the AWS machines share a single IP, or maybe to use a third machine to serve as a proxy for the calls from the AWS machines and attach an EIP to it?
Not really. Or at least, if there's a way to do it, I'd love to hear about it. One of the biggest problems with beanstalk is its requirement to exist outside of VPCs, and thus, in arbitrary Amazon IP space. About the only workaround I've found for this after talking to AWS engineers is to forward traffic from them to something like a bastion server, and allow the bastion server to communicate with your data center firewall. Maybe there's something I'm missing, but I know of no other way to get it working without some server in between the beanstalk instances and the data center; not if the IP of the server matters.