ECS EC2 instance needs to be private and connect to ECS endpoint for container internet access - networking

My question is similar to this one, except the measures taken are not enough to solve my problem.
The aim is to run containers in ECS on EC2, which need to have internet access, but do not need incoming access.
My reading suggests that in order to launch containers in ECS on EC2 and still have internet access, the containers must run in a subnet whose 0.0.0.0/0 route points to a NAT gateway in a different subnet. I have set this up and it works as expected: an EC2 instance in that subnet has access to the internet, and even if you give it a public IP address and add rules to the security group, you can't SSH to it from outside because the subnet has no internet gateway.
The problem is that the EC2 instance has to be in the same subnet as the containers. When the instance is launched in a subnet that has no internet gateway, it can't connect to the ECS endpoint and so never registers with ECS (regardless of whether it has a public IP).
Changing to a subnet with an internet gateway allows it to register with ECS, but then the containers either can't launch because they are in a different subnet, or, if I use the same subnet as the host, they launch but have no internet connection.
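For reference, the routing I set up for the container subnet looks roughly like this (a sketch; the IDs are placeholders):
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0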

In the end the issue was caused by running the containers in awsvpc network mode, which I had chosen for cross-compatibility with Fargate.
So the workaround was to run the service and task in bridge mode, with the EC2 instance given a public IP and placed in a subnet whose 0.0.0.0/0 route points to an internet gateway.
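A minimal sketch of that change with the aws CLI (names and the container definitions file are placeholders):
# register the task definition in bridge mode instead of awsvpc
aws ecs register-task-definition --family my-task --network-mode bridge --container-definitions file://containers.json
# create the service without awsvpc networking (no --network-configuration required)
aws ecs create-service --cluster my-cluster --service-name my-service --task-definition my-task --desired-count 1 --launch-type EC2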

Related

Migrating cloud VMs while maintaining internal IPs

I'm working on a migration plan in GCP where we have some VMs in a project that has its own VPC. We are setting up a Shared VPC and want to move the VMs to the new VPC. However, the system owners want to maintain the existing IPs (i.e. the VPCs each have the same subnet IP ranges). There are about 30 machines that need to be migrated, so shutting everything off and migrating them all at once would be challenging. The owners want us to migrate some of the VMs each day.
Of course, the current project has a VPN configured to connect to the on-premises network. When we stand up the VPN in the Shared VPC, I believe that alone will cause problems, because the exchanged routes will give the on-premises side two routes to the same subnet IP range.
Are there ways to configure the routes to tightly restrict this? For example, define routes for each IP as we move it from one VPC to another?
Scenario: The VMs are located in a Shared VPC.
Shared VPCs cannot have overlapping subnets. Therefore, you cannot migrate VMs between subnets and maintain the same private IP address.
Scenario: The VMs are located in independent VPCs.
You can allocate a private IP address when creating a new VM instance. Shut down the existing VM and create an image of it. Then create a new VM, reserve a static private IP address (under Primary Internal IP), and specify the image as the source for the boot disk.
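A rough sketch of those steps with the gcloud CLI (names, zone and address are placeholders):
gcloud compute instances stop old-vm --zone europe-west1-b
gcloud compute images create old-vm-image --source-disk old-vm --source-disk-zone europe-west1-b
gcloud compute instances create new-vm --zone europe-west1-b --image old-vm-image --subnet my-shared-subnet --private-network-ip 10.0.0.15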
However, you cannot specify overlapping or duplicate addresses for your VPN. This means that the migrated VMs will not be accessible to the VPN until you reconfigure the VPN.
My recommendation is to not even try to maintain the same private IP address. Migrate the VMs to the new VPC and reconfigure name resolution to use the new IP addresses.

Unable to SSH/Ping to VMs on Private Network of Openstack/packstack

We are running an OpenStack Train setup installed with Packstack, with Open vSwitch as the Neutron backend.
We have created an external network (10.5.0.0/22), which is an internal network of our organisation, and a private network (10.3.0.0/22), linked via a router.
Our organisation's network sits behind a pfSense firewall, which has rules allowing traffic from 10.5.0.0/22 to OpenStack's 10.3.0.0/22 and vice versa.
In the OpenStack security group, we have added egress and ingress rules to allow traffic between the two networks.
However, we are unable to ping or SSH to any VMs built on the private network (10.3.0.0/22) from our organisation's network (10.5.0.0/22).
VMs on the private network have internet connectivity and can ping Google and SSH into our organisation's machines on the 10.5.0.0/22 range.
The only way to SSH into the private-network VMs seems to be via a floating IP.
Is there a way to directly SSH into the private network VMs without using the floating IP?
Or is this part of openstack design?
Thank you
Do you have any physical network hardware, such as switches, that is configured to allow only specific VLAN or subnet traffic?
Can you also share how your subnet is configured (openstack subnet show)?
Security groups and tenant network isolation do keep traffic from outside the subnet out, so a floating IP is the usual way in, but it is possible to give a VM multiple ports on different subnets and access it directly that way.
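A sketch of the multiple-ports approach (network, port and server names are placeholders):
openstack port create --network external-net vm1-ext-port
openstack server add port my-vm vm1-ext-port
The new interface then needs to be configured inside the guest (e.g. via DHCP) before it is reachable from 10.5.0.0/22.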

Google Cloud Platform networking: Resolve VM hostname to its assigned internal IP even when not running?

Is there any way in the GCP, to allow VM hostnames to be resolved to their IPs even when the VMs are stopped?
Listing VMs in a project reveals their assigned internal IP addresses even when the VMs are stopped. This means that, as long as the VMs aren't re-created, their internal IPs are statically assigned.
However, when our VMs are stopped, the DNS resolution stops working:
ping: my-vm: Name or service not known
even though the IP is kept assigned to it, according to gcloud compute instances list.
I've tried reserving the VM's current internal IP:
gcloud compute addresses create my-vm --addresses 10.123.0.123 --region europe-west1 --subnet default
However, the address name my-vm above is not related to the VM name my-vm and the reservation has no effect (except for making the IP unavailable for automatic assignment in case of VM re-creation).
But why?
Some fault-tolerant software will have a configuration for connecting to multiple machines for redundancy, and as long as at least one of the connections can be established, the software will run fine. But if a hostname cannot be resolved, this software will not start at all, forcing us to hard-code the name mappings in /etc/hosts (which doesn't scale well to a cluster of two dozen VMs) or to use IP addresses (which gets hairy after a while). A specific example here is freeDiameter.
Ping uses the ICMP protocol, which requires that the target is running and responding to network requests.
Google Compute Engine VMs use DHCP for private IP addresses. DHCP is integrated with (communicates with) Google DNS. DHCP informs DNS about running network services (VM IP address and hostname). If the VM is shut down, this link does not exist. DHCP/DNS information is updated/replaced/deleted hourly.
You can set up a Google Cloud DNS private zone, create entries for your VPC resources, and get hostname-to-private-IP resolution that persists even while a VM is stopped.
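A rough sketch with the gcloud CLI (zone name, DNS name and address are placeholders; record-sets create needs a reasonably recent gcloud, otherwise use record-sets transaction):
gcloud dns managed-zones create my-private-zone --dns-name=internal.example. --visibility=private --networks=default --description="Private zone for VM names"
gcloud dns record-sets create my-vm.internal.example. --zone=my-private-zone --type=A --ttl=300 --rrdatas=10.123.0.123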

How does OpenStack assign IPs to virtual machines?

I want to know how OpenStack assigns IPs to virtual machines, and how to find out the ports and IPs used by a VM. Is it possible to find out the IP and ports being used by an application running inside the VM?
To allocate a floating IP for your VM, you can use this command:
openstack floating ip create public
To associate the IP with your VM, use the command below:
openstack server add floating ip your-vm-name your-ip-number
To list all the ports used by applications, ssh to your instance and run:
sudo lsof -i
Assuming you know the VM name, do the following. On the controller, run:
nova interface-list VM-NAME
It will give you the port ID, IP address and MAC address of the VM's interface.
You can log in to the VM and run:
netstat -tlnp
to see which IPs and ports are being used by applications running inside the VM.
As to how a VM gets its IP, it depends on your deployment. On a basic OpenStack deployment, when you create a network and a subnet under that network, you will see a DHCP namespace created on the network node (run ip netns on the network node). The namespace name would be qdhcp-<network-id>. The dnsmasq process running inside that namespace allots IPs to the VMs. This is just one of many ways in which a VM can get an IP.
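To see this for yourself, a sketch (the network ID is a placeholder):
# on the network node
ip netns
ip netns exec qdhcp-<network-id> ip addr
ps aux | grep dnsmasq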
This particular End User page of the official documentation could be a good start:
"Each instance can have a private, or fixed, IP address and a public, or floating, one.
Private IP addresses are used for communication between instances, and public ones are used for communication with the outside world.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IPs, configured by the cloud operator, is available in OpenStack Compute.
You can allocate a certain number of these to a project: The maximum number of floating IP addresses per project is defined by the quota.
You can add a floating IP address from this set to an instance of the project. Floating IP addresses can be dynamically disassociated and associated with other instances of the same project at any time.
Before you can assign a floating IP address to an instance, you first must allocate floating IPs to a project. After floating IP addresses have been allocated to the current project, you can assign them to running instances.
You can assign a floating IP address to one instance at a time."
There are, of course, deeper layers to look at in this section of the Admin Guide.
Regarding how to find out about ports and IPs, you have two options: command line interface or API.
For example, if you are using Neutron* and want to find out the IPs or networks in use with the API:
GET v2.0/networks
And using the CLI:
$ neutron net-list
You can use similar commands for ports and subnets (sketched below); however, I haven't personally tested whether you can get information about the application running inside the VM this way.
*Check out which OpenStack release you're running. If it's an old one, chances are it's using the Compute node (Nova) for networking.
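The analogous listings for ports and subnets look like this (a sketch; the same caveat about older releases applies):
GET v2.0/ports
GET v2.0/subnets
$ neutron port-list
$ neutron subnet-list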

VirtualBox networking for an NGINX client having multiple hostnames

I have a host laptop running Debian, and a client VM running Debian. On the client, I run NGINX, and it serves up a complex web application with several hostnames (e.g. www.host, api.host, blog.host). The laptop moves between several different networks, with a seemingly ever-changing IP address.
I'm trying to meet the following conditions with this VM:
The IP address of the client shouldn't change (e.g. always 192.168.10.10)
With a static IP, I could edit the host /etc/hosts file and keep complex hostnames
The client should have access to the Internet
No other machines need to access the client
What is the best way to set up the "Attached to" settings for this client?
To do this, simply add two network interfaces to the box.
The first interface will use Host-Only, and that is how your host can connect to the client. This will create an additional network adapter on the host.
The second interface will use NAT, and that is the gateway to the internet. This will create an additional network adapter on the client.
If you've already got a client running, you'll need to bring the new network adapter up by executing sudo ifconfig eth1 up, and to get an IP address, run sudo dhclient eth1.
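To keep the host-only address fixed at 192.168.10.10, a sketch of the client's /etc/network/interfaces (the interface names are assumptions, and the VirtualBox host-only network must be configured for 192.168.10.0/24):
# host-only interface with the static address the hostnames resolve to
auto eth0
iface eth0 inet static
    address 192.168.10.10
    netmask 255.255.255.0
# NAT interface for internet access
auto eth1
iface eth1 inet dhcp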
