Migrating cloud VMs while maintaining internal IPs - networking

I'm working on a migration plan in GCP where we have some VMs in a project that has its own VPC. We are setting up a Shared VPC and want to move the VMs to the new VPC. However, the system owners want to maintain the existing IPs (i.e. the VPCs each have the same subnet IP ranges). There are about 30 machines that need to be migrated so shutting everything off and migrating them would be challenging. The owners want us to migrate some of the VMs each day.
Of course, the current project has a VPN configured to connect to the on-premises network. When we stand up the VPN in the Shared VPC, I believe that alone will cause problems, because the routes that are exchanged will give the on-premises side two routes to the same subnet IP range.
Are there ways to configure the routes to tightly restrict this? For example, define routes for each IP as we move it from one VPC to another?
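One option along these lines, assuming the VPN tunnels use Cloud Router with BGP (the router name, region, and ranges below are illustrative), would be to put the legacy VPC's Cloud Router into custom advertisement mode and shrink its advertised list as VMs move:

# Advertise only the /32s of VMs still living in the legacy VPC,
# instead of the whole subnet range:
gcloud compute routers update legacy-router --region europe-west1 \
    --advertisement-mode CUSTOM \
    --set-advertisement-ranges 10.0.1.15/32,10.0.1.16/32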

Scenario: The VMs are located in a Shared VPC.
Shared VPCs cannot have overlapping subnets. Therefore, you cannot migrate VMs between subnets and maintain the same private IP address.
Scenario: The VMs are located in independent VPCs.
You can allocate a specific private IP address when creating a new VM instance. Shut down the existing VM and create an image of it. Then create a new VM, reserve a static private IP address (under Primary internal IP), and specify the image as the source boot disk.
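A minimal gcloud sketch of those steps; the VM name, image name, zone, subnet, and address below are illustrative:

# Stop the VM and capture its boot disk as an image:
gcloud compute instances stop legacy-vm --zone europe-west1-b
gcloud compute images create legacy-vm-image \
    --source-disk legacy-vm --source-disk-zone europe-west1-b

# Reserve the same internal IP in the target subnet:
gcloud compute addresses create legacy-vm-ip \
    --region europe-west1 --subnet shared-subnet --addresses 10.0.1.15

# Re-create the VM in the new VPC using the reserved address:
gcloud compute instances create legacy-vm --zone europe-west1-b \
    --image legacy-vm-image --subnet shared-subnet \
    --private-network-ip 10.0.1.15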
However, you cannot specify overlapping or duplicate addresses for your VPN. This means that the migrated VMs will not be accessible to the VPN until you reconfigure the VPN.
My recommendation is to not even try to maintain the same private IP address. Migrate the VMs to the new VPC and reconfigure name resolution to use the new IP addresses.

Related

Connect to OpenStack instance via the internet through the router

I've recently found out that the external network for our OpenStack (Ocata) setup has maxed out on the available IP addresses in its allocation table. In fact, it has over-allocated, showing -9 free IPs. So, to manage the limited IP addresses, is it possible to access an instance in a project directly from an external network (internet) via the project's router? That way only a single IP address needs to be allocated per project instead of one per instance.
The short answer would be NO, but there are a couple of workarounds that come to mind (not that they are good, but they will work).
If any instance in your private network has a floating IP, you can use that host as a jump host (bastion host) to SSH into the target host. This also brings the benefits of port forwarding/SSH tunneling if you want to reach some other port.
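For example, with OpenSSH 7.3+ the jump and the tunnel might look like this (both addresses are illustrative):

# SSH to the private host via the floating-IP host:
ssh -J user@203.0.113.10 user@10.0.0.5

# Or forward a local port to a service on the private host:
ssh -L 6379:10.0.0.5:6379 user@203.0.113.10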
You can always access any host on the private networks through the qdhcp or qrouter namespace from the network node:
ip netns exec qdhcp-XXXXXXX ssh user@internal-IP
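To find the namespace name, list the namespaces on the network node first; the suffix after qdhcp- is the UUID of your network:

# List the DHCP namespaces to find the one serving your private network:
ip netns list | grep qdhcp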

How can the public IP address of cloud VMs remain the same even if the cloud VM gets migrated to another data centre?

Let us assume I have hosted my application on some cloud and want to migrate it to AWS for whatever reason.
In the previous data centre, I have a public IP address assigned to the cloud VMs (the application).
Now, if I'm migrating to AWS, I want to keep the same IP address as the existing one.
So how can we achieve this as network engineers?
The CSP must ensure that the public IP address of cloud VMs remains the same even if the cloud VM network is being served from multiple CSP data centres.
There may be two situations:
You want to change your cloud vendor. In this case, the public IP can't stay the same.
You want to change regions within a single cloud provider (e.g., AWS) and use the same IP across all regions.
In that case you need a global static IP address; a regional Elastic IP alone won't follow you across regions, so on AWS this is done with a service that provides global static IPs, such as Global Accelerator.
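As a rough sketch with the AWS CLI: an Elastic IP stays fixed only within its region, while Global Accelerator provisions static anycast IPs that can front resources in any region (the accelerator name is illustrative; the Global Accelerator API is served from us-west-2):

# Regional Elastic IP (fixed within one region only):
aws ec2 allocate-address --domain vpc

# Global static anycast IPs via Global Accelerator:
aws globalaccelerator create-accelerator --name my-accelerator \
    --ip-address-type IPV4 --region us-west-2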

ECS EC2 instance needs to be private and connect to ECS endpoint for container internet access

My question is similar to this one, except the measures taken are not enough to solve my problem.
The aim is to run containers in ECS on EC2, which need to have internet access, but do not need incoming access.
My reading suggests that in order to launch containers in ECS on EC2 and still have internet access, the containers must run in a subnet where 0.0.0.0/0 is routed to a NAT gateway in a different subnet. I have set this up, and it works as expected: an EC2 instance in that subnet has access to the internet, and even if you give it a public IP address and add rules to the security group, you can't SSH to it from outside as there is no IGW for the subnet.
The problem is that the EC2 instance has to be in the same subnet as the containers. When launching the instance in a subnet that has no internet gateway, it can't connect to the ECS endpoint and so never registers in ECS (regardless of whether it has a public IP).
Changing the subnet to one with an internet gateway allows it to register with ECS, but then the containers either can't launch because they are in a different subnet, or, if I use the same subnet as the host, they launch with no internet connection.
In the end the issue was due to me trying to run the containers in awsvpc network mode, which I was using for cross-compatibility with Fargate.
So the workaround was to run the service and task in bridge mode, with the EC2 instance given a public IP and placed in a subnet where 0.0.0.0/0 points to an internet gateway.
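A sketch of that workaround with the AWS CLI; the route table ID, gateway ID, family name, and container definition file are illustrative:

# Give the instance's subnet a default route to an internet gateway:
aws ec2 create-route --route-table-id rtb-0abc1234567890def \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc1234567890def

# Register the task definition in bridge mode instead of awsvpc:
aws ecs register-task-definition --family my-task \
    --network-mode bridge --container-definitions file://containers.json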

Google Cloud Platform networking: Resolve VM hostname to its assigned internal IP even when not running?

Is there any way in GCP to allow VM hostnames to be resolved to their IPs even when the VMs are stopped?
Listing VMs in a project reveals their assigned internal IP addresses even when the VMs are stopped. This means that, as long as the VMs aren't re-created, their internal IPs are statically assigned.
However, when our VMs are stopped, the DNS resolution stops working:
ping: my-vm: Name or service not known
even though the IP is kept assigned to it, according to gcloud compute instances list.
I've tried reserving the VM's current internal IP:
gcloud compute addresses create my-vm --addresses 10.123.0.123 --region europe-west1 --subnet default
However, the address name my-vm above is not related to the VM name my-vm and the reservation has no effect (except for making the IP unavailable for automatic assignment in case of VM re-creation).
But why?
Some fault-tolerant software will have a configuration for connecting to multiple machines for redundancy, and if at least one of the connections could be established, the software will run fine. But if the hostname cannot be resolved, this software would not start at all, forcing us to hard-code the DNS in /etc/hosts (which doesn't scale well to a cluster of two dozen VMs) or to use IP addresses (which gets hairy after a while). Specific example here is freeDiameter.
Ping uses the ICMP protocol, which requires that the target is running and responding to network requests.
Google Compute Engine VMs use DHCP for private IP addresses. DHCP is integrated with (communicates with) Google DNS: DHCP informs DNS about running network services (the VM's IP address and hostname). If the VM is shut down, this link does not exist. DHCP/DNS information is updated/replaced/deleted hourly.
You can set up Google Cloud DNS private zones, create entries for your VPC resources and resolve private IP addresses and hostnames that persist.
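A sketch with gcloud; the zone name, domain, record name, and address are illustrative:

# Create a private zone visible only to the chosen VPC network:
gcloud dns managed-zones create internal-zone \
    --dns-name corp.internal. --visibility private \
    --networks default --description "Persistent VM records"

# Add an A record that resolves regardless of the VM's power state:
gcloud dns record-sets create my-vm.corp.internal. \
    --zone internal-zone --type A --ttl 300 --rrdatas 10.123.0.123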

How to access specific host and port of an environment's node on jelastic, from another environment?

I have two environments on Jelastic 4.7. On one of them I have a Java stack and a Redis server that need to be kept private, without a public IP address. On the other environment, I have a Node.js stack that has a public IP.
I've searched the docs exhaustively and can't find the answer to the question.
Can I access the private IP and port of my Redis from the Node.js app? Every node on Jelastic has a local IP address. Can I access those between environments?
I think it's a simple question. I'm trying to avoid the overhead of creating a public IP address for Redis.
Can I access the private IP and port of my Redis from the node app? Every node on Jelastic has a local IP address. Can I access those between environments?
Yes, you can connect to nodes of different environments using just the local IP within one hosting provider or its regions (depending on the provider's setup). Also, you can use Endpoints in order to connect to the local IPs of other providers, or to other regions within one provider, if a direct connection can't be established.
Besides that, you can use, for example, the CNAME of the database instead of a local IP.
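For instance, a quick reachability check from the Node.js environment might look like this (the local IP and CNAME below are illustrative):

# Test the Redis node's local IP directly across environments:
redis-cli -h 192.168.2.15 -p 6379 ping
# Or target the node's CNAME instead of the local IP:
redis-cli -h node1234-myenv.jelastic-provider.com -p 6379 ping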
