I have a configuration where I have:
Pods managed by OpenShift on GCP in a zone/region
VM on GCP in same zone/region
I need to reduce as much as possible the latency between those pods and the VM on GCP.
What are the available options for that?
My understanding is that they would need to be in the same VPC, but I don't know how to do that.
If you can point me to reference documentation, it would help me a lot.
Thanks for your help
You have 2 options for that:
The best one is to create a sub-project of the OpenShift project that shares the same VPC. This way the machines are on the same network, so the latency is as low as possible; the average latency should be very low (< 1 ms). However, this comes with management constraints (firewall rules, etc.).
Another option is to use a dedicated OpenShift project. This leads to higher latency because the path is longer (VPN => Shared Services => VPN). You also need to take care of cross-region flows: the machines being in the same project does not guarantee that traffic stays within one region. You must therefore optimize the network routing through a tag that must be present on the MySQL machine. The latency in this case would vary between 2 and 10 ms, and of course it can fluctuate because the flows go through VPNs.
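If that first option maps to GCP's Shared VPC feature, the setup is done from the project that owns the network; a minimal sketch with gcloud, using hypothetical project IDs (openshift-host, app-service):
# Make the project that owns the VPC a Shared VPC host (hypothetical IDs)
gcloud compute shared-vpc enable openshift-host
# Attach the other project so its VMs can use subnets of the host's VPC
gcloud compute shared-vpc associated-projects add app-service --host-project openshift-host
After this, VMs created in the service project can be placed on the shared network, so pod-to-VM traffic stays inside one VPC.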
Placing your source and destination in the same VPC and region will definitely reduce your latency. Even though latency is not only affected by distance, I have found this documentation regarding GCP inter-region latency which could help you decide on the best scenario.
Now, regarding your question: I understand you have created a GCP cluster and a VM instance in the same zone/region but in different networks (VPCs)? If possible, could you please clarify your scenario a bit more?
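In the meantime, you can measure your current baseline from a pod; a quick sketch, assuming a pod named my-pod, an image that ships ping, and a hypothetical VM internal IP of 10.128.0.5:
# Round-trip latency from a pod to the VM's internal IP (hypothetical values)
oc rsh my-pod ping -c 20 10.128.0.5
Comparing this against pod-to-pod and VM-to-VM numbers will show where the extra hops are.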
I am looking for a basic thing, yet I have not found even a single piece of good documentation on getting it done.
I want to allocate a floating IP, then associate it with a network interface of a droplet other than eth0.
The reason is that I want the ability to switch from one IP to the other very easily from a programming language.
In a few words, I want to be able to run these two commands and have each return a different response.
curl --interface eth0 https://icanhazip.com
curl --interface eth1 https://icanhazip.com
Also, I want to know what to do once I release the floating IP: how do I roll back to the starting point?
All the documentation I read relies heavily on "ip route" and "route". Most of it did not even work; some of it worked but completely replaced the old IP with the floating one, which is not what I want, and none of it showed how to roll back the introduced configuration changes.
Please help; I have now spent a whole day trying to get this to work for a project, with no results so far.
I guess there is no need to know DigitalOcean specifically; how to make this work on other cloud providers would apply here too, I think.
Update
After asking this on the DigitalOcean community forum (https://www.digitalocean.com/community/questions/clear-guide-on-outbound-network-through-floating-ip), they claim it is not supported, although there may be some solutions; if somebody can provide such a "hacky" solution, I would take it too. Thanks
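For reference, the policy-routing setup those "ip route" guides are trying to describe looks roughly like this; a sketch only, with hypothetical addresses (eth1 at 10.0.0.2/24, gateway 10.0.0.1):
# Give eth1 its own routing table so traffic sourced from its IP leaves via eth1
ip route add default via 10.0.0.1 dev eth1 table 100
ip rule add from 10.0.0.2 lookup 100
# Rollback: delete the rule and flush the extra table
ip rule del from 10.0.0.2 lookup 100
ip route flush table 100
This leaves the main default route on eth0 untouched, which is the part the guides that "replaced the old IP" get wrong.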
In the cloud (AWS, GCP, etc.) ARP is emulated by the virtual network layer, meaning that only IPs assigned to VMs by the cloud platform can be resolved. Most L2 failover protocols break for that reason. Even if ARP worked, the allocation process for these IPs (often called "floating IPs") does not integrate with the virtual network in a standard way, so your OS can't just "grab" the IP using ARP and have the packets routed to itself.
I have not personally done this on DigitalOcean, but I assume you can call the cloud's proprietary API for this functionality if you would like to go that route.
See this link on GCP about floating IPs and their implementation. Hope this is helpful.
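If you do go the API route, moving the floating IP between droplets is a single call; a sketch, assuming a $TOKEN with write access and hypothetical IP/droplet values:
# Reassign a floating IP to another droplet via the DigitalOcean v2 API
# (hypothetical values; the move happens at the platform layer, not via ARP)
curl -X POST "https://api.digitalocean.com/v2/floating_ips/203.0.113.10/actions" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type":"assign","droplet_id":123456}'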
Here's an idea that needs to be tested:
Let's say you have Node1 (10.1.1.1/24) and Node2 (10.1.1.2/24).
Create a loopback interface on both VMs and set the same IP address on both, e.g. 10.2.1.1/32.
Start a heartbeat send/receive between them.
When Node1 starts, it automatically makes an API call to create a route for 10.2.1.1/32 pointing to itself with priority 2.
When Node2 starts, it automatically makes an API call to create a route for 10.2.1.1/32 pointing to itself with priority 1.
The nodes could monitor each other and withdraw the static routes if the other fails. Ideally you would need a third node to reach quorum and prevent split-brain scenarios, but you get the idea, right?
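A sketch of those API calls using gcloud, with hypothetical instance names and zone (note that on GCP a lower priority value wins, so Node2's route is the active one, matching the preferences above):
# Node1 creates a route for the shared loopback IP pointing to itself (standby)
gcloud compute routes create failover-ip-node1 \
  --destination-range=10.2.1.1/32 \
  --next-hop-instance=node1 --next-hop-instance-zone=us-central1-a \
  --priority=200
# Node2 does the same with a lower value, i.e. higher priority (active)
gcloud compute routes create failover-ip-node2 \
  --destination-range=10.2.1.1/32 \
  --next-hop-instance=node2 --next-hop-instance-zone=us-central1-a \
  --priority=100
# On Node2 failure, withdraw its route so traffic fails over to Node1
gcloud compute routes delete failover-ip-node2 --quiet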
I'm starting the design of the networks (VPCs, subnetworks, and such) as part of the process of moving a rather complex organization's on-premise infrastructure to the cloud.
The chosen provider is GCP, and I have read up and taken the courses for the Associate Cloud Engineer certification. However, the courses I've followed don't go into the technical details of doing something like this; they just present you with the possible options.
My background is that of a senior backend, then fullstack, developer, so unfortunately I lack some of the very interesting and useful knowledge of a sysadmin.
Our case is as follows:
On premise VMs on several racks, reachable only inside a VPN
Several projects on the GCP Cloud
Two of them need to connect to the on-premise VPN but there could be more
Some projects see each other's resources (VMs, SQL, etc.) using VPC Peering
Gradually we will abandon the on-premise, unless we find some legacy application that really is messed up
Now, I could just create a new VPN connection for every project from Hybrid Connectivity -> VPN, but I'd rather create a project dedicated to hosting the VPN gateway and allow other projects to use that resource.
Is this a possible configuration? Is it a valid design? As far as I've explored VPN creation, it seems I'll have to create a VM that exposes an IP acting as a gateway; if that's the case, I was thinking of using VPC peering to allow other projects to exit into the on-premise VPN. I have no idea if I'm talking gibberish here. I'm still waiting for some information (IKE shared key, etc.) before attempting anything, so I'm rather lost at this point.
You have to take several aspects into consideration:
Cost: if you set up a VPN in each project, and if you have to double your connectivity for HA, it will be expensive. If you have only one gateway project, it's cheaper.
Cheaper implies a trade-off. A VPN tunnel has limited bandwidth: 3 Gbps (Cloud Interconnect too, though higher and more expensive). If all your projects share the same VPN, watch out for this bottleneck.
If you want to share the gateway, at least for DEV/UAT projects, I recommend VPC peering: one VPN project, and the others attached to it with VPC peering. Take care with the IP ranges you assign for peering. If you are interested, I wrote an article on this.
It's also possible to use Shared VPC, which is great! But several products are less compatible with it (for example, the Serverless VPC Connector for Cloud Functions and App Engine isn't yet compliant with Shared VPC).
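If you go the peering route, here is a sketch of wiring one application project to a hypothetical vpn-hub project (peering must be created from both sides, and custom route exchange is needed so the peer can learn the VPN routes to on-premise):
# From the gateway project: peer the hub network with the app network,
# exporting the custom (VPN-learned) routes to the peer
gcloud compute networks peerings create hub-to-app \
  --project=vpn-hub --network=hub-vpc \
  --peer-project=app-project --peer-network=app-vpc \
  --export-custom-routes
# From the app project: peer back and import those routes
gcloud compute networks peerings create app-to-hub \
  --project=app-project --network=app-vpc \
  --peer-project=vpn-hub --peer-network=hub-vpc \
  --import-custom-routes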
I'm working on a high-availability project which includes deploying a 2-node high-availability cluster for hot replacement of the services (applications) running on the cluster nodes. The applications have inbound and outbound TCP connections and also process UDP traffic (mainly for communicating with an NTP server).
The problem is pretty standard until one needs to provide hot migration of services to the backup node with all the data stored in RAM. The applications are agnostic of backup mechanisms, and it is highly undesirable to modify them.
As the only approach to this problem, I've come up with duplication: both cluster nodes run the same applications, repeating each other's calculations. If the primary server fails, the backup server becomes the primary.
However, I have not found any ready-made proxy that does synchronous port mirroring. No existing proxy servers (haproxy, dante, 3proxy, etc.) support such a feature as far as I know. Have I missed something, or should I write a new one from scratch?
A rough sketch of the functionality can be found here:
P.S. I assume that it is possible to compare the traffic from the two clones of the same application...
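For the mirroring half specifically, one building block worth testing before writing a proxy from scratch is the iptables TEE target, which clones packets to a second host on the same L2 segment; a sketch, assuming the backup node is 10.1.1.2 and a hypothetical service port 8080:
# Clone all inbound packets for the service port to the backup node
# (the original packet is still delivered locally)
iptables -t mangle -A PREROUTING -p tcp --dport 8080 -j TEE --gateway 10.1.1.2
Note that a mirrored TCP stream alone will not complete handshakes on the backup's own stack, so this covers feeding a passive clone for comparison rather than a full hot standby.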
Long story short - I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in different projects from a single point that I will use for provisioning systems (let's call it coordinator node).
It looks like VPC network peering is a perfect solution for this. But unfortunately one of the existing networks is "legacy". Here's what the Google docs state about legacy networks:
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of a legacy network? The documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down. Is that right?
EDIT:
It has been suggested to recreate the VMs keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration fluid. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
Get VM parameters (API get method)
Delete VM without deleting PD (persistent disk)
Create VM in the new VPC network using parameters from step 1 (and existing persistent disk)
This way, stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network); I wouldn't be surprised if someone has already done that.
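A sketch of those three steps with gcloud, using hypothetical names (my-vm, new-vpc, new-subnet):
# 1. Capture the VM's current configuration
gcloud compute instances describe my-vm --zone=us-central1-a > my-vm.yaml
# 2. Delete the VM but keep its persistent disks
gcloud compute instances delete my-vm --zone=us-central1-a --keep-disks=all
# 3. Recreate it in the new VPC network, reattaching the existing boot disk
gcloud compute instances create my-vm --zone=us-central1-a \
  --network=new-vpc --subnet=new-subnet \
  --disk=name=my-vm,boot=yes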
UPDATE
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migrating a whole instance group, load balancer, etc. Check it out.
Is there any way to know any information about the Network Interface Cards (NIC) of servers in EC2?
I've tried a lot of commands that typically work on Linux, but it seems it's all abstracted away when you try them on EC2 VMs.
Alternatively, is there any way to characterize the performance of the NIC on the physical server that is hosting my VM (e.g., to measure its max throughput)? I was thinking there should be some tools for testing such things on a single server, but I couldn't find any (tools like iperf measure the bandwidth between two machines).
Thanks!
I'm not entirely sure testing the throughput of a NIC would do much good, since it seems to be variable, and there is no official documentation on the subject. If you are serving static content, S3 is your best bet. Otherwise, use some sort of caching with Varnish or something similar that you can scale out in case you are running into bandwidth issues.
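That said, you can at least see what the hypervisor exposes to the guest; a small sketch (the interface may be named ens5 instead of eth0 on newer instance types):
# Show which virtual NIC driver the instance uses (e.g. ena, ixgbevf, vif)
ethtool -i eth0
# List the instance's network interfaces via the metadata service
curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/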