Long story short - I need to keep things in separate projects to get separate billing, so I need networking between those projects.
I'd like to reach all the VMs in the different projects from a single point that I will use for provisioning systems (let's call it the coordinator node).
It looks like VPC Network Peering is a perfect solution for this. But unfortunately, one of the existing networks is "legacy". Here's what the Google docs state about legacy networks:
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of a legacy network? The documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I could shut them down one by one:
shutdown
change something
restart
Unfortunately, it does not seem possible to change the network even when the VM is down - or is it?
EDIT:
It has been suggested to recreate the VMs while keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration smooth. Any thoughts on how to do that using the GCE toolset?
One possible solution - for each VM in the legacy network:
1. Get the VM parameters (API get method)
2. Delete the VM without deleting its PD (persistent disk)
3. Create a VM in the new VPC network using the parameters from step 1 (and the existing persistent disk)
This way stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone already did that.
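For reference, a rough sketch of steps 1-3 using the google-api-python-client Compute API might look like the following. The project, zone, instance and network names are placeholders, and a real script would also wait for each operation to complete and strip read-only fields (id, status, selfLink, ...) from the instance body before re-inserting it.

```python
# Rough sketch: delete a legacy-network VM and recreate it, with the same
# persistent disks, in a new VPC network. All names below are placeholders.

import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")

PROJECT, ZONE, NAME = "my-project", "europe-west1-b", "legacy-vm"
NEW_NETWORK = "projects/my-project/global/networks/new-vpc"
NEW_SUBNET = "projects/my-project/regions/europe-west1/subnetworks/new-subnet"

# 1. Get the VM's current parameters.
instance = compute.instances().get(
    project=PROJECT, zone=ZONE, instance=NAME).execute()

# Make sure the persistent disks survive the deletion.
for disk in instance["disks"]:
    compute.instances().setDiskAutoDelete(
        project=PROJECT, zone=ZONE, instance=NAME,
        deviceName=disk["deviceName"], autoDelete=False).execute()

# 2. Delete the VM; the disks are kept because auto-delete is now off.
compute.instances().delete(project=PROJECT, zone=ZONE, instance=NAME).execute()

# 3. Recreate the VM with the same disks, attached to the new VPC network.
instance["networkInterfaces"] = [{"network": NEW_NETWORK, "subnetwork": NEW_SUBNET}]
compute.instances().insert(project=PROJECT, zone=ZONE, body=instance).execute()
```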
UPDATE
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migrating a whole Instance Group, Load Balancer, etc. Check it out.
Related
I'm starting to design the networks (VPC, subnetworks and such) as part of the process of moving a rather complex organization's on-premise structure to the cloud.
The chosen provider is GCP, and I have read up and taken the courses to become an Associate Engineer. However, the courses I've followed don't go into the technical details of doing something like this; they just present the possible options.
My background is as a senior backend, then full-stack, developer, so unfortunately I lack some of the very interesting and useful knowledge of a sysadmin.
Our case is as follows:
On-premise VMs on several racks, reachable only inside a VPN
Several projects on the GCP Cloud
Two of them need to connect to the on-premise VPN, but there could be more
Some projects see each other's resources (VMs, SQL, etc.) using VPC Peering
Gradually we will abandon the on-premise setup, unless we find some legacy application that really is too messed up to move
Now, I could just create a new VPN connection for every project from Hybrid Connectivity -> VPN, but I'd rather create a project dedicated to hosting the VPN gateway and allow the other projects to use that resource.
Is this a possible configuration? Is it a valid design? As far as I have explored VPN creation, it seems that I'll have to create a VM that exposes an IP acting as a gateway; if that's the case, I was thinking of using VPC peering to let the other projects exit into the on-premise VPN. No idea if I'm talking gibberish here. I'm still waiting for some information (IKE shared key, etc.) before attempting anything, so I'm rather lost at this point.
You have to take several aspects into consideration:
Cost: if you set up a VPN in each project, and if you have to double your connectivity for HA, it will be expensive. If you have only one gateway project, it's cheaper.
Cheaper implies a trade-off. A VPN has limited bandwidth: about 3 Gbps per tunnel (Cloud Interconnect too, but higher and more expensive). If all your projects share the same VPN through this mutualization, watch out for this bottleneck.
If you want to mutualize, at least for DEV/UAT projects, I recommend VPC Peering: one VPN project, and the others connected to it with VPC peering (see the sketch after this list). Take care with the IP ranges you assign for peering. If you are interested, I wrote an article on this.
It's also possible to use Shared VPC, which is great! But there is less compatibility with several products (for example, the serverless VPC connector for Cloud Functions and App Engine isn't yet compliant with Shared VPC).
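For illustration, here is a minimal sketch of wiring one spoke project to the VPN hub project with VPC Network Peering, using the google-api-python-client Compute API. All project, network and peering names are placeholders; the peering has to be created from both sides, and the IP ranges of the hub, the spokes and the on-premise side must not overlap. For the spokes to actually reach on-premise through the hub's VPN, custom route export/import also has to be enabled on the peering.

```python
# Rough sketch: peer a "spoke" project's VPC with the VPC of the project
# that owns the VPN gateway (the "hub"). Names below are placeholders.

import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")

HUB_PROJECT, HUB_NETWORK = "vpn-hub-project", "hub-vpc"      # owns the VPN gateway
SPOKE_PROJECT, SPOKE_NETWORK = "app-project", "app-vpc"      # wants on-prem access

def peer(project, network, peer_project, peer_network, name):
    """Create one side of a VPC peering between two networks."""
    body = {
        "name": name,
        "peerNetwork": f"projects/{peer_project}/global/networks/{peer_network}",
    }
    return compute.networks().addPeering(
        project=project, network=network, body=body).execute()

# Both directions are required before the peering becomes ACTIVE.
peer(HUB_PROJECT, HUB_NETWORK, SPOKE_PROJECT, SPOKE_NETWORK, "hub-to-app")
peer(SPOKE_PROJECT, SPOKE_NETWORK, HUB_PROJECT, HUB_NETWORK, "app-to-hub")
```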
We want to use ZeroTier to connect from one cloud machine to multiple remote machines. We do not want the remote machines to access each other. What would be a good approach?
Use a single network and set rules based on tags to restrict access
Run multiple networks, each having cloud machine and a remote machine
Are there limits to:
The number of members in a ZeroTier network
The number of ZeroTier networks a machine can connect to at a time (tun interfaces, IP conflicts, or performance impact)
I would use a single network and use rules to prevent peering between the machines. For instance, you could dedicate the 192.168.141.0/25 portion of the network to machines that are not allowed to peer with each other, and allow only defined network paths between hosts.
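If it helps, here is a rough sketch of the single-network approach driven through the ZeroTier Central API (my.zerotier.com) with Python's requests. The API token, network ID, node ID and the tag/rule layout are all placeholders, and the rules-source syntax is adapted from the ZeroTier rules-engine examples - double-check both the field names and the rule grammar against the current ZeroTier documentation before relying on this.

```python
# Rough sketch only: push a hub-and-spoke rule set to a ZeroTier network via
# the Central API, then tag the single cloud machine as the "hub". The token,
# network ID, node ID and tag IDs are hypothetical values.

import requests

TOKEN = "ZT_CENTRAL_API_TOKEN"       # from the my.zerotier.com account settings
NETWORK = "0123456789abcdef"         # 16-character network ID
NODE = "abcdef0123"                  # 10-character node ID of the cloud machine
BASE = "https://api.zerotier.com/api/v1"
HEADERS = {"Authorization": f"token {TOKEN}"}

# Tag-based rules: traffic is allowed only if at least one side is the hub,
# so the remote machines cannot reach each other directly.
RULES_SOURCE = """
tag role
  id 1000
  enum 1 hub
  enum 0 remote
  default remote
;
accept tor role 1;   # accept if either side carries the hub tag
drop;                # everything else (remote <-> remote) is blocked
"""

# Update the network's rules source (compiled server-side by Central).
requests.post(f"{BASE}/network/{NETWORK}", headers=HEADERS,
              json={"rulesSource": RULES_SOURCE}).raise_for_status()

# Mark the cloud machine as the hub and make sure it is authorized.
requests.post(f"{BASE}/network/{NETWORK}/member/{NODE}", headers=HEADERS,
              json={"config": {"authorized": True, "tags": [[1000, 1]]}}).raise_for_status()
```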
Just a personal rant here: you don't want to do that. Really. You're going to make a headache for yourself when you have to scale horizontally (which you will if you're successful). I would STRONGLY recommend taking an mTLS approach to service authentication instead. It's somewhat more work at the start, but a lot easier in the long run.
How do I make two VMs communicate with each other? I have to split a task between two VMs, so I think MPI has to be used. If so, are there any useful resources I can use to get started? Any help would be appreciated.
P.S.: I have installed DevStack Juno.
Your question is not really clear.
OpenStack is just a virtualization technology. There's almost no difference between having two hardware servers and two VMs. For example, normally if two servers belong to the same network segment, they will have access to each other's open ports. OpenStack works in just the same way: if you assign the same network to the VMs, this will also work.
However, if you wish to install two VMs that consume from a list of tasks and process them in parallel, I would recommend reading about Enterprise Integration Patterns (e.g. here). Technically this is implemented using one or more messaging middleware servers or libraries, such as ActiveMQ or ZeroMQ.
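As a concrete illustration of that pattern, here is a minimal sketch of a task queue between two VMs using ZeroMQ via pyzmq; the address 10.0.0.1 and port 5557 are placeholders for wherever the first VM is reachable from the second.

```python
# Minimal PUSH/PULL task distribution between two VMs with ZeroMQ (pip install pyzmq).

import zmq

def producer():
    """Runs on the first VM: pushes tasks to any connected workers."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)
    sock.bind("tcp://*:5557")                 # wait for workers to connect
    for task_id in range(100):
        sock.send_json({"task": task_id})

def worker():
    """Runs on the second VM: pulls tasks and processes them."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)
    sock.connect("tcp://10.0.0.1:5557")       # address of the producer VM
    while True:
        task = sock.recv_json()
        print("processing task", task["task"])
```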
Is there a way to start multiple virtual machines (instances) simultaneously in CloudStack?
Apparently this can't be done using the HTTP user interface. Also, the HTTP API request accepts only one id for targeting a virtual machine.
All I can think of to solve this problem is to fire an individual start request for each instance, then poll each job for results. Is there a better way?
CloudStack is an API-driven system; if there is no API call where you can specify multiple VMs to be started (and I don't think there is), then it is not possible.
If you do need to start multiple machines (nearly) simultaneously, the only option I see is to fire multiple API calls, as you already mentioned.
See this answer on another question for a list of tools that make interfacing with CloudStack easier.
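For what it's worth, a small script can do the fire-then-poll loop described above. The sketch below assumes the third-party cs Python client (pip install cs); the endpoint, keys and VM UUIDs are placeholders, and the jobstatus values follow the CloudStack convention of 0 = pending, 1 = succeeded, 2 = failed.

```python
# Rough sketch: fire async startVirtualMachine calls for several VMs, then
# poll the resulting jobs until they all finish. All credentials/UUIDs are placeholders.

import time
from cs import CloudStack

api = CloudStack(endpoint="https://cloudstack.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

vm_ids = ["uuid-1", "uuid-2", "uuid-3"]   # hypothetical VM UUIDs

# Fire all start requests first; each returns an async job id immediately.
jobs = {vm_id: api.startVirtualMachine(id=vm_id)["jobid"] for vm_id in vm_ids}

# Then poll until every job has completed.
pending = dict(jobs)
while pending:
    for vm_id, job_id in list(pending.items()):
        result = api.queryAsyncJobResult(jobid=job_id)
        if result["jobstatus"] != 0:          # 0 = still pending
            print(vm_id, "finished with status", result["jobstatus"])
            del pending[vm_id]
    time.sleep(2)
```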
To start VMs on CloudStack "simultaneously" (though actually in serial), I used CloudMonkey and created a bash script to start a known group of VM UUIDs. See here for my experience:
https://sites.google.com/site/cloudfyp/tutorial/cloudmonkey/commands-on-cloudmonkey
Consider the following situation:
You have an application you are testing, but in order to test the networking functionality of said program, you are required to run multiple instances of it and have them communicate with one another.
Possible solutions are:
- Run software on individual machines connected by WAN or LAN.
- Run the software on virtual machines, all on the same computer.
I do not want to use either of these methods (the reasoning is irrelevant). I want to know if there is a way to reroute network transmissions from the test application (ideally in any programming language) such that I can run multiple instances of the same software on one computer and have each behave as if it were the only instance running on that computer.
In other words, I want to be able to code the application so that each instance listens on the same "listening" port (since only one instance will be running on each computer when in production). Then, I want to know if I can reroute the network requests at a lower level than the application so that they do not interfere with each other (clash over the same port number).
Essentially, I want to build a virtual environment which only redirects the network calls (whereas a virtual machine takes far more resources, and has way more involved). Is this possible, and how might I approach this problem?
Thank you!
UPDATE: This is a more accurate idea of what I want to accomplish:
Basically, I want to program another application which TRANSPARENTLY redirects bind requests to available ports, and manages which applications are bound where... So from the application's perspective, all the instances are bound to port 1000, but in reality, this other application is automatically managing which instance is bound where and avoiding potential conflicts. I feel like this could be accomplished with Windows hooks, but I'm not sure how you could implement it?
As far as I know, there is no sane way to multiplex the same port on the same network device. At the very minimum, you will need to choose one of the following:
Run each instance of your program on a different port
Create multiple virtual network interfaces
The first choice is easy and may be the one I would choose. The second one is more towards what you are looking for, but it would be a true PITA to set up - you can look into VirtualBox and its host-only networks for inspiration. If you are writing things on Linux you might look into pipes and chrooting, but you'll be spending more time setting up this environment than writing your software.
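One lighter-weight middle ground, if the application can be told which local address to bind, is to give every instance its own loopback alias and keep the port number the same. A minimal sketch follows; the addresses and the port 1000 from the question are placeholders, ports below 1024 need elevated privileges on Linux, and extra loopback aliases must be added manually on some OSes such as macOS.

```python
# Two "instances" each own the same port, but on different loopback addresses,
# so their listeners never clash.

import socket

def listen_on(address: str, port: int) -> socket.socket:
    """Bind a listening TCP socket to a specific local address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((address, port))   # binds only this loopback alias, not 0.0.0.0
    s.listen()
    return s

instance_a = listen_on("127.0.0.2", 1000)
instance_b = listen_on("127.0.0.3", 1000)
print("both listening on port 1000 without clashing")
```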