In my OpenStack setup I have one tenant with several projects, and I'm trying to figure out the best approach to enable communication between the subnets 10.200.0.0/24 <-> 10.202.1.0/24 of two projects (see picture below).
Would that be by creating a shared network and connecting GW01 and GW02 to it with static routes?
Or is there such a thing as a "shared router"?
I am a bit lost in the endless possibilities of OpenStack and would appreciate any help/hints.
I would not dare to claim that I have the 'best' approach for you, but I can tell you what my initial idea would be.
You can share networkB with ProjectA using the instructions here: https://docs.openstack.org/neutron/latest/admin/config-rbac.html. Now if you create a router in ProjectA, you can add interfaces for it in the subnets of networkA as well as the subnets of networkB. Because the router knows both networks, you don't need to add static routes to route between them.
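In case it helps, here is a minimal sketch of that idea with the openstack CLI (the project, network, router and subnet names are placeholders for your own resources):

# As admin (or the owner of networkB), share networkB with ProjectA via RBAC
openstack network rbac create \
  --target-project ProjectA \
  --action access_as_shared \
  --type network networkB

# In ProjectA, create one router and plug it into a subnet of each network
openstack router create routerA
openstack router add subnet routerA subnetA
openstack router add subnet routerA subnetB

With both interfaces on the same router, traffic between the two subnets is routed without any static routes.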
This is the correct solution:
Create a port on the shared network
Attach the port to each tenant (project) router using the openstack CLI
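A rough sketch of those two steps with the openstack CLI (shared-net, routerA/routerB and the port names are placeholders; one port is created per project router):

openstack port create --network shared-net port-for-routerA
openstack port create --network shared-net port-for-routerB

openstack router add port routerA port-for-routerA
openstack router add port routerB port-for-routerB

Depending on your topology, each router may additionally need a static route pointing the other project's subnet at the other port's IP on the shared network.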
Related
I'm in the process of designing the networks (VPCs, subnetworks and such) as part of moving a rather complex organization's on-premise infrastructure to the cloud.
The chosen provider is GCP, and I have read up and taken the courses for the Associate Cloud Engineer certification. However, the courses I've followed don't go into the technical details of doing something like this; they just present you with the possible options.
My background is as a senior backend, then fullstack, developer, so unfortunately I lack some of the very interesting and useful knowledge of a sysadmin.
Our case is as follows:
On-premise VMs on several racks, reachable only inside a VPN
Several projects on the GCP Cloud
Two of them need to connect to the on-premise VPN, but there could be more
Some projects see each other's resources (VMs, SQL, etc.) using VPC Peering
Gradually we will abandon the on-premise setup, unless we find some legacy application that is really messed up
Now, I could just create a new VPN connection for every project from Hybrid Connectivity -> VPN, but I'd rather create a project dedicated to hosting the VPN gateway and allow the other projects to use that resource.
Is this a possible configuration? Is it a valid design? As far as I've explored the VPN creation, it seems that I'll have to create a VM that will expose an IP acting as a gateway; if that's the case, I was thinking of using VPC peering to allow the other projects to exit into the on-premise VPN. No idea if I'm talking gibberish here. I'm still waiting for some information (IKE shared key, etc.) before attempting anything, so I'm rather lost at this point.
You have to take into consideration several aspects:
Cost: if you set up a VPN in each project, and if you have to double your connectivity for HA, it will be expensive. If you have only one gateway project, it's cheaper.
Cheaper implies trade-offs. Cloud VPN has limited bandwidth: 3 Gbps per tunnel (Cloud Interconnect too, but higher and more expensive). If all your projects use the same VPN thanks to mutualization, watch out for this bottleneck.
If you want to mutualize, at least for DEV/UAT projects, I recommend using VPC Peering: I mean one VPN project, and the others attached to it with VPC peering (a minimal gcloud sketch follows this list). Take care with the IP ranges you assign for peering. If you are interested, I wrote an article on this.
It's also possible to use Shared VPC, which is great! But there is less compatibility with several products (for example, the serverless VPC connector for Cloud Functions and App Engine isn't yet compliant with Shared VPC).
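For the peering part, a hedged gcloud sketch (project and network names are placeholders; the peering must be created from both sides and the ranges must not overlap). Exchanging the VPN routes over the peering relies on custom route export/import:

# From the VPN/hub project
gcloud compute networks peerings create hub-to-dev \
  --network=hub-vpc \
  --peer-project=dev-project \
  --peer-network=dev-vpc \
  --export-custom-routes

# From the dev project (the peering becomes ACTIVE once both sides exist)
gcloud compute networks peerings create dev-to-hub \
  --network=dev-vpc \
  --peer-project=hub-project \
  --peer-network=hub-vpc \
  --import-custom-routes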
I followed the documentation from docs.corda.net to set up a 3-node dev Corda network on a single machine.
My goal is to set up a multi-node, production-level Corda network that involves multiple physical machines. Can someone please help me with how I can achieve this?
I want to learn about Corda network capabilities, its different configuration modes, etc.
I've already set up a 3-node dev Corda network on a single machine.
There are two approaches with which you can achieve the above:
Using the network bootstrapper, refer to: https://docs.corda.net/network-bootstrapper.html
Using the Network Map Service
For a production-level network it is preferable to use the network map service, as you can manage the nodes dynamically. This is not possible with the network bootstrapper, as the node information is shared between the nodes during bootstrapping and cannot be changed afterwards.
For a Network Map Service you can refer to the Cordite NetworkMapService.
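For the bootstrapper approach, the workflow looks roughly like this (the jar and directory names are placeholders; check the exact jar name for your Corda version):

# Place one node.conf per node under ./nodes/<node-name>/, then run:
java -jar corda-network-bootstrapper.jar --dir ./nodes

# Copy each generated node directory to its own machine and start the node there:
java -jar corda.jar

As noted above, the bootstrapper bakes the node information into each node's network parameters, so adding or changing nodes later means re-bootstrapping, which is why a network map service is preferred for production.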
Long story short: I need to use networking between projects to have separate billing for them.
I'd like to reach all the VMs in the different projects from a single point that I will use for provisioning systems (let's call it the coordinator node).
It looks like VPC Network Peering is a perfect solution for this. But unfortunately one of the existing networks is "legacy". Here's what the Google docs state about legacy networks:
About legacy networks
Note: Legacy networks are not recommended. Many newer GCP features are not supported in legacy networks.
OK, naturally the question arises: how do you migrate out of a legacy network? The documentation does not address this topic. Is it not possible?
I have a bunch of VMs, and I'd be able to shut them down one by one:
shutdown
change something
restart
Unfortunately it does not seem possible to change the network even when the VM is down.
EDIT:
It has been suggested to recreate the VMs keeping the same disks. I would still need a way to bridge the legacy network with the new VPC network to make the migration fluent. Any thoughts on how to do that using the GCE toolset?
One possible solution: for each VM in the legacy network:
Get the VM's parameters (API get method)
Delete the VM without deleting its PD (persistent disk)
Create a VM in the new VPC network using the parameters from step 1 (and the existing persistent disk)
This way, stop-change-start is not so different from delete-recreate-with-changes. It's possible to write a script to fully automate this (migration of a whole network). I wouldn't be surprised if someone has already done that.
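A hedged sketch of those three steps with gcloud (the instance, zone, disk, machine type and network names are placeholders; check the output of step 1 for the real disk name and machine type before recreating):

# 1. Record the VM's parameters
gcloud compute instances describe my-vm --zone us-central1-a --format=json > my-vm.json

# 2. Delete the VM but keep its persistent disks
gcloud compute instances delete my-vm --zone us-central1-a --keep-disks=all

# 3. Recreate it in the new VPC network, reattaching the existing boot disk
gcloud compute instances create my-vm \
  --zone us-central1-a \
  --machine-type n1-standard-1 \
  --network my-vpc --subnet my-subnet \
  --disk name=my-vm,boot=yes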
UPDATE:
The https://github.com/googleinterns/vm-network-migration tool automates the above process, and it also supports migration of a whole instance group, load balancer, etc. Check it out.
How do I make two VMs communicate with each other? I have to split a task between two VMs, so I think MPI has to be used. If so, are there any useful resources that I can use to get started? Any help would be appreciated.
P.S.: I have installed DevStack Juno.
Your question is not really clear.
OpenStack is just a virtualization technology. There's almost no difference between having two hardware servers and two VMs. E.g., normally if two servers belong to the same network segment, they will have access to each other's open ports. OpenStack works in just the same way: if you assign the same network to the VMs, then this will also work.
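For example, with the openstack CLI (the network, image and flavor names are placeholders), two instances booted on the same network can talk to each other directly:

openstack network create tasknet
openstack subnet create --network tasknet --subnet-range 192.168.10.0/24 tasknet-subnet
openstack server create --image cirros --flavor m1.tiny --network tasknet vm1
openstack server create --image cirros --flavor m1.tiny --network tasknet vm2

vm1 and vm2 can then reach each other on whatever ports their security groups allow.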
However, if you wish to install two VMs that will consume from a list of tasks and process them in parallel, I would recommend that you read about Enterprise Integration Patterns (e.g. here). Technically this is implemented using one or several messaging middleware servers such as ActiveMQ or ZeroMQ.
I was just curious if I can connect any two interfaces of any two VMs running on the same box using VMware Player, as if the two interfaces were connected by a network wire. I need this setup for simulating some networking tests.
Under Workstation you can very easily create what you're after; however, in Player, if you select host-only there would be three things on the network: the host and the two guests.
I don't know the reason for the downvoting.
Anyway, I think that creating one more virtual network, such as vmnetX, and then connecting the two interfaces to it will solve the problem. Please correct me if I am wrong.
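To make the idea concrete, this is the kind of .vmx change I have in mind (VMnet2 and the adapter index are just examples; edit each VM's .vmx while the VM is powered off and point the same adapter of both VMs at the same custom network):

ethernet1.present = "TRUE"
ethernet1.connectionType = "custom"
ethernet1.vnet = "VMnet2"

With both VMs attached to the same VMnet, they end up on an isolated segment together, which is about as close to a direct wire as Player gets.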
P.S.: It is obvious you cannot connect the two VMs using a 'wire' :)