Unable to SSH/Ping to VMs on Private Network of OpenStack/Packstack

We are running OpenStack Train, installed via Packstack, with Open vSwitch as the Neutron backend.
We have created an external network (10.5.0.0/22), which is an internal network of our org., and a private network (10.3.0.0/22), linked via a router.
Our org. network sits behind a pfSense firewall, which has rules permitting traffic from 10.5.0.0/22 to OpenStack's 10.3.0.0/22 and vice versa.
In the OpenStack security group, we have added ingress and egress rules to allow traffic between the two networks.
However, we are unable to ping or SSH into any VMs built on the private network (10.3.0.0/22) from our org. network (10.5.0.0/22).
VMs on the private network have internet connectivity, can ping Google, and can SSH into our org. machines on the 10.5.0.0/22 range.
The only way to SSH into the private-network VMs seems to be via a floating IP.
Is there a way to directly SSH into the private network VMs without using the floating IP?
Or is this part of OpenStack's design?
Thank you

Do you have any physical network hardware, like switches, configured to allow only specific VLAN or subnet traffic?
Can you also share how your subnet is configured ("openstack subnet show")?
Security groups do isolate traffic from outside the subnet, so a floating IP is the usual way in, but it is also possible to give a VM multiple ports on different subnets and reach it that way; see the sketch below.
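A minimal sketch of the multi-port approach, assuming a provider network named "external-net" and a server named "myvm" (both names are placeholders, not from the setup described above):

    # Create a port on the external/provider network and attach it to the VM as a second NIC.
    openstack port create --network external-net myvm-direct-port
    openstack server add port myvm myvm-direct-port

The guest then needs to bring up the new interface (e.g. via DHCP) so it has an address reachable from 10.5.0.0/22 without NAT or a floating IP.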

Related

OpenStack: what's the difference between management network and admin network in Neutron?

I'm not sure I understand the purpose of the OpenStack Neutron management subnet correctly.
The OpenStack docs suggest that it is a VLAN created to let OpenStack components talk to each other and also to allow me to SSH into the host (physical machine).
I assumed that, upon splitting a network interface into VLANs for OpenStack, I abandon the IP address assigned to that physical interface in the untagged L3 network (say, 10.100.70.), and instead split it into 3 VLANs and again get an IP address from my provider infrastructure, in another provider subnet, on this logical interface (say, 10.100.71.).
But here is a page that explains how to install OpenStack with InfiniBand, and it makes use of both a management VLAN and a PXE/admin interface. So I keep an IP in the untagged PXE network and also create a tagged management VLAN, getting IP addresses on both.
Aren't the PXE/admin network and the management VLAN network redundant here?
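For what it's worth, the two can coexist on one physical NIC: the untagged address stays on the parent interface while the management address lives on a tagged sub-interface. A hedged sketch with iproute2 (the interface name, VLAN ID and addresses are invented for illustration):

    # Untagged PXE/admin IP stays on the physical interface.
    ip addr add 10.100.70.10/24 dev eth0
    # Tagged management VLAN gets its own logical interface and IP.
    ip link add link eth0 name eth0.70 type vlan id 70
    ip addr add 10.100.71.10/24 dev eth0.70
    ip link set eth0.70 up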

ECS EC2 instance needs to be private and connect to ECS endpoint for container internet access

My question is similar to this one, except the measures taken are not enough to solve my problem.
The aim is to run containers in ECS on EC2, which need to have internet access but do not need incoming access.
My reading suggests that, in order to launch containers in ECS on EC2 and still have internet access, the containers must run in a subnet where 0.0.0.0/0 is routed to a NAT gateway in a different subnet. I have set this up and it works as expected: an EC2 instance in that subnet has access to the internet, and even if you give it a public IP address and add rules to the security group, you can't SSH to it from outside because there is no IGW for the subnet.
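The NAT route described above would look something like this with the AWS CLI (the route-table and NAT-gateway IDs are placeholders):

    # Send all non-local traffic from the private subnet through the NAT gateway.
    aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxx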
The problem is that the EC2 instance has to be in the same subnet as the containers. When launching the instance in a subnet that has no internet gateway, it can't connect to the ECS endpoint and so never registers in ECS (regardless of whether it has a public IP).
Changing the subnet to one with an internet gateway allows it to register with ECS, but then the containers either can't launch because they are in a different subnet, or, if I use the same subnet as the host, they launch but have no internet connection.
In the end, the issue was that I was trying to run the containers in awsvpc network mode, which I had chosen for cross-compatibility with Fargate.
So the workaround was to run the service and task in bridge mode, with the EC2 instance given a public IP and placed in a subnet whose 0.0.0.0/0 route points to an internet gateway; a sketch of a bridge-mode task definition follows.
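A minimal sketch of registering such a task definition with the AWS CLI; the family name, image and resource sizes are invented for illustration:

    # Register a task definition that uses bridge networking instead of awsvpc.
    aws ecs register-task-definition \
      --family my-bridge-service \
      --requires-compatibilities EC2 \
      --network-mode bridge \
      --container-definitions '[{"name":"app","image":"nginx:latest","memory":256,"essential":true}]'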

openstack instance can not access internet

An instance created in OpenStack cannot access the internet. I have created the instance from the Ubuntu cloud image.
In the security groups, I allowed all ports for ingress and egress of ICMP, TCP and UDP. I can SSH into the instance and ping the floating IP of the instance and all the other instances on the private network, but I cannot ping any IP address outside the network. In the network topology, the router connects the public and private networks, but the instance cannot access the internet and I cannot ping 8.8.8.8.
Does anyone know how to resolve this issue?
Check your ML2 and Linux bridge or OVS agent configuration; this is caused by a misconfiguration, presumably a type-driver/mechanism-driver mismatch, or the provider network not being set correctly.
Please post your config here so we can find the problem. The commands below show the sort of information that would help.
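A hedged set of commands to gather that information (the file paths assume a standard RDO/Packstack layout and may differ on other installs):

    # Check that the OVS/Linux bridge, DHCP and L3 agents are alive.
    openstack network agent list
    # Show the type/mechanism drivers and flat/provider network settings.
    grep -E 'type_drivers|mechanism_drivers|flat_networks' /etc/neutron/plugins/ml2/ml2_conf.ini
    grep -E 'bridge_mappings' /etc/neutron/plugins/ml2/openvswitch_agent.ini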
Thanks for your answer. I was able to resolve this issue by allowing ICMP ingress requests, alongside port 22, in the security groups.
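For reference, rules of roughly that shape can be added like this (the group name "default" is an assumption):

    # Allow inbound ICMP (ping) and SSH to instances in the group.
    openstack security group rule create --ingress --protocol icmp default
    openstack security group rule create --ingress --protocol tcp --dst-port 22 default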

Please Explain Kubernetes External Address vs Internal Addresses

In a VMware environment, should the external address be populated with the VM's (or host's) IP address?
I have three clusters and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, VMware, etc.) is what assigns the public IP address to the node.
KOPS deployed to GCP = external addresses are the real public IP addresses of the nodes.
Kubeadm deployed to VMware, using the VMware cloud provider = external address is the same as the internal address (a private range).
Kubeadm deployed, NO cloud provider = no external IP.
I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external ip". This only works with my first two clusters.
My tool runs on the local network of the clusters; should it be targeting the "internal ip" instead? In other words, is the internal IP ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
Thank you
Bare metal will not have an "external-ip" for the nodes, and the "internal-ip" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.
When using k8s on bare metal, the external-IP and load-balancer functions don't exist natively. If you want to expose an "external IP" (quotes because in most cases it would still be a 10.X.X.X address) from your bare-metal cluster, you would need to install something like MetalLB.
https://github.com/google/metallb
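If the tool is switched to the internal address, a one-liner along these lines (assuming kubectl access to the same clusters) pulls it from the node objects the tool already scrapes:

    # Print each node's name and its InternalIP address.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'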

How do I configure a VPC to allow outbound traffic over a customer gateway

I have configured a VPC to communicate with an on-prem private network as outlined here. I am able to ping servers in my on-prem network through the virtual private gateway. I have two private subnets, and the route table associated with each of those subnets is configured as below:
10.255.254.0/23 local
0.0.0.0/0 vgw-xxxxxxx
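Expressed with the AWS CLI, that default route would look roughly like this (the route-table and gateway IDs are placeholders):

    # Point all non-local traffic at the virtual private gateway.
    aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id vgw-xxxxxxxx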
My expectation is that all of my traffic, internet or otherwise, is sent over the VGW to the CGW and is then subject to our on-premises firewall policies. In fact, the article linked above specifically says that is the case:
The instances in the VPN-only subnet can't reach the Internet directly; any Internet-bound traffic must first traverse the virtual private gateway to your network, where the traffic is then subject to your firewall and corporate security policies.
When running a server on one of the private subnets, my traceroute to www.google.com looks like this:

As you can see from the above, traffic to www.google.com is just dying on the first hop.
I know that this can be achieved by adding a NAT gateway to the public subnet, but I would prefer that all traffic flow through the on-prem network instead.
What piece am I missing to make this work?
