The VM can ping the host machine, but cannot ping other public IPs - openstack

I set up OpenStack Ocata on a remote server (I'll call it the host machine).
In OpenStack I created a VM; the VM uses a security group named "allow ping & ssh" that I created myself.
Now, from my Mac I can ping the VM, but I cannot connect to it over SSH.
And inside the VM (its IP is 192.168.1.4 and its floating IP is 103.35.202.3), I can ping 192.168.1.1 and 103.35.202.1 (the host machine's public IP), but I cannot ping google.com or any other public IP.
Why can I ping the VM from my Mac but not SSH to it?
Why can the VM ping the host machine but not other public IPs?
Where is the issue?

Currently the only egress traffic allowed out is ICMP; egress rules are missing for TCP and UDP. Add egress rules for both UDP (which should help resolve the DNS issue) and TCP (which should resolve the SSH issue).
After adding the egress rule for TCP, test SSH again.
After adding the egress rule for UDP, test DNS resolution; if you still run into issues, you may want to verify the DNS servers used when configuring the network.
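A sketch of the two missing rules using the openstack CLI, assuming the client is available on the host and the group really is named "allow ping & ssh":
# adjust the security group name below if yours differs
openstack security group rule create --egress --protocol tcp "allow ping & ssh"
openstack security group rule create --egress --protocol udp "allow ping & ssh"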

Related

Tailscale doesn't reconnect after WAN failover on upstream router

Is there a way to trigger Tailscale to restart in a scenario like the following so that packets again flow to a remote Tailscale subnet over a backup ISP connection?
Scenario: Tailscale does not reconnect after my upstream router fails over to its backup ISP connection.
Prior to failover, local client machines can ping public IP addresses -- 8.8.8.8 for example -- as well as private IP addresses on the other side of a Tailscale subnet router -- 10.0.0.2 for example.
After failover, local clients regain public Internet access, but the private network on the other side of the Tailscale subnet router remains unreachable. The remote Tailscale subnet never becomes reachable again, even after waiting over 15 minutes.
The upstream router fails back after the local WAN1 Ethernet cable is plugged back in. Clients can still access the public Internet and can again reach the remote Tailscale subnet.
Test configuration:
Tailscale is running on a local Linux machine with IP forwarding enabled.
IP address is 192.168.0.2.
Default route is via 192.168.0.1.
Tailscale flags:
--advertise-routes=192.168.0.0/24
--snat-subnet-routes=false
--accept-routes
Local upstream router has two WAN ports configured for failover only.
WAN1 connects to a cable modem in bridge mode.
WAN2 connects to an LTE router in bridge mode.
LAN IP address is 192.168.0.1.
Static route to 10.0.0.0/8 via 192.168.0.2.
Tailscale is running on a remote EC2 instance in an AWS VPC with IP forwarding enabled.
IP address is 10.0.0.2.
Default route is via 10.0.0.1.
Tailscale flags:
--advertise-routes=10.0.0.0/8
--snat-subnet-routes=false
--accept-routes
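For reference, these flags would presumably all be passed to a single tailscale up invocation; on the EC2 instance that would look roughly like:
# inferred from the flags listed above
sudo tailscale up --advertise-routes=10.0.0.0/8 --snat-subnet-routes=false --accept-routes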
tailscaled generally reacts to link-change events, such as links going up or down, and figures out which interface has the default route. If both interfaces remain up and both have a default route, it may not know which one to use.
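One workaround to test, assuming tailscaled is managed by systemd on the local Linux machine, is to restart the daemon after failover so it re-detects the default route:
# assumes a systemd-managed tailscaled
sudo systemctl restart tailscaled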

How to ping an instance's internal network from the host on Devstack

I am running Devstack on my machine and I would like to know if it is possible to ping an instance from the host. The default external network of Devstack is 172.24.4.0/24, and br-ex on the host has the IP 172.24.4.1. I launch an instance on the internal network of Devstack (192.168.233.0/24) and the instance gets the IP 192.168.233.100. My host's IP is 192.168.1.10. Is there a way to ping 192.168.233.100 from my host? Another thing I tried was booting a VM directly on the external network (172.24.4.0/24), but the VM does not boot up correctly; I can only use that network for associating floating IPs.
I have edited the security group and allowed ICMP and SSH, so this is not a problem.
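The usual approach here is to add a host route sending the internal subnet via the Neutron router's gateway port on br-ex; a sketch, assuming that port received the hypothetical address 172.24.4.2:
# hypothetical gateway: replace 172.24.4.2 with the router's actual port address on br-ex
sudo ip route add 192.168.233.0/24 via 172.24.4.2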

Boot2Docker: how to access container with Bridged Networking

I am running Boot2Docker in VirtualBox on Windows, using VirtualBox bridged networking. The IP addresses of my PC (192.168.2.2) and of the VM (192.168.2.30) are assigned by the DHCP server.
I have configured the docker bridge as follows:
File /var/lib/boot2docker/profile:
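# --bip sets the docker0 bridge address; --fixed-cidr restricts the range used for container IPs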
EXTRA_ARGS='--bip=192.168.2.192/25 --fixed-cidr=192.168.2.224/27'
From my Windows PC I can successfully ping the following IP addresses:
192.168.2.30 (IP address of eth1 on the Docker host)
192.168.2.192 (IP address of docker0)
However, I cannot ping any container that I start. E.g. for container IP 192.168.2.226,
I get a reply from 192.168.2.2 (my PC's address) that the destination host is unreachable.
How can I get this to work?
I figured it out in the meantime:
On Windows 7, from an elevated cmd shell do:
route add 192.168.2.224 mask 255.255.255.224 192.168.2.30
This way the IP packets find their way to the containers!
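If the route should survive reboots, route.exe also accepts the -p flag to make it persistent (same elevated shell):
rem assumes the Docker host keeps the DHCP address 192.168.2.30
route -p add 192.168.2.224 mask 255.255.255.224 192.168.2.30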

ARP responses are received, but ICMP packets are not even reaching the VM's own host

My environment has 2 hosts with a VM on each host. An NVGRE tunnel is created, as the VMs and hosts belong to different subnets. I am using Windows Server 2012 R2 for the hosts and the same for the VMs. The hosts are connected back to back. If I put the VMs and hosts in the same subnet, ping works.
Both VMs receive ARP requests and responses from each other, and each VM's ARP cache has a dynamic entry for the other VM.
BUT the ICMP request packet from a VM is not even seen on its own host.
You cannot just ping from one host to another host.
To ping a provider address from your host, the -p option is needed.
Example:
$address = (Get-NetVirtualizationProviderAddress).ProviderAddress
ping -p $address
Please post your virtualization lookup records if you need more help.
Run the following commands as administrator.
Get-NetVirtualizationLookupRecord
Get-NetVirtualizationCustomerRoute
Also make sure each VM's firewall allows ICMP echo requests.
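A sketch for allowing inbound echo requests inside each VM, assuming the default Windows Firewall and an elevated PowerShell session (the rule name below is made up; any inbound allow rule for ICMPv4 echo works):
# hypothetical display name; allows inbound ICMPv4 echo requests (type 8)
New-NetFirewallRule -DisplayName "Allow ICMPv4 Echo" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow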

Connection to a local Tomcat server through virtual interfaces

I am hoping to connect to a Tomcat server on the local host from virtual machines running in VMware Workstation, which connects to the host with a NAT virtual network.
I started a Tomcat server with port 8080 on my host PC.
The host normally has the following interfaces:
Loopback interface, IP: 127.0.0.1
An interface for the ethernet, IP: 10.10.31.194 Gateway: 10.10.31.254
The IP and DNS values are automatically assigned.
A virtual interface for the virtual network VMnet8, IP: 192.168.129.1 Gateway: 192.168.129.2
The IP and DNS values are automatically assigned. (This interface appears after VMnet8 is set up; noted here to reduce confusion.)
I can connect to a webpage (say /helloProject/helloPage.html) on the Tomcat server with the following URLs:
http://127.0.0.1:8080/helloProject/helloPage.html
http://10.10.31.194:8080/helloProject/helloPage.html
Then I set up VMware Workstation and created a Network Address Translation network with the following configuration:
VMnet8
DHCP: Enabled
Subnet Address: 192.168.129.0
Subnet Mask: 255.255.255.0
Gateway IP: 192.168.129.2
But I cannot connect to the helloPage.html webpage through:
http://192.168.129.1:8080/helloProject/helloPage.html
Either from the host itself with IP 192.168.129.1, or from a Linux CentOS virtual machine with IP 192.168.129.128 on the same network.
However, pinging the host from either the host itself or the Linux VM, I get a response:
ping 192.168.129.1
Reply from 192.168.129.1: bytes=32 time<1ms TTL=128
...
Can anyone suggest something to try so as to make the connection work?
In addition: the VM (192.168.129.128) can reach the gateway (192.168.129.2) as well as the host (192.168.129.1), but the host (192.168.129.1) cannot reach the gateway (192.168.129.2); ping gets no response. Strange.
Check whether you have address="0.0.0.0" in the server.xml Connector tag for port 8080. That tells Tomcat to listen on all interfaces available on the host.
Restart Tomcat after the change.
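A minimal sketch of such a Connector element, assuming an otherwise default server.xml:
<!-- address="0.0.0.0" binds the connector to all host interfaces -->
<Connector port="8080" protocol="HTTP/1.1" address="0.0.0.0" connectionTimeout="20000" redirectPort="8443" />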
