GRE tunnel on OVS - VPN

I have a problem with an Open vSwitch GRE tunnel connection.
I want to connect two different networks, with OVS acting as the CPE in GNS3, and I want to configure a GRE tunnel between the two OVS instances.
I used the following command on OVS:
ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre \
options:remote_ip=<GRE tunnel endpoint on other hypervisor>
Then I tested with Wireshark, but the GRE tunnel didn't work.
Where did I go wrong? How can I establish the GRE tunnel?
I'd appreciate your help.
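For reference, traffic will only flow once a matching GRE port exists on both OVS instances and the endpoint addresses are reachable between the hypervisors. A minimal sketch, assuming bridge br2 on one side, br1 on the other, and hypothetical endpoint IPs 192.0.2.1 and 192.0.2.2:
# On OVS A (local endpoint 192.0.2.1), point the tunnel at OVS B
ovs-vsctl add-port br2 gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.2
# On OVS B (local endpoint 192.0.2.2), point the tunnel back at OVS A
ovs-vsctl add-port br1 gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.1
# Verify that the gre0 port appears on each bridge
ovs-vsctl show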

Related

Can an OpenVPN route over TEST-NET-1 (RFC 5735)?

Background
I have a strange use case where my VPN cannot be on any of the private subnets, but also cannot use a TAP interface. The machine will be moving through different subnets and requires access to the entire private address space by design. A single blocked IP would be considered a failure of the design.
So, these are all off limits:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
169.254.0.0/16
In searching for a solution, I came across RFC 5735, which defines:
192.0.2.0/24 TEST-NET-1
198.51.100.0/24 TEST-NET-2
203.0.113.0/24 TEST-NET-3
As:
For use in documentation and example code. It is often used in conjunction with domain names
example.com or example.net in vendor and protocol documentation. As described in [RFC5737], addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry.
Which was a "Jackpot" moment for me and my use case.
Config
I configured an OpenVPN server as such:
local 0.0.0.0
port 443
proto tcp
dev tun
topology subnet
server 203.0.113.0 255.255.255.0 # TEST-NET-3 RFC 5735
push "route 203.0.113.0 255.255.255.0"
...[Snip]...
With the client config:
client
nobind
dev tun
proto tcp
...[Snip]...
And the UFW NAT rules:
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 203.0.113.0/24 -o ens160 -j MASQUERADE
COMMIT
However, upon connecting I see the following in the error logs, while the VPN completes the rest of its connection successfully:
/sbin/ip route add 203.0.113.0/24 via 203.0.113.1
RTNETLINK answers: File exists
No connection
Running the following commands:
Server: sudo python3 -m http.server 80
Client: curl -X GET 203.0.113.1
Results in:
curl: (28) Failed to connect to 203.0.113.1 port 80: Connection timed out
I have tried:
/sbin/ip route replace 203.0.113.0/24 dev tun0 on the client and the server.
/sbin/ip route change 203.0.113.0/24 dev tun0 on the client and the server.
Adding route 203.0.113.0 255.255.255.0 to the server config.
Adding push "route 203.0.113.0 255.255.255.0 127.0.0.1" to the server config.
And none of it seems to work.
Does anyone have any idea how I can force the client to send this traffic over the VPN to my server, instead of to the public IP?
This does actually work!
Just don't forget to allow connections through your firewall. I fixed my config with:
sudo ufw allow in on tun0
However, 198.18.0.0/15 and 100.64.0.0/10, defined as the benchmarking and shared address space respectively, may be more appropriate choices, since being able to forward TEST-NET addresses may be considered a bug.
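If you do switch ranges, the change is confined to the server config. A minimal sketch, assuming an arbitrary /24 carved out of the benchmarking block (the client config stays the same):
server 198.18.0.0 255.255.255.0      # from 198.18.0.0/15, the benchmarking range
push "route 198.18.0.0 255.255.255.0"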

Mininet + GNS3: pingall fails and DHCP doesn't work

I'm still a beginner; I'm facing some issues and need your help.
1. I integrated Mininet into GNS3 successfully. The Mininet VM can ping all the routers and other VMs, and it can get an address through DHCP immediately without problems. However, when I run this command,
sudo mn --controller=remote,ip=192.168.1.10,port=6653
the OVS switch connects to the Floodlight controller, but pingall fails.
2. I tried to add my network interface (eth0) to my bridge (s1) in order to connect the Mininet host to the internet, but dhclient takes a long time and can't assign an IP address to the bridge:
add eth0: ovs-vsctl add-port s1 eth0
remove eth0's IP addressing: ifconfig eth0 0
make the s1 interface get a DHCP IP: dhclient s1
I'm using:
Floodlight master
Open vSwitch 2.5.4
GNS3 2.1.8 on Windows (64-bit) with Python 3.6.5, Qt 5.8.0 and PyQt 5.8
Ubuntu 16.04.4
Can someone please help me solve these issues?
Thanks in advance.
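For issue 1, a first check worth doing is whether s1 actually reached the controller and whether any flows were installed. A minimal diagnostic sketch, run on the Mininet VM (the switch name s1 is the Mininet default):
# Show the configured controller target and bridge layout
sudo ovs-vsctl get-controller s1
sudo ovs-vsctl show
# is_connected: true means the OpenFlow session to Floodlight is up
sudo ovs-vsctl list controller
# If this shows no flows, the controller is not installing forwarding rules and pingall will fail
sudo ovs-ofctl dump-flows s1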

Configure LXC to use wireless hosted network

Most of the configuration I found is for assigning a static or private network. But I want the container to act as a separate machine, so that it gets its own IP address from DHCP, and I want to do it through nmcli.
Thanks in advance.
If you are using Docker as tagged, rather than LXC, use pipework to map the wlan interface from the host to the container:
pipework eth2 $CONTAINERID 10.10.9.9/24
or alternatively, let the container do the DHCP negotiation for you:
pipework eth1 $CONTAINERID dhclient
This setup is based on a macvlan interface, so the same concept should work with LXC; you just won't get the easy front end.
I'm confused whether this is a Docker question or an LXC question.
EDIT: as per the comments, wlan interface support in a bridge depends on the wlan vendor. It may work, or it may not work at all.
In any case, you should be able to create a bridge, add your wlan0 interface to the bridge, and then have your LXC container connect to this bridge directly. Then, when you run your DHCP client in the container, it will grab it from the wlan0 interface.
Configure bridge (manually for now)
# ifconfig wlan0 up
# brctl addbr br0
# brctl addif br0 wlan0
# ifconfig br0 up
# dhclient br0
Configure LXC configuration
If using traditional privileged LXC, edit the container's config file at /var/lib/lxc/$NAME/config
and update this value to point to your new bridge:
lxc.network.link = br0
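For context, a typical full network stanza in that file looks like the sketch below; lxc.network.link is the only line that needs to change here, the others are the usual veth defaults (on LXC 2.1 and later the keys are named lxc.net.0.* instead):
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx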
Run DHCP in container
# lxc-attach -n $NAME
# dhclient eth0
# ip a
If the output of ip a shows the desired IP, you're all set!
If you want to make the configuration persistent, you'll have to add the bridge to your /etc/network/interfaces file.
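A sketch of what that stanza could look like, assuming the bridge is br0, wlan0 is the uplink, and the bridge-utils ifupdown integration is installed (keeping in mind the caveat below about wireless NICs in bridges):
auto br0
iface br0 inet dhcp
    bridge_ports wlan0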
IEEE 802.11 doesn't like multiple MAC addresses on a single client, so bridges and macvlan are not the right solution here.
Use ipvlan in L2 mode.
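A minimal sketch of that approach on the host side; the interface name ipvl0 and the container PID variable are hypothetical, and the container then runs its DHCP client on the handed-over interface:
# Create an ipvlan interface in L2 mode on top of the wireless NIC
ip link add ipvl0 link wlan0 type ipvlan mode l2
# Move it into the container's network namespace (PID of the container's init)
ip link set ipvl0 netns $CONTAINER_PID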

OpenStack VM is not accessible on LAN

I am facing an issue with accessing OpenStack VMs on the LAN.
I have set up a single-machine (192.168.2.15) OpenStack using DevStack, so all VMs are running inside this machine.
My machine (192.168.2.15) has one network card (eth0), and I am using nova-network; I have not installed Neutron.
I have assigned static IPs on eth0 of all the LAN machines (such as 192.168.2.15 and 192.168.2.16) in the /etc/network/interfaces file.
System information of the OpenStack machine is as below:
Memory usage: 19% IP address for virbr0: 192.168.122.1
Swap usage: 0% IP address for br100: 10.0.0.1
The following works fine:
I can access the internet from VM1 (10.0.0.2, which is an auto-assigned IP).
I can ping the LAN machine (192.168.2.16) from VM1.
The OpenStack machine (192.168.2.15) can ping VM1 (10.0.0.2).
VM1 (10.0.0.2) can ping VM2 (10.0.0.3).
But the LAN machine 192.168.2.16 is not able to ping VM1 (10.0.0.2).
Please suggest how this can be achieved, and please consider me very new to OpenStack and networking.
Thanks!
You need to assign a floating IP to the VMs you create if you want a host from outside the OpenStack network to connect to them. The internal IPs are only accessible from inside the OpenStack network.
See how to assign a floating IP to a VM here: http://docs.openstack.org/user-guide/content/floating_ip_allocate.html
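Roughly, with the legacy nova CLI (exact command names vary by release; the instance name and floating address below are hypothetical), that looks like:
# Allocate a floating IP from the default pool
nova floating-ip-create
# Associate it with the instance
nova add-floating-ip VM1 192.168.2.225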
To access the VM's floating IP from another host (that is not the DevStack host), you should make sure that the DevStack host is configured to forward packets. You can do this with:
sudo bash
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
See more details here:
http://barakme.tumblr.com/post/70895539608/openstack-in-a-box-setting-up-devstack-havana-on-your
Adding a route on the client machine to the OpenStack VM network helped me.
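In this setup that amounts to something like the following on the LAN machine (192.168.2.16), using the DevStack host as the gateway to the VM network, in combination with the forwarding settings from the previous answer:
sudo ip route add 10.0.0.0/24 via 192.168.2.15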

How to send data to a cloud instance in OpenStack

Context: I have set up a demo cloud on my laptop using VirtualBox and have two virtual machines: one is the client and the other is the server. I created a small instance using the server, and the running instance is TinyLinux.
Problem: How should I send data to that instance and store it on that instance?
Some pointers would be very helpful.
Well, with libvirt you have several options for how to do the networking. The default is to use NAT. In that case libvirt creates a bridge, plus a virtual NIC for every virtual NIC configured this way:
$ brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.525400512fc8 yes virbr0-nic
vnet0
It then sets up iptables rules to NAT (masquerade) the packets leaving that bridge:
Chain POSTROUTING (policy ACCEPT 19309 packets, 1272K bytes)
pkts bytes target prot opt in out source destination
8 416 MASQUERADE tcp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
216 22030 MASQUERADE udp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
11 460 MASQUERADE all -- any any 192.168.122.0/24 !192.168.122.0/24
It enables forwarding:
# cat /proc/sys/net/ipv4/ip_forward
1
and it spawns a DHCP server (dnsmasq serves as both DHCP and DNS in one):
ps aux | grep dnsmasq
nobody 1334 0.0 0.0 13144 568 ? S Feb06 0:00 \
/sbin/dnsmasq --strict-order --local=// --domain-needed \
--pid-file=... --conf-file= --except-interface lo --bind-dynamic \
--interface virbr0 --dhcp-range 192.168.122.2,192.168.122.254 \
--dhcp-leasefile=.../default.leases --dhcp-lease-max=253 --dhcp-no-override
If I had two virtual network interfaces (two machines with one NIC each on the same network), there would be two NICs in that bridge. The machines get their addresses in the range 192.168.122.2-254 from the dnsmasq DHCP server. So if you know those addresses, you should be able to connect from one VM to the other, as both are on the same broadcast domain (connected by the bridge). To the outside of your computer, the machines all appear as "one IP address".
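One way to see which addresses the VMs actually received, assuming a reasonably recent libvirt and the default network name default:
virsh net-dhcp-leases default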
The more "advanced" option is to use bridged networking, which again puts the virtual interfaces into one bridge, but it adds a physical device to it as well, so the machines appear as if several machines were connected to a switch.
I usually bind a web server to the gateway interface the VMs use to NAT with the physical host.
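A minimal sketch of that, assuming the libvirt default gateway address 192.168.122.1 and an arbitrary port:
# On the physical host: serve the current directory on the NAT gateway address
python3 -m http.server 8000 --bind 192.168.122.1
# From inside a VM
curl http://192.168.122.1:8000/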
