Can I change the router's interface IP? - openstack

Can I change the router's interface IP to a new one?
Whether in the dashboard or from the terminal, is there a way to change this IP?

Yes, you can.
1: neutron router-list
note down the ID of the router whose interface IP address you want to change.
2: ip netns
to list the network namespaces. The router's namespace will be qrouter-{idOfRouter}.
3: ip netns exec qrouter-{idOfRouter} ifconfig
to check that the selected namespace belongs to the desired router.
4: ip netns exec qrouter-{idOfRouter} ifconfig <port-name> <newIPInSameSubnet>
to change the IP address of the interface.
5: ip netns exec qrouter-{idOfRouter} ifconfig <port-name>
to check whether the interface IP has changed.
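For example, with a hypothetical router ID 1a2b3c4d and a hypothetical interface name qr-5e6f7a8b (yours will differ), a session might look like:
ip netns exec qrouter-1a2b3c4d ifconfig
ip netns exec qrouter-1a2b3c4d ifconfig qr-5e6f7a8b 172.16.1.254
ip netns exec qrouter-1a2b3c4d ifconfig qr-5e6f7a8b
Bear in mind that a change made with ifconfig inside the namespace is not written back to Neutron's database, so the L3 agent may restore the original address when it resyncs.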
This might help.

Related

Ping external IPv6 address from a network namespace

I need to reach an external IPv6 address from a network namespace.
On my host I have set up a SIT tunnel (IPv6-in-IPv4) that tunnels IPv6 packets and sends them through the default interface (eth0). The SIT tunnel relies on the Hurricane Electric tunnel broker service. I can ping an external IPv6 address from the host.
$ ping6 ipv6.google.com
PING ipv6.google.com(lis01s13-in-x0e.1e100.net) 56 data bytes
64 bytes from lis01s13-in-x0e.1e100.net: icmp_seq=1 ttl=57 time=98.1 ms
64 bytes from lis01s13-in-x0e.1e100.net: icmp_seq=2 ttl=57 time=98.7 ms
Here are some details about the tunnel:
$ ip -6 route sh
2001:470:1f14:10be::/64 dev he-ipv6 proto kernel metric 256
default dev he-ipv6 metric 1024
Here comes the interesting part. For reasons that are beyond the scope of this question, I need to do the same thing (ping ipv6.google.com) from within a network namespace.
Here is how I create and setup my network namespace:
ns1.sh
#!/bin/bash
set -x
if [[ $EUID -ne 0 ]]; then
echo "You must run this script as root."
exit 1
fi
# Create network namespace 'ns1'.
ip netns del ns1 &>/dev/null
ip netns add ns1
# Create veth pair.
ip li add name veth1 type veth peer name vpeer1
# Setup veth1 (host).
ip -6 addr add fc00::1/64 dev veth1
ip li set dev veth1 up
# Setup vpeer1 (network namespace).
ip li set dev vpeer1 netns ns1
ip netns exec ns1 ip li set dev lo up
ip netns exec ns1 ip -6 addr add fc00::2/64 dev vpeer1
ip netns exec ns1 ip li set vpeer1 up
# Make vpeer1 default gw.
ip netns exec ns1 ip -6 route add default dev vpeer1
# Get into ns1.
ip netns exec ns1 /bin/bash --rcfile <(echo "PS1=\"ns1> \"")
Then I run ns1.sh and, from 'ns1', ping veth1 (fc00::1) and vpeer1 (fc00::2).
ns1> ping6 fc00::1
PING fc00::1(fc00::1) 56 data bytes
64 bytes from fc00::1: icmp_seq=1 ttl=64 time=0.075 ms
^C
ns1> ping6 fc00::2
PING fc00::2(fc00::2) 56 data bytes
64 bytes from fc00::2: icmp_seq=1 ttl=64 time=0.056 ms
However, if I try to ping an external IPv6 address:
ns1> ping6 2a00:1450:4004:801::200e
PING 2a00:1450:4004:801::200e(2a00:1450:4004:801::200e) 56 data bytes
^C
--- 2a00:1450:4004:801::200e ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1008ms
All packets are lost.
I opened veth1 with tcpdump and checked what's going on. What I'm seeing is that Neighbor Solicitation packets are reaching the interface, trying to resolve the MAC address of the IPv6 destination address:
$ sudo tcpdump -qns 0 -e -i veth1
IPv6, length 86: fc00::2 > ff02::1:ff00:200e: ICMP6, neighbor solicitation,
who has 2a00:1450:4004:801::200e, length 32
IPv6, length 86: fc00::2 > ff02::1:ff00:200e: ICMP6, neighbor solicitation,
who has 2a00:1450:4004:801::200e, length 32
I don't really understand why this is happening. I have enabled IPv6 forwarding in the host too but it had no effect:
$ sudo sysctl -w net.ipv6.conf.default.forwarding=1
Thanks for reading this question. Suggestions welcome :)
EDIT:
Routing table in the host:
2001:470:1f14:10be::/64 dev he-ipv6 proto kernel metric 256
fc00::/64 dev veth1 proto kernel metric 256
default dev he-ipv6 metric 1024
I added an NDP proxy on the host, which solves the NDP solicitations. Still the address is not reachable from the netns (looking into this):
sudo sysctl -w net.ipv6.conf.all.proxy_ndp=1
sudo ip -6 neigh add proxy 2a00:1450:4004:801::200e dev veth1
ULAs are not routable
You have given a Unique Local Address (fc00::2) to the network namespace: this IP address is not routable on the global internet but only in your local network.
When your ISP receives the ICMP packet coming from this address it will drop it. Even if this packet was successfully reaching ipv6.google.com, it could not possibly send the answer back to you because there is no announced route for this IP address.
Routing table problem (NDP solicitations)
You get NDP solicitations because of this line:
ip netns exec ns1 ip -6 route add default dev vpeer1
which tells the kernel that (in the netns) all IP addresses are directly connected on the vpeer1 interface. The kernel thinks the destination address is present on the Ethernet link: that's why it's trying to resolve its MAC address with NDP.
Instead, you want to say that they are reachable through a given router (in your case, the router is your host):
ip netns exec ns1 ip -6 route add default dev vpeer1 via $myipv6
Solutions
You can either:
associate a public IPv6 address (from your public IPv6 prefix) with your netns and set up an NDP proxy on the host for this address;
subnet your IPv6 prefix and route a subnet to your host (if you can);
use NAT (bad, ugly, don't do that).
You should be able to achieve the first one using something like this:
#!/bin/bash
set -x
myipv6=2001:470:1f14:10be::42
peeripv6=2001:470:1f14:10be::43
#Create the netns:
ip netns add ns1
# Create and configure the local veth:
ip link add name veth1 type veth peer name vpeer1
ip -6 address add $myipv6/128 dev veth1
ip -6 route add $peeripv6/128 dev veth1
ip li set dev veth1 up
# Setup vpeer1 in the netns:
ip link set dev vpeer1 netns ns1
ip netns exec ns1 ip link set dev lo up
ip netns exec ns1 ip -6 address add $peeripv6/128 dev vpeer1
ip netns exec ns1 ip -6 route add $myipv6/128 dev vpeer1
ip netns exec ns1 ip link set vpeer1 up
ip netns exec ns1 ip -6 route add default dev vpeer1 via $peeripv6
# IP forwarding ('all' rather than 'default', so already-existing interfaces forward too)
sysctl -w net.ipv6.conf.all.forwarding=1
# NDP proxy for the netns
ip -6 neigh add proxy $myipv6 dev veth1
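If the setup works, the original test should now succeed from inside the namespace:
ip netns exec ns1 ping6 ipv6.google.com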

Configure LXC to use wireless hosted network

Most of the configurations I found are for a static or private network. But I want the container to act as a separate machine, so that it gets its own IP address from the DHCP server, and I want to do it through nmcli.
Thanks in advance.
If you are using docker as tagged, rather than LXC, use pipework to map the wlan interface from the host to the container:
pipework eth2 $CONTAINERID 10.10.9.9/24
or, alternatively, let the container do the DHCP negotiation for you:
pipework eth1 $CONTAINERID dhclient
This setup is based on a macvlan interface, so the same concept should work with LXC; you just won't get the easy front end.
I'm confused whether this is a docker question or an LXC question.
EDIT: as per the comments, wlan interface support in a bridge depends on the wlan vendor. It may work, or it may not work at all.
In any case, you should be able to create a bridge, add your wlan0 interface to the bridge, and then have your LXC container connect to this bridge directly. Then, when you run your DHCP client in the container, it will grab it from the wlan0 interface.
Configure bridge (manually for now)
# ifconfig wlan0 up
# brctl addbr br0
# brctl addif br0 wlan0
# ifconfig br0 up
# dhclient br0
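You can verify that the bridge is up and got a lease with something like:
# brctl show br0
# ip addr show br0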
Update the LXC configuration
If using a traditional privileged LXC container, edit the container's config file at /var/lib/lxc/$NAME/config
and update this value to point to your new bridge:
lxc.network.link = br0
Run DHCP in container
# lxc-attach -n $NAME
# dhclient eth0
# ip a
If the output of ip a shows the desired IP, you're all set!
If you want to make the configuration persistent, you'll have to add the bridge to your /etc/network/interfaces file.
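A minimal sketch of such a stanza, assuming Debian-style ifupdown with bridge-utils installed (interface names as above):
auto br0
iface br0 inet dhcp
    bridge_ports wlan0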
IEEE 802.11 doesn't like multiple MAC addresses on a single client, so bridges and macvlans are not the right solution here.
Use ipvlan in L2 mode.
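A rough sketch of the ipvlan approach, using a plain network namespace to stand in for the container (interface and namespace names are examples). Because ipvlan interfaces share the host's MAC address, this avoids the multiple-MAC problem on 802.11:
ip link add link wlan0 name ipvl0 type ipvlan mode l2
ip netns add guest
ip link set ipvl0 netns guest
ip netns exec guest ip link set ipvl0 up
ip netns exec guest dhclient ipvl0
One caveat: since every ipvlan slave shares the host's MAC, the DHCP server must be able to hand out leases per client identifier rather than per MAC address for the container to get its own IP.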

OpenStack instance gets a fixed IP but no IP inside the guest

I am new to OpenStack and I followed the Icehouse installation guide for Ubuntu 12.04/14.04.
I chose the 3-node architecture: controller, nova, neutron.
The 3 nodes are installed in VMs, using nested KVM. Inside the VMs KVM is supported, so nova will use virt_type=kvm. On the controller I created 2 NICs: eth0 is a NAT interface with IP 203.0.113.94 and eth1 a host-only interface with IP 10.0.0.11.
The nova node has 3 NICs: eth0 NAT 203.0.113.23, eth1 host-only 10.0.0.31 and eth2 another host-only 10.0.1.31.
The neutron node has 3 NICs: eth0 NAT 203.0.113.234, eth1 host-only 10.0.0.21 and eth2 another host-only 10.0.1.21. (During installation, on the neutron node I created a br-ex with a port to eth0, which took over the settings of eth0; the eth0 settings are:
auto eth0
iface eth0 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down)
Everything seemed fine: I can create networks, routers, etc., and boot instances, but I have this problem.
When I launch an instance it gets a fixed IP, but when I log into the instance (CirrOS) I can't ping anything; ifconfig shows no IP.
I noticed that in demo-net (the tenant network), under the subnet's ports field, there are 3 ports:
172.16.1.1 network:router_interface active
172.16.1.3 network:dhcp active
172.16.1.6 compute:nova down
I searched for solutions over the net but couldn't find anything!
Any help?
Ask me if you want specific logs because I don't know which ones to post!
Thanks anyway!
It looks like you are using the fixed IP to ping. If so, please assign a floating IP to your instance and then try to ping.
If you have already assigned a floating IP and you are pinging with that IP, please upload the log of your instance.
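For reference, with the Icehouse-era clients the sequence is roughly as follows (the external network and instance names here are examples):
neutron floatingip-create ext-net
nova add-floating-ip demo-instance1 203.0.113.102
The floating IP printed by the first command is the one you associate and then ping from outside.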

XEN VM networking, public IP binding

I need some information about routing public IP addresses assigned to the hypervisor into a VM.
I have installed the XEN hypervisor on CentOS 6.5. I have one NIC with IP 80.86.84.34, mask 255.255.255.0, and an additional IP 85.25.14.195 with mask 255.255.255.255.
Dom0 has eth0 & virbr0 with a virtual DHCP; the VM has address 192.168.122.4, mask 255.255.255.0, and a working outbound internet connection.
How do I correctly set up dom0 to route connections for 85.25.14.195 into the VM?
Many thanks for your help and apologies if this is a basic question that has been answered before, please point me in the right direction.
First EDIT
I have managed to route the public IP by adding the route below in Dom0; DomU now correctly responds to packets received by Dom0 for the public IP and forwarded over virbr0.
route add -net 85.25.14.195 gw 192.168.122.1 netmask 255.255.255.255
My follow-up question is: what rule is required in iptables to allow the traffic? Currently it is blocked when the firewall is running.
Second EDIT
OK, so I figured out the iptables part: I had to remove the REJECT line on virbr0, and I also had to add the following rule to make the outbound IP from Dom0 appear correctly:
-A POSTROUTING -s 192.168.122.2 -p tcp -j SNAT --to 85.25.14.195
You should be able to assign 85.25.14.195 as an alias IP on virbr0 (maybe virbr0:1) and do simple IP NAT or forwarding. You need to run sysctl -w net.ipv4.ip_forward=1 to be able to forward traffic coming in on the public IP to the internal private IP.
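A sketch of that approach, using the addresses from the question (exact rules depend on your existing chains):
sysctl -w net.ipv4.ip_forward=1
ip addr add 85.25.14.195/32 dev virbr0 label virbr0:1
iptables -t nat -A PREROUTING -d 85.25.14.195 -j DNAT --to-destination 192.168.122.4
iptables -t nat -A POSTROUTING -s 192.168.122.4 -j SNAT --to-source 85.25.14.195
iptables -A FORWARD -d 192.168.122.4 -j ACCEPT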

How to forward packets from eth0 to a tun/tap device?

Here is the topology: HostA(eth0) ---- (eth0)HostB
I have created a tun/tap device on HostB, say tun0 or tap0. When eth0 of HostB receives a packet from HostA, maybe an ICMPv6 packet (NS, echo request, etc.) or a UDP/TCP packet (encapsulated in an IPv6 header), I want to forward this packet from eth0 to tap0. After doing something with the packet, I also want to send a reply back to HostA, through tap0 and eth0.
I cannot find a way to do that; can someone help me or give some hints?
This is an extremely basic routing question, probably unsuitable for Stack Overflow.
You need something like this on Host B:
HostB# sysctl -w net.ipv6.conf.all.forwarding=1
HostB# ip -6 addr add 2001:db8:0:0::1/64 dev eth0
HostB# ip -6 addr add 2001:db8:0:1::1/64 dev tun0
Then on Host A:
HostA# ip -6 addr add 2001:db8:0:0::2/64 dev eth0
HostA# ip -6 route add default via 2001:db8:0:0::1 dev eth0
HostA# ping6 2001:db8:0:1::2 # <-- should work if that host exists on tun0
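If tun0 does not exist yet, it can be created and brought up with iproute2 before assigning the address:
HostB# ip tuntap add dev tun0 mode tun
HostB# ip link set tun0 up
Note that a tun device only carries traffic while a userspace program holds it open, which is where the "do something with the packet" part of the question happens.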
