I would like to configure my OpenStack deployment to use IPs from the same network as my physical server. I do not want to use DHCP or floating IPs.
neutron net-create --tenant-id TENANT-ID --shared sharednet1 --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create sharednet1 10.68.10.0/24 --gateway-ip 10.68.10.11 --disable-dhcp
When creating an instance, Nova should be able to inject an IP into it.
Is that possible?
How do I configure it?
Please follow the basic installation guide at http://docs.openstack.org/ for Grizzly or Havana, in the networking section.
You can use the static IPs of your physical network by hard-coding the static values in the /etc/network/interfaces file, so that when you create your router it will use that IP.
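For the guest side, this is a minimal sketch of a static configuration in the instance's /etc/network/interfaces (Debian/Ubuntu style); the address chosen below is an assumption, the gateway matches the subnet-create command above:

```shell
# /etc/network/interfaces -- static addressing, no DHCP
auto eth0
iface eth0 inet static
    address 10.68.10.50      # assumed free address in 10.68.10.0/24
    netmask 255.255.255.0
    gateway 10.68.10.11      # gateway from the subnet-create above
    dns-nameservers 8.8.8.8
```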
I installed the Octavia service in OpenStack and it worked! But in my OpenStack port list there are two related ports (amphora and load balancer), and the LB port is DOWN. What's wrong?
Note that my load balancer shows ACTIVE and ONLINE statuses, but I don't know why its port is down or what effect that has.
Summary: This is completely normal and how Octavia manages high availability.
Octavia uses a VIP address that can be moved between amphora (service VMs) for recovery from hypervisor failures. Inside the amphora, the port gets this VIP address assigned as a "secondary" IP.
In neutron, this address is handled with a VIP port, which reserves the VIP IP address (so we can move it to a replacement amphora VM if needed). This port is in the "DOWN" status.
To allow multiple IPs on a neutron port you have to use what neutron calls an "allowed address pair". If you look at the details of the port that is up, you will see the "allowed address pair" setting on the port that references the VIP port information and IP.
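You can see this yourself by inspecting the two ports with the OpenStack CLI; the IDs below are placeholders for the ports from your own port list:

```shell
# The amphora's port is ACTIVE and carries the allowed address pair.
openstack port show <amphora-port-id> -c allowed_address_pairs -c status
# The VIP port only reserves the address, so its status is DOWN.
openstack port show <vip-port-id> -c fixed_ips -c status
```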
I am running DevStack on my machine and I would like to know if it is possible to ping an instance from the host. The default external network of DevStack is 172.24.4.0/24 and br-ex on the host has the IP 172.24.4.1. I launch an instance on the internal network of DevStack (192.168.233.0/24) and the instance gets the IP 192.168.233.100. My host's IP is 192.168.1.10. Is there a way to ping 192.168.233.100 from my host? Another thing I tried was to boot a VM directly on the external network (172.24.4.0/24), but the VM does not boot correctly; I can only use that network for associating floating IPs.
I have edited the security group and allowed ICMP and SSH, so that is not the problem.
I created a VM in Horizon, using Neutron's DHCP server to allocate the IP, and Horizon shows the following:
But in my VM console, it displays:
So I want to know why it doesn't have an IP address.
Check the Neutron logs; they should give some clues. The log files are usually located under /var/log on the Neutron host.
You should probably also check for dnsmasq-related logs and errors under /var/log.
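As a sketch of what to look for, the snippet below greps a hypothetical dnsmasq log excerpt written to a temp file; on a real host you would point grep at the actual files under /var/log (the interface name and MAC address here are made up):

```shell
# Write a hypothetical dnsmasq log excerpt to a temp file for illustration.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
dnsmasq-dhcp[1234]: DHCPDISCOVER(tap1a2b3c4d-5e) fa:16:3e:aa:bb:cc
dnsmasq-dhcp[1234]: DHCPOFFER(tap1a2b3c4d-5e) 192.168.1.5 fa:16:3e:aa:bb:cc
EOF
# A VM that never gets an address typically shows DHCPDISCOVER lines
# with no matching DHCPOFFER for its MAC.
MATCHES=$(grep -cE 'DHCP(DISCOVER|OFFER)' "$LOG")
echo "$MATCHES DHCP events found"
rm -f "$LOG"
```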
I have a server VLAN of 10.101.10.0/24 and my Docker host is 10.101.10.31. How do I configure a bridge network on my Docker host (a VM) so that all the containers can connect directly to my LAN without having to redirect ports on the default 172.17.0.0/16? I tried searching, but all the how-tos I've found so far resulted in a lost SSH session, and I had to go into the VM from a console to revert my steps.
There are multiple ways this can be done. The two I've had most success with are routing a subnet to a Docker bridge and using a custom bridge on the host LAN.
Docker Bridge, Routed Network
This has the benefit of only needing native Docker tools to configure networking. It has the downside of needing to add a route to your network, which is outside Docker's remit and usually manual (or relies on the "networking team").
Enable IP forwarding
/etc/sysctl.conf: net.ipv4.ip_forward = 1
sysctl -p /etc/sysctl.conf
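You can verify the setting took effect by reading it back from procfs; a value of 1 means forwarding is on:

```shell
# Read the current forwarding state back from procfs.
VAL=$(cat /proc/sys/net/ipv4/ip_forward)
echo "ip_forward=$VAL"
```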
Create a docker bridge with new subnet on your VM network, say 10.101.11.0/24
docker network create routed0 --subnet 10.101.11.0/24
Tell the rest of the network that 10.101.11.0/24 should be routed via 10.101.10.X, where X is the IP of your Docker host. This is the external router/gateway/"network guy" configuration. On a Linux gateway you could add the route with:
ip route add 10.101.11.0/24 via 10.101.10.31
Create containers on the bridge with 10.101.11.0/24 addresses.
docker run --net routed0 busybox ping 10.101.10.31
docker run --net routed0 busybox ping 8.8.8.8
Then you're done. Containers have routable IP addresses.
If you're OK with the network side, or you run something like RIP/OSPF on the network, or Calico, which takes care of routing, then this is the cleanest solution.
Custom Bridge, Existing Network (and interface)
This has the benefit of not requiring any external network setup. The downside is that the setup on the Docker host is more complex. The main interface requires this bridge at boot time, so it's not a native Docker network setup; Pipework or manual container setup is required.
Using a VM can make this a little more complicated, as you are running extra interfaces with extra MAC addresses over the main VM interface, which will need additional "promiscuous" configuration first to allow this to work.
The permanent network config for bridged interfaces varies by distro. The following commands outline how to set the interface up, and the changes will disappear after a reboot. You will need console access or a separate route into your VM, as you are changing the main network interface config.
Create a bridge on the host.
ip link add name shared0 type bridge
ip link set shared0 up
In /etc/sysconfig/network-scripts/ifcfg-shared0
DEVICE=shared0
TYPE=Bridge
BOOTPROTO=static
DNS1=8.8.8.8
GATEWAY=10.101.10.1
IPADDR=10.101.10.31
NETMASK=255.255.255.0
ONBOOT=yes
Attach the primary interface to the bridge, usually eth0
ip link set eth0 up
ip link set eth0 master shared0
In /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=shared0
Reconfigure your bridge to have eth0's ip config.
ip addr add dev shared0 10.101.10.31/24
ip route add default via 10.101.10.1
Attach containers to bridge with 10.101.10.0/24 addresses.
CONTAINERID=$(docker run -d --net=none busybox sleep 600)
pipework shared0 $CONTAINERID 10.101.10.43/24@10.101.10.Y
Or use a DHCP client inside the container
pipework shared0 $CONTAINERID dhclient
Docker macvlan network
Docker has since added a network driver called macvlan that can make a container appear to be directly connected to the physical network the host is on. The container is attached to a parent interface on the host.
docker network create -d macvlan \
--subnet=10.101.10.0/24 \
--gateway=10.101.10.1 \
-o parent=eth0 pub_net
This will suffer from the same VM/soft-switch problems, where the network and interface will need to be promiscuous with regard to MAC addresses.
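As a usage sketch, a container can then be started directly on the macvlan network, optionally with a fixed address; 10.101.10.50 below is an assumed free address on the LAN:

```shell
# Assumes pub_net was created as above and 10.101.10.50 is unused.
docker run --rm --net pub_net --ip 10.101.10.50 busybox ip addr show eth0
```

One macvlan caveat worth knowing: by design, containers on the macvlan network can reach other LAN hosts but not the Docker host itself over that interface.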
I am deploying OpenStack Havana over Ubuntu Server 12.04 LTS following the official documentation (http://docs.openstack.org/havana/install-guide/install/apt/content/index.html). I'm using a single-node installation, so one physical machine is acting as controller node and compute node at the same time.
Right now I have everything working except for the network. I should remark that I am not using Neutron, just Nova Network. Also, I should say I'm far from being a networking expert.
The problem is the following: in my enterprise, as far as I know, every device has a public IP. That is, there are no IPs such as 192.168.X.X or 10.0.X.X. Rather, all IPs are located in a public subnet, say A.B.0.0/16. In particular, my department has the subnet A.B.C.0/24 assigned, so all our devices should be assigned an IP in that range. The gateway has the IP A.B.C.2.
So far, I have not been able to configure the network correctly. What I would like to do is the following:
Using nova network-create, create a new network which is the same one as the physical machine's:
nova network-create vmnet --fixed-range-v4=A.B.C.0/24 --gateway=A.B.C.2 --dns1=8.8.8.8 --dns2=4.4.4.4
Then, assign IPs manually to each virtual machine. If IPs were assigned automatically in that subnet, they could conflict with the IPs of existing computers. So what I would like is pretty much what I can do with VirtualBox when I set up the adapter as a "Bridged Adapter", i.e., assigning an IP manually in the guest OS.
Is that even possible?
Thanks a lot.
Use Neutron networking, and specifically the OVS plugin, because the instructions below will only work with it.
You have to set up the OVS plugin with the following configuration in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
[OVS]
tenant_network_type = gre
network_vlan_ranges = EXTNet
enable_tunneling = True
tunnel_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
bridge_mappings = EXTNet:br-ex
local_ip = <your machine IP here>
Note the bridge_mappings entry: it maps EXTNet to br-ex. Later you will use EXTNet as the provider physical network when creating your network in OpenStack. For now, you have to add one of your host's interfaces that is connected to your enterprise network to br-ex. After adding it you may not be able to access your host through that interface, so always use a secondary interface for this.
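Attaching the interface to br-ex can be done with ovs-vsctl; eth1 here is an assumed spare interface connected to the enterprise network:

```shell
# Create the external bridge if it does not exist yet, then attach eth1.
ovs-vsctl --may-exist add-br br-ex
ovs-vsctl --may-exist add-port br-ex eth1
```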
Once you are done with the setup do the following.
quantum net-create EXTNet --provider:physical_network EXTNet --provider:network_type flat
quantum net-update EXTNet --router:external True
quantum net-update EXTNet --shared True
quantum subnet-create --name EXTSubnet --gateway <external network gateway> EXTNet <external network CIDR> --enable_dhcp False
There may be other ways of doing this, but I have tested this approach and hence recommend it.
Once you have successfully created the subnet, just launch instances in it.
One thing to note: since you have disabled DHCP in your subnet, OpenStack will not run dnsmasq on it, and hence you will have to provide your own DHCP server (or configure addresses statically in the guests).
Second, since the network_type is flat, there won't be any VLAN tags. The packets from your instance will flow as-is on your external network, which is what you want.
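To pin a specific address to an instance rather than letting Neutron pick one, one approach is to pre-create a port with a fixed IP and boot against it; the address, image, and flavor below are placeholders:

```shell
# Reserve A.B.C.50 on the external network, then boot with that port.
quantum port-create EXTNet --fixed-ip ip_address=A.B.C.50
nova boot --image <image> --flavor <flavor> \
  --nic port-id=<port-uuid-from-previous-command> myinstance
```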