I was setting up an OpenStack configuration using this document:
http://docs.openstack.org/juno/install-guide/install/apt/openstack-install-guide-apt-juno.pdf
and got to page 72 where it says to do this:
# ovs-vsctl add-port br-ex INTERFACE_NAME
but I put the wrong INTERFACE_NAME, and now I have to correct that mistake (it is a real interface, just the wrong one; I was supposed to use a different one).
But I am having trouble undoing that.
I tried:
# ovs-vsctl del-port br-ex INTERFACE_NAME
but it tells me that /etc/openvswitch/conf.db (or something like that) is read-only
I then tried
# ovs-vsctl del-br br-ex
But then it says that you can't just delete a port, you need to delete the entire bridge (or something like that), which is weird to me; I thought that command would delete the bridge...
So does anyone know the correct way to delete the port I mistakenly created?
EDIT: And I tried all that as root.
EDIT2: I just tried the same thing on a practice machine: made the same mistake and then fixed it with:
# ovs-vsctl del-port br-ex INTERFACE_NAME
and it worked, no read-only nonsense, so I really don't get it.
Any suggestions?
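One thing worth checking when conf.db shows up as read-only (just a guess that the OVS daemons or the filesystem are to blame; the service name differs per distro, openvswitch-switch on Ubuntu, openvswitch on RHEL-based systems) is whether the daemons are actually running and whether the database file is writable:
# service openvswitch-switch status
# ls -l /etc/openvswitch/conf.db
If the daemons were down, restarting them and retrying the delete usually clears it:
# service openvswitch-switch restart
# ovs-vsctl del-port br-ex INTERFACE_NAME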
Related
I am not able to find anything related to this in the packstack answer file. Let's say that with normal OVS, eno2 and eno3 were mapped to physnet1, but now I am using those ports with the configuration below:
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 eno2 -- set Interface eno2 type=dpdk options:dpdk-devargs=0000:02:00.1
ovs-vsctl add-port br0 eno3 -- set Interface eno3 type=dpdk options:dpdk-devargs=0000:02:00.2
How do I proceed with the answer file?
Can I keep it the same and configure the network on physnet1 the same way?
BTW, I have installed and enabled OVS-DPDK on the compute machine but haven't made any change on the controller; do I need any change there as well?
My controller node shows the compute node status as down after the compute node was upgraded/configured for OVS-DPDK, even though it can still ping it. I also restarted rabbitmq-server, but that didn't help.
If no change is needed on the controller, then how can I associate the bridge created above with a VM instance, since commands like openstack server add port need to be executed on the controller? It looks like I haven't fully read up on OVS-DPDK usage.
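Not a full answer, but the answer-file side of this is normally just the bridge mapping. A minimal sketch, assuming br0 is meant to back physnet1 (check the exact option name against your packstack version):
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br0
which should end up as the following in the Open vSwitch agent configuration on the compute node:
[ovs]
bridge_mappings = physnet1:br0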
I'm starting to "play" with Open vSwitch and networking stuff, so I am a bit of a newbie. I'm trying to build the implementation depicted in this figure:
(figure: implementation)
Both interfaces are physical with a private IP assignment.
*The enO interface is a management interface.
I use this to create the OVS bridge:
ovs-vsctl add-br ovsBr
ip add add 192.168.200.1/24 dev ovsBr
What should I do next?
Thank you in advance
If it's an educational task of sorts, then I believe one can't really suggest a correct solution without seeing the textual description of the task. However, if you need to create a bridge and add enp6s0f1 and enp6s0f2 to it, then you probably want to do the following:
ovs-vsctl add-br ovsBr
ovs-vsctl add-port ovsBr enp6s0f1
ovs-vsctl add-port ovsBr enp6s0f2
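After that, you will probably also want to bring the bridge and both ports up (a sketch; the address assignment from the question stays on ovsBr):
ip link set ovsBr up
ip link set enp6s0f1 up
ip link set enp6s0f2 up
Note that any IP addresses previously configured on enp6s0f1 or enp6s0f2 should normally be removed, since the bridge interface, not the physical port, carries the address once the port is attached.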
I am facing a strange problem.
What I've done:
I deployed a Rancher K3s cluster and there is a problem with DNS resolution in Debian-based images. Domains are not resolved properly; a suffix with one of our domains gets appended to them.
What I've found:
Debian-based images append the domain suffix to the end, e.g. I ping google.com and it actually pings google.com.example.com (example.com is one of our domains; I'm not specifying the real one because it is not important, imo).
The same happens with curl: curl google.com makes a request to the IP address of example.com. I even tried a plain Debian image and it still shows the same issue.
Alpine-based images work fine (ping to google.com pings google.com, nslookup shows the right IP address).
The host server where k3s is installed also works fine (Red Hat OS); ping to google.com pings google.com.
Some additional data that may help:
CoreDNS configmap
kubectl -n kube-system get configmap coredns -o go-template={{.data.Corefile}}
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    hosts /etc/coredns/NodeHosts {
        reload 1s
        fallthrough
    }
    prometheus :9153
    forward . 8.8.8.8
    cache 30
    loop
    reload
    loadbalance
}
Has anyone faced the same or a similar problem?
Do you have any pointers to push me towards a solution?
Thanks,
David
I faced similar issues with k3s (v1.19.3+k3s3) on CentOS 8 (not quite sure it has anything to do with the images' OS, though). k3s is a bit less plug-and-play than other distributions like microk8s.
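A quick way to confirm it is the same class of problem is to look at the resolv.conf that Kubernetes injects into an affected pod (the pod name here is just a placeholder):
kubectl exec <debian-pod> -- cat /etc/resolv.conf
If your example.com shows up in the search line alongside ndots:5, a name like google.com gets the search suffix tried first, which would explain google.com.example.com answering, assuming example.com has a wildcard DNS record.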
Use local DNS parameter
On each node, you can tell k3s to use the host's resolver configuration. If k3s is managed as a systemd service (which is probably the case), you can just edit /etc/systemd/system/k3s.service.env to point it at your system's resolv.conf:
K3S_RESOLV_CONF=/etc/resolv.conf
and then restart the service:
sudo systemctl restart k3s
Pros: the easiest solution, easily scriptable.
Cons: you'll need to do it on each of your nodes (from what I understand). Different resolv.conf files on different systems mean that the very same deployment might not behave the same way depending on the nodes Kubernetes schedules it on.
relevant documentation
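If you have more than a couple of nodes, the edit is easy to script over SSH (hostnames are placeholders; note that on agent nodes the unit is typically k3s-agent rather than k3s):
for node in node1 node2 node3; do
  ssh root@$node 'echo K3S_RESOLV_CONF=/etc/resolv.conf >> /etc/systemd/system/k3s.service.env && systemctl restart k3s'
done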
Use Global DNS
I haven't tried it, but here is the doc.
Following the official documentation, I'm trying to deploy DevStack on Ubuntu 18.04 Server on a virtual machine. The DevStack node has only one network card (ens160) connected to a network with the CIDR 10.20.30.40/24. I need my instances to be publicly accessible on this network (from 10.20.30.240 to 10.20.30.250). So again, following the official floating-IP documentation, I came up with this local.conf file:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
PUBLIC_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.40/24
PUBLIC_NETWORK_GATEWAY=10.20.30.1
Q_FLOATING_ALLOCATION_POOL=start=10.20.30.240,end=10.20.30.250
This results in a br-ex bridge with the global IP address 10.20.30.40 and the secondary IP address 10.20.30.1 (the gateway already exists on the network; isn't the PUBLIC_NETWORK_GATEWAY parameter supposed to refer to the real gateway on the network?).
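The addresses that actually end up on br-ex can be checked after stack.sh finishes (just a sanity check):
ip addr show br-ex
sudo ovs-vsctl show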
Now, after a successful deployment, disabling ufw (according to this), creating a CirrOS instance with a proper security group for ping and SSH, and attaching a floating IP, I can only access my instance from the DevStack node, not from the rest of the network! Also, from within the CirrOS instance I cannot access the outside world (even though I can access the outside world from the DevStack node).
Afterwards, watching this video, I modified the local.conf file like this:
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
FLAT_INTERFACE=ens160
HOST_IP=10.20.30.40
FLOATING_RANGE=10.20.30.240/28
After a successful deployment and instance setup, I can still access my instance only from the DevStack node and not from the outside! But the good news is that I can now access the outside world from within the CirrOS instance.
Any help would be appreciated!
Update
With the second configuration, watching packets in tcpdump while pinging the instance's floating IP, I observed that the who-has (ARP) broadcast for the instance's floating IP reaches the DevStack node from the network router; however, no is-at reply is generated, and thus the ICMP packets are never routed to the DevStack node and the instance.
So, with some tricks I crafted the reply myself and everything worked fine afterwards; but this certainly isn't a solution, and I imagine DevStack should work out of the box without any tweaking, so this is probably caused by a misconfiguration on my side.
After 5 days of tests, research and reading, I found this: Openstack VM is not accessible on LAN
Enter the following commands on the DevStack node:
echo 1 > /proc/sys/net/ipv4/conf/ens160/proxy_arp
iptables -t nat -A POSTROUTING -o ens160 -j MASQUERADE
That'll do the trick!
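If you want those two settings to survive a reboot (a sketch for Ubuntu 18.04; package names differ elsewhere):
echo 'net.ipv4.conf.ens160.proxy_arp = 1' > /etc/sysctl.d/99-proxy-arp.conf
apt install iptables-persistent
netfilter-persistent save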
Cheers!
I could connect to CoreOS through PuTTY on Windows 10.
But after changing from DHCP to a static IP in CoreOS,
I suddenly became unable to use SSH through PuTTY (I cannot connect to CoreOS through PuTTY on Windows 10).
I wonder why this happened, and how I could solve this problem.
I investigated the status of SSH in CoreOS and it says inactive.
What should I do to solve this problem?
If anyone knows please help me.
I have no clue... TT
If your sshd is inactive, you might be able to restart it. I'd be interested whether you used networkd (as documented here) when you changed from DHCP to static IP, as I think that should have been picked up automagically by CoreOS.
If the following command shows sshd as "inactive (dead)":
sudo systemctl status sshd
You can start sshd with:
sudo systemctl start sshd
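If you also want it to start automatically on boot, and assuming sshd runs as a plain service on your image (on some CoreOS builds it is socket-activated via sshd.socket instead), you can additionally run:
sudo systemctl enable sshd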
And just in case you need it here is documentation on how to customize the ssh daemon.
Are you sure that your network unit is formatted correctly and is being accepted?
If you added the network unit manually, did you restart networkd afterwards?
sudo systemctl restart systemd-networkd
Are you using cloud-config to add the network unit? See if there is anything in the journal:
journalctl _EXE=/usr/bin/coreos-cloudinit
You can also validate your cloud-config here: https://coreos.com/validate/
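For reference, a minimal static-IP network unit (a sketch only; the interface name, address, gateway and DNS server are placeholders to replace with your own), e.g. saved as /etc/systemd/network/10-static.network, looks roughly like this:
[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=8.8.8.8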