OpenStack Xena
CentOS 8 Stream
On the controller, I'm trying to query the metadata service, but I'm getting a 404 error. How can I fix this?
[root@os-controller-01 conf.d(keystone)]# ip netns exec qdhcp-2f730236-cc81-4e29-b5eb-e47126db582f curl http://169.254.169.254/openstack
<html>
<head>
<title>404 Not Found</title>
</head>
<body>
<h1>404 Not Found</h1>
The resource could not be found.<br /><br />
</body>
</html>
[root@os-controller-01 conf.d(keystone)]# ip netns exec qdhcp-2f730236-cc81-4e29-b5eb-e47126db582f ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ns-3e21dd54-4b@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fa:16:3e:78:ca:4b brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.10.85.10/24 brd 10.10.85.255 scope global ns-3e21dd54-4b
valid_lft forever preferred_lft forever
inet 169.254.169.254/32 brd 169.254.169.254 scope global ns-3e21dd54-4b
valid_lft forever preferred_lft forever
inet6 fe80::a9fe:a9fe/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe78:ca4b/64 scope link
valid_lft forever preferred_lft forever
My metadata_agent.ini config:
[root@os-controller-01 conf.d(keystone)]# cat /etc/neutron/metadata_agent.ini | egrep -v "(^#.*|^$)"
[DEFAULT]
nova_metadata_host = os-controller-01
metadata_proxy_shared_secret = mypass
debug = true
[cache]
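A few hedged checks that often narrow this down; the host name comes from the config above, and 8775 is Nova's default metadata port:
systemctl status neutron-metadata-agent
curl http://os-controller-01:8775/
Note also that the metadata service identifies the caller by source IP. A curl from inside the qdhcp namespace originates from the DHCP port (10.10.85.10 here), which maps to no instance, so a 404 there can be expected even when the service is healthy; testing from inside a VM is more conclusive. For instances on isolated networks, enable_isolated_metadata = True must also be set in the DHCP agent's dhcp_agent.ini.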
I have a Raspberry Pi 4 running Ubuntu 21.10 with a static IP address on eth0. Despite that, I keep getting a secondary 'dynamic' DHCP address on it.
netplan
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 192.168.0.10/24
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        search: [lan]
        addresses: [192.168.0.12]
ip addr show
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether dc:a6:32:da:df:55 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.10/23 brd 192.168.1.255 scope global eth0
valid_lft forever preferred_lft forever
inet 192.168.0.225/23 brd 192.168.1.255 scope global secondary dynamic eth0
valid_lft 68727sec preferred_lft 68727sec
inet6 fe80::dea6:32ff:feda:df55/64 scope link
valid_lft forever preferred_lft forever
Even if I delete that address, it keeps coming back after a few minutes. I have another Pi with the "same" configuration that doesn't have this problem.
I also have the /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg file in place per the instructions.
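For reference, that file is normally expected to contain exactly the documented one-liner:
network: {config: disabled}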
Have you tried using false instead of no for the dhcp4 entry in your netplan config?
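That is, a minimal sketch of the changed lines (file path as on your system), followed by reapplying the config:
    eth0:
      dhcp4: false
sudo netplan apply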
Problem: I want two IPs so that I can run two servers on my LAN. Apparently my ISP doesn't allow static IPs, so I need to use DHCP to obtain my second IP.
What I have learned so far:
In order to get two distinct IP addresses with DHCP, you need two different MACs (or client IDs?)
You can't have two MACs on a single interface, so you need to put your internet facing interface into promiscuous mode and somehow get that traffic to a virtual interface with its own MAC.
Once the traffic gets to my virtual interface, I can just assign it to WAN firewall zone (OpenWRT thingie, not so important) for ez profit.
But here is the hard part: to separate my LAN from my WAN, OpenWrt configures two different VLANs by default. The LAN VLAN is eth0.1 and the WAN VLAN is eth0.2.
The final question is: how do I configure my system? Do I put eth0 into promiscuous mode, or eth0.2, or both? Or is my premise completely wrong? How do I create the said virtual interface? Below is my ip addr extract.
root@TopLevelRouter:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP qlen 1000
link/ether [REDACTED] brd ff:ff:ff:ff:ff:ff
inet6 [REDACTED]/64 scope link
valid_lft forever preferred_lft forever
9: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether [REDACTED] brd ff:ff:ff:ff:ff:ff
inet6 [REDACTED]/64 scope link
valid_lft forever preferred_lft forever
10: wlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether [REDACTED] brd ff:ff:ff:ff:ff:ff
inet6 [REDACTED]/64 scope link
valid_lft forever preferred_lft forever
16: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether [REDACTED] brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global eth0.1
valid_lft forever preferred_lft forever
inet6 [REDACTED]/60 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 [REDACTED]/64 scope link
valid_lft forever preferred_lft forever
17: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether [REDACTED] brd ff:ff:ff:ff:ff:ff
inet [external IP 1]/24 brd [redacted].255 scope global eth0.2
valid_lft forever preferred_lft forever
inet6 [REDACTED]/64 scope link
valid_lft forever preferred_lft forever
I solved it, finally.
Full solution on my blog.
And a web archive link in case my blog doesn't exist when you read this
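For readers who can't reach the blog: one common technique for obtaining a second DHCP lease is a macvlan child interface on the WAN VLAN, which gets its own auto-generated MAC. This is only a sketch of the general approach, not necessarily the blog's solution, and wan2 is a made-up name:
ip link add link eth0.2 name wan2 type macvlan mode bridge
ip link set dev wan2 up
udhcpc -i wan2    # BusyBox DHCP client, as shipped with OpenWrt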
I'm using docker 1.12.1 in swarm mode.
When I run the following command:
docker network create --driver overlay --subnet 10.0.9.0/24 --opt encrypted services
and then
docker service create --name nginx nginx
then run the command ip address in the running container (on the correct node), the result is:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
234: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1424 qdisc noqueue state UP group default
link/ether 02:42:0a:00:09:03 brd ff:ff:ff:ff:ff:ff
inet 10.0.9.3/24 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.9.2/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fe00:903/64 scope link
valid_lft forever preferred_lft forever
236: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.3/16 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:3/64 scope link
valid_lft forever preferred_lft forever
Can anyone please explain why eth0 has two IP addresses in this case, 10.0.9.3/24 and 10.0.9.2/32?
This causes a problem, because when I run more instances there are overlapping addresses, which breaks my running service.
One is the VIP (virtual IP) used for the service; here that is the /32 address.
The other is the container's own task address, used only internally. From the application's perspective, clients should use the service IP.
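You can confirm the VIP in the service's endpoint data, and if you don't want a VIP at all, DNS round-robin mode hands out only the task IPs (a sketch, assuming the nginx service and services network from above):
docker service inspect nginx --format '{{json .Endpoint.VirtualIPs}}'
docker service rm nginx
docker service create --name nginx --endpoint-mode dnsrr --network services nginx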
I have configured an OVS bridge for LXC containers this way: LXC with Open vSwitch.
This is the bridge configuration:
# ovs-vsctl show
1b236728-4637-42a5-8b81-53d4c93a6803
    Bridge "switch0"
        Port vethNSCEGY
            Interface vethNSCEGY
        Port "switch0"
            Interface "switch0"
                type: internal
        Port "vethD6TFEB"
            Interface "vethD6TFEB"
    ovs_version: "2.3.2"
switch0 is an interface on the host and has IP 192.168.100.1/24.
vethNSCEGY and vethD6TFEB are the interfaces for the LXC guests.
The first LXC guest (192.168.100.10/24) can ping the second LXC guest (192.168.100.11/24), but can't ping the host IP 192.168.100.1/24.
Is this normal for OVS, or do I need to enable something?
PS. IPs on my interfaces:
# ip a
...
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
link/ether 52:9d:e1:60:1d:56 brd ff:ff:ff:ff:ff:ff
5: switch0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
link/ether 16:63:eb:47:13:41 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 scope global switch0
valid_lft forever preferred_lft forever
35: vethNSCEGY: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
link/ether fe:d1:06:81:69:ed brd ff:ff:ff:ff:ff:ff
inet6 fe80::fcd1:6ff:fe81:69ed/64 scope link
valid_lft forever preferred_lft forever
37: vethD6TFEB: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP group default qlen 1000
link/ether fe:ca:e9:16:dd:81 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fcca:e9ff:fe16:dd81/64 scope link
valid_lft forever preferred_lft forever
It was my bad again: switch0 was down, so bringing the interface up fixed it:
# ip link set dev switch0 up
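To verify, the same commands from the host should now show the link up and allow host-to-guest pings:
# ip link show switch0
# ping -c 3 192.168.100.10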
I have an LXC Debian container which runs on my Arch Linux host. I tried to set up a bridge (lxc-bridge-nat) using wlan0, but I can't ping the outside world from my container unless I ping by IP address instead of domain name.
I can ping the container from the host and the host from the container.
Here is some information:
Host: ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether d4:be:d9:70:bd:e5 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether c4:85:08:b4:5c:e9 brd ff:ff:ff:ff:ff:ff
inet 192.168.42.121/24 brd 192.168.42.255 scope global wlan0
valid_lft forever preferred_lft forever
inet6 fe80::c685:8ff:feb4:5ce9/64 scope link
valid_lft forever preferred_lft forever
4: lxc-bridge-nat: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether fe:b3:b7:a2:e1:31 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.1/24 brd 192.168.50.255 scope global lxc-bridge-nat
valid_lft forever preferred_lft forever
inet6 fe80::b0c8:d2ff:fe73:aa50/64 scope link
valid_lft forever preferred_lft forever
host: ip route
default via 192.168.42.1 dev wlan0 proto static
192.168.42.0/24 dev wlan0 proto kernel scope link src 192.168.42.121 metric 9
192.168.50.0/24 dev lxc-bridge-nat proto kernel scope link src 192.168.50.1
Container: ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
10: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:ff:aa:00:00:01 brd ff:ff:ff:ff:ff:ff
inet 192.168.50.2/24 brd 192.168.50.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2ff:aaff:fe00:1/64 scope link
valid_lft forever preferred_lft forever
Container: ip route
default via 192.168.50.1 dev eth0
192.168.50.0/24 dev eth0 proto kernel scope link src 192.168.50.2
Container: /etc/resolv.conf
nameserver 212.27.40.240
nameserver 212.27.40.241
Hostnames not resolving points immediately to the DNS servers, and posting the content of resolv.conf was useful here; they were the only part of this setup outside of your immediate control.
As you've found, simply pinging a remote server doesn't always help - running nslookup against them showed that they were the problem. (As a counterpoint, due to the way ping itself works a lack of response from a ping doesn't mean the server is down - pings are trivial to block at firewall level.)
To work around your DNS issue, you can use other DNS servers, such as those hosted by Google. Simply change your resolv.conf to:
nameserver 8.8.8.8
nameserver 8.8.4.4
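You can verify that the new resolvers respond before relying on them (assuming nslookup is available in the container, e.g. from the dnsutils package):
nslookup google.com 8.8.8.8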
Adding 8.8.8.8 to /etc/network/interfaces is not a good idea. You should leave the interfaces file unchanged (restore the old settings) and modify only the /etc/resolv.conf file.
You have set the container IP to the same IP as one of Google's DNS servers.
Sure, ping will work, but when you try to resolve a DNS hostname your container will contact itself and generate an error.