Retrieve ID of a network with the OpenStack API and a shell script - openstack

I need to retrieve the ID of a network created with a shell script and the OpenStack API. Is there a way to do so? The network is created like this:
neutron net-create test-net --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 22
neutron subnet-create test-net --name test-subnet --allocation-pool start=10.153.9.20,end=10.153.9.34 --gateway 10.153.8.1 10.153.8.0/22

You can run neutron net-list, giving the tenant_id as a parameter, and then pipe the response to grep to select the network you want by its name:
neutron --os-tenant-id {tenant_id} net-list | grep {network_name}
You will get something like this as the response:
| {network_id} | {network_name} | {subnets} |
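If you need just the bare ID in a script, you can cut it out of that row, or ask the client for it directly. A minimal sketch, assuming the network name test-net from the question:
# Field 2 of the matched row is the ID column
NET_ID=$(neutron --os-tenant-id {tenant_id} net-list | grep test-net | awk '{print $2}')
# Or, with the unified openstack client, print a single field directly:
NET_ID=$(openstack network show test-net -f value -c id)
echo "$NET_ID"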

Related

public endpoint for load-balancer service not found

I have an issue listing load balancers on OpenStack using the CLI:
from@ge ~
$ openstack loadbalancer list
public endpoint for load-balancer service not found
from@ge ~
$ export | grep OS_
declare -x OS_AUTH_TYPE="password"
declare -x OS_AUTH_URL="http://192.168.20.33:5000/v3"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_PASSWORD="XXXXXX"
declare -x OS_PROJECT_NAME="project-name"
declare -x OS_TENANT_NAME="tenant-name"
declare -x OS_USERNAME="from"
declare -x OS_USER_DOMAIN_ID="default"
from@ge ~
$ echo "endpoint list" | openstack
You are not authorized to perform the requested action: identity:list_endpoints. (HTTP 403) (Request-ID: req-aec8b22e-d3ad-4116-b7bb-52545f641667)
I've tried to set OS_REGION_NAME to RegionOne, but I get the same result.
Any tips?
load-balancer service not found
It seems that the load balancer service is not working. Have you deployed the Octavia service successfully?
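You can check whether the load-balancer service is registered in your catalog without admin rights; a quick sketch:
# Show the whole service catalog visible to your credentials
openstack catalog list
# Or look up the load-balancer entry directly
openstack catalog show load-balancer
If the service is missing from the catalog, the client has nothing to resolve, which matches the "public endpoint for load-balancer service not found" error.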
identity:list_endpoints. (HTTP 403)
According to the official documentation, HTTP 403 means Forbidden: the authorization failed.
The identity was successfully authenticated, but it is not authorized to perform the requested action.
Maybe there is a misconfiguration of the admin roles in Keystone; you should check it in the database first.
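If the endpoint simply turns out to be missing from the catalog, registering it with admin credentials would look roughly like this. A sketch only; the controller host and port 9876 are assumptions based on Octavia's default API port:
openstack service create --name octavia --description "OpenStack Load Balancing" load-balancer
openstack endpoint create --region RegionOne load-balancer public http://controller:9876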
OK, thanks for your answers.
I finally managed to play with load balancers using the neutron CLI:
$ neutron
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
(neutron) lbaas-loadbalancer-list
+--------------------------------------+------------------------+----------------+---------------------+----------+
| id                                   | name                   | vip_address    | provisioning_status | provider |
+--------------------------------------+------------------------+----------------+---------------------+----------+
| 00f3453d-8738-4eb6-b362-aefc8dfaeea6 | lb1                    | 192.168.36.93  | ACTIVE              | haproxy  |
| 090e062d-d6cc-4ebe-bcbf-165d5c21051d | lb2                    | 192.168.36.169 | ACTIVE              | haproxy  |
| 0c244567-8f49-4be0-9055-17fa903d4619 | lb3                    | 192.168.36.43  | ACTIVE              | haproxy  |
+--------------------------------------+------------------------+----------------+---------------------+----------+

More than one endpoint exists with the name 'nova'

The command does not work when I want to show nova's endpoints:
openstack endpoint show nova
It reports an error:
More than one endpoint exists with the name 'nova'.
When you check your endpoints, you will likely find that there are several with the same service name, one per interface (public, internal, admin):
% openstack endpoint list -c ID -c "Service Name" -c Interface --service nova
+----------------------------------+--------------+-----------+
| ID                               | Service Name | Interface |
+----------------------------------+--------------+-----------+
| 2d45aed973da34f7d28b8c9e410bba5e | nova         | public    |
| 7de83faa23d4ee5b39a8b7de45b8ee15 | nova         | internal  |
| ab8374d8b8f233fe11cda487bfe98ad7 | nova         | admin     |
+----------------------------------+--------------+-----------+
Similarly, you can filter for only the endpoints on a specific interface:
% openstack endpoint list --interface public
For your command, use the ID instead of the service name; e.g., this would give me the admin nova API:
openstack endpoint show ab8374d8b8f233fe11cda487bfe98ad7
You should use:
openstack endpoint list --service nova
to show the endpoints.
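If you want a single, script-friendly value rather than a table, the interface filter combines with the client's output options; a sketch:
# Print just the ID of the public nova endpoint
openstack endpoint list --service nova --interface public -f value -c ID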

Docker: unexpected error (Failure EADDRINUSE)

I'm really new to Docker. I'm trying to run WordPress, and I've run into an error.
$ docker-compose up -d
testpublichtml_mariadb_1 is up-to-date
Starting 00b4dc8e3264_testpublichtml_wordpress_1
ERROR: for wordpress Cannot start service wordpress: driver failed programming external connectivity on endpoint
00b4dc8e3264_testpublichtml_wordpress_1 (63165c221c0b2b11d513e97d35afa39146790086115029b9bb229212d0c8c06a): Error starting userland proxy: Bind for 0.0.0.0:80: unexpected error (Failure EADDRINUSE)
ERROR: Encountered errors while bringing up the project.
$
My guess is to check whether something is already on port 80, though I'm not sure how to check that.
When I enter netstat -tulnp | grep ':80', I get:
$ netstat -tulnp | grep ':80'
netstat: option requires an argument -- p
Usage: netstat [-AaLlnW] [-f address_family | -p protocol]
netstat [-gilns] [-f address_family]
netstat -i | -I interface [-w wait] [-abdgRtS]
netstat -s [-s] [-f address_family | -p protocol] [-w wait]
netstat -i | -I interface -s [-f address_family | -p protocol]
netstat -m [-m]
netstat -r [-Aaln] [-f address_family]
netstat -rs [-s]
You probably have some service already running on port 80. To check this, execute the following command:
netstat -tulnp | grep ':80'
The last column is the PID/program name of your process. If you want to kill it, use the following command:
kill PID
After that, you should be able to start your container.
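Note that the usage message in the question comes from the BSD netstat that ships with macOS, which does not support -tulnp; those flags belong to the Linux net-tools version. On macOS, lsof answers the same question; a sketch:
# List processes listening on TCP port 80 (macOS/BSD)
sudo lsof -nP -iTCP:80 -sTCP:LISTEN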

Find out which network interface belongs to docker container

Docker creates these virtual ethernet interfaces veth[UNIQUE ID] listed in ifconfig. How can I find out which interface belongs to a specific docker container?
I want to listen to the tcp traffic.
To locate the interface:
In my case, getting the value from the container looked like this (check eth0 too):
$ docker exec -it my-container cat /sys/class/net/eth1/iflink
123
And then:
$ ip ad | grep 123
123: vethd3234u4@if122: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default
Check with tcpdump -i vethd3234u4
Reference for the mysterious iflink, from http://lxr.free-electrons.com/source/Documentation/ABI/testing/sysfs-class-net:
What: /sys/class/net/<iface>/iflink
Date: April 2005
KernelVersion: 2.6.12
Contact: netdev@vger.kernel.org
Description:
Indicates the system-wide interface unique index identifier a
the interface is linked to. Format is decimal. This attribute is
used to resolve interfaces chaining, linking and stacking.
Physical interfaces have the same 'ifindex' and 'iflink' values.
Based on the provided answer (which worked for me), I made this simple bash script:
#!/bin/bash
# For each running container, print its name and the host-side veth interface it uses.
containers=$(sudo docker ps --format "{{.ID}}|{{.Names}}")
interfaces=$(sudo ip ad)
for x in $containers
do
  name=$(echo "$x" | cut -d '|' -f 2)
  id=$(echo "$x" | cut -d '|' -f 1)
  # iflink of eth0 inside the container equals the ifindex of its host-side veth peer
  ifaceNum="$(sudo docker exec -i "$id" cat /sys/class/net/eth0/iflink | sed 's/[^0-9]*//g'):"
  ifaceStr=$(echo "$interfaces" | grep "$ifaceNum" | cut -d ':' -f 2 | cut -d '@' -f 1)
  echo -e "$name: $ifaceStr"
done
My answer is more of an improvement on this important topic: it won't help to "Find out which network interface belongs to docker container", but, as the author noted, he "want[s] to listen to the tcp traffic" inside the Docker container, so I'll try to help with that part of the network troubleshooting.
Considering that veth network devices live in network namespaces, it is useful to know that we can execute a program in another namespace via the nsenter tool, as follows (remember: you need privileged permission (sudo/root) to do that):
Get the ID of the container whose traffic you are interested in capturing; for example, let it be 78334270b8f8.
Then we need the PID of that containerized application (I assume you are running only one network-related process inside the container and want to capture its traffic; otherwise, this approach is hard to apply):
sudo docker inspect 78334270b8f8 | grep -i pid
For example, the output for the PID will be 111380 - that's the PID of your containerized app. You can also check it via the ps command, ps aux | grep 111380, just out of curiosity.
The next step is to check what network interfaces you have inside your container:
sudo nsenter -t 111380 -n ifconfig
This command will return the list of network devices in the network namespace of the containerized app (you do not need the ifconfig tool inside your container, only on your node/machine).
For example, say you need to capture traffic on interface eth2 and filter it to TCP destination port 80 (it may vary, of course); use this command:
sudo nsenter -t 111380 -n tcpdump -nni eth2 -w nginx_tcpdump_test.pcap 'tcp dst port 80'
Remember that in this case you do not need the tcpdump tool to be installed inside your container.
Then, after capturing packets, the .pcap file will be available on your machine/node; to read it, use any tool you prefer, e.g. tcpdump -r nginx_tcpdump_test.pcap.
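Put together, the whole flow is short enough to script; a sketch using the example container ID and interface from above:
# Resolve the container's main PID, then run tcpdump inside its network namespace
PID=$(sudo docker inspect -f '{{.State.Pid}}' 78334270b8f8)
sudo nsenter -t "$PID" -n tcpdump -nni eth2 -w nginx_tcpdump_test.pcap 'tcp dst port 80'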
The approach's pros:
no need to have network tools inside the container, only on the Docker node
no need to search for the mapping between network devices in the container and on the node
Cons:
you need a privileged user on the node/machine to run the nsenter tool
One-liner version of the solution from @pbaranski:
num=$(docker exec -i my-container cat /sys/class/net/eth0/iflink | tr -d '\r'); ip ad | grep -oE "^${num}: veth[^@]+" | awk '{print $2}'
If you need to find out on a container that does not include cat then try this tool: https://github.com/micahculpepper/dockerveth
You can also read the interface names via /proc/PID/net/igmp, like this (container name as argument 1):
#!/bin/bash
NAME=$1
PID=$(docker inspect "$NAME" --format "{{.State.Pid}}")
# /proc/<pid>/net/igmp lists the container's interfaces with their ifindex values
while read iface id; do
[[ "$iface" == lo ]] && continue
# Match the host-side veth whose @if<index> suffix points back at this interface
veth=$(ip -br addr | sed -nre "s/(veth.*)@if$id.*/\1/p")
echo -e "$NAME\t$iface\t$veth"
done < <(</proc/$PID/net/igmp awk '/^[0-9]+/{print $2 " " $1;}')
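Invocation would look something like this (the script name, container name, and veth value are illustrative only):
$ ./container-veth.sh my-container
my-container	eth0	veth45c63d8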

how to connect Docker containers without a bridge?

I'm doing some experiments with Docker container technology.
For a specific reason, I need to connect two containers' veth interfaces together without using a bridge. Docker creates a bridge by default, and I do not want to use it.
I'm confused and want to know if it is right to do it that way. Can anyone give advice and point me to some links or methods? I would appreciate it.
Thank you so much.
+--------------+          +--------------+
|              |          |              |
| Container X  |          | Container Y  |
|              |          |              |
+--------------+          +--------------+
       ^ veth                    ^ veth
       |                         |
       +-------------------------+
Sure, it's possible, although you won't be able to get Docker to do it for you automatically. Begin by creating your two containers with no networking:
# docker run --net=none --name container_x ...
# docker run --net=none --name container_y ...
Now create a veth pair:
# ip link add c_x_eth0 type veth peer name c_y_eth0
Assign each side of the veth pair to a container. You will need to know the PID of the container to do this, which you can get with, for example:
docker inspect --format '{{.State.Pid}}' container_x
I'm going to assume you've stuck this in a shell script named docker-pid; written out, it is just a one-line wrapper:
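#!/bin/sh
# docker-pid: print the PID of a container's main process (helper name from this answer)
docker inspect --format '{{.State.Pid}}' "$1"
Set the namespace on the first veth link: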
# ip link set netns $(docker-pid container_x) dev c_x_eth0
And on the second:
# ip link set netns $(docker-pid container_y) dev c_y_eth0
Now you will need to configure the link inside each container. If you haven't started your containers with --privileged, you will need to do this using nsenter:
# nsenter -t $(docker-pid container_x) -n ip link set c_x_eth0 up
# nsenter -t $(docker-pid container_y) -n ip link set c_y_eth0 up
And then assign them ip addresses:
# nsenter -t $(docker-pid container_x) -n ip addr add 10.10.10.1/24 dev c_x_eth0
# nsenter -t $(docker-pid container_y) -n ip addr add 10.10.10.2/24 dev c_y_eth0
And you should be all set.
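At this point, a quick connectivity check from inside one of the containers should confirm the link works; a sketch, reusing the docker-pid helper:
# nsenter -t $(docker-pid container_x) -n ping -c 3 10.10.10.2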
Update
If nsenter is unavailable...
The easiest solution really is just to install nsenter on your system; if you are able to create new veth interfaces and start Docker containers you should have all the privileges you need.
You could accomplish the above without nsenter if you run your containers in privileged mode (docker run --privileged...). This will allow your containers to do things -- such as run network configuration commands -- that are normally prohibited. In this case, you would just run the ip link and ip addr commands in the container, either from a shell you started with docker run or using something like docker exec. You should be aware that running a container in privileged mode removes many of the restrictions normally placed on containers, and so it is not something you want to do if anyone else has access to those containers.
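For instance, with privileged containers, the per-container steps above reduce to something like this (a sketch; container and interface names as before):
# docker exec container_x ip link set c_x_eth0 up
# docker exec container_x ip addr add 10.10.10.1/24 dev c_x_eth0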
