Nginx backend to one of 4 hosts in Docker Swarm

I've been searching around and haven't found a solution to my situation. I'm running a cluster of 5 machines with Docker Swarm. Right now, I'm trying to get two of the containers communicating with each other. If both containers are running on the same machine, everything works fine. If one container is on one machine/node and the other container is on a different machine/node, things stop working.
One container is nginx; the other is a front-end nodejs app that nginx acts as a proxy for. nginx "assumes" that the front-end container is running on localhost only. Although that can be the case, it isn't always: the front-end container can be running on any one of the 5 hosts.
I'm not sure how to make nginx realize that there are 4 other hosts out there that might be running the front-end service it needs to connect to. I do have a DNS server at 127.0.0.11 (Docker's standard embedded DNS server).
Here is an error I see whenever the containers are on different nodes:
2016/11/01 14:33:41 [error] 8#8: *1 connect() failed (113: No route to host) while connecting to upstream, client: 10.255.0.3, server: , request: "GET / HTTP/1.1", upstream: "http://10.10.0.2:8079/", host: "cluster3"
In this case, cluster3 is the localhost running the nginx container. However, front-end is running on cluster4, which explains why it can't find front-end.
Below is my current nginx.conf file. Is there something I can change here to get this to work right? I was playing around with upstream and resolver, but that didn't seem to make a difference; a sketch of that attempt follows the config below.
worker_processes 5;  ## Default: 1
worker_rlimit_nofile 8192;

events {
    worker_connections 4096;  ## Default: 1024
}

http {
    server { # simple load balancing
        listen 80;

        location / {
            proxy_pass http://front-end:8079;
        }
    }
}
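A sketch of the resolver/upstream variant I was experimenting with (assuming front-end is deployed as a Swarm service reachable over a shared overlay network; 127.0.0.11 is Docker's embedded DNS, and putting the name in a variable makes nginx re-resolve it instead of caching a single container IP at startup):

http {
    server {
        listen 80;

        location / {
            # Docker's embedded DNS; short TTL so a rescheduled task is picked up
            resolver 127.0.0.11 valid=10s;

            # using a variable forces per-request resolution of the service name
            set $frontend http://front-end:8079;
            proxy_pass $frontend;
        }
    }
}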
EDIT:
I notice the following gets printed to the kernel logs whenever I point my browser to the nginx instance.
[ 225.521382] BUG: using smp_processor_id() in preemptible [00000000] code: nginx/3501
[ 225.528808] caller is $x+0x5ac/0x828 [ip_vs]
[ 225.528820] CPU: 0 PID: 3501 Comm: nginx Not tainted 3.14.79-92 #1
[ 225.528825] Call trace:
[ 225.528836] [<ffffffc001088e40>] dump_backtrace+0x0/0x128
[ 225.528842] [<ffffffc001088f8c>] show_stack+0x24/0x30
[ 225.528849] [<ffffffc00188460c>] dump_stack+0x88/0xac
[ 225.528858] [<ffffffc0013dcf94>] debug_smp_processor_id+0xe4/0xe8
[ 225.528899] [<ffffffbffc1ec8fc>] $x+0x5ac/0x828 [ip_vs]
[ 225.528938] [<ffffffbffc1ecbdc>] ip_vs_local_request4+0x2c/0x38 [ip_vs]
[ 225.528945] [<ffffffc001789ecc>] nf_iterate+0xc4/0xd8
[ 225.528951] [<ffffffc001789f6c>] nf_hook_slow+0x8c/0x138
[ 225.528957] [<ffffffc0017994e8>] __ip_local_out+0x90/0xb0
[ 225.528963] [<ffffffc001799528>] ip_local_out+0x20/0x48
[ 225.528968] [<ffffffc001799848>] ip_queue_xmit+0x118/0x370
[ 225.528975] [<ffffffc0017b09d8>] tcp_transmit_skb+0x438/0x8b0
[ 225.528980] [<ffffffc0017b231c>] tcp_connect+0x54c/0x638
[ 225.528987] [<ffffffc0017b50c8>] tcp_v4_connect+0x258/0x3b8
[ 225.528993] [<ffffffc0017cca50>] __inet_stream_connect+0x120/0x358
[ 225.528999] [<ffffffc0017cccd0>] inet_stream_connect+0x48/0x68
[ 225.529005] [<ffffffc00173bf80>] SyS_connect+0xc0/0xe8
[ 269.026442] BUG: using smp_processor_id() in preemptible [00000000] code: nginx/3501
[ 269.028592] caller is ip_vs_schedule+0x31c/0x4f8 [ip_vs]
[ 269.028600] CPU: 0 PID: 3501 Comm: nginx Not tainted 3.14.79-92 #1
[ 269.028604] Call trace:
[ 269.028618] [<ffffffc001088e40>] dump_backtrace+0x0/0x128
[ 269.028624] [<ffffffc001088f8c>] show_stack+0x24/0x30
[ 269.028632] [<ffffffc00188460c>] dump_stack+0x88/0xac
[ 269.028641] [<ffffffc0013dcf94>] debug_smp_processor_id+0xe4/0xe8
[ 269.028680] [<ffffffbffc1eb2dc>] ip_vs_schedule+0x31c/0x4f8 [ip_vs]
[ 269.028719] [<ffffffbffc1fe658>] $x+0x130/0x278 [ip_vs]
[ 269.028757] [<ffffffbffc1eca08>] $x+0x6b8/0x828 [ip_vs]
[ 269.028795] [<ffffffbffc1ecbdc>] ip_vs_local_request4+0x2c/0x38 [ip_vs]
[ 269.028802] [<ffffffc001789ecc>] nf_iterate+0xc4/0xd8
[ 269.028807] [<ffffffc001789f6c>] nf_hook_slow+0x8c/0x138
[ 269.028814] [<ffffffc0017994e8>] __ip_local_out+0x90/0xb0
[ 269.028820] [<ffffffc001799528>] ip_local_out+0x20/0x48
[ 269.028825] [<ffffffc001799848>] ip_queue_xmit+0x118/0x370
[ 269.028832] [<ffffffc0017b09d8>] tcp_transmit_skb+0x438/0x8b0
[ 269.028837] [<ffffffc0017b231c>] tcp_connect+0x54c/0x638
[ 269.028844] [<ffffffc0017b50c8>] tcp_v4_connect+0x258/0x3b8
[ 269.028850] [<ffffffc0017cca50>] __inet_stream_connect+0x120/0x358
[ 269.028855] [<ffffffc0017cccd0>] inet_stream_connect+0x48/0x68
[ 269.028862] [<ffffffc00173bf80>] SyS_connect+0xc0/0xe8

Related

Euca 5 Ansible Install Skipping Node Actions

I'm trying to use the Euca 5 Ansible installer to install a single server for all services ("exp-euca.lan.com") with two node controllers ("exp-enc-[01:02].lan.com") running VPCMIDO. The install goes okay, and I end up with a single server running all Euca services, including being able to run instances, but the Ansible scripts never take action to install and configure my node servers. I think I'm misunderstanding the inventory format. What could be wrong with the following? I don't want my main Euca server to run instances, and I do want the two node controllers installed and running instances.
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    zone:
      hosts:
        exp-euca.lan.com:
    nodes:
      hosts:
        exp-enc-[01:02].lan.com:
All of the plays related to nodes follow a pattern similar to this: they succeed and acknowledge the main server exp-euca, but then skip the nodes.
2021-01-14 08:15:23,572 p=57513 u=root n=ansible | TASK [zone assignments default] ***********************************************************************************************************************
2021-01-14 08:15:23,596 p=57513 u=root n=ansible | ok: [exp-euca.lan.com] => (item=[0, u'exp-euca.lan.com']) => {"ansible_facts": {"host_zone_key": "1"}, "ansible_loop_var": "item", "changed": false, "item": [0, "exp-euca.lan.com"]}
2021-01-14 08:15:23,604 p=57513 u=root n=ansible | skipping: [exp-enc-01.lan.com] => (item=[0, u'exp-euca.lan.com']) => {"ansible_loop_var": "item", "changed": false, "item": [0, "exp-euca.lan.com"], "skip_reason": "Conditional result was False"}
It should be node, not nodes, i.e.:
node:
  hosts:
    exp-enc-[01:02].lan.com:
The documentation for this is currently incorrect.
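Applied to the inventory above, the children section then reads (only the last group name changes):

  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    zone:
      hosts:
        exp-euca.lan.com:
    node:
      hosts:
        exp-enc-[01:02].lan.com: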

How to connect to OpenVPN from Chromebook when using tls-crypt?

Hi there!
I'm very, very new to Chromebooks and the ONC file format, so my apologies if this has already been asked and answered.
I'm running an OpenVPN v2.4.9 server and everything works just fine from Mac/Linux/Windows using a .ovpn formatted client configuration file. On the server side, I'm using tls-crypt (as opposed to tls-auth) as per the new recommendation, and it looks like that's where it's failing from the Chromebook, using the ONC file.
This is my server configuration:
auth SHA256
auth-nocache
ca /etc/openvpn/server/ca.crt
cert /etc/openvpn/server/server.crt
cipher AES-256-GCM
client-config-dir /etc/openvpn/client
compress lz4-v2
dev tun
dh /etc/openvpn/server/dh2048.pem
explicit-exit-notify 1
ifconfig-pool-persist /etc/openvpn/server/ipp.txt
keepalive 10 120
key /etc/openvpn/server/server.key
log /var/log/openvpn/connection.log
log-append /var/log/openvpn/connection.log
max-clients 10
ncp-ciphers AES-256-GCM
persist-key
persist-tun
plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so login
port 1194
proto udp4
push "compress lz4-v2"
push "dhcp-option DNS 8.8.8.8"
push "redirect-gateway def1 bypass-dhcp"
push "route 10.0.0.0 255.255.0.0"
remote-cert-eku "TLS Web Client Authentication"
server 192.168.10.0 255.255.255.0
sndbuf 2097152
status /var/log/openvpn/status.log
tls-crypt /etc/openvpn/server/ta.key
tls-version-min 1.2
verb 3
And this is my client ONC config:
{
  "Type": "UnencryptedConfiguration",
  "Certificates": [
    {
      "GUID": "Bootstrap-Server-CA",
      "Type": "Authority",
      "X509": "MIIGITCCBAmgAw.....MAYsw8ZLPlmJNN/wA=="
    },
    {
      "GUID": "Bootstrap-Root-CA",
      "Type": "Authority",
      "X509": "MIIGDDCCA/SgAf.....TbtcIBMrAiSlsOwHg=="
    },
    {
      "GUID": "Bootstrap-User-Cert",
      "Type": "Client",
      "PKCS12": "MIILvQIBAzCC.....srrOGmHY3h7MPauIlD3"
    }
  ],
  "NetworkConfigurations": [
    {
      "GUID": "BOOTSTRAP_CONN_1",
      "Name": "bootstrap_vpn",
      "Type": "VPN",
      "VPN": {
        "Type": "OpenVPN",
        "Host": "xx.xxx.xx.xxx",
        "OpenVPN": {
          "Auth": "SHA256",
          "Cipher": "AES-256-GCM",
          "ClientCertRef": "Bootstrap-User-Cert",
          "ClientCertType": "Ref",
          "IgnoreDefaultRoute": true,
          "KeyDirection": "1",
          "Port": 1194,
          "Proto": "udp4",
          "RemoteCertEKU": "TLS Web Client Authentication",
          "RemoteCertTLS": "server",
          "UseSystemCAs": true,
          "ServerCARefs": [
            "Bootstrap-Server-CA",
            "Bootstrap-Root-CA"
          ],
          "TLSAuthContents": "-----BEGIN OpenVPN Static key V1-----\n....\n.....\n-----END OpenVPN Static key V1-----\n",
          "UserAuthenticationType": "Password"
        }
      }
    }
  ]
}
It fails with no useful message on the client side (apart from saying: Failed to connect to the network..), but on the server it's reported as:
Wed Sep 23 17:44:15 2020 us=591576 tls-crypt unwrap error: packet authentication failed
Wed Sep 23 17:44:15 2020 us=591631 TLS Error: tls-crypt unwrapping failed from [AF_INET]xx.xx.xx.xx:64762
Wed Sep 23 17:44:44 2020 us=359795 tls-crypt unwrap error: packet authentication failed
Wed Sep 23 17:44:44 2020 us=359858 TLS Error: tls-crypt unwrapping failed from [AF_INET]xx.xx.xx.xx:19733
Any idea what I am doing wrong or missing? I'd really appreciate it if anyone could point me in the right direction.
-S
As far as I know, the ONC format does not support tls-crypt. If your Chromebook supports Android apps, you can use the unofficial OpenVPN Android app (blinkt.de), which does support it.
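For comparison, that app imports plain .ovpn profiles, where the tls-crypt key can be carried inline instead of as a tls-auth/key-direction pair. A minimal client sketch matching the server config above (the host, certificates, and key material are placeholders):

client
dev tun
proto udp4
remote xx.xxx.xx.xxx 1194
auth SHA256
cipher AES-256-GCM
remote-cert-tls server
auth-user-pass
ca ca.crt
cert client.crt
key client.key
# tls-crypt replaces tls-auth; no key-direction setting is needed
<tls-crypt>
-----BEGIN OpenVPN Static key V1-----
...
-----END OpenVPN Static key V1-----
</tls-crypt>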

Podman: Changes made to podman config.json do not persist on container start

I am trying to add a network to a podman container after it has already been created.
These are the steps I took:
Create and start a container:
podman run -it --name "container" --network=mgmtnet img_v1 /bin/bash
The container starts.
I now stop the container
podman stop container
I edit the podman config.json file at:
/var/lib/containers/storage/overlay-containers/60dfc044f28b0b60f0490f351f44b3647531c245d1348084944feaea783a6ad5/userdata/config.json
I add an extra netns path in the namespaces section.
"namespaces": [
{
"type": "pid"
},
{
"type": "network",
>> "path": "/var/run/netns/cni-8231c733-6932-ed54-4dee-92477014da6e",
>>[+] "path": "/var/run/netns/test_net"
},
{
"type": "ipc"
},
{
"type": "uts"
},
{
"type": "mount"
}
],
I start the container
podman start container
I expected to see the changes (an extra interface) in the container, but that doesn't happen. Also, checking config.json, I find that my changes are gone.
So starting the container removes the changes to the config. How can I overcome this?
Extra info:
[root@bng-ix-svr1 ~]# podman info
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.2-5.module+el8.1.0+4240+893c1ab8.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.1-dev, commit: unknown'
  Distribution:
    distribution: '"rhel"'
    version: "8.1"
  MemFree: 253316108288
  MemTotal: 270097387520
  OCIRuntime:
    package: runc-1.0.0-60.rc8.module+el8.1.0+4081+b29780af.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 5368705024
  SwapTotal: 5368705024
  arch: amd64
  cpus: 16
  hostname: bng-ix-svr1.englab.juniper.net
  kernel: 4.18.0-147.el8.x86_64
  os: linux
  rootless: false
  uptime: 408h 2m 41.08s (Approximately 17.00 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.redhat.io
  - registry.access.redhat.com
  - quay.io
  - docker.io
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 4
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 2
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes
That is correct. The config.json file is generated by Podman to instruct the OCI runtime how to run the container.
Any changes made directly to that file will be lost the next time you restart the container. The config.json file is used by the OCI runtime to create the container and is not used after that.
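If the goal is simply to end up with the container attached to an additional network, a sketch of doing that through Podman itself rather than by editing config.json (this assumes test_net exists as a named Podman/CNI network rather than just a raw netns path; podman network connect only exists in releases newer than the 1.4.2 shown above):

# newer Podman: attach the existing container to another network at runtime
podman network connect test_net container

# older Podman: recreate the container with the network you want
podman stop container
podman rm container
podman run -it --name "container" --network=test_net img_v1 /bin/bash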

ansible meta: refresh_inventory does not include previously absent hosts in task execution

Some time ago, somebody suggested using dynamic inventories to generate a different hosts file depending on a location and other variables from a template, but I faced a pretty big issue:
After I create the inventory from a template, I need to refresh it (I do this using meta: refresh_inventory) so that Ansible executes tasks on newly added hosts. However, if a host was not initially in the hosts file, Ansible does not execute tasks on it. On the other hand, if after changing the hosts file a host is absent from the newly formed file, Ansible omits the host like it should, so refresh_inventory does only half of the work. Is there any way to get around this issue?
E.g. I have one task to generate the hosts file from a template, then refresh the inventory, then do a simple task on all hosts, like showing a message:
tasks:
  - name: Creating inventory template
    local_action:
      module: template
      src: hosts.j2
      dest: "/opt/ansible/inventories/{{location}}/hosts"
      mode: 0777
      force: yes
      backup: yes
    ignore_errors: yes
    run_once: true
  - name: "Refreshing hosts file for {{location}} location"
    meta: refresh_inventory
  - name: Force refresh of host errors
    meta: clear_host_errors
  - name: Show message
    debug: msg="This works for this host"
If the initial hosts file has hosts A, B, C, D, and the newly created inventory has B, C, D, then all is good:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
However, if the newly formed hosts file has hosts B, C, D, E (E not being present in the initial hosts file), then again the result is:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
The task for E is missing. Now if I replay the playbook, only adding another host, say F, the result looks like:
ok: [B] => {
"msg": "This works for this host"
}
ok: [C] => {
"msg": "This works for this host"
}
ok: [D] => {
"msg": "This works for this host"
}
ok: [E] => {
"msg": "This works for this host"
}
But no F, even though it was already added to the inventory file before the refresh.
So, any ideas?
Quoting from Basics
For each play in a playbook, you get to choose which machines in your infrastructure to target ... The hosts line is a list of one or more groups or host patterns ...
For example, it is possible to create the inventory in the 1st play and use it in the 2nd play. The playbook below
- hosts: localhost
  tasks:
    - template:
        src: hosts.j2
        dest: "{{ playbook_dir }}/hosts"
    - meta: refresh_inventory

- hosts: test
  tasks:
    - debug:
        var: inventory_hostname
with the template (fit it to your needs)
$ cat hosts.j2
[test]
test_01
test_02
test_03
[test:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/usr/local/bin/python3.6
ansible_perl_interpreter=/usr/local/bin/perl
gives
PLAY [localhost] ****************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [localhost]
TASK [template] *****************************************************************************
changed: [localhost]
PLAY [test] *********************************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [test_02]
ok: [test_01]
ok: [test_03]
TASK [debug] ********************************************************************************
ok: [test_01] => {
"inventory_hostname": "test_01"
}
ok: [test_02] => {
"inventory_hostname": "test_02"
}
ok: [test_03] => {
"inventory_hostname": "test_03"
}
PLAY RECAP **********************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
test_01 : ok=2 changed=0 unreachable=0 failed=0
test_02 : ok=2 changed=0 unreachable=0 failed=0
test_03 : ok=2 changed=0 unreachable=0 failed=0
Even though the first answer provided here is correct, I think this deserves an explanation of how refresh_inventory and add_host behave, as I've seen a few other questions regarding this topic.
It does not matter whether you use a static or dynamic inventory; the behavior is the same. The only thing specific to dynamic inventories that can change the behavior is caching. The following applies with caching disabled, or with the cache refreshed after adding the new host.
Both refresh_inventory and add_host allow you to execute tasks only in subsequent plays. However, they do allow you to access hostvars of the added hosts in the current play. This behavior is only partially and very briefly mentioned in the add_host documentation and is easy to miss.
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook.
Consider the following inventory called hosts_ini-main.ini:
localhost testvar='testmain'
Now you can write a playbook that will observe and test the behavior of refresh_inventory. It overwrites the hosts_ini-main.ini inventory file (used by the playbook) with the following contents from a second file, hosts_ini-second.ini:
localhost testvar='testmain'
127.0.0.2 testvar='test2'
The playbook prints hostvars before the inventory is changed, followed by changing the inventory, refreshing the inventory, printing hostvars again, and then trying to execute a task only on the newly added host.
The second play also tries to execute a task only on the added host.
---
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars
      debug:
        var: hostvars
    - name: Print var for first host
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "testmain"
    - name: Copy alternate hosts file to main hosts file
      copy:
        src: "hosts_ini-second.ini"
        dest: "hosts_ini-main.ini"
    - name: Refresh inventory using meta module
      meta: refresh_inventory
    - name: Print hostvars for the second time in the first play
      debug:
        var: hostvars
    - name: Print var for added host
      debug:
        var: testvar # This will not execute
      when: hostvars[inventory_hostname]['testvar'] == "test2"

# New play
- hosts: all
  connection: local
  become: false
  gather_facts: false
  tasks:
    - name: Print hostvars in a different play
      debug:
        var: testvar
      when: hostvars[inventory_hostname]['testvar'] == "test2"
Here is the execution (I've truncated parts of the output to make it more readable).
PLAY [all] *******************************************************************************
TASK [Print hostvars] ********************************************************************
ok: [localhost] => {
"hostvars": {
"localhost": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {},
...
"testvar": "testmain"
}
}
}
TASK [Print var for first host] ***********************************************************
ok: [localhost] => {
"testvar": "testmain"
}
TASK [Copy alternate hosts file to main hosts file] ***************************************
changed: [localhost]
TASK [Refresh inventory using meta module] ************************************************
TASK [Print hostvars for the second time in the first play] *******************************
ok: [localhost] => {
"hostvars": {
"127.0.0.2": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {},
...
"testvar": "test2"
},
"localhost": {
"ansible_check_mode": false,
"ansible_config_file": "/home/john/dev-ansible/ansible.cfg",
"ansible_diff_mode": false,
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
...
"testvar": "testmain"
}
}
}
TASK [Print var for added host] ***********************************************************
skipping: [localhost]
PLAY [all] ********************************************************************************
TASK [Print hostvars in a different play] *************************************************
skipping: [localhost]
ok: [127.0.0.2] => {
"testvar": "test2"
}
PLAY RECAP *******************************************************************************
127.0.0.2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
As can be seen, hostvars contain information about the newly added host even in the first play, but Ansible is not able to execute a task on that host. When a new play is started, the task is executed on the new host without problems.
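The same rule can be observed with add_host; a minimal sketch (a hypothetical play; the added address only becomes a target in the later play):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Add a host to the in-memory inventory
      add_host:
        name: 127.0.0.2
        groups: added
    # tasks later in this same play still will not run on 127.0.0.2

- hosts: added
  connection: local
  gather_facts: false
  tasks:
    - name: Runs only in this subsequent play
      debug:
        var: inventory_hostname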

Docker: how to send a request (curl GET, POST, ...) from one container to another container

I want to send a GET (and POST, PUT, ...) request using curl from one container to another.
$ docker ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
b184219cd8f6 otp_totp "./run.sh" 0.0.0.0:3000->3000/tcp totp_api
c381c276593f service "/bin/sh -c" 0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp service
d0add6b1c72e mysql "/tmp/run.sh" 0.0.0.0:3306->3306/tcp mysql
When I send the request curl -X GET http://localhost:3000 to the totp_api container locally,
totp_api returns {'status':200}.
But I want to send that request from inside the service container,
e.g. run curl -X GET http://localhost:3000 against the totp_api container from within service (docker exec -it service /bin/bash), so that totp_api returns {'status':200} to the service container.
project_folder
└── docker-compose.yml   # service, mysql container
api_folder
└── docker-compose.yml   # totp_api container
Could somebody please give me some advice?
Added afterwards:
api_folder/docker-compose.yml
version: '3'
services:
  totp:
    build: .
    container_name: totp_api
    volumes:
      - $PWD:/home
    ports:
      - "3000:3000"
    tty: true
    restart: always
    networks:
      - bridge
networks:
  bridge:
    driver: bridge
-
$ docker-compose up -d
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
4a05be5e600f bridge bridge local
503d0586c7ec host host local
######## ####
727bfc6cc21f otp_bridge bridge local
######## ####
3c19d98d9ca5 otp_default bridge local
-
$ docker network inspect otp_bridge
[
    {
        "Name": "otp_bridge",
        "Id": "727bfc6cc21fd74f19eb7fe164582637cbad4c2d7e36620c1c0a1c51c0490d31",
        "Created": "2017-12-13T06:12:40.5070258Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "02fa407062cdd5f6f368f2d17c5e68d2f9155d1d9b30c9edcbb2386a9ded648a": {
                "Name": "totp_api",
                "EndpointID": "860c47da1e70d304399b42917b927a8cc6717b86c6d1ee6f065229e33f496c2f",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "bridge",
            "com.docker.compose.project": "otp"
        }
    }
]
Using Docker networking and the network commands, you can make sure those containers are part of the same network.
A side effect of sharing that network is that container A will know container B's name: from totp_api, you can ping or curl the service container by its name:
ping service
You can:
- do it statically in the docker-compose file (and relaunch new containers), or
- test it out at runtime, adding existing running containers to a new network (a sketch of both follows).
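A sketch of both options, reusing the names from the output above (otp_bridge is the network totp_api already sits on; this assumes curl is installed inside the service container):

# at runtime: attach the already-running service container to totp_api's network
docker network connect otp_bridge service
docker exec -it service curl -X GET http://totp_api:3000

# or statically: in project_folder/docker-compose.yml, declare the other
# project's network as external and attach the service container to it, e.g.
#
#   networks:
#     otp_bridge:
#       external: true
#   services:
#     service:
#       networks:
#         - otp_bridge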
