Netcatting from client to server with a Dockerized application lags initially

I have a client and a server in an enterprise environment. Netcatting from client to server is instant.
 _______SERVER______               ______CLIENT_______
|                   |             |                   |
|                   | <---------> |                   |
|    20.20.20.20    |             |                   |
|___________________|             |___________________|
> netcat -l 8000
> netcat 20.20.20.20 8000
blahblah blahblah
However, netcatting with a container on the server...
 _______SERVER______               ______CLIENT_______
|                   |             |                   |
|   __container__   |             |                   |
|  |             |  | <---------> |                   |
|  | 30.00.00.01 |  |             |                   |
|  |_____________|  |             |                   |
|                   |             |                   |
|    20.20.20.20    |             |                   |
|___________________|             |___________________|
> docker run -p 8000:8000 -t -i ubuntu netcat -l 8000
> netcat 20.20.20.20 8000
(blank for 10 seconds) blahblah
blahblah
blah2 blah2
So the difference is that the first message takes about 10 seconds to appear when I use netcat inside a container. Why is this? After that, the messages are instant.
Since netcatting without a container is instant, I am pretty sure there is something tricky going on with Docker.

I seem to have solved this issue: using the -n flag makes it all work nicely. So instead of
docker run -p 8000:8000 -t -i ubuntu netcat -l 8000
you use
docker run -p 8000:8000 -n -t -i ubuntu netcat -l 8000
According to the Docker documentation, the -n flag is there to
Enable networking for this container
I am still puzzled, though, about how networking works even without it. What does it actually do?
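One way to dig into the difference is to compare the container's network settings between a run with -n and one without; a minimal sketch, assuming a Docker CLI with docker inspect (the container name nc-test is a placeholder of mine):
# Start the listener in a named container so we can inspect it
docker run -d --name nc-test -p 8000:8000 ubuntu netcat -l 8000
# Dump the container's network settings; compare this output across the two runs
docker inspect --format '{{json .NetworkSettings}}' nc-test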


mariadb server: I can't stop the server with `mysql.server stop`

OSX 10.13.6
I installed the mariadb server with Homebrew a few years ago, and I use it infrequently. Today, I tried to start mariadb using the command:
$ mysql.server start
and I got a bunch of errors. So, I did:
$ brew update
then:
$ brew upgrade mariadb
That completed fine, and now I can start mariadb with:
$ mysql.server start
and I can access all my old DBs.
The problem I'm having is that I cannot stop the server. Both of these commands hang:
$ mysql.server stop
and (in another terminal window):
$ mysql.server status
According to the MariaDB docs for mysql.server, both those commands should work.
Currently, I'm killing the server like this:
$ killall mysqld mysqld_safe
then checking that the server was killed with this:
$ ps aux | grep mysqld
When I run the ps command while mysql is running, I get:
~$ ps aux | grep mysqld
7stud  3707  0.0  1.0  4808208  79948 s005  S   1:26PM  0:00.47 /usr/local/Cellar/mariadb/10.3.15/bin/mysqld --basedir=/usr/local/Cellar/mariadb/10.3.15 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mariadb/10.3.15/lib/plugin --log-error=/usr/local/var/mysql/My-MacBook-Pro-2.local.err --pid-file=/usr/local/var/mysql/My-MacBook-Pro-2.local.pid
7stud  3643  0.0  0.0  4287792   1460 s005  S   1:26PM  0:00.02 /bin/sh /usr/local/Cellar/mariadb/10.3.15/bin/mysqld_safe --datadir=/usr/local/var/mysql --pid-file=/usr/local/var/mysql/My-MacBook-Pro-2.local.pid
7stud  4544  0.0  0.0  4267752    880 s000  S+  1:41PM  0:00.00 grep mysqld
What is the proper way to shut down the mariadb server?
mysql> SHOW VARIABLES LIKE '%vers%';
+---------------------------------+------------------------------------------+
| Variable_name | Value |
+---------------------------------+------------------------------------------+
| innodb_version | 10.3.15 |
| protocol_version | 10 |
| slave_type_conversions | |
| system_versioning_alter_history | ERROR |
| system_versioning_asof | DEFAULT |
| thread_pool_oversubscribe | 3 |
| version | 10.3.15-MariaDB |
| version_comment | Homebrew |
| version_compile_machine | x86_64 |
| version_compile_os | osx10.13 |
| version_malloc_library | system |
| version_source_revision | 07aef9f7eb936de2b277f8ae209a1fd72510c011 |
| version_ssl_library | OpenSSL 1.0.2r 26 Feb 2019 |
| wsrep_patch_version | wsrep_25.24 |
+---------------------------------+------------------------------------------+
14 rows in set (0.01 sec)
I had the same issue: I tried to use mysql.server stop and mysql.server status after running mysql.server start, but they would just run indefinitely.
If you are using a Mac, it seems that mysql.server stop won't work. I installed mariadb with Homebrew and found out from this link that I can use brew services to start and stop it:
https://dba.stackexchange.com/questions/214883/homebrew-mariadb-server-start-error-with-mysql-server-start
The commands are pretty simple and they work for me:
brew services start mariadb
brew services stop mariadb
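If you want to confirm the result, brew services can also report the current state (this is generic brew behavior, not specific to mariadb):
# List brew-managed services and whether each one is started or stopped
brew services list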
It may be a little late, but I hope this helps you figure out your problem.
See https://stackoverflow.com/a/59938033/4579271 for details.
To fix, run:
cp /usr/local/bin/mysql.server /usr/local/bin/mysql.server.backup
sed -i "" "s/user='mysql'/user=\`whoami\`/g" /usr/local/bin/mysql.server
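For context, that sed invocation rewrites the hard-coded service user inside the mysql.server script, roughly like this (assuming the stock script, which sets user='mysql'):
# Before: mysql.server tries to manage the server as user 'mysql'
user='mysql'
# After: it uses your login user, which is who Homebrew's mysqld actually runs as
user=`whoami`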

How to configure this external service for kubernetes?

I have this situation shown in the drawing.
I would like to configure the Kubernetes pods so that they can talk to the external Docker container "mysql", but I don't know how to configure it, especially which IP address to tell them to connect to. I can't use "localhost" because that just redirects back to the calling pod, and I can't use 192.168.1.8 because the port is not exposed from there.
What is the DB Host IP in this case?
Thank you for any clues
+-------------------------------------------------------------------+
| My MacBook Pro Laptop (today's DHCP IP: 192.168.1.8)              |
|                                                                   |
|   +-----------------------+        +--------------------------+   |
|   | Docker container      |        | K8s Cluster              |   |
|   |  +-----------------+  |  ???   |                          |   |
|   |  | mysql           | <---------+-- K8s Pod: Docker        |   |
|   |  | listening on    |  |        |   container with         |   |
|   |  | port 3306       |  |  ???   |   "Foo Program"          |   |
|   |  +--------+--------+ <---------+-- K8s Pod: Docker        |   |
|   +-----------|-----------+        |   container with         |   |
|               |                    |   "Bar Program"          |   |
|               v                    +--------------------------+   |
|   +-----------------------+                                       |
|   | Local Hard Disk       |                                       |
|   | /Users/foo/data       |                                       |
|   +-----------------------+                                       |
+-------------------------------------------------------------------+
Note: because of the current limitations of the Kubernetes system available for macOS, I cannot persist the data to the local hard disk (to a location of my choosing that I wish to specify) through Kubernetes. I can do it with Docker, however, so this is the only configuration I have found that achieves the desired goal of persisting the database files beyond the lifetime of the containers/pods.
You can create an ExternalIP or ExternalName service with the DB host IP and call that service from the pod; see the sketch below.
Another option is to deploy the DB as a pod on the K8s cluster with a headless service.
Just out of curiosity, why are you deploying the DB as a container using Docker if you have a K8s cluster?
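A minimal sketch of the first option: a Service with no selector plus a manually created Endpoints object pointing at the external mysql container. The names and the IP are placeholders of mine; the IP must be whatever address the mysql container is actually reachable at from inside the cluster.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql    # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.1.8   # placeholder: the address mysql is reachable at
    ports:
      - port: 3306
EOF
Pods can then connect to external-mysql:3306 and the cluster DNS resolves it to that address.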
I do not have OS X to test it, but that statement, that there is no way to persist data on Kubernetes for OS X, seems not to be true. To do that you can just create a PersistentVolume, which will be a resource in the cluster, and then add a PersistentVolumeClaim:
A PV is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV.
PVC is a request for storage by a user. It is similar to a pod. Pods
consume node resources and PVCs consume PV resources. Pods can request
specific levels of resources (CPU and Memory). Claims can request
specific size and access modes (e.g., can be mounted once read/write
or many times read-only).
You can find the explanation here, and a step-by-step how-to (in that case configuring MySQL with WordPress) here.
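As a minimal sketch of that idea, a hostPath PersistentVolume pointing at a local directory plus a matching claim (the names, path, and size below are placeholders of mine):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/foo/data   # local directory the data should survive in
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
The mysql pod would then mount a volume backed by the mysql-pvc claim.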
About your setup: first try to follow the official documentation about running MySQL inside the cluster (assuming you are using minikube, but if not there shouldn't be many differences); if you do not succeed, we will continue. I have already started trying to connect from inside the cluster to a mysql container outside of it (my setup is Ubuntu 18.04 with minikube).
Also, you are right that you won't be able to access it on localhost, because Docker is actually using 172.17.x.x addresses (if I remember correctly), so one of the options would be building a new image and putting the host machine IP with the exposed port into it.
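For example, to check which subnet and gateway Docker's default bridge network is actually using (a generic check, not specific to this setup):
# Print the subnet and gateway of the default bridge network
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}} gateway={{.Gateway}}{{end}}'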

"Error: No valid host was found. There are not enough hosts available"+Newton

I have installed OpenStack (Newton release) on Ubuntu 16.04 (all in one). Every service is running well, but when I try to launch an instance, an error occurs:
Error: Failed to perform requested operation on instance "test1", the
instance has an error status: Please try again later [Error: No valid
host was found. There are not enough hosts available.].
When I enter:
[root@controller-hp ~(keystone_admin)]$ nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------+-------+---------+
| 1 | controller-hp | up | enabled |
+----+---------------------+-------+---------+
and the output of
[root@controller-hp ~(keystone_admin)]$ neutron ext-list
+---------------------------+-----------------------------------------------+
| alias | name |
+---------------------------+-----------------------------------------------+
| default-subnetpools | Default Subnetpools |
| network-ip-availability | Network IP Availability |
| network_availability_zone | Network Availability Zone |
| auto-allocated-topology | Auto Allocated Topology Services |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| metering | Neutron Metering |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| l3_agent_scheduler | L3 Agent Scheduler |
| tag | Tag support |
| external-net | Neutron external network |
| flavors | Neutron Service Flavors |
| fwaasrouterinsertion | Firewall Router insertion |
| net-mtu | Network MTU |
| availability_zone | Availability Zone |
| quotas | Quota management support |
| l3-ha | HA Router extension |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| address-scope | Address scope |
| extraroute | Neutron Extra Route |
| shared_pools | Shared pools for LBaaSv2 |
| subnet-service-types | Subnet service types |
| standard-attr-timestamp | Resource timestamps |
| fwaas | Firewall service |
| service-type | Neutron Service Type Management |
| lb_network_vip | Create loadbalancer with network_id |
| l3-flavors | Router Flavor Extension |
| port-security | Port Security |
| hm_max_retries_down | Add a fall threshold to health monitor |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| standard-attr-revisions | Resource revision numbers |
| lbaasv2 | LoadBalancing service v2 |
| pagination | Pagination support |
| sorting | Sorting support |
| lbaas_agent_schedulerv2 | Loadbalancer Agent Scheduler V2 |
| security-group | security-group |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| router_availability_zone | Router Availability Zone |
| lb-graph | Load Balancer Graph |
| rbac-policies | RBAC Policies |
| l7 | L7 capabilities for LBaaSv2 |
| standard-attr-description | standard-attr-description |
| router | Neutron L3 Router |
| allowed-address-pairs | Allowed Address Pairs |
| project-id | project_id field enabled |
| dvr | Distributed Virtual Router |
+---------------------------+-----------------------------------------------+
My nova.conf file is here:
https://paste.debian.net/1005278/
By using this guide, my network configuration is:
Network Overview
Name: physical01
ID: 212248e9-71ff-4015-8fec-54567f38e92f
Project ID: 43605ff65d2a40d19073e7268fd5d124
Status: Active
Admin State: UP
Shared: Yes
External Network: Yes
MTU: 1500
Provider Network:
  Network Type: flat
  Physical Network: physical01
  Segmentation ID: -
and:
[root@controller-hp ~(keystone_admin)]$ neutron net-list
+--------------------------------------+------------+---------------------------------------------------+
| id                                   | name       | subnets                                           |
+--------------------------------------+------------+---------------------------------------------------+
| 212248e9-71ff-4015-8fec-54567f38e92f | physical01 | ee667189-bb7c-4861-81a0-2735d01c2011 10.1.79.0/24 |
+--------------------------------------+------------+---------------------------------------------------+
I use a large flavor for the CirrOS 0.3.4 64-bit image, which shows that the problem is not related to resources.
This is the output of "tailf /var/log/neutron/neutron-*.log":
http://paste.openstack.org/show/648396/
These are my configured plugins:
ml2
Linux Bridge Agent
openvswitch
DHCP agent
L3 agent
I've read these topics, but they didn't solve my problem:
OpenStack Newton Installation(using packstack) Error
No valid host was found. There are not enough hosts available
...
Any help would be appreciated.
You may get clues from the following logs on compute nodes:
/var/log/nova/nova-scheduler.log
/var/log/neutron/neutron-*.log
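For example, a quick starting point for pulling failures out of those logs (the grep patterns here are just suggestions):
# Look for scheduling failures and recent neutron errors
grep -iE 'error|no valid host' /var/log/nova/nova-scheduler.log
grep -i error /var/log/neutron/neutron-*.log | tail -n 50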

How to restart ceilometer service

I changed the polling intervals in the /etc/ceilometer/pipeline.yaml file from 600 to 60 and can't make the service use the new values. I restarted everything related to ceilometer listed in the openstack-status output, but that did not work. Can somebody tell me the proper way to do it?
I am using OpenStack Liberty on Ubuntu 14.04 LTS.
root@OS1:~# openstack service list
+----------------------------------+------------+---------------+
| ID | Name | Type |
+----------------------------------+------------+---------------+
| 056fcccaad5c4991a8a0da199ed1d737 | cinderv2 | volumev2 |
| 483a0cd1ba79430690a8960ae3d40222 | glance | image |
| 5c704fc9253e4c15895589eb19fab2ac | keystone | identity |
| 92bfcfb417314e80a43e6e7d4d21f99b | nova | compute |
| a7a3809d73674d3da3fbe8030b47055a | horizon | dashboard |
| c21b5e3c9d68417cb11df60d72f9bb58 | heat | orchestration |
| c7030edb082346328a715b00098b974a | neutron | network |
| d331f5360e2b4d3a854e7f47107a9421 | ec2 | ec2 |
| f0a22f827bed43dbbc43822abfc3e3e0 | ceilometer | metering |
+----------------------------------+------------+---------------+
root@OS11:~# openstack-status
.
.
.
== Ceilometer services ==
ceilometer-api: active
ceilometer-agent-central: active
ceilometer-agent-compute: inactive (disabled on boot)
ceilometer-collector: active
ceilometer-alarm-notifier: active
ceilometer-alarm-evaluator: active
ceilometer-agent-notification: active
.
.
.
Well, you need to restart the ceilometer-agent-notification service, because this service is responsible for transforming the data into samples in the ceilometer database.
Thus, systemctl restart ceilometer-agent-notification.service will help, along with restarting other services.
Since the ceilometer-agent-compute service is disabled, you just need to restart the ceilometer-agent-central service on the node where you modified the config file:
sudo service ceilometer-agent-central restart
You might want to auto-reload pipelines after you modify them; for that, you can set refresh_pipeline_cfg=True and a suitable pipeline_polling_interval, such as 120 seconds, in /etc/ceilometer/ceilometer.conf.
Note: be careful when you enable auto-reload, and only save the pipeline config file once you are sure the content is right (otherwise you might lose one polling period of data).
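For reference, those two settings would look roughly like this in /etc/ceilometer/ceilometer.conf (placing them under [DEFAULT] is my assumption; check your deployment's config layout):
[DEFAULT]
# Re-read pipeline.yaml periodically instead of requiring a service restart
refresh_pipeline_cfg = True
# How often, in seconds, to check pipeline.yaml for changes
pipeline_polling_interval = 120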

Partition drive automatically/set OS-DCF:diskConfig to auto with nova

Rackspace Linux cloud servers now set OS-DCF:diskConfig to MANUAL when using nova. This means that the full drive isn't partitioned.
19:29:48 ~$ nova boot server01 --image 62df001e-87ee-407c-b042-6f4e13f5d7e1 --flavor performance2-60 --poll --key-name kylepub
+------------------------+---------------------------------------------+
| Property | Value |
+------------------------+---------------------------------------------+
| status | BUILD |
| updated | 2013-11-16T01:29:58Z |
| OS-EXT-STS:task_state | scheduling |
| key_name | kylepub |
| image | Ubuntu 13.04 (Raring Ringtail) (PVHVM beta) |
| hostId | |
| OS-EXT-STS:vm_state | building |
| flavor | 60 GB Performance |
| id | 9bd6aaac-bbdd-4644-821d-fb697fd48091 |
| user_id | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa |
| name | server01 |
| adminPass | XXXXXXXXXXXX |
| tenant_id | 864477 |
| created | 2013-11-16T01:29:58Z |
| OS-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| config_drive | |
| metadata | {} |
+------------------------+---------------------------------------------+
How do I set OS-DCF:diskConfig to auto so the full disk is partitioned automatically?
Make sure you have os-diskconfig-python-novaclient-ext installed. If it's not, pip install it! It should come as part of installing rackspace-novaclient.
$ pip install os-diskconfig-python-novaclient-ext
Next up, use the --disk-config=AUTO option:
$ nova boot server01 --disk-config=AUTO --image 62df001e-87ee-407c-b042-6f4e13f5d7e1 --flavor performance2-60
Note that this is an extension by Rackspace, so if this is for a separate OpenStack deployment your provider needs the server side extension as well.
Big note: if you do the partitioning yourself, you are able to have non-EXT3 file systems and multiple partitions, and it lets you manage the disk configuration.
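To confirm the setting took effect, you should be able to check the server's details after boot; something like this (same server name as above, and the expected value shown is an assumption):
$ nova show server01 | grep diskConfig
| OS-DCF:diskConfig | AUTO |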
