How long until MariaDB automatically closes a sleeping connection - mariadb

Update:
I forgot to set wait_timeout in my.cnf.
I have a persistent connection from my PHP application.
I have already set wait_timeout to 600:
MariaDB [(none)]> show variables like 'wait_timeout';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wait_timeout  | 600   |
+---------------+-------+
But the connection isn't killed automatically when the sleep time exceeds 600 seconds.
Why?
MariaDB [(none)]> show processlist;
+----+------+-----------+------+---------+------+----------+------------------+----------+
| Id | User | Host      | db   | Command | Time | State    | Info             | Progress |
+----+------+-----------+------+---------+------+----------+------------------+----------+
| 32 | lin  | localhost | NULL | Query   |    0 | starting | show processlist |    0.000 |
| 33 | lin  | localhost | test | Sleep   |  741 |          | NULL             |    0.000 |
+----+------+-----------+------+---------+------+----------+------------------+----------+

wait_timeout
https://mariadb.com/docs/reference/mdb/system-variables/wait_timeout/
The number of seconds the server waits for activity on a connection before closing it
Any kind of activity or query will keep the connection open. My guess is that you are running other queries during that time.
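One thing worth checking (just a sketch; 600 is the value from the question) is whether the value is set globally or only in your interactive session, since a plain SHOW VARIABLES reports the session value of the client you run it from, while an existing connection keeps the session value it picked up when it connected:
MariaDB [(none)]> show global variables like 'wait_timeout';
MariaDB [(none)]> show session variables like 'wait_timeout';
MariaDB [(none)]> set global wait_timeout = 600;  -- only affects connections opened after this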
max_statement_time
This is your per-query time limit:
Description: Maximum time in seconds that a query can execute before being aborted. This includes all queries, not just SELECT statements, but excludes statements in stored procedures.
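If the goal is instead to cap long-running statements rather than idle connections, a minimal sketch (30 seconds is an arbitrary example value) would be:
MariaDB [(none)]> set global max_statement_time = 30;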

Related

How to configure this external service for kubernetes?

I have this situation shown in the drawing.
I would like to configure the Kubernetes pods so that they can talk to the external Docker container "mysql", but I don't know how to configure it, especially regarding the IP address to tell them to connect to. I can't use "localhost" because that will just redirect back to the calling pod, and I can't use 192.168.1.8 because the port is not exposed from there.
What is the DB Host IP in this case?
Thank you for any clues
+----------------------------------------------------------------------------------------------------------+
| |
| My Macbook Pro Laptop |
| |
| Today's DHCP IP: 192.168.1.8 |
| +-------------------------+ |
| | | |
| | K8s Cluster | |
| | | |
| | | |
| | K8s Pod | |
| | +---------------+ | |
| | | Docker | | |
| | | Container | | |
| | | +-----------+ | | |
| ??? | | | | | | |
| <-----------+ Foo | | | |
| +-------------+ | | | Program | | | |
| | Docker | | | | | | | |
| +-----------------------+ | container | Listening | | +-----------+ | | |
| | Local Hard Disk | | +---------+ | Port | +---------------+ | |
| | +------------------+ | | | | | 3306 | | |
| | | /Users/foo/data <------------- mysql <------+ | | |
| | | | | | | | | | K8s Pod | |
| | +------------------+ | | +---------+ | | +---------------+ | |
| +-----------------------+ +-------------+ | | Docker | | |
| | | Container | | |
| | | +-----------+ | | |
| ??? | | | | | | |
| <-----------+ Bar | | | |
| | | | Program | | | |
| | | | | | | |
| | | +-----------+ | | |
| | +---------------+ | |
| | | |
| | | |
| +-------------------------+ |
| |
| |
+----------------------------------------------------------------------------------------------------------+
Note: Because of the current limitations of the Kubernetes system available for macOS, I cannot persist the data to the local hard disk (to a location of my choosing) through Kubernetes. I can, however, do it with Docker, so this is the only configuration I have found that achieves the goal of persisting the database files beyond the lifetime of the containers/pods.
You can create an ExternalIP or ExternalName Service with the DB host IP and call that Service from the pod (a rough sketch follows below).
Another option is to deploy the DB as a pod on the k8s cluster with a headless Service.
Just out of curiosity, why are you deploying the DB as a container with Docker if you have a k8s cluster?
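For the first option, a minimal sketch would be a selector-less Service plus a manual Endpoints object. The names here are made up, 3306 is the port from the diagram, and the IP is only a placeholder for whatever address the mysql container is actually reachable on from inside the cluster:
apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql      # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.10      # placeholder: replace with the address where mysql is reachable
    ports:
      - port: 3306
Pods could then use external-mysql:3306 as the DB host, and the cluster DNS resolves it to that address.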
I do not have OSX to test it, but that statement, that there is no way to persist data on Kubernetes for OSX, seems not to be true. To do that you can just create a PersistentVolume, which will be a resource in the cluster, and then add a PersistentVolumeClaim:
A PV is a resource in the cluster just like a node is a cluster
resource. PVs are volume plugins like Volumes, but have a lifecycle
independent of any individual pod that uses the PV.
PVC is a request for storage by a user. It is similar to a pod. Pods
consume node resources and PVCs consume PV resources. Pods can request
specific levels of resources (CPU and Memory). Claims can request
specific size and access modes (e.g., can be mounted once read/write
or many times read-only).
You can find the explanation here, and a how-to here with a step-by-step configuration of MySQL (and, in that example, WordPress).
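As a rough illustration of such a PV/PVC pair (a sketch only: the object names and the 5Gi size are made up, the hostPath points at the /Users/foo/data directory from the diagram, and hostPath behaviour differs between local Kubernetes distributions):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Users/foo/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
The pod's volume would then reference mysql-pv-claim instead of a Docker bind mount.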
As for your setup, first try to follow the official documentation about running MySQL inside the cluster (assuming you are using minikube, but if not there shouldn't be many differences); if you do not succeed, we will continue. I have already started trying to connect from inside the cluster to a mysql container outside it (my setup is Ubuntu 18.04 with minikube).
Also, you are right that you won't be able to access it on localhost, because Docker is actually using 172.17.x.x (if I remember correctly), so one of the options would be building a new image that uses the host machine's IP and exposed port.

"Error: No valid host was found. There are not enough hosts available"+Newton

I have installed OpenStack (Newton release) on Ubuntu 16.04 (all-in-one). Every service is running well, but when I try to launch an instance, an error occurs:
Error: Failed to perform requested operation on instance "test1", the
instance has an error status: Please try again later [Error: No valid
host was found. There are not enough hosts available.].
When I enter:
[root@controller-hp ~(keystone_admin)]$ nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | controller-hp       | up    | enabled |
+----+---------------------+-------+---------+
and the output of:
[root@controller-hp ~(keystone_admin)]$ neutron ext-list
+---------------------------+-----------------------------------------------+
| alias | name |
+---------------------------+-----------------------------------------------+
| default-subnetpools | Default Subnetpools |
| network-ip-availability | Network IP Availability |
| network_availability_zone | Network Availability Zone |
| auto-allocated-topology | Auto Allocated Topology Services |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| metering | Neutron Metering |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| l3_agent_scheduler | L3 Agent Scheduler |
| tag | Tag support |
| external-net | Neutron external network |
| flavors | Neutron Service Flavors |
| fwaasrouterinsertion | Firewall Router insertion |
| net-mtu | Network MTU |
| availability_zone | Availability Zone |
| quotas | Quota management support |
| l3-ha | HA Router extension |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| address-scope | Address scope |
| extraroute | Neutron Extra Route |
| shared_pools | Shared pools for LBaaSv2 |
| subnet-service-types | Subnet service types |
| standard-attr-timestamp | Resource timestamps |
| fwaas | Firewall service |
| service-type | Neutron Service Type Management |
| lb_network_vip | Create loadbalancer with network_id |
| l3-flavors | Router Flavor Extension |
| port-security | Port Security |
| hm_max_retries_down | Add a fall threshold to health monitor |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| standard-attr-revisions | Resource revision numbers |
| lbaasv2 | LoadBalancing service v2 |
| pagination | Pagination support |
| sorting | Sorting support |
| lbaas_agent_schedulerv2 | Loadbalancer Agent Scheduler V2 |
| security-group | security-group |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| router_availability_zone | Router Availability Zone |
| lb-graph | Load Balancer Graph |
| rbac-policies | RBAC Policies |
| l7 | L7 capabilities for LBaaSv2 |
| standard-attr-description | standard-attr-description |
| router | Neutron L3 Router |
| allowed-address-pairs | Allowed Address Pairs |
| project-id | project_id field enabled |
| dvr | Distributed Virtual Router |
+---------------------------+-----------------------------------------------+
My nova.conf file is here:
https://paste.debian.net/1005278/
By using this guide, my network configuration is:
Network Overview
Name: physical01
ID: 212248e9-71ff-4015-8fec-54567f38e92f
Project ID: 43605ff65d2a40d19073e7268fd5d124
Status: Active
Admin State: UP
Shared: Yes
External Network: Yes
MTU: 1500
Provider Network
Network Type: flat
Physical Network: physical01
Segmentation ID: -
and:
[root@controller-hp ~(keystone_admin)]$ neutron net-list
+--------------------------------+------------+--------------------------------+
| id                             | name       | subnets                        |
+--------------------------------+------------+--------------------------------+
| 212248e9-71ff-4015-8fec-       | physical01 | ee667189-bb7c-4861-81a0-2735d0 |
| 54567f38e92f                   |            | 1c2011 10.1.79.0/24            |
+--------------------------------+------------+--------------------------------+
I use a large flavor for the CirrOS 0.3.4 64-bit image, which shows that the problem is not related to resources.
This is the output of "tailf /var/log/neutron/neutron-*.log":
http://paste.openstack.org/show/648396/
These are my configured plugins:
ml2
Linux Bridge Agent
openvswitch
DHCP agent
L3 agent
I'd read these topics, but they didn't solve my problem:
OpenStack Newton Installation(using packstack) Error
No valid host was found. There are not enough hosts available
...
Any help would be appreciated.
You may get clues from the following logs on compute nodes:
/var/log/nova/nova-scheduler.log
/var/log/neutron/neutron-*.log
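For example (just a sketch using the paths above; adjust the search pattern as needed):
grep -i error /var/log/nova/nova-scheduler.log
grep -i error /var/log/neutron/neutron-*.log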

flywaydb baseline baselineVersion parameter is being ignored

I am trying to implement Flyway in our process. In our environment, each client has their own instance of the DB.
I have a bash script that loops through the clients to run migrate, so the command looks like:
flyway -url=jdbc:jtds:sqlserver://localhost:1434/main_client_$ID migrate
This all works when all the clients start from the baseline. But as we add new customers, their DB will reflect the newest code. Now we have older clients that started with V1 (and all the migration scripts to V2) and new clients with the latest DB at V2.
I thought I could do something like:
flyway baseline -url=jdbc:jtds:sqlserver://localhost:1434/main_client_3
--baselineVersion=2 --baselineDescription="Base 2 version"
but when I do it this way and then call info, I see something like:
+---------+-----------------------+---------------------+---------+
| Version | Description           | Installed on        | State   |
+---------+-----------------------+---------------------+---------+
| 1       | << Flyway Baseline >> | 2015-06-08 22:07:54 | Success |
| 1.1     | update                |                     | Pending |
| 1.2.0   | update                |                     | Pending |
| 1.2.1   | update                |                     | Pending |
+---------+-----------------------+---------------------+---------+
If I look in the DB, I see the version value of schema_version set to 1.
If, via the DB, I force the schema_version column value to 1.2.0, I see:
+---------+-----------------------+---------------------+---------+
| Version | Description           | Installed on        | State   |
+---------+-----------------------+---------------------+---------+
| 1       | Base version initial  |                     | <Baseln |
| 1.1     | update                |                     | <Baseln |
| 1.2.0   | << Flyway Baseline >> | 2015-06-08 22:07:54 | Success |
| 1.2.1   | update                |                     | Pending |
+---------+-----------------------+---------------------+---------+
This is what I want, but I cannot figure out how to set the value via the baseline command.
Thanks for any help.
All parameters should be passed in with a single -, not --.
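Applied to the command from the question, that would look something like this (a sketch; only the dashes change, the URL and values are the ones above):
flyway -url=jdbc:jtds:sqlserver://localhost:1434/main_client_3 -baselineVersion=2 -baselineDescription="Base 2 version" baseline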

Robotframework threads

Is there any way to start two actions in Robot Framework? I'm trying to run a process that will run continuously and then perform other actions without stopping the first process. The problem is that RF waits for the first process to finish before proceeding with the other actions; well, the problem is that the first process is not going to stop. Any advice?
Thanks,
Stell
Yes, you can do this with the Process library. It lets you run programs in the background.
In my open source project rfhub, the acceptance test suite has a suite setup that starts the hub in the background.
Here's the keyword that starts the hub:
*** Keywords ***
| Start rfhub
| | [Arguments] | ${PORT}
| | [Documentation]
| | ... | Starts rfhub on the port given in the variable ${PORT}
| | ... | As a side effect this creates a suite variable named ${rfhub process},
| | ... | which is used by the 'Stop rfhub' keyword.
| |
| | ${rfhub process}= | Start process | python | -m | rfhub | --port | ${PORT}
| | Set suite variable | ${rfhub process}
| | Wait until keyword succeeds | 20 seconds | 1 second
| | ... | Verify URL is reachable | /ping
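For completeness, a matching teardown keyword could look roughly like this (a sketch; the actual 'Stop rfhub' keyword in rfhub may differ), using the Process library's Terminate Process on the suite variable created above:
| Stop rfhub
| | [Documentation]
| | ... | Stops the rfhub process started by 'Start rfhub'.
| |
| | Terminate process | ${rfhub process}
Both keywords assume the Process library is imported in the suite's *** Settings *** section.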

Partition drive automatically/set OS-DCF:diskConfig to auto with nova

Rackspace Linux cloud servers now set OS-DCF:diskConfig to MANUAL when using nova. This means that the full drive isn't partitioned.
19:29:48 ~$ nova boot server01 --image 62df001e-87ee-407c-b042-6f4e13f5d7e1 --flavor performance2-60 --poll --key-name kylepub
+------------------------+---------------------------------------------+
| Property | Value |
+------------------------+---------------------------------------------+
| status | BUILD |
| updated | 2013-11-16T01:29:58Z |
| OS-EXT-STS:task_state | scheduling |
| key_name | kylepub |
| image | Ubuntu 13.04 (Raring Ringtail) (PVHVM beta) |
| hostId | |
| OS-EXT-STS:vm_state | building |
| flavor | 60 GB Performance |
| id | 9bd6aaac-bbdd-4644-821d-fb697fd48091 |
| user_id | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa |
| name | server01 |
| adminPass | XXXXXXXXXXXX |
| tenant_id | 864477 |
| created | 2013-11-16T01:29:58Z |
| OS-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| config_drive | |
| metadata | {} |
+------------------------+---------------------------------------------+
How do I set OS-DCF:diskConfig to auto so the full disk is partitioned automatically?
Make sure you have os-diskconfig-python-novaclient-ext installed. If it's not, pip install it! It should come as part of installing rackspace-novaclient.
$ pip install os-diskconfig-python-novaclient-ext
Next up, use the --disk-config=AUTO option:
$ nova boot server01 --disk-config=AUTO --image 62df001e-87ee-407c-b042-6f4e13f5d7e1 --flavor performance2-60
Note that this is an extension by Rackspace, so if this is for a separate OpenStack deployment, your provider needs the server-side extension as well.
Big note: if you do the partitioning yourself, you are able to have non-EXT3 file systems and multiple partitions, and you get to manage the disk configuration yourself.
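Once the server is built, something like this (a sketch) should confirm the setting took effect:
$ nova show server01 | grep diskConfig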
