Python green threads and requests module only process 10 requests at a time? - urllib3

I am using Python 2.7.5 and trying to use coroutine green threads (Python eventlet) together with the Python requests module to speed up my REST API requests.
I know the Python requests module uses a PoolManager object (from the urllib3 module) to maintain connections, and I set the pool manager's DEFAULT_POOLSIZE = 1000.
Then I monkey-patched my Python module in __init__.py and spawned 1000 green threads to send PATCH requests (REST API) to my device. My device (an F5 BIG-IP) can handle about 50 req/s.
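For reference, the monkey-patch-and-spawn pattern described above usually looks roughly like the following. This is only a minimal, hypothetical sketch: the URLs, the payload, and the patch_connection_limit helper are placeholders I made up, not the actual agent code.

import eventlet
eventlet.monkey_patch()  # must run before sockets/threads are imported anywhere

import eventlet.greenpool
import requests

session = requests.Session()

def patch_connection_limit(url, limit):
    # send one PATCH to a virtual-server REST endpoint (URL and payload are placeholders)
    return session.patch(url, json={"connectionLimit": limit}, verify=False)

# placeholder endpoints standing in for the real virtual servers
vs_urls = ["https://bigip.example.com/mgmt/tm/ltm/virtual/vs%d" % i for i in range(36)]

pool = eventlet.greenpool.GreenPool(1000)
for url in vs_urls:
    pool.spawn(patch_connection_limit, url, 1388)
pool.waitall()  # wait for every spawned green thread to finish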
This is the log where I see the problem:
2022-07-13 09:54:31.893 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 1
2022-07-13 09:54:31.893 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.894 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 2
2022-07-13 09:54:31.895 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.895 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 3
2022-07-13 09:54:31.896 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.896 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 4
2022-07-13 09:54:31.897 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.897 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 5
2022-07-13 09:54:31.898 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.898 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 6
2022-07-13 09:54:31.899 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.900 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 7
2022-07-13 09:54:31.900 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.901 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 8
2022-07-13 09:54:31.901 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.902 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 9
2022-07-13 09:54:31.902 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.903 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 10
2022-07-13 09:54:31.903 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.904 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 11
2022-07-13 09:54:31.904 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.905 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 12
2022-07-13 09:54:31.906 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.906 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 13
2022-07-13 09:54:31.907 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.907 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 14
2022-07-13 09:54:31.908 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.909 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 15
2022-07-13 09:54:31.909 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.910 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 16
2022-07-13 09:54:31.910 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.911 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 17
2022-07-13 09:54:31.911 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.912 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 18
2022-07-13 09:54:31.912 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.913 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 19
2022-07-13 09:54:31.913 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.914 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 20
2022-07-13 09:54:31.914 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.915 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 21
2022-07-13 09:54:31.915 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.916 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 22
2022-07-13 09:54:31.916 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.917 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 23
2022-07-13 09:54:31.917 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.918 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 24
2022-07-13 09:54:31.918 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.919 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 25
2022-07-13 09:54:31.919 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.920 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 26
2022-07-13 09:54:31.920 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.921 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 27
2022-07-13 09:54:31.921 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.922 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 28
2022-07-13 09:54:31.922 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.923 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 29
2022-07-13 09:54:31.923 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.924 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 30
2022-07-13 09:54:31.925 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.925 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 31
2022-07-13 09:54:31.926 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.927 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 32
2022-07-13 09:54:31.927 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.928 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 33
2022-07-13 09:54:31.928 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.929 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 34
2022-07-13 09:54:31.929 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.930 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 35
2022-07-13 09:54:31.930 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:31.931 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse running green thread is 36
2022-07-13 09:54:31.931 31706 INFO f5_openstack_agent.lbaasv2.drivers.bigip.vs_connection [req-f7db08a8-432a-413f-b4c9-f9eb90de9613 bcd6de7aaddc49e386a664d0a20efcff 8a2d7296ae9b4bd4a412eb3cb9aa680e - - -] time elapse waiting green thread is 0
2022-07-13 09:54:32.207 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.273908 sec
2022-07-13 09:54:32.209 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.276766 sec
2022-07-13 09:54:32.302 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.337186 sec
2022-07-13 09:54:32.303 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.340059 sec
2022-07-13 09:54:32.329 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.381034 sec
2022-07-13 09:54:32.331 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.384191 sec
2022-07-13 09:54:32.365 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.390816 sec
2022-07-13 09:54:32.367 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.394396 sec
2022-07-13 09:54:32.393 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.458491 sec
2022-07-13 09:54:32.396 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.461856 sec
2022-07-13 09:54:32.419 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.457652 sec
2022-07-13 09:54:32.421 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.460368 sec
2022-07-13 09:54:32.480 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.507334 sec
2022-07-13 09:54:32.481 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.510402 sec
2022-07-13 09:54:32.519 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.569053 sec
2022-07-13 09:54:32.521 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.571782 sec
2022-07-13 09:54:32.555 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.587162 sec
2022-07-13 09:54:32.558 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.590332 sec
2022-07-13 09:54:32.592 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:00.637019 sec
2022-07-13 09:54:32.595 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 0.640877 sec
2022-07-13 09:54:38.395 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.436821 sec
2022-07-13 09:54:38.396 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.439384 sec
2022-07-13 09:54:38.466 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.512065 sec
2022-07-13 09:54:38.468 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.515319 sec
2022-07-13 09:54:38.531 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.553081 sec
2022-07-13 09:54:38.533 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.556024 sec
2022-07-13 09:54:38.583 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.631171 sec
2022-07-13 09:54:38.585 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.634816 sec
2022-07-13 09:54:38.612 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.632594 sec
2022-07-13 09:54:38.614 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.635825 sec
2022-07-13 09:54:38.659 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.700147 sec
2022-07-13 09:54:38.661 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.703008 sec
2022-07-13 09:54:38.686 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.749487 sec
2022-07-13 09:54:38.688 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.753057 sec
2022-07-13 09:54:38.749 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.799416 sec
2022-07-13 09:54:38.750 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.801891 sec
2022-07-13 09:54:38.775 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.814294 sec
2022-07-13 09:54:38.776 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.817045 sec
2022-07-13 09:54:38.809 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:06.824674 sec
2022-07-13 09:54:38.811 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 6.827580 sec
2022-07-13 09:54:43.222 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:11.283620 sec
2022-07-13 09:54:43.224 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 11.287391 sec
2022-07-13 09:54:43.259 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:11.296450 sec
2022-07-13 09:54:43.261 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 11.299664 sec
2022-07-13 09:54:44.608 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:12.668820 sec
2022-07-13 09:54:44.610 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 12.672017 sec
2022-07-13 09:54:44.656 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:12.711311 sec
2022-07-13 09:54:44.658 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 12.714831 sec
2022-07-13 09:54:44.724 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:12.777329 sec
2022-07-13 09:54:44.725 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 12.780314 sec
2022-07-13 09:54:44.779 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:12.803660 sec
2022-07-13 09:54:44.781 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 12.806557 sec
2022-07-13 09:54:44.834 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:12.862764 sec
2022-07-13 09:54:44.836 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 12.865921 sec
2022-07-13 09:54:44.861 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:12.895232 sec
2022-07-13 09:54:44.864 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 12.899029 sec
2022-07-13 09:54:44.904 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:12.921001 sec
2022-07-13 09:54:44.906 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 12.924201 sec
2022-07-13 09:54:44.947 31706 INFO requests.sessions [-] send request {"connectionLimit": 1388} time elapse is 0:00:13.005864 sec
2022-07-13 09:54:44.949 31706 INFO f5.bigip.tm.ltm.virtual [-] method modify time elapse is 13.008865 sec
...
And the code is like this:
pool = eventlet.greenpool.GreenPool()
for vs in vss:
    try:
        # vs.modify(connectionLimit=limit)
        pool.spawn(vs.modify, connectionLimit=limit)
        nums = pool.running()
        waits = pool.waiting()
        LOG.info("time elapse running green thread is %d" % nums)
        LOG.info("time elapse waiting green thread is %d" % waits)
    except Exception as ex:
        LOG.error(
            "Fail to refresh virtual server %s"
            " connection limit %s." % (vs.name, limit)
        )
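One thing worth noting about the snippet above (my own observation, not something stated in the post): pool.spawn() returns immediately, so if the caller needs every PATCH to complete before moving on, a call like this would follow the loop:

pool.waitall()  # block until every spawned green thread has finished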
As the log above shows, there are 36 requests to be processed, and all 36 seem to be sent out at once.
I also timed the requests.sessions send method. It looks like every 10 requests are responded to as a batch, and each batch has about 6 seconds more latency than the previous one.
At first I thought I had not monkey-patched my module, but I checked the __init__.py file, and the eventlet monkey patch is there.
I am a bit confused: the sending seems non-blocking, yet the responses come back, blocked, in batches of 10. Also, the pool manager connection pool size is 1000, which is much larger than 10.
Why are the requests responded to in this way? Could anyone give me a hint? Is it possibly a problem with my device?
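Regarding the monkey-patch doubt above, a quick sanity check I would run (an assumption on my part, not part of the original post) is to ask eventlet which modules are actually patched:

import eventlet.patcher
print(eventlet.patcher.is_monkey_patched('socket'))  # True means sockets are green
print(eventlet.patcher.is_monkey_patched('thread'))  # True means threading is green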
------- Update: some more information -------
As I mentioned, I changed the pool manager's DEFAULT_POOLSIZE to 1000, which affects the value of pool_maxsize. Then I ran the test again with pool_maxsize changed to 1.
Although the log shows 2022-07-18 08:19:52.423 15412 WARNING requests.packages.urllib3.connectionpool [-] Connection pool is full, discarding connection: 100.123.73.244: Full, the process became faster.
It seems more efficient to rebuild connections to the device than to maintain a pool of connections and reuse them.
class HTTPAdapter(BaseAdapter):
    """The built-in HTTP Adapter for urllib3.

    Provides a general-case interface for Requests sessions to contact HTTP and
    HTTPS urls by implementing the Transport Adapter interface. This class will
    usually be created by the :class:`Session <Session>` class under the
    covers.

    :param pool_connections: The number of urllib3 connection pools to cache.
    :param pool_maxsize: The maximum number of connections to save in the pool.
    :param max_retries: The maximum number of retries each connection
        should attempt. Note, this applies only to failed DNS lookups, socket
        connections and connection timeouts, never to requests where data has
        made it to the server. By default, Requests does not retry failed
        connections. If you need granular control over the conditions under
        which we retry a request, import urllib3's ``Retry`` class and pass
        that instead.
    :param pool_block: Whether the connection pool should block for connections.

    Usage::

      >>> import requests
      >>> s = requests.Session()
      >>> a = requests.adapters.HTTPAdapter(max_retries=3)
      >>> s.mount('http://', a)
    """
    __attrs__ = ['max_retries', 'config', '_pool_connections', '_pool_maxsize',
                 '_pool_block']

    def __init__(self, pool_connections=DEFAULT_POOLSIZE,
                 pool_maxsize=DEFAULT_POOLSIZE, max_retries=DEFAULT_RETRIES,
                 pool_block=DEFAULT_POOLBLOCK):
        pool_connections = 1
        # I change this pool_maxsize value from 1000 to 1
        # pool_maxsize = 1000
        pool_maxsize = 1
        if max_retries == DEFAULT_RETRIES:
            self.max_retries = Retry(0, read=False)
        else:
            self.max_retries = Retry.from_int(max_retries)
        self.config = {}
        self.proxy_manager = {}

        super(HTTPAdapter, self).__init__()

        self._pool_connections = pool_connections
        self._pool_maxsize = pool_maxsize
        self._pool_block = pool_block

        self.init_poolmanager(pool_connections, pool_maxsize, block=pool_block)
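For comparison (my suggestion, not part of the original setup), the same pool settings can usually be applied per session without editing the library source, by mounting a custom HTTPAdapter:

import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
adapter = HTTPAdapter(pool_connections=1,  # one pool, since every request targets the same host
                      pool_maxsize=100,    # how many connections that pool may keep and reuse
                      pool_block=False)    # do not block when the pool is exhausted
session.mount('https://', adapter)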
I am really curious why it behaves like this. Is it a problem with the F5 REST server or with my code?

Related

Using RabbitMQ with Rebus keeps publishing the events

I am using Rebus and subscribing to RabbitMQ as explained in the documentation here:
https://github.com/rebus-org/Rebus/wiki/RabbitMQ-transport
I am using the Autofac container.
Registering handlers:
_builder.RegisterAssemblyTypes(assembly)
    .Where(t => t.IsClass && !t.IsAbstract && t.GetInterfaces().Any(IsRebusHandler))
    .As(GetImplementedHandlerInterfaces)
    .InstancePerDependency()
    .PropertiesAutowired();
Action<OptionsConfigurer> optionsConfigurer = o =>
{
    o.SetNumberOfWorkers(2);
    o.SetMaxParallelism(30);
};

_rebusConfig = (configurer, context) => configurer
    .Routing(r => r.TypeBased().MapAssemblyOf<MyMessage>(destination))
    .Transport(t => t.UseRabbitMq(connectionString, endPointName))
    .Options(optionsConfigurer);

_builder.RegisterRebus(_rebusConfig);
And in startup code:
var bus = container.Resolve<IBus>();
bus.Start();
Where there is a subscriber:
var events = container.Resolve<IEnumerable<IAppEvent>>();
foreach (var evt in events)
{
    bus.Subscribe(evt.GetType());
}
Handler code:
public async Task Handle(FundsTransfer_InitiateFundsTransferCommand message)
{
    FundsTransferCompletedEvent eventSuccess = new FundsTransferCompletedEvent
    {
        Desc = _packet.cmd.model.Desc
    };
    await bus.Publish(eventSuccess);
}
When I execute the endpoint, I get continuous messages as below.
It continuously publishes messages without executing the handler.
Can anyone help identify where it might be wrong?
2019-10-22 08:40:33.183 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.190 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.193 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "d729e3f8-ced9-47e5-8c79-96c8a99b7473" to 1 handlers took 42 ms
2019-10-22 08:40:33.197 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.209 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.210 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "e1dc6be7-bc18-4c21-9ed0-422ca19b2ad5" to 1 handlers took 35 ms
2019-10-22 08:40:33.212 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.222 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.223 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.224 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.228 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.230 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.233 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.239 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "8f1fd52c-dd87-4cdb-9341-c71e4ac0801b" to 1 handlers took 30 ms
2019-10-22 08:40:33.242 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "ee1e2ccb-eeec-4802-9f5e-7767efcab678" to 1 handlers took 19 ms
2019-10-22 08:40:33.246 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.253 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.253 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.255 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.256 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.260 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.263 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.267 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.271 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "e8a6c158-13e1-4ca6-a351-c1e6df37ac61" to 1 handlers took 17 ms
2019-10-22 08:40:33.273 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "95ac2bac-a947-4757-9b24-c693909f2224" to 1 handlers took 27 ms
2019-10-22 08:40:33.285 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.285 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.286 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.289 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.291 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.293 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.300 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.302 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.304 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "4af26345-73fd-44b1-9127-514769337c3c" to 1 handlers took 19 ms
2019-10-22 08:40:33.306 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "49825f15-c7a0-4dde-a8e4-fa249b8d38d5" to 1 handlers took 21 ms
2019-10-22 08:40:33.318 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.318 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.320 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.322 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.324 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.329 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.332 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.333 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.336 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "92803028-5c0f-47d0-bb52-5765377b03c0" to 1 handlers took 18 ms
2019-10-22 08:40:33.339 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "0fac55d6-653b-462f-bc9b-26ba86a8d069" to 1 handlers took 21 ms
2019-10-22 08:40:33.348 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.348 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.350 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.352 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.354 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.356 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.361 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.366 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.384 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "5c7e1870-7746-4053-bb1d-e4b6d1f519eb" to 1 handlers took 35 ms
2019-10-22 08:40:33.385 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "1ac9d320-b026-449d-9c05-4aead10250d7" to 1 handlers took 36 ms
2019-10-22 08:40:33.405 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.409 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.410 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.412 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.416 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.420 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.422 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.426 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "a7090778-3c3a-41ca-bd88-fa5819f7af29" to 1 handlers took 20 ms
2019-10-22 08:40:33.429 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.437 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.438 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "aa267f55-27d0-48da-918d-6cc6c90f0be6" to 1 handlers took 28 ms
2019-10-22 08:40:33.440 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.447 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.449 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.454 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.456 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.459 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.462 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "7cfd754c-4a3c-48a3-9d9d-6ffe625e1187" to 1 handlers took 25 ms
2019-10-22 08:40:33.465 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.473 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.474 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "c161e531-51e1-498c-88a1-98a63f9c846e" to 1 handlers took 27 ms
2019-10-22 08:40:33.476 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.483 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.483 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.485 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.486 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.493 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "914705dc-dd7c-4ed6-8b54-bc29a96a4d5d" to 1 handlers took 19 ms
2019-10-22 08:40:33.495 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.503 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.503 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.505 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.508 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "6aab45ca-f875-47ec-985c-76ab26a3b2a8" to 1 handlers took 24 ms
2019-10-22 08:40:33.510 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.518 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.518 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.524 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "c8dba472-27f3-46cb-8ee0-abfae49a222d" to 1 handlers took 21 ms
2019-10-22 08:40:33.526 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.533 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.534 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.537 +05:30 [Debug] Bus Context: FlexBusOmegaContext
2019-10-22 08:40:33.539 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.540 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 08:40:33.544 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "4f4d5bbe-a700-4562-a41e-09d2ce4e8999" to 1 handlers took 26 ms
2019-10-22 08:40:33.546 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 08:40:33.556 +05:30 [Debug] Publishing message via Bus
2019-10-22 08:40:33.557 +05:30 [Debug] Dispatching "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages" "40847a7a-b789-4202-bc90-32fef097f01d" to 1 handlers took 23 ms
And at my handler endpoint, I get this message even though the handler is getting executed and publishing the event (using the RabbitMQ transport):
2019-10-22 10:15:37.962 +05:30 [Debug] Bus Instance: RebusBus
2019-10-22 10:15:38.079 +05:30 [Debug] Sending NewStructure4.FundsTransferCompletedEvent -> "NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages#RebusTopics"
2019-10-22 10:15:40.146 +05:30 [Information] awaiting task
2019-10-22 10:15:44.268 +05:30 [Debug] Dispatching "NewStructure4.FundsTransfer_InitiateFundsTransferCommand, NewStructure4.Messages" "c38dea6e-e77f-4900-bdd0-ebaff4cedecc" to 1 handlers took 18803 ms
2019-10-22 10:15:47.112 +05:30 [Warning] Unhandled exception 1 (FINAL) while handling message with ID "e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e"
Rebus.Exceptions.MessageCouldNotBeDispatchedToAnyHandlersException: Message with ID e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e and type NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages could not be dispatched to any handlers (and will not be retried under the default fail-fast settings)
at Rebus.Pipeline.Receive.DispatchIncomingMessageStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Sagas.LoadSagaDataStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.ActivateHandlersStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.HandleRoutingSlipsStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.DeserializeIncomingMessageStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.HandleDeferredMessagesStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Retry.FailFast.FailFastStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Retry.Simple.SimpleRetryStrategyStep.DispatchWithTrackerIdentifier(Func`1 next, String identifierToTrackMessageBy, ITransactionContext transactionContext, String messageId, String secondLevelMessageId)
2019-10-22 10:15:47.225 +05:30 [Error] Moving message with ID "e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e" to error queue "error"
System.AggregateException: 1 unhandled exceptions (Message with ID e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e and type NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages could not be dispatched to any handlers (and will not be retried under the default fail-fast settings)) ---> Rebus.Exceptions.MessageCouldNotBeDispatchedToAnyHandlersException: Message with ID e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e and type NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages could not be dispatched to any handlers (and will not be retried under the default fail-fast settings)
at Rebus.Pipeline.Receive.DispatchIncomingMessageStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Sagas.LoadSagaDataStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.ActivateHandlersStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.HandleRoutingSlipsStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.DeserializeIncomingMessageStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.HandleDeferredMessagesStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Retry.FailFast.FailFastStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Retry.Simple.SimpleRetryStrategyStep.DispatchWithTrackerIdentifier(Func`1 next, String identifierToTrackMessageBy, ITransactionContext transactionContext, String messageId, String secondLevelMessageId)
--- End of inner exception stack trace ---
---> (Inner Exception #0) Rebus.Exceptions.MessageCouldNotBeDispatchedToAnyHandlersException: Message with ID e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e and type NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages could not be dispatched to any handlers (and will not be retried under the default fail-fast settings)
at Rebus.Pipeline.Receive.DispatchIncomingMessageStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Sagas.LoadSagaDataStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.ActivateHandlersStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.HandleRoutingSlipsStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.DeserializeIncomingMessageStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Pipeline.Receive.HandleDeferredMessagesStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Retry.FailFast.FailFastStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Retry.Simple.SimpleRetryStrategyStep.DispatchWithTrackerIdentifier(Func`1 next, String identifierToTrackMessageBy, ITransactionContext transactionContext, String messageId, String secondLevelMessageId)<---
Could you include the (gist of the) code that publishes the event? I suspect there's a logic problem hiding in there somehow. 🙂
The error message
2019-10-22 10:15:47.225 +05:30 [Error] Moving message with ID "e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e" to error queue "error"
System.AggregateException: 1 unhandled exceptions (Message with ID e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e and type NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages could not be dispatched to any handlers (and will not be retried under the default fail-fast settings)) ---> Rebus.Exceptions.MessageCouldNotBeDispatchedToAnyHandlersException: Message with ID e717b3a9-02b3-47f5-b519-2f8e1fdbaf2e and type NewStructure4.FundsTransferCompletedEvent, NewStructure4.Messages could not be dispatched to any handlers (and will not be retried under the default fail-fast settings)
at Rebus.Pipeline.Receive.DispatchIncomingMessageStep.Process(IncomingStepContext context, Func`1 next)
at Rebus.Sagas.LoadSagaDataStep.Process(IncomingStepContext context, Func`1 next)
(...)
indicates that Rebus could not handle a received message, in this case a FundsTransferCompletedEvent. You need to add a message handler to this bus instance, thus making it capable of handling this particular event.
If you're using the built-in handler activator, it could be something like
activator.Handle<FundsTransferCompletedEvent>(async message =>
{
    // handle message in here
});
or, if you're using an IoC container, it would look different, depending on which container you're using. With Castle Windsor, Rebus provides a registration extension, which makes it possible to do this:
container.RegisterHandler<FundsTransferCompletedEventHandler>();
where FundsTransferCompletedEventHandler would then be a message handler:
public class FundsTransferCompletedEventHandler : IHandleMessages<FundsTransferCompletedEvent>
{
    public async Task Handle(FundsTransferCompletedEvent message)
    {
        // handle message in here
    }
}
I hope that makes sense. 🙂
I'll update the answer as more details are added to the question.
Found the mistake. I was calling bus.Subscribe from a common library in all endpoints.
foreach (var evt in events)
{
    bus.Subscribe(evt.GetType());
}
This needs to be configured for the subscriber endpoint only; by default, the subscription is not ignored even when no event handlers are present at the endpoint.
To get the correct behaviour, I deleted the queues and recreated them in RabbitMQ.

How to stop the terminal from catching errors instead of the Symfony profiler

[Edited on 19th June] The question is not about the content of the logs, but about why the logs do not appear in my Symfony profiler.
My Symfony profiler doesn't display any error logs, but my PHP built-in server seems to catch them and display them in my terminal.
To run my built-in web server, I'm using the following command, with no extra parameters: php bin/console server:run
The output in my terminal is something like:
2019-06-19T07:23:21+00:00 [info] Matched route "overblog_graphql_endpoint".
2019-06-19T07:23:21+00:00 [debug] Checking for guard authentication credentials.
2019-06-19T07:23:21+00:00 [debug] Checking support on guard authenticator.
2019-06-19T07:23:21+00:00 [debug] Calling getCredentials() on guard authenticator.
2019-06-19T07:23:21+00:00 [info] Guard authentication failed.
2019-06-19T07:23:21+00:00 [debug] The "Lexik\Bundle\JWTAuthenticationBundle\Security\Guard\JWTTokenAuthenticator" authenticator set the response. Any later authenticator will not be called
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.request" to listener "Symfony\Component\HttpKernel\EventListener\DebugHandlersListener::configure".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.request" to listener "Symfony\Component\HttpKernel\EventListener\ValidateRequestListener::onKernelRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.request" to listener "Overblog\GraphQLBundle\EventListener\ClassLoaderListener::load".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.request" to listener "Symfony\Component\HttpKernel\EventListener\SessionListener::onKernelRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.request" to listener "Symfony\Component\HttpKernel\EventListener\RouterListener::onKernelRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.request" to listener "Symfony\Bundle\FrameworkBundle\EventListener\ResolveControllerNameSubscriber::onKernelRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.request" to listener "Symfony\Component\HttpKernel\EventListener\LocaleListener::onKernelRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.request" to listener "Symfony\Bundle\SecurityBundle\Debug\TraceableFirewallListener::onKernelRequest".
2019-06-19T07:23:21+00:00 [debug] Listener "Symfony\Bundle\SecurityBundle\Debug\TraceableFirewallListener::onKernelRequest" stopped propagation of the event "kernel.request".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\ResponseListener::onKernelResponse".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\DataCollector\RequestDataCollector::onKernelResponse".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.response" to listener "Sensio\Bundle\FrameworkExtraBundle\EventListener\HttpCacheListener::onKernelResponse".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.response" to listener "Symfony\Component\Security\Http\RememberMe\ResponseListener::onKernelResponse".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelResponse".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.response" to listener "Symfony\Bundle\WebProfilerBundle\EventListener\WebDebugToolbarListener::onKernelResponse".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\SessionListener::onKernelResponse".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\StreamedResponseListener::onKernelResponse".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\LocaleListener::onKernelFinishRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\SessionListener::onFinishRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.finish_request" to listener "Symfony\Component\HttpKernel\EventListener\RouterListener::onKernelFinishRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.finish_request" to listener "Symfony\Bundle\SecurityBundle\Debug\TraceableFirewallListener::onKernelFinishRequest".
2019-06-19T07:23:21+00:00 [debug] Notified event "kernel.terminate" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelTerminate".
[Wed Jun 19 09:23:21 2019] 127.0.0.1:60635 [401]: /
When I check my Symfony profiler, I get this empty screen:
I remember that a long time ago I ran a command in my terminal to change the verbosity, but unfortunately I can't remember which one it was so that I could revert it. I think this is the reason why the terminal is displaying the log info instead of my Symfony profiler.
Is there someone who can help me?
There is no error shown in your terminal output. It states that authentication failed, so the security component won't let your HTTP request pass. That's why you are getting an HTTP 401 Unauthorized response.
2019-06-19T07:23:21+00:00 [info] Guard authentication failed.
2019-06-19T07:23:21+00:00 [debug] The "Lexik\Bundle\JWTAuthenticationBundle\Security\Guard\JWTTokenAuthenticator" authenticator set the response. Any later authenticator will not be called
You have an issue in your application's security config which is not letting any request through. That's why the Symfony profiler is empty and why you cannot reach the website. This is not an exception being thrown, though; it's simply the required behavior given your particular config and the request you sent.
Also, any change to the verbosity of a command (the -v[vv] parameter) is not permanent; it is only relevant during the run of that particular command.
OK, I was just missing Monolog (composer require symfony/monolog-bundle).
After doing so, my terminal only shows this:
[Wed Jun 19 11:56:26 2019] 127.0.0.1:53057 [200]: /graphiql
[Wed Jun 19 11:56:27 2019] 127.0.0.1:53058 [200]: /
[Wed Jun 19 11:56:28 2019] 127.0.0.1:53059 [200]: /_wdt/ddb809
[Wed Jun 19 11:56:30 2019] 127.0.0.1:53062 [200]: /graphiql
[Wed Jun 19 11:56:30 2019] 127.0.0.1:53063 [200]: /
[Wed Jun 19 11:56:30 2019] 127.0.0.1:53065 [200]: /_wdt/4f2a47
[Wed Jun 19 11:56:33 2019] 127.0.0.1:53068 [200]: /
[Wed Jun 19 11:56:42 2019] 127.0.0.1:53074 [200]: /_profiler/ea688f
[Wed Jun 19 11:56:43 2019] 127.0.0.1:53075 [200]: /_profiler/ea688f?panel=logger
And my logs are appearing normally in my Symfony profiler.

My VM cannot be resized, but the flavor does exist

I cannot resize the VM, and I get the error below.
In the /var/log/nova-api.log:
2017-10-11 16:53:07.796 24060 INFO nova.osapi_compute.wsgi.server [-] 118.113.57.187 "GET / HTTP/1.1" status: 200 len: 502 time: 0.0013940
2017-10-11 16:53:08.129 24060 INFO nova.api.openstack.wsgi [req-24f954ef-4e99-41d4-9700-26fa7204c863 - - - - -] HTTP Exception Thrown: ***Cloud hosting type flavor: 2Core1GB40GB1M did not found***.
2017-10-11 16:53:08.131 24060 INFO nova.osapi_compute.wsgi.server [req-24f954ef-4e99-41d4-9700-26fa7204c863 - - - - -] 118.113.57.187 "GET /v2.1/99a50773b170406b8902227118bb72bf/flavors/flavor:%202Core1GB40GB1M HTTP/1.1" status: 404 len: 485 time: 0.2736869
2017-10-11 16:53:08.248 24060 INFO nova.osapi_compute.wsgi.server [req-b7d1d426-b110-4931-90aa-f9cceeddb187 - - - - -] 118.113.57.187 "GET /v2.1/99a50773b170406b8902227118bb72bf/flavors HTTP/1.1" status: 200 len: 1913 time: 0.0570610
2017-10-11 16:53:08.565 24060 INFO nova.osapi_compute.wsgi.server [req-8cbc33a5-1a78-4ba7-8869-cf01536f784b - - - - -] 118.113.57.187 "POST /v2.1/99a50773b170406b8902227118bb72bf/flavors HTTP/1.1" status: 200 len: 875 time: 0.2515521
2017-10-11 16:53:10.433 24059 INFO nova.api.openstack.wsgi [req-42faeebb-d3ad-4e09-90e8-8da64f688fb9 - - - - -] HTTP Exception Thrown: ***Can not find valid host,....***.
2017-10-11 16:53:10.435 24059 INFO nova.osapi_compute.wsgi.server [req-42faeebb-d3ad-4e09-90e8-8da64f688fb9 - - - - -] 118.113.57.187 "POST /v2.1/99a50773b170406b8902227118bb72bf/servers/f9bef431-0635-4c74-9af5-cf61ed4d3ae4/action HTTP/1.1" status: 400 len: 564 time: 1.6831121
No matter which flavor I choose, the VM cannot be resized, even though the flavor exists.
Because my OpenStack is an all-in-one server, I cannot migrate the VM (a resize is essentially a migration), so I added this line to the [DEFAULT] section of my nova.conf:
allow_resize_to_same_host=True
Then I restarted the Nova-related services, and it worked:
# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl restart openstack-nova-compute.service
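As a quick check afterwards, the resize can be retried from the CLI (a sketch using the python-novaclient commands; <server> and <flavor> are placeholders):
# nova resize <server> <flavor>
# nova resize-confirm <server>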
For the two-node configuration (controller and compute), I needed to include a couple more parameters, as explained here:
Update the nova.conf file on both the controller and the compute node with the following lines (a sketch of the resulting [DEFAULT] section is shown after the list):
allow_migrate_to_same_host = True
scheduler_default_filters = AllHostsFilter
allow_resize_to_same_host = True
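Put together, the relevant part of nova.conf on both nodes then looks roughly like this (a sketch; the file usually lives at /etc/nova/nova.conf):
[DEFAULT]
allow_migrate_to_same_host = True
scheduler_default_filters = AllHostsFilter
allow_resize_to_same_host = True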
Restart the following services on the controller node:
• nova-api
• nova-cert
• nova-consoleauth
• nova-scheduler
• nova-conductor
• nova-novncproxy
Restart the following service on the compute node:
• nova-compute

Unable to install WordPress on MAMP

I receive this error message when attempting to install WordPress:
mamp has quit because of an unknown error. Please check the mamp pro error log inside the logs directory for more information.
Also, when attempting to change the MySQL password under the MySQL tab, I receive this error message:
mamp could not update mysql password. please check if mysql is running, check your configuration and try again.
Here are my error logs:
----apache----
[Thu Aug 11 14:54:25 2016] [warn] pid file C:/MAMP/bin/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Thu Aug 11 14:54:25 2016] [notice] Digest: generating secret for digest authentication ...
[Thu Aug 11 14:54:25 2016] [notice] Digest: done
[Thu Aug 11 14:54:26 2016] [notice] Apache/2.2.31 (Win32) DAV/2 mod_ssl/2.2.31 OpenSSL/1.0.2e mod_fcgid/2.3.9 mod_wsgi/3.4 Python/2.7.6 PHP/7.0.6 mod_perl/2.0.8 Perl/v5.16.3 configured -- resuming normal operations
[Thu Aug 11 14:54:26 2016] [notice] Server built: May 6 2016 10:19:53
[Thu Aug 11 14:54:26 2016] [notice] Parent: Created child process 11620
[Thu Aug 11 14:54:27 2016] [notice] Digest: generating secret for digest authentication ...
[Thu Aug 11 14:54:27 2016] [notice] Digest: done
[Thu Aug 11 14:54:28 2016] [notice] Child 11620: Child process is running
[Thu Aug 11 14:54:28 2016] [notice] Child 11620: Acquired the start mutex.
[Thu Aug 11 14:54:28 2016] [notice] Child 11620: Starting 64 worker threads.
[Thu Aug 11 14:54:28 2016] [notice] Child 11620: Starting thread to listen on port 8888.
[Thu Aug 11 14:55:34 2016] [warn] pid file C:/MAMP/bin/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
[Thu Aug 11 14:55:34 2016] [notice] Digest: generating secret for digest authentication ...
[Thu Aug 11 14:55:34 2016] [notice] Digest: done
[Thu Aug 11 14:55:35 2016] [notice] Apache/2.2.31 (Win32) DAV/2 mod_ssl/2.2.31 OpenSSL/1.0.2e mod_fcgid/2.3.9 mod_wsgi/3.4 Python/2.7.6 PHP/7.0.6 mod_perl/2.0.8 Perl/v5.16.3 configured -- resuming normal operations
[Thu Aug 11 14:55:35 2016] [notice] Server built: May 6 2016 10:19:53
[Thu Aug 11 14:55:35 2016] [notice] Parent: Created child process 4176
[Thu Aug 11 14:55:36 2016] [notice] Digest: generating secret for digest authentication ...
[Thu Aug 11 14:55:36 2016] [notice] Digest: done
[Thu Aug 11 14:55:37 2016] [notice] Child 4176: Child process is running
[Thu Aug 11 14:55:37 2016] [notice] Child 4176: Acquired the start mutex.
[Thu Aug 11 14:55:37 2016] [notice] Child 4176: Starting 64 worker threads.
[Thu Aug 11 14:55:37 2016] [notice] Child 4176: Starting thread to listen on port 80.
--------
----mysql----
160811 14:54:28 [Note] Plugin 'FEDERATED' is disabled.
160811 14:54:28 InnoDB: The InnoDB memory heap is disabled
160811 14:54:28 InnoDB: Mutexes and rw_locks use Windows interlocked functions
160811 14:54:28 InnoDB: Compressed tables use zlib 1.2.3
160811 14:54:28 InnoDB: Initializing buffer pool, size = 128.0M
160811 14:54:28 InnoDB: Completed initialization of buffer pool
160811 14:54:28 InnoDB: highest supported file format is Barracuda.
160811 14:54:28 InnoDB: Waiting for the background threads to start
160811 14:54:29 InnoDB: 5.5.49 started; log sequence number 1595675
160811 14:54:29 [Note] Server hostname (bind-address): '0.0.0.0'; port: 8889
160811 14:54:29 [Note] - '0.0.0.0' resolves to '0.0.0.0';
160811 14:54:29 [Note] Server socket created on IP: '0.0.0.0'.
160811 14:54:29 [Note] Event Scheduler: Loaded 0 events
160811 14:54:29 [Note] C:\MAMP\bin\mysql\bin\mysqld.exe: ready for connections.
Version: '5.5.49-log' socket: '' port: 8889 MySQL Community Server (GPL)
160811 14:55:30 [Note] C:\MAMP\bin\mysql\bin\mysqld.exe: Normal shutdown
160811 14:55:30 [Note] Event Scheduler: Purging the queue. 0 events
160811 14:55:30 InnoDB: Starting shutdown...
160811 14:55:31 InnoDB: Shutdown completed; log sequence number 1595675
160811 14:55:31 [Note] C:\MAMP\bin\mysql\bin\mysqld.exe: Shutdown complete
160811 14:55:37 [Note] Plugin 'FEDERATED' is disabled.
160811 14:55:37 InnoDB: The InnoDB memory heap is disabled
160811 14:55:37 InnoDB: Mutexes and rw_locks use Windows interlocked functions
160811 14:55:37 InnoDB: Compressed tables use zlib 1.2.3
160811 14:55:37 InnoDB: Initializing buffer pool, size = 128.0M
160811 14:55:37 InnoDB: Completed initialization of buffer pool
160811 14:55:37 InnoDB: highest supported file format is Barracuda.
160811 14:55:37 InnoDB: Waiting for the background threads to start
160811 14:55:38 InnoDB: 5.5.49 started; log sequence number 1595675
160811 14:55:38 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
160811 14:55:38 [Note] - '0.0.0.0' resolves to '0.0.0.0';
160811 14:55:38 [Note] Server socket created on IP: '0.0.0.0'.
160811 14:55:38 [Note] Event Scheduler: Loaded 0 events
160811 14:55:38 [Note] C:\MAMP\bin\mysql\bin\mysqld.exe: ready for connections.
Version: '5.5.49-log' socket: '' port: 3306 MySQL Community Server (GPL)
160811 15:01:46 [Note] C:\MAMP\bin\mysql\bin\mysqld.exe: Normal shutdown
160811 15:01:46 [Note] Event Scheduler: Purging the queue. 0 events
160811 15:01:46 InnoDB: Starting shutdown...
160811 15:01:47 InnoDB: Shutdown completed; log sequence number 1595675
160811 15:01:47 [Note] C:\MAMP\bin\mysql\bin\mysqld.exe: Shutdown complete
--------
If you could provide me with a solution to this, it would be much appreciated.
Even if MAMP quits, sometimes MySQL keeps running in the background. Could you kill all the MySQL background processes and try again?
Quit MAMP
Open the terminal and type: killall -9 mysqld
Restart MAMP
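Note that the logs above point to a Windows MAMP install (C:\MAMP paths); on Windows the equivalent of killall would be something like:
taskkill /F /IM mysqld.exe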

Plone Switching to ZRS using plone.recipe.zeoserver on Plone 4.3.1

I'm working on setting up a Zope Replicated Storage (ZRS) based deployment. I currently have two servers (east and west):
west will be the primary
east will be the secondary
I haven't touched the west box yet. On the east box I've edited my buildout as follows (I had to pin plone.recipe.zeoserver to 1.2.6 because the ZRS features didn't exist before that version):
[zeoserver]
recipe = plone.recipe.zeoserver[zrs]
replicate-from = ${hosts:zeoserver-west}:${ports:zeoserver-zrs}
[versions]
plone.recipe.zeoserver = 1.2.6
After running
bin/buildout
When I try to start my cluster, the instances seem to hang. A debugging instance now gives me the following output:
2013-09-11 08:24:00 INFO ZServer HTTP server started at Wed Sep 11 08:24:00 2013
Hostname: localhost
Port: 7680
2013-09-11 08:24:02 INFO Products.PloneFormGen gpg_subprocess initialized, using /usr/bin/gpg
2013-09-11 08:24:02 INFO DeadlockDebugger Not activated, you must change ACTIVATED in custom.py
2013-09-11 08:24:02 INFO ZEO.ClientStorage zeostorage ClientStorage (pid=22802) created RW/normal for storage: '1'
2013-09-11 08:24:02 INFO ZEO.cache created temporary cache file '<fdopen>'
2013-09-11 08:24:02 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 7600)>
2013-09-11 08:24:02 INFO ZEO.zrpc.Connection(C) (127.0.0.1:7600) received handshake 'Z3101'
2013-09-11 08:24:02 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
2013-09-11 08:24:02 INFO ZEO.zrpc.Connection(C) (127.0.0.1:7600) received handshake 'Z3101'
2013-09-11 08:24:02 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 7600)>
2013-09-11 08:24:02 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
2013-09-11 08:24:02 INFO ZEO.zrpc.Connection(C) (127.0.0.1:7600) received handshake 'Z3101'
2013-09-11 08:24:02 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 7600)>
2013-09-11 08:24:02 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
2013-09-11 08:24:07 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 7600)>
2013-09-11 08:24:07 INFO ZEO.zrpc.Connection(C) (127.0.0.1:7600) received handshake 'Z3101'
2013-09-11 08:24:07 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
2013-09-11 08:24:07 INFO ZEO.zrpc.Connection(C) (127.0.0.1:7600) received handshake 'Z3101'
2013-09-11 08:24:07 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 7600)>
2013-09-11 08:24:07 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
2013-09-11 08:24:07 INFO ZEO.zrpc.Connection(C) (127.0.0.1:7600) received handshake 'Z3101'
2013-09-11 08:24:07 INFO ZEO.ClientStorage zeostorage Testing connection <ManagedClientConnection ('127.0.0.1', 7600)>
2013-09-11 08:24:07 INFO ZEO.ClientStorage zeostorage Server authentication protocol None
I've copied the Data.fs over so it's available on east.
I'm wondering if this has anything to do with the primary ZRS not being up yet.
This is my first test, and I just wanted to see the secondary working before setting up the primary.
If I remove the ZRS settings and rerun buildout, the cluster starts without issue.
Make sure that you also set the "east" clients read-only. That option was added to plone.recipe.zope2instance in version 4.2.12.
It's the "read-only" option in the definition of a basic ZEO storage.
Anything in the zeoserver log?
I assume that's not your full zeoserver configuration, correct? ZRS replication runs on an additional port alongside the zeoserver, so you still need to specify the zeoserver host and port.
Additionally, run the replicated server in read-only mode and start from an empty database -- you don't need to sync the database up front; ZRS will sync everything once it's running.
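In other words, the [zeoserver] part on the secondary would look something like this sketch (the host/port variables are placeholders in the style of your snippet, and the primary would use the recipe's replicate-to option on its extra ZRS port instead):
[zeoserver]
recipe = plone.recipe.zeoserver[zrs]
zeo-address = ${hosts:zeoserver-east}:${ports:zeoserver}
replicate-from = ${hosts:zeoserver-west}:${ports:zeoserver-zrs}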
