How does grpc-python max_workers work when handling requests?

I was wondering how Python gRPC's max_workers works. If I set max_workers to 10:
grpc.server(futures.ThreadPoolExecutor(max_workers=10))
Does it mean that if I send 15 requests to the gRPC server, it will only handle 10 requests at the same time (assuming my CPU count supports that much concurrency), while the other 5 requests wait until the first batch of 10 has been handled?

Short answer: yes.
Long answer:
A request will wait if the number of requests currently being handled has reached max_workers.
If you do not set max_workers, the default depends on your CPU count, but since Python 3.8 it is capped at 32 (the default is min(32, os.cpu_count() + 4)).
You can still set a number larger than your CPU count.
https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor
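For illustration, a minimal sketch of a server capped at 10 concurrent handlers (the port and the commented-out service registration are placeholders, not from the question):
from concurrent import futures
import grpc

# At most 10 RPCs are handled concurrently; additional RPCs wait in the
# executor's queue until a worker thread becomes free.
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
# add_YourServiceServicer_to_server(YourServicer(), server)  # hypothetical generated stub
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()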

Related

Fiddler parallel HTTP request limitation

I am using Fiddler to test my computer's HTTP request performance, and
I want to send 200 parallel requests at exactly the same time (within a variance of 20 ms).
My computer has an 8-core CPU.
I found that I can only send 8 x 10 = 80 requests at the same time at most.
An example while sending 85 requests: https://upload.cc/i1/2021/03/17/VbUSmf.jpg
I'm quite sure it's a limitation of either Fiddler or the number of CPU cores,
because my friend's computer has 10 cores, and he can send 10 x 10 = 100 requests at exactly the same time.
How can I increase the maximum number of parallel requests sent at the same time?
~Greatly appreciate any help~

Does SNMP have a minimum timeout based on version

I know this is a very basic question, but I'm new to SNMP and was not aware of this. Is there a minimum timeout for SNMP based on the protocol version? If so, can you please specify the version and the timeout in seconds?
As @LexLi mentioned, you need to be more specific about which timeouts you are asking about. And he is right: there is no dependency between version and timeout.
The snmp commands such as snmpwalk/snmpget/snmpset have a total timeout of
((retries + 1) x timeout_between_retries) seconds, which is (5 + 1) x 1 = 6 seconds
by default. Both parameters can be changed either on the command line
or in snmp.conf.
Example:
snmpwalk -v2c -cpublic -r1 -t10 host OID
The maximum total timeout here is 20 seconds, because we send (1 + 1) = 2 queries with a 10-second timeout for each.
There are also agentXTimeout and agentXRetries, which define the timeout for AgentX requests between the master agent and a subagent. The defaults are the same, 5 retries and a 1-second timeout respectively, and they can likewise be changed on the command line or in snmpd.conf.
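The same (retries + 1) x timeout arithmetic applies when querying programmatically. A minimal sketch with pysnmp, assuming a v2c agent at a placeholder address and the standard sysDescr OID:
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

# timeout=10, retries=1 mirrors `snmpwalk -r1 -t10`: the worst case is
# (1 + 1) x 10 = 20 seconds before the request is given up on.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),  # SNMPv2c
    UdpTransportTarget(("192.0.2.1", 161), timeout=10, retries=1),
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
))
print(error_indication or var_binds)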

Cannot Create the Desired Throughput on the IIS Server

In short, I am trying to do a load test, but I cannot create the desired throughput on the IIS server (Windows Server 2016 Datacenter), even though there seems to be no bottleneck in terms of CPU, memory, disk, or network.
Here is my configuration:
IIS Server: 16 vCPU, 32GB memory
SQL Server: 4 vCPU, 8GB memory
Test Server (sending the requests): 8 vCPU, 16GB memory
In order to remove concurrency limits on the IIS server, I made the following changes:
<serverRuntime appConcurrentRequestLimit="1000000" />
<applicationPool
    maxConcurrentRequestsPerCPU="1000000"
    maxConcurrentThreadsPerCPU="0"
    requestQueueLimit="1000000" />
Default Application Pool Queue Length: 65000
<processModel minWorkerThreads="5000" />
I created a WPF application that generates the desired number of concurrent requests to the IIS server using HttpClient and deployed it on the test server. (I also raised the ServicePoint default connection limit to 1000000.) I tested with 5000 requests, all of which returned 200 OK.
Normally, a single request returns in 20 ms. Here are the results of the test as measured in the WPF application:
Total time starting from sending the first request through getting the last response: 9380ms
Average response time : 3919ms
Max. response time: 7243ms
Min. response time: 77ms
When I look at the performance counters on the test server, I see that the 5000 requests complete in about 3 seconds. Here is the graph I obtained from perfmon:
But when I look at the performance counters on the IIS server, I see that requests keep being received and executed over the course of 9 seconds, so the observed average throughput is about 400 requests per second. I also tried the test with 10000 requests, but the average throughput stays at around 400 req/sec.
Why doesn't ASP.NET finish receiving all the requests by the end of the first 3 seconds? How can I raise the throughput to any desired value so that I can conduct a proper load test?
After a lot of experimenting, I found that any value over 2000 for minWorkerThreads seems to be ignored; I verified this using the ThreadPool.GetMinThreads method. I also set maxWorkerThreads to 2100, as @StephenCleary suggested. With these values, the problem disappeared. The strange thing is that I have not seen such a limit on minWorkerThreads mentioned in any of the MS documentation.
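As an aside, a burst client like the WPF application can be sketched in Python with asyncio and aiohttp; this is not the original tool, just a minimal equivalent for reproducing the timing measurements (URL and request count are placeholders):
import asyncio
import time
import aiohttp

URL = "http://iis-server/api/endpoint"  # placeholder target
N = 5000  # number of concurrent requests

async def timed_get(session):
    start = time.perf_counter()
    async with session.get(URL) as resp:
        await resp.read()
    return time.perf_counter() - start

async def main():
    # limit=0 removes aiohttp's client-side connection cap, analogous to
    # raising the ServicePoint default connection limit in .NET.
    connector = aiohttp.TCPConnector(limit=0)
    async with aiohttp.ClientSession(connector=connector) as session:
        t0 = time.perf_counter()
        latencies = await asyncio.gather(*(timed_get(session) for _ in range(N)))
        total = time.perf_counter() - t0
    print(f"total: {total * 1000:.0f} ms")
    print(f"avg: {sum(latencies) / N * 1000:.0f} ms, "
          f"max: {max(latencies) * 1000:.0f} ms, "
          f"min: {min(latencies) * 1000:.0f} ms")

asyncio.run(main())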

Why Yandex Tank does not generate the required load

I have 2 similar servers: 16 vCPUs, 2.4 GHz Intel Xeon E5-2676v3, 64 GiB memory.
The first of them generates the load; the second processes the requests.
Config load.ini:
[phantom]
address=0.0.0.0 ;target's address (changed, of course)
port=443 ;target's port
rps_schedule=step(1000,10000,1000,15s) ;load scheme
ssl=1
header_http = 1.1
headers = [Host: api.somehost.io]
[Content-Type: application/json]
[Connection: close]
uris = /api/test
Expected:
Load will be generated step by step, starting from 1,000 RPS and adding 1,000 RPS every 15 seconds, up to 10,000 RPS.
What we actually got:
Expected 1000, got ~1000 (avg response time 7 ms).
Expected 2000, got ~2000 (avg response time 30 ms).
Expected 3000, got ~2700 (avg response time 250 ms).
Expected 4000, got ~2700 (avg response time 250 ms).
Beyond that, no matter how much the planned RPS was increased, the actual rate stayed at ~2700.
I have two hypotheses:
1. Yandex Tank "understands" that the server cannot process such a load and does not increase it.
2. The server cannot establish more connections.
The tested URL, /api/test, is served by a Rails application with nginx as a proxy.
To check the second hypothesis, I ran the test against static files. Results: https://overload.yandex.net/8175
The number of connections went well past 2700, to ~200,000.
But this number is still less than what the load.ini file requires - const(500000,15s).
Question: why does Yandex Tank not generate the required load? Or am I reading the results incorrectly?
With an average server response time of 250 ms, each phantom instance can send about 4 requests per second.
So with the default number of phantom instances (1000), the tank physically cannot send more than ~4000 RPS: it has no available instances, since all of them are busy sending and waiting for data.
You could try using more instances, e.g. by setting instances=10000 in the [phantom] section. This is mentioned in https://yandextank.readthedocs.io/en/latest/core_and_modules.html#basic-options
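The capacity arithmetic can be made explicit; a small sketch using the numbers from this answer:
# Each phantom instance is synchronous: it cannot send its next request
# until the previous response has arrived.
avg_response_s = 0.250  # observed average response time
instances = 1000        # phantom's default instance count

per_instance_rps = 1 / avg_response_s           # ~4 requests/second per instance
ceiling_rps = instances * per_instance_rps
print(f"throughput ceiling: {ceiling_rps:.0f} rps")  # ~4000

target_rps = 10_000
needed_instances = target_rps * avg_response_s  # instances needed at this latency
print(f"instances needed for {target_rps} rps: {needed_instances:.0f}")  # 2500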

JMeter closes the connection before my test finishes

I use the JMeter HTTP Sampler to test a sequence of HTTP requests and checked "Use KeepAlive". But in a few threads, JMeter closed the connection with a TCP FIN before all the requests were sent.
As the picture shows, 172.19.0.101 is JMeter and 172.19.0.111 is the server. The remaining requests can only be sent over a new connection, and they fall outside the session.
This can happen for one of two reasons:
First reason - timeout
The server's keep-alive timeout was reached (the default value is 60 seconds and is configurable; if not configured, Tomcat uses the connectionTimeout parameter value).
The default connection timeout of Apache httpd 1.3 and 2.0 is as
little as 15 seconds, and just 5 seconds for Apache httpd 2.2 and
above.
I observed that the request got its response after about 10 seconds (15 -> 29 seconds) before the FIN signal was sent to terminate the connection.
References:
https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#p-timeout
https://en.wikipedia.org/wiki/HTTP_persistent_connection
https://tomcat.apache.org/tomcat-7.0-doc/config/http.html
Second reason - the 'max' parameter
It may have reached the maximum number of requests that can be sent over a single persistent connection.
https://tools.ietf.org/id/draft-thomson-hybi-http-timeout-01.html#p-max
Set "Implementation" in the HTTP Samplers to HttpClient4 and try again.
From the JMeter HTTP Sampler documentation:
JMeter sets the Connection: keep-alive header. This does not work properly with the default HTTP implementation, as connection re-use is not under user-control. It does work with the Apache HttpComponents HttpClient implementations.
In the jmeter.properties file of JMeter 5.4.1, this parameter describes what happens between iterations:
# Reset HTTP State when starting a new Thread Group iteration which means:
# true means next iteration is associated to a new user
# false means next iteration is associated to same user
# true involves:
# - Closing opened connection
# - resetting SSL State
#httpclient.reset_state_on_thread_group_iteration=true
Set:
httpclient.reset_state_on_thread_group_iteration=false
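Outside JMeter, keep-alive reuse is easy to observe; here is a minimal sketch in Python with requests (the URL is a placeholder). A Session reuses the underlying TCP connection across calls, so a FIN in mid-sequence points at a server-side keep-alive timeout or 'max' limit rather than at the client:
import requests

# requests.Session pools and reuses TCP connections (HTTP keep-alive),
# similar to JMeter's "Use KeepAlive" with the HttpClient4 implementation.
session = requests.Session()

for i in range(10):
    resp = session.get("http://172.19.0.111/api/test")  # placeholder URL
    # If the server closed the connection (idle timeout or Keep-Alive max
    # reached), the pool transparently opens a new connection.
    print(i, resp.status_code, resp.headers.get("Connection"))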
