Python requests/urllib3 raises 'httplib.BadStatusLine' error if called many times? - python-requests

I have a Python program which uses python-requests. I want to be able to run this program many times sequentially. The code runs fine on its own, without any errors, but when I try to run it 100 times, it eventually raises this error:
ConnectionError: HTTPConnectionPool(host='192.168.100.1', port=80): Max retries exceeded with url: 'command' (Caused by <class 'httplib.BadStatusLine'>: '')
Many different commands are called, and it is not always the same one that fails. I have put a delay between the GET requests, so I don't think the server freezes up because it is being slammed with requests (although when I take out the delay, the error happens a lot more often).
Any ideas?! Thanks.

This is almost certainly either the server misbehaving or the connection being pre-emptively closed. Could you run it 99 times (assuming your 100 number is accurate) and on the 100th time edit the file to do:
import pdb
# just before the line with the call to requests:
pdb.set_trace()
And then follow the stack trace through the HTTPAdapter and into urllib3 and look at the response urllib3 gets?
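If the problem turns out to be the server dropping the connection (rather than a bug on the client side), one mitigation is to retry the request when a pooled connection comes back dead. A minimal sketch, assuming a requests version that accepts a urllib3 Retry object, and reusing the host and 'command' path from the error message in the question:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry idempotent requests a few times with exponential backoff;
# a BadStatusLine surfaces from urllib3 as a protocol error, which
# this policy will retry for GET requests.
retries = Retry(total=3, backoff_factor=0.5)
session.mount("http://", HTTPAdapter(max_retries=retries))

for _ in range(100):
    response = session.get("http://192.168.100.1/command", timeout=10)
    response.raise_for_status()

Whether retrying is appropriate depends on whether these 'command' endpoints are safe to repeat; if they are not idempotent, the server side needs fixing instead.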

Related

Resource not available error when issuing multiple write IO requests to a single file

I have created an application which uses io_uring and generates many write IO requests to a single file. As a result, I got a 'resource temporarily unavailable' error (OS error 11, EAGAIN). Once I limited the number of simultaneous requests to 1K, the error went away.
Could someone suggest how to detect the limit on simultaneous requests that I can issue using io_uring?
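A common pattern, independent of io_uring itself, is to cap the number of outstanding submissions and only submit more as completions arrive, rather than trying to discover the kernel's exact limit. A minimal sketch of that back-pressure idea, shown in Python with asyncio purely for illustration (MAX_IN_FLIGHT and do_write are hypothetical placeholders):

import asyncio

MAX_IN_FLIGHT = 1024  # hypothetical cap, mirroring the 1K limit mentioned above

async def do_write(buf):
    await asyncio.sleep(0)  # placeholder for the real asynchronous write

async def submit_write(sem, buf):
    # The semaphore guarantees no more than MAX_IN_FLIGHT writes are in flight,
    # so the backend is never asked to queue more than it can handle.
    async with sem:
        await do_write(buf)

async def main(buffers):
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    await asyncio.gather(*(submit_write(sem, b) for b in buffers))

asyncio.run(main([b"data"] * 10_000))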

What opens persistConn when running a Go server?

Overview
I have a Go echo HTTP server running with Go 1.13.
$ go version
go version go1.13.7 linux/amd64
I'm monitoring a number of different statistics about the server, including the number of goroutines. I periodically see brief spikes of thousands of goroutines, when high load shouldn't cause it to exceed maybe a few hundred. These spikes do not correlate to an increase in http requests as logged by the labstack echo middleware.
To better debug this situation, I added a periodic check in the program which sends me a pprof report on the goroutines if the number spikes.
The added goroutines surprised me, as when the server is in "normal" operating mode, I see 0 goroutines of the listed functions.
goroutine profile: total 1946
601 # 0x4435f0 0x4542e1 0x8f09dc 0x472c61
# 0x8f09db net/http.(*persistConn).readLoop+0xf0b /usr/local/go/src/net/http/transport.go:2027
601 # 0x4435f0 0x4542e1 0x8f2943 0x472c61
# 0x8f2942 net/http.(*persistConn).writeLoop+0x1c2 /usr/local/go/src/net/http/transport.go:2205
601 # 0x4435f0 0x4542e1 0x8f705a 0x472c61
# 0x8f7059 net/http.setRequestCancel.func3+0x129 /usr/local/go/src/net/http/client.go:321
What I'm struggling with, however, is where these are coming from, what they indicate, and at what point in an http request would I expect them.
To my untrained eye, it looks as if something is briefly attempting to open a connection and then immediately tries to close it.
But it would be good to have confirmation of this. In what part of an http request do readLoop, writeLoop, and setRequestCancel goroutines get started? What do these goroutines indicate?
Notes
A few things I've looked at:
I tried adding middleware to capture request frequencies per IP address as requests came in, and to report on those when the spikes happen. The total request number remains low, in the 30-40 range, even as the spike is happening. No IP address is anomalous.
I've considered executing something like lsof to find open connections but that seems like a tenuous approach at best, and relies on my understanding of what these goroutines mean.
I've tried to cross-correlate the timing of seeing this with other things on the network, but without understanding what could cause this, I can't make much sense of where the potential culprit may lie.
If the number of goroutines exceeds 8192, the program crashes with the error: race: limit on 8192 simultaneously alive goroutines is exceeded, dying. A search for this error leads me to this GitHub issue, which feels relevant because I am, in fact, using gorilla websockets in the program. However, the binary was compiled with -race, and no race condition is reported along with my error, which is entirely different from the aforementioned question.

Connection timed out error in JMeter test execution

When I run my JMeter scripts in GUI mode, a few of the samples sometimes get a Connection timed out error and no response, but if I run the same test after a few minutes I get responses for the same samples.
Can anybody please tell me the solution for this?
Currently I am checking the response time of each page; if I add timers, the reported page response time will increase, right?
There are at least 3 possible reasons:
Your server (meaning the web servers handling the requests and any components behind them) is not handling the load correctly and is slowing down; monitor the system and check.
You have exhausted your injector's ephemeral ports; you need to adjust your OS TCP settings to increase the port range.
You're running the load test in GUI mode with a View Results Tree listener in the test plan. This is bad practice, as GC will happen frequently, possibly triggering stop-the-world pauses that lead to this. As per the best practices, use non-GUI mode (for example, jmeter -n -t testplan.jmx -l results.jtl):
https://jmeter.apache.org/usermanual/best-practices.html
https://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/

WinUSB_AbortPipe Hangs

If I call WinUSB_AbortPipe() just as WinUSB_ReadPipe() starts, I get into a deadlock state. I ran the debug trace log that is provided here. Below are the last 5 lines of the log where the problem occurs. I think ReadPipe must have missed the signal, and AbortPipe is waiting for ReadPipe to complete.
[0]4E34.4B58::06/09/2015-15:42:12.528 - IOCTL_WINUSB_READ_PIPE
[0]4E34.4B58::06/09/2015-15:42:12.528 - PIPE129: (00000019) The read has been added to the raw io queue
[0]4E34.4B58::06/09/2015-15:42:12.528 - PIPE129: (00000019) The read is being handled
[2]4E34.4ECC::06/09/2015-15:42:12.529 - IOCTL_WINUSB_ABORT_PIPE
[2]4E34.4B58::06/09/2015-15:42:12.529 - PIPE129: (00000019) Reading 64 bytes from the device
In my design, I have the IN endpoints read asynchronously into buffers. I found that it is best to set the timeout of the read operation to infinite because the driver hates it when I cause STALLs to occur (ran into other issues with that). So I need to have the disconnect sequence cause the threads to wake up to realize that we need to close. Is there any way to safely do that?
My workaround for this is to instead call WinUsb_ResetPipe(). This causes WinUSB_ReadPipe() to unblock, and doesn't seem to lock up as WinUSB_AbortPipe() sometimes does. The only evidence that I have that this works is through successfully running tests over several hours, so I can't guarantee that this is a solution.

Running Sipp test case multiple times with logging

I am using SIPp as a client to test my SIP server. To test the stability of the server, I would like to run a specific test case 1000 times. To do this I use AutoIt (this is the usual automation software we use for other clients, and to maintain uniformity, we want to keep using AutoIt).
The thing is, I noticed that after around 100 runs, the response time from the server increases. In AutoIt, I run the test case, assume that the entire test case finishes within a minute, and then run the test again (next iteration).
Is there any way in AutoIt to know that no reply, or an unexpected reply, has come, and to store that?
For example, if the simple test case is: Register -> and the reply is 200 OK:
If the reply 200 OK came -> write to file: test case iteration number: Successful
If the reply 408 Timeout came -> write to file: test case iteration number: Timeout error
If no reply comes after a certain timeout period -> write to file: test case iteration number: No response error
Through AutoIt, the only way I can think of is reading the log file and checking, for a particular call ID, what response came, and so on.
I would like to know if SIPp already provides something for this.
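SIPp itself reports the outcome of a run through its exit code (0 when all calls succeeded, non-zero otherwise), and -trace_err writes unexpected or failed messages to an errors log, so no log parsing is strictly required. A minimal sketch of driving the iterations this way, shown in Python rather than AutoIt purely for illustration (the scenario file, target address, and timeout are hypothetical):

import subprocess

SCENARIO = "register.xml"          # hypothetical scenario file
TARGET = "192.168.1.10:5060"       # hypothetical SIP server address

with open("results.txt", "w") as log:
    for i in range(1, 1001):
        try:
            # -m 1 runs a single call; -trace_err logs unexpected messages
            proc = subprocess.run(
                ["sipp", "-sf", SCENARIO, "-m", "1", "-trace_err", TARGET],
                capture_output=True, timeout=120,
            )
        except subprocess.TimeoutExpired:
            log.write("Test case iteration %d: No response error\n" % i)
            continue
        if proc.returncode == 0:
            log.write("Test case iteration %d: Successful\n" % i)
        else:
            # Non-zero exit: at least one call failed or timed out; details
            # are in the *_errors.log file written by -trace_err.
            log.write("Test case iteration %d: Failed (exit code %d)\n" % (i, proc.returncode))

The same check translates to AutoIt by inspecting the exit code that RunWait returns for the sipp command.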
