Siege ports getting reused - http

I am running multiple instances of siege, and siege is reusing ports; as a result, some of the requests are not going through. Is there a way for the different siege instances to use different port ranges?
HTTP/1.1 200 0.00 secs: 146 bytes ==>
HTTP/1.1 200 0.00 secs: 146 bytes ==>
HTTP/1.1 200 0.00 secs: 146 bytes ==>
HTTP/1.1 200 0.00 secs: 146 bytes ==>
HTTP/1.1 200 0.01 secs: 146 bytes ==>
HTTP/1.1 200 0.00 secs: 146 bytes ==>
HTTP/1.1 200 0.01 secs: 146 bytes ==>
[alert] socket: 671299328 select timed out: Connection timed out
[alert] socket: 788797184 select timed out: Connection timed out
[alert] socket: 721655552 select timed out: Connection timed out
[alert] socket: 738440960 select timed out: Connection timed out
HTTP/1.1 200 0.01 secs: 146 bytes ==> /
HTTP/1.1 200 0.01 secs: 146 bytes ==> /
[alert] socket: 822368000 select timed out: Connection timed out
HTTP/1.1 200 0.01 secs: 146 bytes ==> /
HTTP/1.1 200 0.01 secs: 146 bytes ==> /
HTTP/1.1 200 0.01 secs: 146 bytes ==> /

I see you are sending a lot of requests one after another; have you considered that you might be running into problems with KeepAlive?
On the server, sockets stay open a bit longer than the connection itself, so you can run out of ports quite quickly if KeepAlive is set to a high value.

You can set
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
in /etc/sysctl.conf and run sysctl -p to activate it.
Please give it a try. Hope this helps.
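A minimal sketch of persisting and applying those settings, assuming a Linux host where you can edit /etc/sysctl.conf (note that net.ipv4.tcp_tw_recycle has been removed in newer kernels, so that line may be rejected there):
# append the TIME_WAIT tuning to /etc/sysctl.conf and reload the settings
echo "net.ipv4.tcp_tw_reuse = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p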


Is it possible to have curl write its HTTP request to a file?

For testing HTTP protocols, I would like to use curl to make a POST (or other) request to an http server, but either have it write the request to a local file instead of the server socket, or better, simultaneously write the request to the server socket AND to a local file.
That is, the file should contain byte for byte what curl writes to the server.
Yes, it's possible, at least on the command line, with the --trace* options:
curl --trace-time --trace-ascii - -d "some=val&other=123" http://localhost:8000
Output:
20:11:26.176533 == Info: Rebuilt URL to: http://localhost:8000/
20:11:26.180738 == Info: Trying 127.0.0.1...
20:11:26.180775 == Info: TCP_NODELAY set
20:11:26.180903 == Info: Connected to localhost (127.0.0.1) port 8000 (#0)
20:11:26.180969 => Send header, 148 bytes (0x94)
0000: POST / HTTP/1.1
0011: Host: localhost:8000
0027: User-Agent: curl/7.60.0
0040: Accept: */*
004d: Content-Length: 18
0061: Content-Type: application/x-www-form-urlencoded
0092:
20:11:26.181019 => Send data, 18 bytes (0x12)
0000: some=val&other=123
20:11:26.181029 == Info: upload completely sent off: 18 out of 18 bytes
20:11:26.181069 <= Recv header, 22 bytes (0x16)
0000: HTTP/1.1 201 CREATED
20:11:26.181091 <= Recv header, 19 bytes (0x13)
0000: Connection: Close
20:11:26.181099 <= Recv header, 2 bytes (0x2)
0000:
20:11:26.181104 <= Recv data, 12 bytes (0xc)
0000: .2019-05-02.
2019-05-02
20:11:26.181147 == Info: Closing connection 0
Run a minimal web server with netcat to test
while true ; do { echo -e "HTTP/1.1 201 CREATED\r\nConnection: Close\r\n\r\n"; date --iso-8601 ; } | netcat -q 0 -l 8000 ;done
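If you want the trace in a local file rather than on stdout, --trace-ascii also accepts a filename instead of -. A sketch of the same request, where request.log is an arbitrary name:
curl --trace-time --trace-ascii request.log -d "some=val&other=123" http://localhost:8000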

Docker stops responding under load

I noticed that Docker stops responding under load. Here are the steps to reproduce the issue:
docker-machine create -d virtualbox web
docker-machine ip web
192.168.99.253
eval $(docker-machine env web)
docker run --name app -d -p 3000:3000 ragesh/hello-express
ab -n 100000 -c 100 http://192.168.99.253:3000/
Server Hostname: 192.168.99.253
Server Port: 3000
Document Path: /
Document Length: 207 bytes
Concurrency Level: 100
Time taken for tests: 145.726 seconds
Complete requests: 100000
Failed requests: 0
Total transferred: 38900000 bytes
HTML transferred: 20700000 bytes
Requests per second: 686.22 [#/sec] (mean)
Time per request: 145.726 [ms] (mean)
Time per request: 1.457 [ms] (mean, across all concurrent requests)
Transfer rate: 260.68 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 15
Processing: 32 145 14.1 140 237
Waiting: 32 145 14.1 140 237
Total: 36 146 14.1 140 237
Percentage of the requests served within a certain time (ms)
50% 140
66% 146
75% 151
80% 155
90% 165
95% 173
98% 185
99% 196
100% 237 (longest request)
Everything looks good, even with -n 100000.
docker-machine create -d virtualbox router
eval $(docker-machine env router)
vi nginx.conf
upstream web {
    server 192.168.99.253:3000;
}
server {
    listen 80;
    location / {
        proxy_pass http://web;
    }
}
docker run --name router -p 80:80 -v $(pwd)/nginx.conf:/etc/nginx/conf.d/default.conf:ro -d nginx
docker-machine ip router
192.168.99.252
ab -n 10000 -c 100 http://192.168.99.252:80/
Server Software: nginx/1.9.14
Server Hostname: 192.168.99.252
Server Port: 80
Document Path: /
Document Length: 207 bytes
Concurrency Level: 100
Time taken for tests: 32.631 seconds
Complete requests: 5957
Failed requests: 0
Total transferred: 2448327 bytes
HTML transferred: 1233099 bytes
Requests per second: 182.56 [#/sec] (mean)
Time per request: 547.773 [ms] (mean)
Time per request: 5.478 [ms] (mean, across all concurrent requests)
Transfer rate: 73.27 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 52.6 0 3048
Processing: 2 168 56.2 164 3002
Waiting: 2 168 56.2 163 3002
Total: 2 169 97.7 164 5032
Percentage of the requests served within a certain time (ms)
50% 164
66% 171
75% 176
80% 178
90% 185
95% 192
98% 199
99% 209
100% 5032 (longest request)
I expected the requests per second to drop, but I did not expect them to drop so dramatically. And when I ran ab several times, I got connection errors very often.
Did I do something wrong?

siege is ignoring query parameters

I am testing siege against a simple server which outputs the nth Fibonacci number. The server works great when using curl:
[mto@localhost ~]$ curl http://localhost:8000?q=8
21
Doing the same with siege, yields the following:
[mto@localhost ~]$ siege 'http://localhost:8000?q=8' -r 4 -c 1
** SIEGE 3.0.9
** Preparing 1 concurrent users for battle.
The server is now under siege...
HTTP/1.1 400 0.00 secs: 73 bytes ==> GET /
HTTP/1.1 400 0.00 secs: 73 bytes ==> GET /
HTTP/1.1 400 0.00 secs: 73 bytes ==> GET /
HTTP/1.1 400 0.00 secs: 73 bytes ==> GET /
done.
Transactions: 4 hits
Availability: 100.00 %
Elapsed time: 1.01 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 3.96 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 0
Longest transaction: 0.00
Shortest transaction: 0.00
FILE: /home/mto/siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.
As you can see, the server is returning 400. My web server, written with Tornado, outputs the following:
[W 150311 16:58:20 web:1404] 400 GET / (127.0.0.1): Missing argument q
[W 150311 16:58:20 web:1811] 400 GET / (127.0.0.1) 0.85ms
[W 150311 16:58:20 web:1404] 400 GET / (127.0.0.1): Missing argument q
[W 150311 16:58:20 web:1811] 400 GET / (127.0.0.1) 0.71ms
[W 150311 16:58:20 web:1404] 400 GET / (127.0.0.1): Missing argument q
[W 150311 16:58:20 web:1811] 400 GET / (127.0.0.1) 0.72ms
[W 150311 16:58:20 web:1404] 400 GET / (127.0.0.1): Missing argument q
[W 150311 16:58:20 web:1811] 400 GET / (127.0.0.1) 0.79ms
How do I pass the query parameters to siege? The Siege man page says the following:
...
You can pass parameters using GET much like you would in a web browser:
www.haha.com/form.jsp?first=homer&last=simpson
If you invoke the URL as a command line argument, you should probably place it in
quotes.
...
I have tried putting the URL in single quotes, double quotes, and no quotes. I have also written the URLs in a file and passed it to siege using -f, but no luck.
My environment:
SIEGE 3.0.9
GNOME Terminal 3.10.2
Fedora release 20 (Heisenbug)
Any ideas?
I am using SIEGE 4.0.4 and found that double quotes work, as described in the answer to this question:
https://stackoverflow.com/a/9311812/5824101
siege does not like the URL to be in the following form:
http://localhost:8000?q=8
To use query parameters, I have to use a URL with a path:
http://localhost:8000/fib?q=8
Then it works fine. I have not been able to find a workaround.
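Putting the two answers together, an invocation along these lines should work, assuming the server actually serves a path such as /fib:
siege "http://localhost:8000/fib?q=8" -r 4 -c 1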

Does Heroku support chunked HTTP POST data?

I have an application that I am developing and I would like to send undefined amounts of data to the server using a chunked HTTP/1.1 POST request.
All I can see for now is that nothing seems to be sent to the server after the initial headers:
% cat ~/file.mp3 | curl -T - -X POST http://foo.com/source -v
* About to connect() to foo.com port 80 (#0)
* Trying xx.yy.zz.tt...
* Connected to foo.com (xx.yy.zz.tt) port 80 (#0)
> POST /source HTTP/1.1
> User-Agent: curl/7.30.0
> Host: foo.com
> Accept: */*
> Transfer-Encoding: chunked
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Date: Tue, 14 Jan 2014 14:57:19 GMT
< X-Powered-By: Express
< Content-Length: 13
< Connection: keep-alive
<
* Connection #0 to host foo.com left intact
Thanks, brah!
In the application (node express), if I log the request's "data" events, I see nothing in the logs other than:
2014-01-14T14:57:19.977995+00:00 heroku[router]: at=info method=POST path=/source host=foo.com fwd="xx.yy.zz.tt" dyno=web.1 connect=6ms service=73ms status=200 bytes=13
However, locally, the same logging gives:
...
[request] Got 360 bytes of data
[request] Got 16372 bytes of data
[request] Got 16372 bytes of data
[request] Got 16372 bytes of data
[request] Got 48 bytes of data
[request] Got 15974 bytes of data
[request] Got 398 bytes of data
...
Is there anything that I could be missing?

Siege unknown responses

I'm trying to test how my server holds up under high load with the siege utility:
siege http://my.server.ru/ -d1 -r10 -c100
Siege outputs a lot of messages like this:
HTTP/1.1 200 0.46 secs: 10298 bytes ==> /
but sometimes there are error messages like this:
Error: socket: unable to connect sock.c:220: Connection timed out
or this:
warning: socket: -598608128 select timed out: Connection timed out
Here is the siege report after the test:
Transactions: 949 hits
Availability: 94.90 %
...
Successful transactions: 949
Failed transactions: 51
Longest transaction: 9.87
Shortest transaction: 0.37
In the nginx logs on my server, there are only 950 messages, all with code 200 and responses that look fine:
"GET / HTTP/1.1" 200 10311 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.68)"
Can anyone tell me what these messages mean:
Error: socket: unable to connect sock.c:220: Connection timed out
warning: socket: -598608128 select timed out: Connection timed out
and why in my nginx logs I only see responses with code 200?
It probably means your pipe is full and can't handle more connections. You can't make nginx or the nginx backends accept more connections if your pipe is full. Try testing against localhost: you will then be testing the stack rather than the stack plus the pipe. It will resemble real load less closely, but it will give you an idea of what you could handle with a bigger pipe.
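A localhost run along the lines suggested above could look like this (a sketch, assuming the same site is also served on the local machine):
siege http://localhost/ -d1 -r10 -c100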
