I'm trying to test my server on highload resistance with siege utility:
siege http://my.server.ru/ -d1 -r10 -c100
Siege outputs a lot of messages like this:
HTTP/1.1 200 0.46 secs: 10298 bytes ==> /
but sometimes there are error messages like this:
Error: socket: unable to connect sock.c:220: Connection timed out
or this:
warning: socket: -598608128 select timed out: Connection timed out
There is siege report after testing:
Transactions: 949 hits
Availability: 94.90 %
...
Successful transactions: 949
Failed transactions: 51
Longest transaction: 9.87
Shortest transaction: 0.37
In the nginx logs on my server there are only 950 messages, all with code 200 and responses that look fine:
"GET / HTTP/1.1" 200 10311 "-" "JoeDog/1.00 [en] (X11; I; Siege 2.68)"
Can anyone tell me what these errors mean:
Error: socket: unable to connect sock.c:220: Connection timed out
warning: socket: -598608128 select timed out: Connection timed out
and why in my nginx logs I only see responses with code 200?
It probably means your pipe is full and can't handle more connections. You can't make nginx or the nginx backends accept more connections if your pipe is full. Try testing against localhost instead: you will then be testing the stack rather than the stack and the pipe. It will resemble real load less, but it will give you an idea of what you can handle with a bigger pipe.
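For example, running siege on the server itself takes the network out of the picture (the options simply mirror the original invocation):
siege http://localhost/ -d1 -r10 -c100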
So, I have to retrieve the temperature of any one of the cities from http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How to make an HTTP GET request with Netcat?
I'm doing something like this.
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even heading in the right direction. I couldn't find any good tutorials on making an HTTP GET request with netcat, so I'm asking here.
Of course you could dig into the standards or search Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start a netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to this port with a real browser. Just type http://localhost:64738 into your browser and see what arrives.
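The netcat side will then print the raw request the browser sends, something like this (the exact headers vary by browser):
GET / HTTP/1.1
Host: localhost:64738
User-Agent: Mozilla/5.0 ...
Accept: text/html,...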
In your actual case the problem is that HTTP/1.1 doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: request header to tell the server to close the connection after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
Extension: after GET, write only the path part of the URL. The hostname you want to fetch data from belongs in a Host: header, as you can see in my examples. This is because multiple websites can run on the same webserver, so the browser needs to tell the server which site it wants the page from.
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
Then hit <enter> twice; the resulting blank line tells the remote HTTP server that the request is complete.
source: pentesterlabs
You don't even need to use/install netcat.
Create a TCP socket on an unused file descriptor (I use 88 here)
Write the request into it
Read the response from the fd
exec 88<>/dev/tcp/rssweather.com/80  # bash opens a TCP socket for the special path /dev/tcp/host/port
echo -e "GET /dir/Asia/India HTTP/1.1\r\nHost: www.rssweather.com\r\nConnection: close\r\n\r\n" >&88
sed 's/<[^>]*>/ /g' <&88  # crude tag stripping of the HTML response
On macOS, you need the -c flag, as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as a "Connection: close" header if you want to disable keep-alive.
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA-maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
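If you'd rather not install nmap, openssl s_client should also do the job (a sketch; -quiet suppresses the certificate output):
printf 'GET / HTTP/1.1\r\nHost: github.com\r\nConnection: close\r\n\r\n' | openssl s_client -quiet -connect github.com:443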
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.
I am testing siege against a simple server which outputs the nth Fibonacci number. The server works great when using curl:
[mto@localhost ~]$ curl http://localhost:8000?q=8
21
Doing the same with siege, yields the following:
[mto@localhost ~]$ siege 'http://localhost:8000?q=8' -r 4 -c 1
** SIEGE 3.0.9
** Preparing 1 concurrent users for battle.
The server is now under siege...
HTTP/1.1 400 0.00 secs: 73 bytes ==> GET /
HTTP/1.1 400 0.00 secs: 73 bytes ==> GET /
HTTP/1.1 400 0.00 secs: 73 bytes ==> GET /
HTTP/1.1 400 0.00 secs: 73 bytes ==> GET /
done.
Transactions: 4 hits
Availability: 100.00 %
Elapsed time: 1.01 secs
Data transferred: 0.00 MB
Response time: 0.00 secs
Transaction rate: 3.96 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 0.00
Successful transactions: 0
Failed transactions: 0
Longest transaction: 0.00
Shortest transaction: 0.00
FILE: /home/mto/siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.
As you can see, the server is returning 400. My web server, written with Tornado, outputs the following:
[W 150311 16:58:20 web:1404] 400 GET / (127.0.0.1): Missing argument q
[W 150311 16:58:20 web:1811] 400 GET / (127.0.0.1) 0.85ms
[W 150311 16:58:20 web:1404] 400 GET / (127.0.0.1): Missing argument q
[W 150311 16:58:20 web:1811] 400 GET / (127.0.0.1) 0.71ms
[W 150311 16:58:20 web:1404] 400 GET / (127.0.0.1): Missing argument q
[W 150311 16:58:20 web:1811] 400 GET / (127.0.0.1) 0.72ms
[W 150311 16:58:20 web:1404] 400 GET / (127.0.0.1): Missing argument q
[W 150311 16:58:20 web:1811] 400 GET / (127.0.0.1) 0.79ms
How do I pass the query parameters to siege? The Siege man page says the following:
...
You can pass parameters using GET much like you would in a web browser:
www.haha.com/form.jsp?first=homer&last=simpson
If you invoke the URL as a command line argument, you should probably place it in
quotes.
...
I have tried putting the URL in single quotes, double quotes, and no quotes. I have also written the URLs in a file and passed it to siege using -f, but no luck. My environment:
SIEGE 3.0.9
GNOME Terminal 3.10.2
Fedora release 20 (Heisenbug)
Any ideas?
I am using SIEGE 4.0.4 and found that double quotes work, per the answer to this question:
https://stackoverflow.com/a/9311812/5824101
Siege does not like the URL to be of the following form:
http://localhost:8000?q=8
To use query parameters, I have to use a URL with a path:
http://localhost:8000/fib?q=8
Then it works fine. I have not been able to find a workaround.
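So the fix for the invocation above is just to give the endpoint a path (assuming the server maps its handler to /fib):
siege 'http://localhost:8000/fib?q=8' -r 4 -c 1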
I am using nginx as a proxy for WebSocket SSL upgrades to an Asterisk backend.
However, at times my users just cannot connect to Asterisk, and on the Asterisk end I do not see any connection attempt.
So I looked at the nginx error log and found many errors like this:
[error] 1049#0: *28726 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 103.246.xx.xx, server: xxx.xxx.io, request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8088/ws", host: "yyy.xxx.io"
Any clue on how I can solve this?
Is this correct? http://0.0.0.0:8088/ws
In the Internet Protocol version 4 the address 0.0.0.0 is a non-routable
meta-address used to designate an invalid, unknown or non applicable target.
To give a special meaning to an otherwise invalid piece of data is
an application of in-band signaling.
Source: http://en.wikipedia.org/wiki/0.0.0.0
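If Asterisk's HTTP interface actually listens on the loopback interface, a minimal sketch of the proxy block might look like the following (the 127.0.0.1 address and the /ws path are assumptions to adapt to your setup):
location /ws {
    # point the upstream at a concrete, routable address
    proxy_pass http://127.0.0.1:8088/ws;
    # headers required for the WebSocket upgrade to pass through
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}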
I'm trying to host a Bottle application on NGINX using uWSGI.
Here's my nginx.conf
location /myapp/ {
include uwsgi_params;
uwsgi_param X-Real-IP $remote_addr;
uwsgi_param Host $http_host;
uwsgi_param UWSGI_SCRIPT myapp;
uwsgi_pass 127.0.0.1:8080;
}
I'm running uWSGI like this:
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py
I'm sending a POST request with Dev HTTP Client, which hangs indefinitely when I send the request to
http://localhost/myapp
The uWSGI server receives the request and prints:
[pid: 4683|app: 0|req: 1/1] 127.0.0.1 () {50 vars in 806 bytes} [Thu Oct 25 12:29:36 2012] POST /myapp => generated 737 bytes in 11 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
but the nginx error log shows:
2012/10/25 12:20:16 [error] 4364#0: *11 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /myApp/myapp/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "localhost"
What to do?
Make sure to consume your POST data in your application.
For example, if you have a Django/Python application:
def my_view(request):
    # ensure the POST body is read, even if you don't need it;
    # without this you can get: failed (104: Connection reset by peer)
    data = request.body
    return HttpResponse("Hello World")
Some details: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
You cannot post data from the client without reading it in your application. While this is not a problem for uWSGI itself, nginx will fail. You can 'fake' it using the --post-buffering option of uWSGI to automatically read data from the socket (if available), but you'd better 'fix' your app (even if I don't consider this a bug).
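For example, the original command with post buffering enabled (the 8192-byte buffer size is an arbitrary choice):
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py --post-buffering 8192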
This problem occurs when the body of a request is not consumed, since uwsgi cannot know whether it will still be needed at some point. So uwsgi will keep holding on to the data either until it is consumed or until nginx resets the connection (because the upstream timed out).
The author of uwsgi explains it here:
08:21 < unbit> plaes: does your DELETE request (not-response) have a body ?
08:40 < unbit> and do you read that body in your app ?
08:41 < unbit> from the nginx logs it looks like it has a body and you are not reading it in the app
08:43 < plaes> so DELETE request shouldn't have the body?
08:43 < unbit> no i mean if a request has a body you have to read/consume it
08:44 < unbit> otherwise the socket will be clobbered
So to fix this, make sure to always either read the whole request body, or not send a body if it is not necessary (e.g., for a DELETE).
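Since the question uses Bottle, here is a minimal sketch of the same fix there (the route path is an assumption; Bottle exposes the raw body as a file-like object):
from bottle import default_app, request, route

@route('/myapp', method='POST')
def myapp():
    # consume the request body even though it is not used;
    # otherwise nginx may log: Connection reset by peer
    _ = request.body.read()
    return "Hello World"

# uWSGI's --wsgi-file option expects a module-level WSGI callable
application = default_app()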
Don't use threads!
I had the same problem with the Global Interpreter Lock in Python under uWSGI.
When I don't use threads, there are no connection resets.
Example uWSGI config (server with 1 GB RAM):
[root@mail uwsgi]# cat myproj_config.yaml
uwsgi:
print: Myproject Configuration Started
socket: /var/tmp/myproject_uwsgi.sock
pythonpath: /sites/myproject/myproj
env: DJANGO_SETTINGS_MODULE=settings
module: wsgi
chdir: /sites/myproject/myproj
daemonize: /sites/myproject/log/uwsgi.log
max-requests: 4000
buffer-size: 32768
harakiri: 30
harakiri-verbose: true
reload-mercy: 8
vacuum: true
master: 1
post-buffering: 8192
processes: 4
no-orphans: 1
touch-reload: /sites/myproject/log/uwsgi
I know HTTP keep-alive is on by default in HTTP 1.1 but I want to find a way to confirm that it is actually working.
Does anyone know of a simple way to test this from a web browser (e.g., how to make sense of a Wireshark capture)? I know I need to look for multiple HTTP requests over the same TCP connection, but I don't know how to confirm that in Wireshark or any other way.
Thanks!
As Ron Garrity said on ServerFault, you can use Curl like this:
curl -Iv http://www.aptivate.org 2>&1 | grep -i 'connection #0'
And it outputs these two lines if keep-alive is working:
* Connection #0 to host www.aptivate.org left intact
* Closing connection #0
And if keep-alive is not working, then it just outputs this line:
* Closing connection #0
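For comparison, you can force the negative case by asking the server to close the connection (assuming the server honors the header):
curl -Iv -H 'Connection: close' http://www.aptivate.org 2>&1 | grep -i 'connection #0'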
If you're on Windows Vista or later, you can use Resource Monitor. The Network tab lists all open TCP connections and the processes that started them. Open a browser with one tab, browse to your page, and test.
First, try to capture the traffic to the target website in Wireshark and limit it to what you need with a filter like:
tcp port 80 and host targetwebsite.com
Then load the page in a browser or fetch it with any tool you have. If the target page refreshes itself or one of the values on it, leave it open until you have seen at least one refresh.
Now you have enough data and you can stop the capture in Wireshark.
You should see dozens of records whose protocol is TCP or HTTP. For this quick check you will not need the TCP records, so let's remove them with another filter: at the top of the window there is a "Filter" field; type http there, and Wireshark will hide all records except those with an HTTP protocol.
Now select a record and look at the next level of detail in the second pane, below the record list. To be sure you are looking at the right place: the first line there starts with "Frame XYZ" and the fourth line starts with "Transmission Control Protocol". Look for the port numbers after "Src Port:" and "Dst Port:". Depending on the record, one of these belongs to the webserver (typically 80) and the other is the port on your end.
Now check a couple of different GET records (to tell whether a record is a GET request, check the Info column). If the port number on your end is reused across several of them, all those requests were made through HTTP keep-alive.
Remember that most browsers will open multiple connections, even if the webserver supports keepalive. So, DO NOT conclude your evaluation by finding just one different port.
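As a shortcut, you can isolate a single connection and count the requests inside it, either via right-click > Follow > TCP Stream on any HTTP record, or with a display filter such as (stream numbers start at 0):
tcp.stream eq 0 && http
Several GET requests appearing within one stream confirm keep-alive on that connection.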
The most accurate way is to curl the same URL multiple times.
curl -v http://weibo.com -o /dev/null http://weibo.com -o /dev/null
If the output contains Re-using existing connection, then the HTTP keep-alive feature is working. For example,
* TCP_NODELAY set
* Connected to weibo.com (180.149.138.251) port 80 (#0)
> GET / HTTP/1.1
> Host: weibo.com
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< ...
< ...
<
{ [236 bytes data]
* Connection #0 to host weibo.com left intact
* Found bundle for host weibo.com: 0x56324121d9a0 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host weibo.com
* Connected to weibo.com (180.149.138.251) port 80 (#0)
> GET / HTTP/1.1
> Host: weibo.com
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 301 Moved Permanently
< ...
< ...
<
{ [236 bytes data]
* Connection #0 to host weibo.com left intact
Another quick way is to test with ab (Apache Bench). Some HTTP servers, such as uwsgi, might not return the Connection: keep-alive header even when keep-alive is enabled; in such cases ab does NOT send keep-alive requests. That means ab can only do a "positive" detection of HTTP keep-alive.
ab -c 5 -n 50 -k https://www.google.com/
If the result shows
...
Complete requests: 50
Failed requests: 0
Keep-Alive requests: 50 # Pay attention to this line
Total transferred:
...
Then the HTTP keep-alive is enabled.