I'm trying to benchmark a Symfony application, which means I also need to benchmark the pages that are restricted to logged-in users.
I want to benchmark the application with the apachebench tool, so I wrote a small shell script that logs in with curl, fetches the PHPSESSID returned by that request, and sets it as a cookie in the apachebench command.
Here is what the shell script looks like:
#!/bin/bash
COOKIE_JAR="/var/www/apachebench/test.jar"
curl -c "$COOKIE_JAR" --data "_email=admin%40dummy.at&_password=test&_target_path=%2Fbackend" http://symfony.local/login
PHPSESSID=$(grep PHPSESSID "$COOKIE_JAR" | cut -f 7)
ab -n 10 -p /var/www/apachebench/albumpostfile.txt -T application/x-www-form-urlencoded -C PHPSESSID=$PHPSESSID -k http://symfony.local/album/add
The apachebench command should post a form whose data are then stored in the database. However, it looks like I'm not getting logged in, because no data are stored. I tried the command before with a PHPSESSID copied from my browser, and it worked perfectly fine. I have also already disabled the CSRF protection globally.
I also checked that the PHPSESSID returned by curl is correctly passed into the ab command, and it is.
I have no clue what I'm doing wrong, since posting the exact same data to the login page through the Chrome extension Postman works there.
The cookie-jar file from the curl request looks as follows:
# Netscape HTTP Cookie File
# http://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
symfony.local FALSE / FALSE 0 PHPSESSID t2glc67hlf6lrlik2ieg9r7rv7
Thanks in advance.
EDIT:
This is the output of apachebench when I add -v 3 to the command:
WARNING: Response code not 2xx (302)
LOG: header received:
HTTP/1.0 302 Found
Date: Fri, 22 May 2015 12:03:32 GMT
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.9
Cache-Control: private, must-revalidate
Location: http://symfony.local/login
pragma: no-cache
expires: -1
Connection: close
Content-Type: text/html; charset=UTF-8
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta http-equiv="refresh" content="1;url=http://symfony.local/login" />
<title>Redirecting to http://symfony.local/login</title>
</head>
<body>
Redirecting to http://symfony.local/login.
</body>
</html>
WARNING: Response code not 2xx (302)
LOG: header received:
HTTP/1.0 302 Found
Date: Fri, 22 May 2015 12:03:32 GMT
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.9
Cache-Control: private, must-revalidate
Location: http://symfony.local/login
pragma: no-cache
expires: -1
Connection: close
Content-Type: text/html; charset=UTF-8
EDIT2:
This is my new shell script; I altered it so that the curl command sends a PHPSESSID cookie with its request. Below you can see the output from both versions. The second one appears to be working, since the "Redirecting to" part now shows the correct URL. But this time the apachebench command doesn't do anything at all; it just gets stuck.
#!/bin/bash
COOKIE_JAR="/var/www/apachebench/test.jar"
#curl -c $COOKIE_JAR -v -d "_email=admin%40dummy.at&_password=test&_target_path=%2Fbackend" -b "PHPSESSID=1hrfrnud407n5j42oki13655g7" http://symfony.local/login
curl -c "$COOKIE_JAR" -v -d "_email=admin%40dummy.at&_password=test&_target_path=%2Fbackend" http://symfony.local/login
PHPSESSID=$(grep PHPSESSID "$COOKIE_JAR" | cut -f 7)
ab -n 10 -p /var/www/apachebench/albumpostfile.txt -T application/x-www-form-urlencoded -C PHPSESSID=$PHPSESSID http://symfony.local/album/add
OLD-CURL-OUTPUT:
* Hostname was NOT found in DNS cache
* Trying 127.0.1.1...
* Connected to symfony.local (127.0.1.1) port 80 (#0)
> POST /login HTTP/1.1
> User-Agent: curl/7.35.0
> Host: symfony.local
> Accept: */*
> Content-Length: 62
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 62 out of 62 bytes
< HTTP/1.1 302 Found
< Date: Fri, 22 May 2015 12:39:05 GMT
* Server Apache/2.4.7 (Ubuntu) is not blacklisted
< Server: Apache/2.4.7 (Ubuntu)
< X-Powered-By: PHP/5.5.9-1ubuntu4.9
* Added cookie PHPSESSID="l2pfvtum211bd8tnpp1i0vpcj1" for domain symfony.local, path /, expire 0
< Set-Cookie: PHPSESSID=l2pfvtum211bd8tnpp1i0vpcj1; path=/
< Cache-Control: no-cache
< Location: http://symfony.local/login
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=UTF-8
<
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta http-equiv="refresh" content="1;url=http://symfony.local/login" />
<title>Redirecting to http://symfony.local/login</title>
</head>
<body>
Redirecting to http://symfony.local/login.
</body>
* Connection #0 to host symfony.local left intact
</html>
NEW-CURL-OUTPUT:
* Hostname was NOT found in DNS cache
* Trying 127.0.1.1...
* Connected to symfony.local (127.0.1.1) port 80 (#0)
> POST /login HTTP/1.1
> User-Agent: curl/7.35.0
> Host: symfony.local
> Accept: */*
> Cookie: PHPSESSID=1hrfrnud407n5j42oki13655g7
> Content-Length: 62
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 62 out of 62 bytes
< HTTP/1.1 302 Found
< Date: Fri, 22 May 2015 12:40:07 GMT
* Server Apache/2.4.7 (Ubuntu) is not blacklisted
< Server: Apache/2.4.7 (Ubuntu)
< X-Powered-By: PHP/5.5.9-1ubuntu4.9
* Added cookie PHPSESSID="3ehl5ldkbd4ngl2er663899km1" for domain symfony.local, path /, expire 0
< Set-Cookie: PHPSESSID=3ehl5ldkbd4ngl2er663899km1; path=/
< Cache-Control: no-cache
< Location: http://symfony.local/backend
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=UTF-8
<
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<meta http-equiv="refresh" content="1;url=http://symfony.local/backend" />
<title>Redirecting to http://symfony.local/backend</title>
</head>
<body>
Redirecting to http://symfony.local/backend.
</body>
* Connection #0 to host symfony.local left intact
</html>
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking symfony.local (be patient)...
The last line of the output is where the ab command gets stuck.
I figured it out and it is working now.
The second command was perfectly fine, so you really do need to pass a PHPSESSID cookie with the login request when benchmarking Symfony.
The reason the apachebench command hung wasn't related to the shell script, but to a missing { in the code that I accidentally deleted while trying to debug it. Looking into the apache2 error.log file let me figure that out.
So if anyone else has the same problem: add a PHPSESSID cookie to your curl login command, the login will work, and you can benchmark pages that require a login.
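For completeness, here is roughly what the working script looks like now. It's a sketch based on my setup above, with one assumption: that the login page hands out a session cookie on a plain GET. If yours doesn't, hard-code a dummy PHPSESSID into the first request, as I did in EDIT2.
#!/bin/bash
COOKIE_JAR="/var/www/apachebench/test.jar"
# Fetch the login page once just to obtain a session cookie
# (assumption: Symfony starts a session on this GET).
curl -s -c "$COOKIE_JAR" http://symfony.local/login > /dev/null
PHPSESSID=$(grep PHPSESSID "$COOKIE_JAR" | cut -f 7)
# Log in while presenting that session cookie; the server may issue
# a fresh PHPSESSID on login, which -c writes back into the jar.
curl -s -b "PHPSESSID=$PHPSESSID" -c "$COOKIE_JAR" \
  --data "_email=admin%40dummy.at&_password=test&_target_path=%2Fbackend" \
  http://symfony.local/login > /dev/null
PHPSESSID=$(grep PHPSESSID "$COOKIE_JAR" | cut -f 7)
# Benchmark the restricted page with the authenticated session.
ab -n 10 -p /var/www/apachebench/albumpostfile.txt \
  -T application/x-www-form-urlencoded \
  -C PHPSESSID=$PHPSESSID http://symfony.local/album/add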
Related
I'm trying to obtain the HTML dump of some RFCs from the IETF website via a simple GET request, but it responds with status code 301. I'm using netcat to simulate the HTTP GET request with the following command:
$ printf 'GET /html/rfc3986 HTTP/1.1\r\nHost: tools.ietf.org\r\nConnection: close\r\n\r\n' | nc tools.ietf.org 80
The following reply is obtained as a result of the above command:
HTTP/1.1 301 Moved Permanently
Date: Wed, 09 Sep 2020 15:36:36 GMT
Server: Apache/2.2.22 (Debian)
Location: https://tools.ietf.org/html/rfc3986
Vary: Accept-Encoding
Content-Length: 323
Connection: close
Content-Type: text/html; charset=iso-8859-1
X-Pad: avoid browser bug
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
<hr>
<address>Apache/2.2.22 (Debian) Server at tools.ietf.org Port 80</address>
</body></html>
However, if I send an HTTP/1.0 HEAD request to the Location value from the above reply, I get a 404 in response. (I used the HEAD method just to check the status code of the reply.)
Command:
printf 'HEAD https://tools.ietf.org/html/rfc3986 HTTP/1.0\r\n\r\n' | nc tools.ietf.org 80
Reply:
HTTP/1.1 404 Not Found
Date: Wed, 09 Sep 2020 16:32:18 GMT
Server: Apache/2.2.22 (Debian)
Vary: accept-language,accept-charset,Accept-Encoding
Accept-Ranges: bytes
Connection: close
Content-Type: text/html; charset=iso-8859-1
Content-Language: en
Expires: Wed, 09 Sep 2020 16:32:18 GMT
Is there a mistake in the way I'm using the GET method to obtain the results?
You are sending a plain-text request to port 80, so the URL you are effectively requesting is http://tools.ietf.org/html/rfc3986.
The response is telling you to request https://tools.ietf.org/html/rfc3986 instead. That's not a different path on the same server, but a full URL.
The difference is that it begins with https, meaning you need to make a TLS-secured connection on port 443.
That's not going to be possible with a trivial use of netcat, so you're better off using an HTTP client like curl or wget.
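If you want to stay close to your raw-socket approach, openssl s_client can play the role of netcat over TLS; otherwise curl will follow the redirect for you. A sketch (the -servername flag makes sure SNI is sent, which some servers require):
# netcat-style raw request, but over TLS on port 443
printf 'GET /html/rfc3986 HTTP/1.1\r\nHost: tools.ietf.org\r\nConnection: close\r\n\r\n' \
  | openssl s_client -quiet -servername tools.ietf.org -connect tools.ietf.org:443
# or simply let curl follow the 301 from port 80
curl -sL http://tools.ietf.org/html/rfc3986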
Context: I maintain a kind of web service server, but with a particular implementation: all data sent by the web services are located in the HTTP header. That means the response consists only of HTTP headers (no body). The web service runs as a Windows service. The consumer is my PHP code, which invokes the web service via the cURL library. All of this has been in production for three years and works fine. I recently had to build a development environment.
I have the web service on one Windows 7 Pro machine, running as a Windows service.
I have my PHP consumer on another Windows 7 Pro machine (WAMP + cURL).
My PHP code invokes the web service and displays the raw response.
In this context the problem occurs: if the response contains more than 1215 characters, I get an empty response (but no error message).
I installed my PHP code (exactly the same) on a new Ubuntu Linux machine: I have the same problem.
I installed my PHP code (exactly the same) on a new CentOS Linux machine: I DON'T HAVE THE PROBLEM.
I have read a lot on the internet about size limits on HTTP headers, and I don't think that is the cause of the problem.
I examined all the size-limit parameters in Apache, PHP, and cURL, but didn't find anything relevant.
If someone has some information, all leads are welcome. Thanks.
Not an answer, but I want to say that using PHP 7.2.5 under mod_php with Apache 2.4.33, I am unable to reproduce your issue: I have no problems sending anything from 1 byte to 10,000 to even 100,000 bytes in headers.
Here is my producer.php:
<?php
$size=((int)($_GET['s'] ?? 1));
header("X-size: {$size}");
$data=str_repeat("a",$size);
header("X-data: {$data}");
http_response_code(204); // 204 NO CONTENT
Whether I hit http://127.0.0.1/producer.php?s=1, http://127.0.0.1/producer.php?s=10000, or even http://127.0.0.1/producer.php?s=100000, the data is returned without issue. Can you reproduce the issue using my producer.php code?
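If it helps narrow things down, here is a quick sweep over header sizes from the shell; the 1215/1216 values are just the boundary you reported, and the URL assumes producer.php is served at the address above:
# Request growing header sizes and report how much of the
# X-data header actually comes back for each one.
for s in 1 1000 1215 1216 10000 100000; do
  bytes=$(curl -sI "http://127.0.0.1/producer.php?s=$s" | grep -i '^X-data:' | wc -c)
  echo "requested $s -> X-data line of $bytes bytes"
done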
By the way, interestingly, when I try 1 million bytes, I get this error from curl:
$ curl -I http://127.0.0.1/producer.php?s=1000000
HTTP/1.1 204 No Content
Date: Wed, 16 Jan 2019 20:11:25 GMT
Server: Apache/2.4.33 (Win32) OpenSSL/1.1.0h PHP/7.2.5
X-Powered-By: PHP/7.2.5
X-size: 1000000
curl: (27) Rejected 104960 bytes header (max is 102400)!
Hanshenrik,
I also used CURLOPT_VERBOSE as you said. Here are the two curl logs.
The only difference is the line
* stopped the pause stream!
in the Ubuntu curl log.
cURL log from Ubuntu, which has the problem:
* Trying 192.168.1.205...
* TCP_NODELAY set
* Connected to 192.168.1.205 (192.168.1.205) port 8084 (#0)
> POST /datasnap/rest/TServerMethods/%22W_GetDashboard%22/ HTTP/1.1
Host: 192.168.1.205:8084
Accept-Encoding: gzip,deflate
Accept: application/json
Content-Type: text/xml; charset=utf-8
Pragma: dssession=146326.909376.656191
Content-Length: 15
* upload completely sent off: 15 out of 15 bytes
< HTTP/1.1 200 OK
< Connection: close
< Content-Encoding: deflate
< Content-Type: application/json
< Content-Length: 348
< Date: Thu, 17 Jan 2019 15:27:03 GMT
< Pragma: dssession=146326.909376.656191,dssessionexpires=3600000
<
* stopped the pause stream!
* Closing connection 0
cURL log from CentOS, which does NOT have the problem:
* About to connect() to 192.168.1.205 port 8084 (#1)
* Trying 192.168.1.205...
* Connected to 192.168.1.205 (192.168.1.205) port 8084 (#1)
> POST /datasnap/rest/TServerMethods/%22W_GetDashboard%22/ HTTP/1.1
Host: 192.168.1.205:8084
Accept-Encoding: gzip,deflate
Accept: application/json
Content-Type: text/xml; charset=utf-8
Pragma: dssession=3812.553164.889594
Content-Length: 15
* upload completely sent off: 15 out of 15 bytes
< HTTP/1.1 200 OK
< Connection: close
< Content-Encoding: deflate
< Content-Type: application/json
< Content-Length: 348
< Date: Thu, 17 Jan 2019 15:43:39 GMT
< Pragma: dssession=3812.553164.889594,dssessionexpires=3600000
<
* Closing connection 1
Options I used:

-I, --head
    (HTTP/FTP/FILE) Fetch the HTTP header only! HTTP servers feature the command HEAD which this uses to get nothing but the header of a document. When used on an FTP or FILE file, curl displays the file size and last modification time only.

-L, --location
    (HTTP/HTTPS) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place. If used together with -i, --include or -I, --head, headers from all requested pages will be shown. When authentication is used, curl only sends its credentials to the initial host. If a redirect takes curl to a different host, it won't be able to intercept the user+password. See also --location-trusted on how to change this. You can limit the amount of redirects to follow by using the --max-redirs option.
    When curl follows a redirect and the request is not a plain GET (for example POST or PUT), it will do the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl will re-send the following request using the same unmodified method. You can tell curl to not change the non-GET request method to GET after a 30x response by using the dedicated options for that: --post301, --post302 and --post303.

-v, --verbose
    Be more verbose/talkative during the operation. Useful for debugging and seeing what's going on "under the hood". A line starting with '>' means "header data" sent by curl, '<' means "header data" received by curl that is hidden in normal cases, and a line starting with '*' means additional info provided by curl.
    Note that if you only want HTTP headers in the output, -i, --include might be the option you're looking for.
    If you think this option still doesn't give you enough details, consider using --trace or --trace-ascii instead.
    This option overrides previous uses of --trace-ascii or --trace.
    Use -s, --silent to make curl quiet.
Below is the output I'm wondering about. In the response containing the redirect (301), all the headers are displayed twice, but only one of each pair has the < in front of it. How am I supposed to interpret that?
$ curl -ILv http://www.mail.com
* Rebuilt URL to: http://www.mail.com/
* Trying 74.208.122.4...
* Connected to www.mail.com (74.208.122.4) port 80 (#0)
> HEAD / HTTP/1.1
> Host: www.mail.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
HTTP/1.1 301 Moved Permanently
< Date: Sun, 28 May 2017 22:02:16 GMT
Date: Sun, 28 May 2017 22:02:16 GMT
< Server: Apache
Server: Apache
< Location: https://www.mail.com/
Location: https://www.mail.com/
< Vary: Accept-Encoding
Vary: Accept-Encoding
< Connection: close
Connection: close
< Content-Type: text/html; charset=iso-8859-1
Content-Type: text/html; charset=iso-8859-1
<
* Closing connection 0
* Issue another request to this URL: 'https://www.mail.com/'
* Trying 74.208.122.4...
* Connected to www.mail.com (74.208.122.4) port 443 (#1)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
* Server certificate: *.mail.com
* Server certificate: thawte SSL CA - G2
* Server certificate: thawte Primary Root CA
> HEAD / HTTP/1.1
> Host: www.mail.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Date: Sun, 28 May 2017 22:02:16 GMT
Date: Sun, 28 May 2017 22:02:16 GMT
< Server: Apache
Server: Apache
< Vary: X-Forwarded-Proto,Host,Accept-Encoding
Vary: X-Forwarded-Proto,Host,Accept-Encoding
< Set-Cookie: cookieKID=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
Set-Cookie: cookieKID=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
< Set-Cookie: cookiePartner=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
Set-Cookie: cookiePartner=kid%40autoref%40mail.com; Domain=.mail.com; Expires=Tue, 27-Jun-2017 22:02:16 GMT; Path=/
< Cache-Control: no-cache, no-store, must-revalidate
Cache-Control: no-cache, no-store, must-revalidate
< Pragma: no-cache
Pragma: no-cache
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Set-Cookie: JSESSIONID=F0BEF03C92839D69057FFB57C7FAA789; Path=/mailcom-webapp/; HttpOnly
Set-Cookie: JSESSIONID=F0BEF03C92839D69057FFB57C7FAA789; Path=/mailcom-webapp/; HttpOnly
< Content-Language: en-US
Content-Language: en-US
< Content-Length: 85237
Content-Length: 85237
< Connection: close
Connection: close
< Content-Type: text/html;charset=UTF-8
Content-Type: text/html;charset=UTF-8
<
* Closing connection 1
Best guess: with -v you tell curl to be verbose (send debug info) to STDERR, and with -I you tell curl to dump headers to STDOUT. Your shell, by default, combines STDOUT and STDERR. Separate stdout and stderr and you'll avoid the confusion:
curl -ILv http://www.mail.com >stdout.log 2>stderr.log ; cat stdout.log
Use:
curl -ILv http://www.mail.com 2>&1 | grep '^[<>\*].*$'
When cURL is called with the verbose command-line flag, it sends the verbose output to stderr instead of stdout. The above command redirects stderr to stdout (2>&1), then pipes the combined output to grep, using the regex to return only the lines that begin with *, <, or >. All other lines (including the duplicates you were first concerned with) are removed from the output.
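Another option, assuming your curl build has the flag (it has been around for a long time): --stderr redirects curl's verbose stream to a file, so stdout carries only the -I header dump:
# verbose/debug lines go to verbose.log; only the headers reach stdout
curl -ILv --stderr verbose.log http://www.mail.com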
When I view the URLs below in a browser they display fine, and I don't see anything unusual in the network tab when I press F12. But with the code below I get response codes 403 or 400, while the response-code checker at http://httpstatus.io/ comes back with a 200 response for both URLs.
I get a 403 for http://psychsignal.com/ using my code below.
import java.net.HttpURLConnection;
import java.net.URL;

public class ResponseCodeCheck {
    public static void main(String[] args) throws Exception {
        URL u = new URL("http://www.nasdaqomxnordic.com/"); // returns 400 response code
        //u.toURI(); // to check the syntax
        HttpURLConnection huc = (HttpURLConnection) u.openConnection();
        huc.setRequestMethod("GET");
        //huc.setRequestMethod("HEAD");
        huc.connect();
        System.out.println(huc.getResponseCode());
    }
}
Thanks if anyone has any ideas! This is actually my first post!
My guess is that there are some restrictions placed on the User-Agent of the client. Some testing seems to support my theory:
If I use the curl default user agent:
# curl -I -H "User-Agent: curl/7.35.0" "http://www.nasdaqomxnordic.com/"
HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=UTF-8
Cache-Control: no-cache
Pragma: no-cache
Expires: 0
Connection: close
If I use a hacked up standard browser agent string:
# curl -I -H "User-Agent: Mozilla/5.0" -0 "http://www.nasdaqomxnordic.com/"
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 0
Content-Type: text/html;charset=UTF-8
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Wed, 22 Jul 2015 15:06:22 GMT
Connection: close
And then if I use a Java agent string (which is my guess as to what you're using):
# curl -I -H "User-Agent: Java/1.6.0_26" "http://www.nasdaqomxnordic.com/"
HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=UTF-8
Cache-Control: no-cache
Pragma: no-cache
Expires: 0
Connection: close
Only the "browser" user agent gets through. I'd try tweaking your code to set the user agent string to something commonly found in a web browser.
Can't figure this one out!
OS: CentOS 6.6 (Up-To-Date)
I get the following 503 error when using my nagios check_http check (or curl) to query an SSL site served via HAProxy 1.5.
[root@nagios ~]# /usr/local/nagios/libexec/check_http -v -H example.com -S1
GET / HTTP/1.1
User-Agent: check_http/v2.0 (nagios-plugins 2.0)
Connection: close
Host: example.com
https://example.com:443/ is 212 characters
STATUS: HTTP/1.0 503 Service Unavailable
**** HEADER ****
Cache-Control: no-cache
Connection: close
Content-Type: text/html
**** CONTENT ****
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
HTTP CRITICAL: HTTP/1.0 503 Service Unavailable - 212 bytes in 1.076 second response time |time=1.075766s;;;0.000000 size=212B;;;0
[root@nagios ~]# curl -I https://example.com
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
However, I can access the site fine via any browser (200 OK), and also with curl -I https://example.com from another server:
root@localhost:~# curl -I https://example.com
HTTP/1.1 200 OK
Date: Wed, 18 Feb 2015 14:36:51 GMT
Server: Apache/2.4.6
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Pragma: no-cache
Last-Modified: Wed, 18 Feb 2015 14:36:52 GMT
Content-Type: text/html; charset=UTF-8
Strict-Transport-Security: max-age=31536000;
The HAProxy server is running on pfSense 2.2.
I see that HAProxy returns HTTP/1.0 for nagios and HTTP/1.1 from elsewhere. So is it my check_http plugin causing this, or is it curl?
Is my server just not sending the Host header? If so, how can I resolve this?
What check_http does is request a page from the server (typically the index.html at the document root). This means you might have HTTP running and working while the check still fails.
Regardless of whether creating an index.html file on the server resolves the issue, you might not want to arrange the server just so the check passes.
I suggest setting up a ping check for example.com, plus a check via NRPE that verifies your HTTP service is running; that should meet your requirements.
check_http has an option called --sni, and you need to use it. Presumably HAProxy is routing requests by the SNI hostname, so clients that don't send that extension never match a backend and get the 503.
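For example, reusing the invocation from the question (the plugin path and -S1 are as in the original command):
# enable the TLS SNI extension so HAProxy can match the hostname
/usr/local/nagios/libexec/check_http -v -H example.com -S1 --sni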