Is this a valid HTTP response?

I am writing a web server in C++ which sends the following response for all requests:
static std::string rsp[] = {
    "HTTP/1.1 200 OK\r\n",
    "Server: WebServer\r\n",
    "Content-Type: text/html\r\n",
    "Content-Length: 3\r\n",
    "Connection: close\r\n",
    "\r\n",
    "123"
};
The content "123" shows up fine in a browser, but when I test with Apache's ab, it always reports an error like this:
ab -n 1 -c 1 http://127.0.0.1:1080/
apr_socket_recv: Connection reset by peer (104)
I thought I was closing the socket too quickly, so I commented out the close() call, but then ab just hangs, apparently waiting for a complete response.

If the response (123) renders correctly in your browser, then there is nothing wrong with your server's output itself; the problem is that the request ab sends is not being understood by your server.
ab's requests are not necessarily the same as a web browser's requests, and ab is not fully HTTP/1.1 compliant; it's an HTTP/1.0 client.
"It (ab) does not implement HTTP/1.x fully; only accepts some 'expected'
forms of responses.
Source
Try this:
1. See the request being sent to your server, using the verbosity argument: ab -n 5 -c 5 -v 10 http://127.0.0.1:8000/. You should see something like this:
GET / HTTP/1.0
Host: 127.0.0.1:8000
User-Agent: ApacheBench/2.3
Accept: */*
2. Using Firebug or a similar tool, capture the request your browser sends.
3. Compare the two, see how the requests differ, and use ab -H to send the missing headers to your web server from ab.
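If the requests turn out to be essentially the same, it is also worth ruling out how the server closes the socket. Below is a rough sketch (in Go rather than the asker's C++) of a raw TCP server that sends the exact response from the question, but only after reading the request headers; closing the connection while the client is still sending data is a classic cause of "connection reset by peer" on the client side.
package main

import (
    "bufio"
    "log"
    "net"
)

// The exact response from the question, as one string.
const response = "HTTP/1.1 200 OK\r\n" +
    "Server: WebServer\r\n" +
    "Content-Type: text/html\r\n" +
    "Content-Length: 3\r\n" +
    "Connection: close\r\n" +
    "\r\n" +
    "123"

func handle(conn net.Conn) {
    defer conn.Close()
    r := bufio.NewReader(conn)
    // Drain the request headers before answering; closing the socket while
    // the client is still sending is a common cause of ECONNRESET.
    for {
        line, err := r.ReadString('\n')
        if err != nil || line == "\r\n" || line == "\n" {
            break
        }
    }
    conn.Write([]byte(response))
}

func main() {
    ln, err := net.Listen("tcp", ":1080")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        go handle(conn)
    }
}
Running ab -n 1 -c 1 http://127.0.0.1:1080/ against a server that behaves like this should not produce the apr_socket_recv error; if your C++ server still does, the difference is likely in when it reads and when it closes.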


How can I customize HTTP 400 responses for parse errors?

I've written a REST API service that requires that all responses be JSON. However, when the Go HTTP request parser encounters an error, it returns 400 as a plain text response without ever calling my handlers. Example:
> curl -i -H 'Authorization: Basic hi
there' 'http://localhost:8080/test' -v
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /test HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.54.0
> Accept: */*
> Authorization: Basic hi
> there
>
< HTTP/1.1 400 Bad Request
HTTP/1.1 400 Bad Request
< Content-Type: text/plain; charset=utf-8
Content-Type: text/plain; charset=utf-8
< Connection: close
Connection: close
<
* Closing connection 0
Note the invalid Authorization header. 400 is of course the proper response, but it comes back as text/plain. Is there some way to configure the Go HTTP parser to use custom error response media types and bodies?
You can't. You can see this in the net/http source; it only happens if the request was malformed:
https://github.com/golang/go/blob/master/src/net/http/server.go#L1744
I think your problem might be the newline in the header you're adding in curl.
For 401, 403, 404, and 500 errors you'll be able to respond with JSON, but bad requests and bad headers (too long, malformed) are handled inside server.go.
There is at present no way to intercept such errors, though it is under consideration, so your only option in Go would be to patch the stdlib source (which I don't recommend). However, since this error only appears when the client has made a mistake and the request is malformed, it's probably not a huge problem. The reason for the plain-text response is so that a browser or a similar client (like curl without -v) doesn't just see an empty response. You could put a proxy like nginx in front of your app, but then your app would never see the request either: it is a bad request, so the proxy would handle it.
Possibly you could do it with a proxy like nginx in front, though, if you set a specific static error page for it to serve on 400 errors and point it at a 400.json file that you supply. That's the only solution I can think of. A directive something like this might work in nginx:
error_page 400 /400.json;
If you'd like to be able to customise these errors, perhaps add a comment to the linked issue to let them know you had this specific problem.
If you are using the standard net/http library, you can use the following code. Take a look at the answer "Showing custom 404 error page with standard http package" by Mostafa, which is where I got this example from:
func homeHandler(w http.ResponseWriter, r *http.Request) {
    if r.URL.Path != "/" {
        errorHandler(w, r, http.StatusNotFound)
        return
    }
    fmt.Fprint(w, "welcome home")
}

func errorHandler(w http.ResponseWriter, r *http.Request, status int) {
    w.WriteHeader(status)
    if status == http.StatusNotFound {
        // JSON Out here
    }
}
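To fill in the "// JSON Out here" placeholder, a minimal self-contained sketch might look like the following. The response body shape is made up for illustration (the port matches the question's curl example), and note that this still only covers errors produced by your own handlers, not the parser's 400.
package main

import (
    "encoding/json"
    "net/http"
)

// errorJSON is a sketch of a JSON variant of errorHandler; the body shape
// here is arbitrary, adjust it to your API's error format.
func errorJSON(w http.ResponseWriter, r *http.Request, status int) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(status) // headers must be set before WriteHeader
    json.NewEncoder(w).Encode(map[string]interface{}{
        "status": status,
        "error":  http.StatusText(status),
    })
}

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Path != "/" {
            errorJSON(w, r, http.StatusNotFound)
            return
        }
        w.Write([]byte("welcome home"))
    })
    http.ListenAndServe(":8080", nil)
}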

Server utility: receive HTTPS POST requests, cat the data

To test something, I want to run a simple web server that:
Will listen for HTTPS POST requests
Print the POST data received to STDOUT (along with other stuff, potentially, so it's fine if it just cats the whole HTTP request)
Is there a quick way to set something like this up? I've tried using OpenSSL's s_server, but it only seems to want to respond to GET requests.
Since s_server does not support POST requests, you should use socat instead of openssl s_server:
# socat -v OPENSSL-LISTEN:443,cert=mycert.pem,key=key.pem,verify=0,fork 'SYSTEM:/bin/echo HTTP/1.1 200 OK;/bin/echo;/bin/echo this-is-the-content-of-the-http-answer'
Here are the essential parameters:
fork: to loop for many requests
-v: to display the POST data (and other stuff) to STDOUT
verify=0: do not ask for mutual authentication
Now, here is an example:
We use the following POST request:
% wget -O - --post-data=abcdef --no-check-certificate https://localhost/
[...]
this-is-the-content-of-the-http-answer
We see the following socat output:
# socat -v OPENSSL-LISTEN:443,cert=mycert.crt,key=key.pem,verify=0,fork 'SYSTEM:/bin/echo HTTP/1.1 200 OK;/bin/echo;/bin/echo this-is-the-content-of-the-http-answer'
> 2017/08/05 03:13:04.346890 length=212 from=0 to=211
POST / HTTP/1.1\r
User-Agent: Wget/1.19.1 (freebsd10.3)\r
Accept: */*\r
Accept-Encoding: identity\r
Host: localhost:443\r
Connection: Keep-Alive\r
Content-Type: application/x-www-form-urlencoded\r
Content-Length: 6\r
\r
< 2017/08/05 03:13:04.350299 length=16 from=0 to=15
HTTP/1.1 200 OK
> 2017/08/05 03:13:04.350516 length=6 from=212 to=217
abcdef< 2017/08/05 03:13:04.351549 length=1 from=16 to=16
< 2017/08/05 03:13:04.353019 length=39 from=17 to=55
this-is-the-content-of-the-http-answer
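If you would rather run a tiny program than socat, here is a rough Go sketch with the same behaviour, assuming a cert.pem/key.pem pair (for example self-signed) in the working directory:
package main

import (
    "fmt"
    "net/http"
    "net/http/httputil"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Dump the whole request, including the POST body, to STDOUT.
        if dump, err := httputil.DumpRequest(r, true); err == nil {
            fmt.Printf("%s\n", dump)
        }
        fmt.Fprintln(w, "this-is-the-content-of-the-http-answer")
    })
    // Port 443 usually needs root; any other port works just as well.
    if err := http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil); err != nil {
        panic(err)
    }
}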

How to make an HTTP GET request manually with netcat?

So, I have to retrieve the temperature of any one of the cities from http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How to make an HTTP GET request with Netcat?
I'm doing something like this.
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even headed in the right direction. I couldn't find any good tutorial on making an HTTP GET request with netcat, so I'm asking here.
Of course you could dig into the standards you find via Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to that port with a real browser: just type http://localhost:64738 into your browser and see what arrives.
In your actual case the problem is that HTTP/1.1 doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: close request header to tell the server you want the connection closed after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
One more note: after GET, write only the path part of the URL. The hostname you want to get data from belongs in the Host: header, as you can see in my examples. This is because multiple websites can run on the same web server, so the browser needs to tell it which site it wants to load the page from.
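The same manual request can also be scripted. Here is a rough Go sketch of the HTTP/1.0 variant over a raw TCP connection (hostname and path taken from the question):
package main

import (
    "fmt"
    "io"
    "net"
    "os"
)

func main() {
    conn, err := net.Dial("tcp", "www.rssweather.com:80")
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    // HTTP/1.0, so the server closes the connection when it is done.
    fmt.Fprintf(conn, "GET /wx/in/kanpur/wx.php HTTP/1.0\r\nHost: www.rssweather.com\r\n\r\n")
    io.Copy(os.Stdout, conn) // print headers and body until the server closes
}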
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
And then hit <enter> twice; the resulting blank line tells the HTTP server that the request headers are complete.
source: pentesterlabs
You don't even need to use or install netcat.
Create a TCP socket via an unused file descriptor (I use 88 here)
Write the request into it
Read the response from the fd
exec 88<>/dev/tcp/rssweather.com/80
echo -e "GET /dir/Asia/India HTTP/1.1\nhost: www.rssweather.com\nConnection: close\n\n" >&88
sed 's/<[^>]*>/ /g' <&88
On MacOS, you need the -c flag as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as the "Connection: close" if you want to disable keepalive.
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA-maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.
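For completeness, the ncat --ssl request above can also be reproduced in a few lines of Go, using crypto/tls for the handshake (a rough sketch):
package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "os"
)

func main() {
    // crypto/tls handles the handshake and certificate verification.
    conn, err := tls.Dial("tcp", "github.com:443", nil)
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    fmt.Fprintf(conn, "GET / HTTP/1.1\r\nHost: github.com\r\nConnection: close\r\n\r\n")
    io.Copy(os.Stdout, conn) // print the response until the server closes
}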

Getting 404 error if requesting a page through proxy, but 200 if connecting directly

I am developing an HTTP proxy in Java. I resend all the data from client to server without touching it, but for some URLs (for example this one) the server returns a 404 error if I connect through my proxy.
The requested URL uses Varnish caching, so that might be the root of the problem. I cannot reconfigure it - it is not mine.
If I request that URL directly with browser, the server returns 200 and the image is shown correctly.
I am stuck because I do not even know what to read or how to word a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or it is broken (or only an X-Host header exists). Also note that the proxy application performs its own DNS lookup, which might yield a different IP address than the one your local computer resolved when you issued the original request.
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
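The same point in Go terms (the asker's proxy is in Java, but the idea carries over): when you connect by IP address, the Host header has to be set explicitly, and for an outgoing Go request that means setting req.Host rather than a header value. A rough sketch reproducing the curl call above:
package main

import (
    "fmt"
    "net/http"
)

func main() {
    // Request by IP address, as in the curl example above.
    req, err := http.NewRequest("GET", "http://212.25.95.152/w/w-200/1902047-41.jpg", nil)
    if err != nil {
        panic(err)
    }
    // Go takes the Host header of an outgoing request from req.Host, not from
    // req.Header, so a proxy that re-issues requests must copy it explicitly.
    req.Host = "msc.wcdn.co.il"
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status) // expect "200 OK" when the Host header is right
}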

Modify HTTP response on the fly NOT using Fiddler2

Let's assume that program 'xyz' uses pretty simple authentication via an HTTP/1.1 GET request. What I want to do is modify the response header, specifically from "Content: 1; Bad User Name or Password" to "Content: 0; Pass".
To be more specific:
0) I'm using Win 7 x86; program 'xyz' was written in VB. Fiddler is not capturing the packets (I don't know why), so I wanted to use its AutoResponder. Wireshark captures them nicely.
1) I run 'xyz' and fill in the credentials: User Name = 'fake_foo' and Password = 'fake_bar', then press login.
2) In Wireshark I see an HTTP request packet to 111.222.223.224:80:
GET /result?u=fake_foo&p=fake_bar HTTP/1.1
User-Agent: CheckL
TimeStamp: 1234567890
Host: 111.222.223.224:80
3) The next packet is the response from that server:
HTTP/1.1 200 OK
Date: 05:00:00 AM
Content-Type: text/html
Content: 1;Bad User Name or Password
Now, when I enter valid credentials, the response is:
HTTP/1.1 200 OK
Date: 05:00:00 AM
Content-Type: text/html
Content: 0; Pass
and the program starts.
How, and with what software, can I modify this response on the fly so that it becomes the 'pass' response even after sending the request with fake credentials?
Thanks
