How to send an incomplete HTTP request using netcat?

I'd like to send an incomplete HTTP request, or some other kind of request that will temporarily tie up my server. I wrote the server myself in C, and it is currently designed to accept only one client at a time. I want to test that this is indeed the case.
Would it be possible to send something simple, similar to GET / HTTP/1.0? I'm just doing all my testing in the terminal, without any other tools so far.

Yes, you can do this with netcat.
nc -c servername 80 <<<"GET / HTTP/1.0"
This sends the GET line and then waits for the server to respond. But the server is still waiting for the headers and the blank line that ends them, so it never responds; nc will wait forever, keeping the connection open.
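If your copy of netcat does not support that flag (the options differ between the GNU, OpenBSD, and ncat variants), a more portable sketch is to keep stdin open after writing the partial request, so nc never sees EOF and keeps the connection up; the hostname and the 300-second hold below are placeholders:
# Send only the request line, then hold the socket open for 5 minutes
(printf 'GET / HTTP/1.0\r\n'; sleep 300) | nc servername 80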

Related

Sending HTTP request

I am trying to upload data from an Arduino to data.sparkfun.com, but somehow it always fails. To make sure that the HTTP request I am sending is correct, I would like to send it from a computer to the server and see if it uploads the correct values.
According to some examples, the request should be formulated like this:
GET /input/publicKey?private_key=privateKey&dht1_t=24.23&dht1_h=42.4&dht2_t=24.48&dht2_h=41.5&bmp_t=23.3&bmp_p=984021 HTTP/1.1\n
Host: 54.86.132.254\n
Connection: close\n
\n
How do I send this request to the server from my computer? Do I just type it in the terminal? I'm not sure where to start.
Have a look at curl, which should be able to handle your needs.
Even easier and more low-level is netcat (here is an example on SO).
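For example, a netcat sketch of the request from the question might look like this (the host and key values are the placeholders from the question; note that HTTP wants \r\n line endings and a blank line to terminate the headers):
# Build the request with CRLF line endings and pipe it to the server
{
  printf 'GET /input/publicKey?private_key=privateKey&dht1_t=24.23&dht1_h=42.4&dht2_t=24.48&dht2_h=41.5&bmp_t=23.3&bmp_p=984021 HTTP/1.1\r\n'
  printf 'Host: 54.86.132.254\r\n'
  printf 'Connection: close\r\n'
  printf '\r\n'
} | nc 54.86.132.254 80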

Ask CURL to disconnect as soon as it receives a header

I'm pulling data from a server but need to know the type of data before I pull it. I know I can look at content-type in the response header, and I've looked into using
curl --head http://x.com/y/z
however, some servers do not support the HEAD command (I get a "501 Not Implemented" response).
Is it possible to somehow do a GET with curl, and immediately disconnect after all headers have been received?
Check out the following answer:
https://stackoverflow.com/a/5787827
Streaming. UNIX philosophy and pipes: they are data streams. Since curl and GET are Unix filters, ending the receiving pipe (dd) will terminate curl or GET early (SIGPIPE). There is no telling whether the server will be smart enough to stop transmission, but at the TCP level I suppose it would stop retransmitting packets once there is no more response. — sehe
Using this method you should be able to download as many bytes as you want and then cancel the request. You could also work some magic to terminate after receiving a blank line, which marks the end of the headers.
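For instance, a sketch of that blank-line trick (using the URL from the question): include the headers in curl's output and let sed quit at the first blank line; the resulting SIGPIPE then terminates curl before the whole body arrives (for small bodies curl may finish before the pipe closes):
# -s silent, -i include response headers, -N disable output buffering;
# sed prints up to the first blank line (end of headers) and quits
curl -siN http://x.com/y/z | sed '/^[[:space:]]*$/q'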

HTTP server detecting a broken network connection from an HTTP client

I have a web application in which, after the client makes an HTTP request to the server, the client quits (or the network connection is broken) before the response has been completely received.
In this scenario the server side of the application needs to do some cleanup work. Is there a way built into the HTTP protocol to detect this condition? How does the server know whether the client is still waiting for the response or has quit?
No, there is nothing built into the protocol to do this (after all, you can't tell whether the response has been received by the client itself yet, or just by a downstream proxy).
Just have your client make a second request to acknowledge that it has received and stored the original response. If you don't see a timely acknowledgement, run the cleanup.
However, make sure that you understand the implications of the Two Generals' Problem.
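A sketch of that acknowledgement pattern from the client side (the /report and /ack endpoints here are hypothetical, not part of any standard):
# Fetch the resource, then confirm receipt with a second request;
# if the server never sees the ack, it runs its cleanup after a timeout
curl -s http://example.com/report/42 -o report-42.json \
  && curl -s -X POST http://example.com/report/42/ack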
You might have a network problem. Usually, when you send an HTTP request to the server, you first send the headers and then the content of the POST (if it is a POST method). Likewise, the server responds with the headers and then the document body. The first line of the response headers is the status line; status 200 means success, and if you get that, there should be no problem receiving the rest of the document. See this for details on the HTTP response status codes: http://www.w3.org/Protocols/rfc2616/rfc2616-sec6.html
Later edit: Sorry, I misread your question. Basically, you don't have a trigger for when the user disconnects. If you use OOP, you could use the destructor of a class to clean up whatever needs cleaning.

HTTP 504 timeout after exactly 120 seconds

I have a server application which runs in the Amazon EC2 cloud. From my client (the browser) I make an HTTP request which uploads a file to the server, which then processes the file. If there is a lot of processing (a large file), the server always times out with a 504 backend continuation error, exactly after 120 seconds. Though I get this error, the server continues to process the request and completes it (verified by checking the database), but I cannot see the final result on my client because of the timeout.
I am clueless as to why this is happening. Has anyone faced a similar 504 timeout? Is there some intermediate proxy server, not under my control, that is timing out?
I have a similar problem and in my case I believe it is due to the connection between the Elastic Load Balancer (ELB) and the EC2 instance.
For a long-term solution I will go with the 303 Status response + back-end processing suggested by james.garriss above.
For a short-term solution, it may be possible for Amazon support to increase the ELB timeout (see their response in https://forums.aws.amazon.com/thread.jspa?messageID=491594&#491594). Unfortunately, there doesn't seem to be any way to change the timeout yourself through either the API or the console.
[Update] AWS now allows you to update the idle timeout through the console, the CLI, or an .ebextensions configuration. See http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-idle-timeout.html (thanks @Daniel Patz for the update).
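For example, with a classic load balancer the idle timeout can be raised from the AWS CLI roughly like this (the load balancer name and the 300-second value are placeholders):
# Raise the ELB idle timeout to 300 seconds
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-load-balancer \
  --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":300}}"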
Assuming that the correct status code is being returned, the problem is that an intermediate proxy is timing out. "The server, while acting as a gateway or proxy, did not receive a timely response from the upstream server specified by the URI." (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.5) It most likely indicates that the origin server is having some sort of issue (i.e., taking a long time to process your request), so it's not responding quickly.
Perhaps the best solution is to re-craft your server app so that it responds with a "303 See Other" status code; your client can then retrieve the data at a later point, once the server is done processing and has created the final result.
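A sketch of what that flow could look like from the client (the URLs and the returned Location are hypothetical):
# Upload the file; the server answers 303 with a Location header to fetch later
curl -is -F "file=@big.dat" http://example.com/process
# ...once processing is done, retrieve the result from that Location
curl -s http://example.com/results/123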
Edit: Another idea is to re-craft your server app so that it responds with a "413 Request Entity Too Large" status code when the request entity is too large. This will get rid of the error, though it may make your app less useful if it can only process "small" files.
Other possible solutions:
Increase timeout value of the proxy (if it's under your control)
Make your request to a different server (if there's another, faster server with the same app)
Make your request differently (if possible) such that you are sending less data at a time
It is also possible that the browser times out during script execution.

Long-running HTTP connection never gets a response back

I am making an HTTP request which ends up taking more than 8 minutes. For me, this long-running request works fine: I am able to get a response back to my browser without any issues. (I am located on the same network as the server.)
However, for some users the browser never returns any response. (Note: when the same HTTP request executes in 1 minute, these users are able to see the response without any problem.)
These users happen to be on another network. And there probably is a firewall or two between their location and the server.
I can see in their Fiddler trace that the request is just sitting there waiting for a response.
I am currently assuming that a firewall is killing the idle HTTP connection, but I am not sure.
If you have any idea why the response never gets back, or why the connection never breaks, it would be really helpful.
Also: is it possible to fix this issue by writing an applet that somehow keeps sending a dummy signal to the server, even after having sent (flushed) the request to the server?
The users might be behind a connection-tracking firewall/NAT gateway. Such gateways tend to drop a TCP connection when nothing has happened on it for a period of time. In a custom protocol you could send some kind of heartbeat messages to keep the TCP connection alive, but with HTTP you don't have proper control over that connection, nor does HTTP facilitate what's needed to keep a TCP connection alive.
The usual way to handle long-running jobs initiated by an HTTP request is to fire off the job in the background, send a proper response back to the client immediately, and have an applet/AJAX request poll the job's status, returning the result when it's done.
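A minimal sketch of that poll loop from the client side (the /jobs endpoints are hypothetical, and the job id is assumed to come back in the response body):
# Kick off the job in the background on the server
JOB=$(curl -s -X POST http://example.com/jobs)
# Poll the status endpoint until it reports "done", then fetch the result
until curl -s "http://example.com/jobs/$JOB/status" | grep -q done; do
  sleep 5
done
curl -s "http://example.com/jobs/$JOB/result"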
If you need a quick fix, see if you can control any timeouts on the gateways between the server and the user.
Have you considered that the users might be using a browser with an HTTP timeout, which causes the browser to stop waiting for a response after a certain amount of time?
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html
If you are using a Linux machine, you can inspect and tune the kernel's TCP keepalive settings: tcp_keepalive_time (seconds a connection sits idle before the first probe), tcp_keepalive_intvl (seconds between probes), and tcp_keepalive_probes (failed probes before the connection is dropped). For example:
# cat /proc/sys/net/ipv4/tcp_keepalive_time
7200
# cat /proc/sys/net/ipv4/tcp_keepalive_intvl
75
# cat /proc/sys/net/ipv4/tcp_keepalive_probes
9
# echo 1500 > /proc/sys/net/ipv4/tcp_keepalive_time
# echo 500 > /proc/sys/net/ipv4/tcp_keepalive_intvl
# echo 20 > /proc/sys/net/ipv4/tcp_keepalive_probes
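Note that values written under /proc do not survive a reboot; the same knobs can be set through sysctl, and adding the lines to /etc/sysctl.conf makes them persistent:
# Same settings via sysctl (put these keys in /etc/sysctl.conf to persist)
sysctl -w net.ipv4.tcp_keepalive_time=1500
sysctl -w net.ipv4.tcp_keepalive_intvl=500
sysctl -w net.ipv4.tcp_keepalive_probes=20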
