All,
My requirement is fairly simple: I have to perform a simple HTTP POST to an IP:port combination. I used plain socket programming to do that, and I have been able to send my request across and get a response back. The only problem is that the response is always an HTTP 400 Bad Request, followed by my HTTP POST message echoed back. I am not sure if the problem is with the client or the server; my only guess is that there is something wrong with the data I am sending. This is what my POST looks like:
POST /<Server Tag> HTTP/5.1
Content-Length: xxx
--Content--
and the response from the server looks something like this
HTTP/1.1 400 Bad Request
Content-Length: xxx
--Same content that I sent them--
I was not sure if I could post the server's IP here, so I have left it out. I am pretty sure the problem is not the connection itself, since I do get a response back from the server. Can someone help me?
PS: Some pointers about my POST:
1) HTTP/5.1 is what the server asked for, and I am not sure that it is correct.
2) I have played around with the number of line breaks after the Content-Length header, trying both one and two, and I am not sure whether that makes a difference. In Wireshark, though, I do see a difference: with a single line break the packet is dissected as plain TCP, but with two it shows up as HTTP. The response is always dissected as HTTP. Some explanation of that difference would also help. (A minimal well-formed request is sketched below.)
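For comparison, here is a minimal sketch (in Python, with a raw socket, using HTTP/1.1 for the version) of what I understand a well-formed version of this request should look like; the IP, port, path and body are placeholders, not the real server details. The header lines end with CRLF, and exactly one blank line separates the headers from the body, which is why Wireshark only recognizes the request as HTTP once that blank line is present.

# Minimal raw-socket POST sketch; HOST, PORT, PATH and BODY are placeholders.
import socket

HOST, PORT = "192.0.2.10", 8080       # placeholder IP:port
PATH = "/server-tag"                  # placeholder for the <Server Tag>
BODY = b"--Content--"                 # placeholder payload

# Header lines end with CRLF; one empty line (a bare CRLF) separates
# the headers from the body.
request = (
    "POST {path} HTTP/1.1\r\n"
    "Host: {host}:{port}\r\n"
    "Content-Length: {length}\r\n"
    "Connection: close\r\n"
    "\r\n"
).format(path=PATH, host=HOST, port=PORT, length=len(BODY)).encode("ascii") + BODY

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(request)
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        response += chunk

print(response.decode("iso-8859-1"))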
Thanks
edit: the other thing that confuses me is that the response says HTTP/1.1 and not the HTTP/5.1 that I sent. I have also tried changing my POST to HTTP/1.1, with no success.
edit2: Based on suggestions from fvu and others, I used WebClient to upload my request. I still got back a 400. The header that WebClient generated looks like this:
POST <server tag> HTTP/1.1
Host: <IP:PORT>
Content-Length: 484
Expect: 100-continue
Connection: Keep-Alive
The issue I see with this might be that the server is not expecting all of these headers; it asked us to send only Content-Length. Would that be a problem?
Thanks
You can use a debugging proxy to view a client request and a server response to figure out what your client socket program needs to do.
But first, create a simple web page that a browser can display, that lets you do a POST from the browser to the web server, and that returns a simple response from the server.
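If you don't have a debugging proxy handy, a rough alternative is to point a known-good client (a browser, curl, or WebClient) at a throwaway listener that just prints the raw bytes it receives, and compare that with what your own socket code sends. A minimal sketch in Python (the port number is arbitrary):

# Throwaway listener that prints the first chunk of raw bytes an HTTP client
# sends. Point a browser, curl, or WebClient at http://localhost:9000/ and
# compare the printed bytes with what your own socket code produces.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 9000))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        data = conn.recv(65536)
        print(repr(data))   # the CRLFs and the blank header/body separator show up as \r\n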
HTTP/5.1 is not a valid protocol version; it is either a mistake or something misused by the programmer of the server application.
You should first get a valid, working example from the server API to check your protocol implementation against.
Related
I'm a bit rusty on the nuances of the HTTP protocol and I'm wondering whether it can support publish/subscribe directly?
HTTP is a request/response protocol: the client sends a request and the server sends back a response.
In HTTP 1.0 a new connection was made for each request.
Now HTTP 1.1 improved on HTTP 1.0 by allowing the client to keep the connection open and make multiple requests.
I realise you can upgrade an HTTP connection to a websocket for fast 2 way communications. What I'm curious about is whether this is strictly necessary?
For example if I request a resource "http://somewhere.com/fetch/me/slowly"
Is the server free to reply directly twice?
Such as first with a 202 accepted
and then shortly later with the content when it is ready,
but without the client sending an additional request first?
i.e.
Client: GET http://somewhere.com/fetch/me/slowly
Server: 202 "please wait..."
Server: 200 "here's your document"
Would it be correct to implement a publish/subscribe service this way?
For example:
Client: http://somewhere.com/subscribe
Server: item 1
...
Server: item 2
I get the impression that this 'might' work in practice, because clients typically have an event loop watching the connection, but that it is technically wrong (a client following the protocol need not be implemented that way).
However, if you use chunked transfer encoding this would work.
HTTP/2 seems to allow this as well, but I'm not clear on whether something changed to make it possible.
I haven't seen much discussion of this in relation to pub/sub, so what, if anything, is wrong with using plain HTTP/1.1, with or without chunked encoding?
If this works why do you need things like RSS or ATOM?
An HTTP request can have multiple 'responses', but those responses all have status codes in the 1xx range, such as 102 Processing.
However, these interim responses consist only of headers, never bodies.
HTTP/1.1 (like 1.0 before it) is a request/response protocol; sending an unsolicited response is not allowed. HTTP/2 is a frame-based protocol that adds server push, which lets the server offer extra data and handle multiple requests in parallel, but it doesn't change the request/response nature of HTTP.
It is possible to keep an HTTP connection open and keep sending more data, though. Many (audio, video) streaming services do this.
However, that just looks like one continuous body that keeps on streaming, rather than many separate HTTP responses.
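For illustration, here is a rough sketch of what that looks like with chunked transfer coding: each "item" is just another chunk of one long-lived response, not a new response (the port, timing and payloads below are made up):

# Rough sketch: one HTTP/1.1 response whose chunked body is streamed over time.
# Each "item" is another chunk of the same response, not a separate response.
import socket, time

ITEMS = [b"item 1\n", b"item 2\n"]   # made-up payloads

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 9000))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        conn.recv(65536)   # read (and ignore) the request for this sketch
        conn.sendall(b"HTTP/1.1 200 OK\r\n"
                     b"Content-Type: text/plain\r\n"
                     b"Transfer-Encoding: chunked\r\n\r\n")
        for item in ITEMS:
            # a chunk is: size in hex, CRLF, data, CRLF
            conn.sendall(b"%x\r\n%s\r\n" % (len(item), item))
            time.sleep(1)           # simulate items becoming available later
        conn.sendall(b"0\r\n\r\n")  # the terminating chunk ends the single response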
If this works why do you need things like RSS or ATOM
Because keeping a TCP connection open is not free.
What is the difference between the HTTP 100 and 200 status codes?
Are they the same?
I was told that 200 is the standard code when the HTTP request is successful without any errors whatsoever.
Is that right?
What about this 100 code? I have found different explanations of this status code. Could somebody explain it using a real-world example, please?
Because right now I don't know the difference and both seem to be the same to me.
Let me give you an example:
When you're sending a large object to the server using a PUT request, you may include an Expect header like this:
PUT /media/file.mp4 HTTP/1.1
Host: api.example.org
Content-Length: 1073741824
Expect: 100-continue
This tells the server that it should respond with a 100 Continue status code if the server is going to be able to accept the request:
HTTP/1.1 100 Continue
When the client receives this, it knows the server will accept the request, and it may start sending the request body.
The big benefit here is that if there’s a problem with the request, a server can immediately respond with an error before the client starts sending the request body.
A simple use case is that a server might first require authentication and answer with 401 Unauthorized, or it might know in advance that it will not accept the Content-Type the client wants to send.
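To make the exchange concrete, here is a rough sketch of a client doing this by hand over a raw socket; the host and path are taken from the example above, and a real client would also have to handle a server that sends a final status straight away, or no interim response at all:

# Sketch of an Expect: 100-continue exchange over a raw socket.
# HOST/PATH/BODY are stand-ins; a production client also needs to cope with
# servers that never send the interim 100 response (hence the timeout).
import socket

HOST, PORT, PATH = "api.example.org", 80, "/media/file.mp4"
BODY = b"x" * 1024   # stand-in for the large payload

headers = (
    "PUT {path} HTTP/1.1\r\n"
    "Host: {host}\r\n"
    "Content-Length: {length}\r\n"
    "Expect: 100-continue\r\n"
    "\r\n"
).format(path=PATH, host=HOST, length=len(BODY)).encode("ascii")

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(headers)              # send only the headers for now
    interim = sock.recv(4096)          # e.g. b"HTTP/1.1 100 Continue\r\n\r\n"
    if interim.startswith(b"HTTP/1.1 100"):
        sock.sendall(BODY)             # server agreed, send the body
        final = sock.recv(4096)        # the real (final) response
        print(final.splitlines()[0])
    else:
        print("Server rejected the request up front:", interim.splitlines()[0])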
Mainly cited from:
https://evertpot.com/http/100-continue/
https://www.rfc-editor.org/rfc/rfc7231#section-5.1.1
From: http://www.rfc-editor.org/rfc/rfc7231.txt
6.2.1. 100 Continue
The 100 (Continue) status code indicates that the initial part of a
request has been received and has not yet been rejected by the
server. The server intends to send a final response after the
request has been fully received and acted upon.
When the request contains an Expect header field that includes a
100-continue expectation, the 100 response indicates that the server
wishes to receive the request payload body, as described in
Section 5.1.1. The client ought to continue sending the request and
discard the 100 response.
If the request did not contain an Expect header field containing the
100-continue expectation, the client can simply discard this interim
response.
(edited, thank you Julian for noticing :)
I am curious whether there is any standard method in the HTTP 1.x protocol to signal that a problem occurred on the server during a response that started as 200 OK.
How can you tell the client that something went wrong on the server if the 200 OK header has already been returned and you are in the middle of sending the response body, in some standards-compliant way?
UPD: There is a duplicate, but without a single answer(!): HTTP: error during reply after 200 OK status code.
To be specific: I cannot use Content-Length as an end-of-response check, because the length is not known when the response starts.
Additionally, I cannot buffer the whole response on the server before sending it (it is too big and I would run out of memory, it takes too long to generate so the user cannot wait, etc.).
There is no standard method to do what you want.
To be precise, the standard method is to buffer the response on the server, then send a 200 OK and the Content-Length, followed by the content. As stated, this does not work for you.
The only alternative I can think of is to wrap the content in some format that makes it discoverable whether it was sent correctly. For example, you might end it with a hash or even a digital signature. But obviously, such mechanisms are not part of the HTTP standard.
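For illustration, one such ad-hoc scheme (explicitly not part of HTTP) could be to stream the body and then append a hash of everything sent, which the client recomputes and compares; a rough sketch:

# Ad-hoc integrity wrapper (not part of HTTP): stream the payload, then append
# a trailing "\n<sha256-hex>\n" line so the receiver can detect truncation.
import hashlib

def wrap_stream(chunks):
    """Yield payload chunks, followed by a trailing SHA-256 line."""
    digest = hashlib.sha256()
    for chunk in chunks:
        digest.update(chunk)
        yield chunk
    yield b"\n" + digest.hexdigest().encode("ascii") + b"\n"

def verify(data):
    """Check the fixed-size trailing hash line against the payload before it."""
    payload, tail = data[:-66], data[-66:]   # 64 hex chars + two newlines
    expected = b"\n" + hashlib.sha256(payload).hexdigest().encode("ascii") + b"\n"
    return tail == expected

# Example: b"".join(wrap_stream([b"part1", b"part2"])) verifies as True;
# if the stream is cut short, verify() returns False.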
As an HTTP 1.1 server, I reply to a GET request with 200 OK status code, then start sending the data to the client.
During this send, an error occurs, and I cannot finish.
I cannot send a new status code as a final status code has already been sent.
How should I behave to let the client know an error occurred and I cannot continue with this HTTP request?
I can think of only one solution: closing the socket. But it's not perfect: it breaks the keep-alive feature, and no clear explanation of the error is given to the client.
The HTTP standard seems to suppose that the server already knows exactly what to reply before it starts replying.
But this is not always the case.
Examples:
I return a very large file (several GB) from disk, and I get an IO error at some point during the reading of the file.
Same example with a large DB dump.
I cannot construct my whole response in memory then send it.
The HTTP 1.1 standard helps for such usage with the chunked transfer encoding: I don't even need to know the final size before starting to send the reply.
So HTTP 1.1 does not exclude these usages.
I finally found a possible solution for this:
HTTP 1.1 Trailer headers.
In chunked encoded bodies, HTTP 1.1 allows the sender to add data after the last (empty) chunk, in the form of a block of headers.
The specification hints at use cases like computing an md5 of the body on the fly and sending it after the body, so the client can check its integrity.
I think it could be used for error reporting, even though I haven't found anything about that kind of usage (a rough sketch follows the list below).
The issues I see with this are:
this requires using chunked encoding (but it's not much of an issue)
trailer support is probably very low:
server-side (it could be worked around by creating the chunked encoding manually, but since chunking is applied after the content encoding (gzip), that would require a lot of reimplementation)
client-side (bugs fixed only in 2010 in curl, for example)
and in proxies (which could lose the trailers if not properly implemented)
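For what it's worth, the wire format is simple enough to produce by hand if you do go that route; a rough sketch of a chunked body ending in an application-defined trailer field (the X-Stream-Status name is made up, and the server would have to advertise it in a Trailer response header up front):

# Rough sketch of hand-rolled chunked encoding with an application-defined
# trailer field ("X-Stream-Status" is made up) used to report late errors.
def chunked_with_trailer(chunks, error=None):
    """Yield the raw bytes of a chunked body, ending with a trailer section."""
    for chunk in chunks:
        yield b"%x\r\n%s\r\n" % (len(chunk), chunk)   # size in hex, CRLF, data, CRLF
    yield b"0\r\n"                                    # last chunk (size zero)
    status = b"error: " + error if error else b"ok"
    yield b"X-Stream-Status: " + status + b"\r\n"     # the trailer header
    yield b"\r\n"                                     # end of the trailer section

# The response headers sent before this body would include:
#   Transfer-Encoding: chunked
#   Trailer: X-Stream-Status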
I posted a similar question and pushed it to get answered; there you can see that there is no solution:
How to tell there's something wrong with the server during response that started as 200 OK. Fail gracefully
I couldn't find an RFC that answers this question. Perhaps you guys can point me in the right direction.
I'm implementing a stripped-down HTTP server whose only function is to accept big multipart-encoded uploads.
In certain cases, such as when the file is too big or the client is not authorized to upload, I want the server to reply with an error and close the connection immediately.
It looks like the Chrome browser doesn't like this, because it thinks the server returned HTTP status code zero:
Could not get any response
This seems to be like an error connecting to http://my_ubuntu:8080/api/upload. The response status was 0.
Check out the W3C XMLHttpRequest Level 2 spec for more details about when this happens.
So my question is:
Is it allowed for an HTTP server to reply to the client before it has received the complete request?
update: Just tested it with an iOS 6 client. Same thing, it thinks the server abruptly closed the connection :(
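As far as I can tell, HTTP/1.1 does permit a server to respond before it has read the entire request, but if the server then closes the connection while the client is still sending, many clients only see the connection being dropped, which matches what you describe. A rough sketch of one common mitigation, assuming the limit can be checked from the headers alone: send the error with Connection: close, then keep reading the client's in-flight data briefly before closing, so the response is not lost to a TCP reset (the port and size limit are made up):

# Sketch: reject an oversized upload before reading the whole body.
# Reads only the header block, answers 413 with "Connection: close",
# and keeps reading briefly so the client's in-flight body does not
# trigger a TCP reset before the client has seen the response.
import socket

MAX_BODY = 10 * 1024 * 1024   # made-up limit

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        data = b""
        while b"\r\n\r\n" not in data:             # read just the header block
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
        header_lines = data.split(b"\r\n\r\n", 1)[0].lower().split(b"\r\n")
        too_big = any(l.startswith(b"content-length:") and
                      int(l.split(b":", 1)[1]) > MAX_BODY for l in header_lines)
        if too_big:
            conn.sendall(b"HTTP/1.1 413 Payload Too Large\r\n"
                         b"Connection: close\r\n"
                         b"Content-Length: 0\r\n\r\n")
            conn.settimeout(2)
            try:
                while conn.recv(65536):            # drain briefly before closing
                    pass
            except OSError:
                pass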
This is a great question and apparently it is very ambiguous. You will probably enjoy reading this article on the "Million Dollar Bug" - http://jacquesmattheij.com/the-several-million-dollar-bug
I think this is a certificate trust issue. Try manually trusting the site; subsequent requests should then work.