This question was motivated by the answers here:
What to do with errors when streaming the body of an Http request
In this case, I have already written an HTTP 200 OK header; if an error then occurs, I need to amend the response by writing a trailing header that says there was an error, even though the success header has already been sent.
I have this Node.js code:
const writeResponse = function(file: string, socket: Socket){
  socket.write([
    'HTTP/1.1 200 OK',
    'Content-Type: text/javascript; charset=UTF-8',
    'Content-Encoding: UTF-8',
    'Accept-Ranges: bytes',
    'Connection: keep-alive',
  ].join('\n') + '\n\n');

  getStream(file)
    .pipe(socket)
    .once('error', function (e: any) {
      // there was an error
      // how can I write trail headers here ?
      socket.write('some bad shit happened\n')
    });
}
how do I write a useful trail header to the response that can be displayed well by the browser?
I think this is the relevant spec for trail headers:
https://www.rfc-editor.org/rfc/rfc2616#section-14.40
I think they should be called "trailing headers", but whatever.
Firstly:
I think this is the relevant spec for trail headers: https://www.rfc-editor.org/rfc/rfc2616#section-14.40
RFC 2616 has been obsoleted by RFC 7230. The current spec for trailers is RFC 7230 § 4.1.2.
Secondly:
].join('\n') + '\n\n'
Lines in HTTP message framing are terminated with \r\n, not \n.
Thirdly:
Content-Encoding: UTF-8
Content-Encoding is for content codings (like gzip), not charsets (like UTF-8). You probably don’t need to indicate charset separately from Content-Type.
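Putting the second and third points together, a corrected version of the header write from the question might look like this (same raw-socket approach; the charset stays inside Content-Type and the separate Content-Encoding line is dropped):

socket.write([
  'HTTP/1.1 200 OK',
  'Content-Type: text/javascript; charset=UTF-8',
  'Accept-Ranges: bytes',
  'Connection: keep-alive',
].join('\r\n') + '\r\n\r\n');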
And lastly:
how do I write a useful trail header to the response that can be displayed well by the browser?
You don’t. Mainstream Web browsers do not care about trailers.
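(For completeness: if you still wanted to emit trailers for a non-browser client, a sketch using Node's built-in http module rather than a raw socket could look like the following. The X-Stream-Error trailer name is hypothetical; trailers are only sent with chunked transfer coding and should be declared in a Trailer header.)

const http = require('http');

http.createServer(function (req, res) {
  // declare the trailer up front, then stream the body
  res.writeHead(200, {
    'Content-Type': 'text/javascript; charset=UTF-8',
    'Trailer': 'X-Stream-Error'
  });
  res.write('console.log("first chunk");\n');
  // if streaming fails partway through, report it in the trailer
  res.addTrailers({ 'X-Stream-Error': 'stream failed' });
  res.end();
}).listen(8080);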
See also (by the same user?): How to write malformed HTTP response to “guarantee” something akin to HTTP 500
I've been dabbling with the underlying client-server communications (cURL, browsers, and server responses, among a variety of others). I think at this point I understand the basic structure of a request and response:
Request:
(Method) (File) (Protocol)
(Headers)
(Empty)
(Body?)
Response:
(Protocol) (Code) (Meaning)
(Headers)
(Empty)
(Body?)
This was working fine until I sent compressed information. More precisely, I used gzip to send HTML in the response.
First, I'm testing with an up-to-date version of Chrome, and if I use a prebuilt HTTP server solution, gzip (in exactly the way I'm using it now) works fine.
The Accept-Encoding header sent by Chrome states that it will accept gzip. My guess is there's something specific I haven't run across yet (and haven't been able to find through searching).
All that said, here's the response:
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 205
(Zipped data...)
If I remove gzip and leave everything else exactly the same, this works fine. The Content-Length is the length of the zipped data, not the raw data, though I commented that out to make sure it wasn't doing something I wasn't expecting. Chrome doesn't 'require' Content-Length to figure out what it wants to do.
At one stage, I also had Content-Type: text/html; charset=utf-8 which didn't make a difference (Underneath the zip, the string is converted to utf-8 on the server).
Originally I zipped the headers and the body together, for consistency (and the smallest length). I figured that failed because the headers have to be readable by the browser for it to figure out that the body is zipped.
In both cases, however, Chrome displays "ERR_CONTENT_DECODING_FAILED" which is classic "We don't know how to decode the information."
Prior to trying to embed this into the same response, I sent the headers first and the body second, thinking that may be the best approach -- two different responses.
Chrome took this to mean the server was pushing downloadable content and entered an infinite acceptance loop, so clearly that wasn't the solution. In addition, the content it received wasn't used.
Technically, a normal web server would end the transmission after sending the second response, but I'm experimenting without that to better understand the connections.
Is there something I'm missing, or are there any thoughts on what might be going wrong?
Thanks!
Edit
The code is scattered across various functions (I doubt anyone wants to look at ~300 lines), but I've picked out the relevant pieces. Right before gzip, self.written is the HTML.
const zipper = require('zlib');

push(value, callback)
{
    this.socket.write(value, callback);
}

let self = this;
zipper.gzip(self.written, function(error, zipped)
{
    self.setHeaders(
    {
        'Content-Encoding': 'gzip',
        'Content-Length': zipped.length
    });
    self.getHeaderString(function(error, headers)
    {
        self.push(headers + zipped, callback);
    });
});
When I console.log the final response:
I've been playing with express.js, trying to return a simple JSON object, and noticed that even though I explicitly set the Content-Type header to application/json, it is only visible on the first response, when the status code is 200. Every following response with a 304 won't have a Content-Type header.
My code sample:
app.get('/user', function (req, res) {
  res.set('Content-Type', 'application/json');
  res.send([
    { user: "john", email: "john@example.com" },
    { user: "marry", email: "marry@example.com" },
    { user: "dan", email: "dan@example.com" }
  ]);
});
What is the reason for that?
304 Not Modified means that the request contained a conditional header asking the server to respond with the contents of the resource only if the resource has been modified.
Since no content is being returned, the Content-Type header is not sent. This is the recommended behavior for a 304 Not Modified HTTP reply.
From RFC 7232 §4.1 :
The server generating a 304 response MUST generate any of the
following header fields that would have been sent in a 200 (OK)
response to the same request: Cache-Control, Content-Location, Date,
ETag, Expires, and Vary.
Since the goal of a 304 response is to minimize information transfer
when the recipient already has one or more cached representations,
a sender SHOULD NOT generate representation metadata other than the
above listed fields unless said metadata exists for the purpose of
guiding cache updates (e.g., Last-Modified might be useful if the
response does not have an ETag field).
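Concretely, a conditional exchange might look like this (the ETag value is made up); note that the 304 carries no body and therefore no Content-Type:

GET /user HTTP/1.1
Host: example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
Cache-Control: public, max-age=0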
I don't know anything about express.js, but I would look into what sort of caching is being done.
In an HTTP GET request, parameters are sent as a query string:
http://example.com/page?parameter=value&also=another
In an HTTP POST request, the parameters are not sent along with the URI.
Where are the values? In the request header? In the request body? What does it look like?
The values are sent in the request body, in the format that the content type specifies.
Usually the content type is application/x-www-form-urlencoded, so the request body uses the same format as the query string:
parameter=value&also=another
When you use a file upload in the form, you use the multipart/form-data encoding instead, which has a different format. It's more complicated, but you usually don't need to care what it looks like, so I won't show an example, but it can be good to know that it exists.
The content is put after the HTTP headers. The format of an HTTP POST is to have the HTTP headers, followed by a blank line, followed by the request body. The POST variables are stored as key-value pairs in the body.
You can see this in the raw content of an HTTP Post, shown below:
POST /path/script.cgi HTTP/1.0
From: frog@jmarshall.com
User-Agent: HTTPTool/1.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 32
home=Cosby&favorite+flavor=flies
You can see this using a tool like Fiddler, which you can use to watch the raw HTTP request and response payloads being sent across the wire.
Short answer: in POST requests, values are sent in the "body" of the request. With web-forms they are most likely sent with a media type of application/x-www-form-urlencoded or multipart/form-data. Programming languages or frameworks which have been designed to handle web-requests usually do "The Right Thing™" with such requests and provide you with easy access to the readily decoded values (like $_REQUEST or $_POST in PHP, or cgi.FieldStorage(), flask.request.form in Python).
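For instance, with Node.js and Express (just one illustration, using the hypothetical /submit route below), the decoded values become available on req.body once the matching body parser is enabled:

const express = require('express');
const app = express();

// decode application/x-www-form-urlencoded request bodies into req.body
app.use(express.urlencoded({ extended: false }));

app.post('/submit', function (req, res) {
  // for a body like "parameter=value&also=another"
  res.send('parameter = ' + req.body.parameter + ', also = ' + req.body.also);
});

app.listen(3000);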
Now let's digress a bit, which may help understand the difference ;)
The difference between GET and POST requests is largely semantic. They are also "used" differently, which explains the difference in how values are passed.
GET (relevant RFC section)
When executing a GET request, you ask the server for one entity, or a set of entities. To allow the client to filter the result, it can use the so-called "query string" of the URL. The query string is the part after the ?. This is part of the URI syntax.
So, from the point of view of your application code (the part which receives the request), you will need to inspect the URI query part to gain access to these values.
Note that the keys and values are part of the URI. Browsers may impose a limit on URI length. The HTTP standard states that there is no limit. But at the time of this writing, most browsers do limit the URIs (I don't have specific values). GET requests should never be used to submit new information to the server. Especially not larger documents. That's where you should use POST or PUT.
POST (relevant RFC section)
When executing a POST request, the client is actually submitting a new document to the remote host. So, a query string does not (semantically) make sense, which is why you don't have access to one in your application code.
POST is a little bit more complex (and way more flexible):
When receiving a POST request, you should always expect a "payload", or, in HTTP terms: a message body. The message body on its own is pretty much opaque, as there is no single standard format (as far as I can tell; maybe application/octet-stream?). The body format is defined by the Content-Type header. When using an HTML FORM element with method="POST", this is usually application/x-www-form-urlencoded. Another very common type is multipart/form-data if you use file uploads. But it could be anything, ranging from text/plain to application/json or even a custom application/octet-stream.
In any case, if a POST request is made with a Content-Type which cannot be handled by the application, it should return a 415 status-code.
Most programming languages (and/or web-frameworks) offer a way to de/encode the message body from/to the most common types (like application/x-www-form-urlencoded, multipart/form-data or application/json). So that's easy. Custom types require potentially a bit more work.
Using a standard HTML form encoded document as example, the application should perform the following steps:
Read the Content-Type field
If the value is not one of the supported media-types, then return a response with a 415 status code
otherwise, decode the values from the message body.
Again, languages like PHP, or web-frameworks for other popular languages will probably handle this for you. The exception to this is the 415 error. No framework can predict which content-types your application chooses to support and/or not support. This is up to you.
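A minimal sketch of those steps in a plain Node.js handler (hypothetical, supporting only application/x-www-form-urlencoded) might look like this:

const http = require('http');

http.createServer(function (req, res) {
  // 1. read the Content-Type field (ignore parameters such as charset)
  const type = (req.headers['content-type'] || '').split(';')[0].trim();

  // 2. unsupported media type -> 415
  if (req.method === 'POST' && type !== 'application/x-www-form-urlencoded') {
    res.writeHead(415);
    res.end();
    return;
  }

  // 3. otherwise, decode the values from the message body
  let body = '';
  req.on('data', function (chunk) { body += chunk; });
  req.on('end', function () {
    const values = new URLSearchParams(body);   // e.g. "home=Cosby&favorite+flavor=flies"
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('home = ' + values.get('home'));
  });
}).listen(8080);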
PUT (relevant RFC section)
A PUT request is pretty much handled in exactly the same way as a POST request. The big difference is that a POST request is supposed to let the server decide how to (and if at all) create a new resource. Historically (per the now-obsolete RFC 2616), it was to create a new resource as a "subordinate" (child) of the URI the request was sent to.
A PUT request in contrast is supposed to "deposit" a resource exactly at that URI, and with exactly that content. No more, no less. The idea is that the client is responsible to craft the complete resource before "PUTting" it. The server should accept it as-is on the given URL.
As a consequence, a POST request is usually not used to replace an existing resource. A PUT request can do both create and replace.
Side-Note
There are also "path parameters" which can be used to send additional data to the remote host, but they are so uncommon that I won't go into too much detail here. But, for reference, here is an excerpt from the RFC:
Aside from dot-segments in hierarchical paths, a path segment is considered
opaque by the generic syntax. URI producing applications often use the
reserved characters allowed in a segment to delimit scheme-specific or
dereference-handler-specific subcomponents. For example, the semicolon (";")
and equals ("=") reserved characters are often used to delimit parameters and
parameter values applicable to that segment. The comma (",") reserved
character is often used for similar purposes. For example, one URI producer
might use a segment such as "name;v=1.1" to indicate a reference to version
1.1 of "name", whereas another might use a segment such as "name,1.1" to
indicate the same. Parameter types may be defined by scheme-specific
semantics, but in most cases the syntax of a parameter is specific
to the implementation of the URI's dereferencing algorithm.
You cannot send POST data by typing it directly into the browser's URL bar.
You can see how POST data is sent over the network with Live HTTP Headers, for example.
The result will be something like this:
http://127.0.0.1/pass.php
POST /pass.php HTTP/1.1
Host: 127.0.0.1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:18.0) Gecko/20100101 Firefox/18.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Referer: http://127.0.0.1/pass.php
Cookie: passx=87e8af376bc9d9bfec2c7c0193e6af70; PHPSESSID=l9hk7mfh0ppqecg8gialak6gt5
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 30
username=zurfyx&pass=password
At the end, after
Content-Length: 30
the body line
username=zurfyx&pass=password
holds the POST values.
The default media type for a form POST is application/x-www-form-urlencoded. This is a format for encoding key-value pairs (keys may be duplicated). Each key-value pair is separated by an & character, and each key is separated from its value by an = character.
For example:
Name: John Smith
Grade: 19
Is encoded as:
Name=John+Smith&Grade=19
This is placed in the request body after the HTTP headers.
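For instance, JavaScript's URLSearchParams (an illustration, not part of the original answer) produces exactly this serialization:

// application/x-www-form-urlencoded serialization of the example above
const body = new URLSearchParams({ Name: 'John Smith', Grade: '19' }).toString();
console.log(body); // "Name=John+Smith&Grade=19"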
Form values in HTTP POSTs are sent in the request body, in the same format as the querystring.
For more information, see the spec.
Some web services require you to place request data and metadata separately. For example, a remote function may expect the signed metadata string to be included in the URI, while the data is posted in the HTTP body.
The POST request may semantically look like this:
POST /?AuthId=YOURKEY&Action=WebServiceAction&Signature=rcLXfkPldrYm04 HTTP/1.1
Content-Type: text/tab-separated-values; charset=iso-8859-1
Content-Length: []
Host: webservices.domain.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: identity
User-Agent: Mozilla/3.0 (compatible; Indy Library)
name id
John G12N
Sarah J87M
Bob N33Y
This approach logically combines the query string and the POSTed body using a single Content-Type, which acts as a "parsing instruction" for the web server.
Please note: in the request line, the HTTP/1.1 token is preceded by a space (#32) and followed by the line terminator (CRLF).
First of all, let's differentiate between GET and POST.
GET: It is the default HTTP request made to the server, used to retrieve data from it; the query string that comes after ? in the URI identifies the resource being requested.
This is the format:
GET /someweb.asp?data=value HTTP/1.0
Here, data=value is the query-string value being passed.
POST: It is used to send data to the server more safely. This is the format of a POST request:
POST /someweb.asp HTTP/1.0
Host: localhost
Content-Type: application/x-www-form-urlencoded //you can put any format here
Content-Length: 13 //it depends on the body
Name=somename
Why POST over GET?
In GET, the values being sent to the server are usually appended to the base URL in the query string. There are two consequences of this:
GET requests are saved in the browser history together with their parameters, so your passwords end up unencrypted in the browser history. This was a real issue for Facebook back in the day.
Usually servers have a limit on how long a URI can be. If you send too many parameters, you might receive a 414 error (URI Too Long).
In the case of a POST request, the data from the fields is added to the body instead. The length of the request parameters is calculated and put in the Content-Length header, and no important data is directly appended to the URL.
You can use the Google Developer Tools' network section to see basic information about how requests are made to the servers.
You can also add more values to your request headers, such as Cache-Control, Origin, and Accept.
There are many formats for POST parameters:
formdata
raw data
json
encoded data
file
xml
They are controlled by the Content-Type header, expressed as a MIME type.
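As a sketch of two of those formats (using the fetch API and a hypothetical URL), the same data can be posted as a form body or as JSON; the server tells them apart only by the Content-Type header:

// form-urlencoded body
fetch('https://example.com/users', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({ name: 'john', grade: '19' })
});

// JSON body carrying the same data
fetch('https://example.com/users', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'john', grade: '19' })
});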
In CGI Programming on the World Wide Web the author says:
Using the POST method, the server sends the data as an input stream to
the program. ..... since the server passes information to this program
as an input stream, it sets the environment variable CONTENT_LENGTH to
the size of the data in number of bytes (or characters). We can use
this to read exactly that much data from standard input.
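A hypothetical CGI-style script doing exactly that in Node.js (assuming the server has set CONTENT_LENGTH as described) might look like:

// read exactly CONTENT_LENGTH bytes of POST data from standard input
const length = parseInt(process.env.CONTENT_LENGTH || '0', 10);
const chunks = [];
let received = 0;

process.stdin.on('data', function (chunk) {
  chunks.push(chunk);
  received += chunk.length;
  if (received >= length) {
    const body = Buffer.concat(chunks).toString('utf8', 0, length);
    process.stdout.write('Content-Type: text/plain\r\n\r\n');  // CGI response headers, then a blank line
    process.stdout.write('received: ' + body + '\n');
    process.exit(0);
  }
});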
Why doesn't .NET Web API detect the request Content-Type automatically and do auto-binding?
If I make a request without specifying the Content-Type, an HTTP 500 error occurs:
No MediaTypeFormatter is available to read an object of type 'ExampleObject' from content with media type ''undefined''.
Why not try to detect the incoming data format and bind it automatically?
Another case:
This request with Content-Type: application/x-www-form-urlencoded sends JSON:
User-Agent: Fiddler
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Host: localhost:10329
Content-Length: 42
Request Body:
{"Name":"qq","Email":"ww","Message":"ee"}:
My action doesn't detect the JSON request data automatically in the object parameter:
public void Create(ExampleObject example) //example is null
{
}
Instead of leaving the object null, why don't they try to resolve it?
For the binding to occur, I need to send the request with Content-Type: application/json.
Wouldn't it be better if .NET Web API detected the type of the request data and bound it automatically? Why doesn't it work that way?
application/x-www-form-urlencoded means you will be sending data in the x-www-form-urlencoded standard. Sending data in another standard will not work.
Sounds like what you want to do is accept multiple formats from the server.
The way HTTP works is that the client makes a request to the server for a resource and tells the server what content types it understands. This means the client doesn't get a response it isn't able to decode, and the server knows which responses are most appropriate for the client. For example, if you are a web browser, the most appropriate content type is text/html, but if you get XML you can probably do something with that too. So you would make a request with the following:
accept: text/html, application/xml
This says you prefer HTML but also understand XML.
In your example, if your client wants application/x-www-form-urlencoded but can also deal with JSON, then you should do the following when making a request:
accept: application/x-www-form-urlencoded, application/json
For more details, see the HTTP spec on Accept headers: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
You may also want to create a new media type formatter so your server knows how to give clients application/x-www-form-urlencoded; take a look at this blog post for more info on how to do that: http://www.strathweb.com/2012/04/rss-atom-mediatypeformatter-for-asp-net-webapi/