Write HTTP trailer headers manually

This question was motivated by the answers here:
What to do with errors when streaming the body of an Http request
In this case, I have already written an HTTP 200 OK status line and headers; if there is an error while streaming the body, I then need to amend the response by writing a trail header that says there was an error, even though a success status has already gone out.
I have this Node.js code:
const writeResponse = function (file: string, socket: Socket) {
  socket.write([
    'HTTP/1.1 200 OK',
    'Content-Type: text/javascript; charset=UTF-8',
    'Content-Encoding: UTF-8',
    'Accept-Ranges: bytes',
    'Connection: keep-alive',
  ].join('\n') + '\n\n');

  getStream(file)
    .pipe(socket)
    .once('error', function (e: any) {
      // there was an error
      // how can I write trail headers here ?
      socket.write('some bad shit happened\n');
    });
};
how do I write a useful trail header to the response that can be displayed well by the browser?
I think this is the relevant spec for trail headers:
https://www.rfc-editor.org/rfc/rfc2616#section-14.40
I think they should be called "trailing headers", but whatever.

Firstly:
I think this is the relevant spec for trail headers: https://www.rfc-editor.org/rfc/rfc2616#section-14.40
RFC 2616 has been obsoleted by RFC 7230. The current spec for trailers is RFC 7230 § 4.1.2.
Secondly:
].join('\n') + '\n\n'
Lines in HTTP message framing are terminated with \r\n, not \n.
Thirdly:
Content-Encoding: UTF-8
Content-Encoding is for content codings (like gzip), not charsets (like UTF-8). You probably don’t need to indicate charset separately from Content-Type.
And lastly:
how do I write a useful trail header to the response that can be displayed well by the browser?
You don’t. Mainstream Web browsers do not care about trailers.
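That said, if you want to experiment anyway: trailers are only valid on a chunked response, and the trailer field should be announced up front with a Trailer header (RFC 7230 § 4.1.2). Below is a minimal sketch of the wire format over a raw socket; X-Stream-Error is a made-up field name, and note that the charset stays inside Content-Type rather than being put in Content-Encoding:

import { Socket } from 'net';

// Minimal sketch, not production code: frame the body as chunked so a
// trailer field can legally follow the final zero-length chunk.
function writeWithTrailer(socket: Socket, body: string, error?: string) {
  socket.write([
    'HTTP/1.1 200 OK',
    'Content-Type: text/javascript; charset=UTF-8',
    'Transfer-Encoding: chunked',
    'Trailer: X-Stream-Error', // X-Stream-Error is a hypothetical field name
    '',
    '',
  ].join('\r\n')); // header lines end with CRLF; a blank line ends the header block

  const chunk = Buffer.from(body, 'utf8');
  socket.write(chunk.length.toString(16) + '\r\n'); // chunk size in hex
  socket.write(chunk);
  socket.write('\r\n');

  socket.write('0\r\n'); // zero-length chunk terminates the body
  if (error !== undefined) {
    socket.write('X-Stream-Error: ' + error + '\r\n'); // the trailer itself
  }
  socket.write('\r\n'); // final CRLF ends the message
}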
See also (by the same user?): How to write malformed HTTP response to “guarantee” something akin to HTTP 500

Related

Can the "Accept" HTTP header have a "charset" parameter?

I have encountered an HTTP client that uses the following header:
Accept: application/vnd.api+json; charset=utf-8
According to the HTTP spec, Accept headers can have parameters. The most common is the q parameter, which sets the relative priority of different content types. However, there are a number of reasons I don't think charset is a valid Accept parameter:
HTTP already has the Accept-Charset header, which seems to make this redundant
MDN doesn't include it in their documentation on Accept, even though they do include it on their Content-Type page
Werkzeug, Flask's HTTP parsing library, doesn't bother to parse charsets for Accept, even though it does for Content-Type
So it seems this Accept charset parameter is unusual. But is it wrong?
You quoted the spec, which says they are ok. What else needs to be said?
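For what it's worth, a generic parser treats charset=utf-8 as just another media-type parameter, syntactically no different from q. A rough sketch (not a complete RFC 7231 parser):

// Rough sketch: split an Accept value into media ranges and their parameters.
const parseAccept = (value: string) =>
  value.split(',').map((range) => {
    const [mediaType, ...params] = range.split(';').map((s) => s.trim());
    return {
      mediaType,
      params: Object.fromEntries(
        params.map((p) => p.split('=').map((s) => s.trim()) as [string, string])
      ),
    };
  });

console.log(parseAccept('application/vnd.api+json; charset=utf-8'));
// [ { mediaType: 'application/vnd.api+json', params: { charset: 'utf-8' } } ]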

Sabre JSON API Bargain Finder Max response as gzip

I am trying to get the Bargain Finder Max response compressed. I am setting "Accept-Encoding": "gzip", but the response comes back as plain JSON, not compressed.
The response headers also contain the following:
{'content-encoding': 'gzip', 'Content-Type': 'application/json;charset=UTF-8', 'Transfer-Encoding': 'chunked', 'Server': 'Sabre Gateway'}
There are 2 types of compressed responses:
Accept-Encoding: gzip
This is handled at the HTTP layer; you don't see the compression in the response because, as far as I know, the client decompresses it transparently.
As you can see in the response headers, one of them states content-encoding: gzip, which means the body was returned gzipped; otherwise you would likely see plain JSON.
The reason I say "likely" is that the endpoint seems to be configured to always return the BFM response gzipped, whether you request it (using Accept-Encoding: gzip) or not.
"CompressResponse": { "Value" : true }
This element is available in the schema but is not covered in the service description; it does not seem to be available for REST, only for SOAP (I have tested it).
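For reference, whether you "see" the gzip depends on the client: many HTTP libraries decompress transparently, while Node's core https module does not. A rough sketch of checking this yourself (the host and path below are placeholders, not the real Sabre endpoint):

import * as https from 'https';
import * as zlib from 'zlib';

const req = https.request(
  {
    host: 'api.example.com', // placeholder, not the real Sabre host
    path: '/v4/offers/shop', // placeholder path
    method: 'GET',
    headers: { 'Accept-Encoding': 'gzip' },
  },
  (res) => {
    // If content-encoding: gzip is present, the bytes on the wire are
    // compressed; Node's https module leaves decompression to you.
    const gzipped = res.headers['content-encoding'] === 'gzip';
    const body: NodeJS.ReadableStream = gzipped ? res.pipe(zlib.createGunzip()) : res;

    let json = '';
    body.setEncoding('utf8');
    body.on('data', (part) => (json += part));
    body.on('end', () => console.log(JSON.parse(json)));
  }
);
req.end();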

HTTP Response Structure Zipped

I've been dabbling with the underlying client-server communications (cURL, browsers, and server responses, among a variety of others). I think at this point I understand the basic structure of a request and response:
Request:
(Method) (File) (Protocol)
(Headers)
(Empty)
(Body?)
Response:
(Protocol) (Code) (Meaning)
(Headers)
(Empty)
(Body?)
This was working fine until I sent compressed information. More precisely, I used gzip to send HTML in the response.
First, I'm testing with an up-to-date version of Chrome, and if I use a prebuilt HTTP server solution, gzip (used in exactly the way I'm using it now) works fine.
The Accept-Encoding header sent by Chrome states that it will accept gzip. My guess is there's something specific I haven't run across yet (and haven't been able to find through searching).
All that said, here's the response:
HTTP/1.1 200 OK
Connection: keep-alive
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 205
(Zipped data...)
If I remove gzip and leave everything else exactly the same, this works fine. The Content-Length is the length of the zipped data, not the raw data, though I commented that out to make sure it wasn't doing something I wasn't expecting. Chrome doesn't 'require' Content-Length to figure out what it wants to do.
At one stage, I also had Content-Type: text/html; charset=utf-8 which didn't make a difference (Underneath the zip, the string is converted to utf-8 on the server).
Originally I zipped the headers and the body together, for consistency (and the smallest size). I figured that failed because the headers have to be readable by the browser for it to know the body is zipped.
In both cases, however, Chrome displays "ERR_CONTENT_DECODING_FAILED" which is classic "We don't know how to decode the information."
Prior to trying to embed this into the same response, I sent the headers first and the body second, thinking that may be the best approach -- two different responses.
Chrome took this to mean the server was pushing downloadable content and entered an infinite acceptance loop, so clearly that wasn't the solution. In addition, the content it received wasn't used.
Technically, a normal web server would end the transmission after sending the second response, but I'm experimenting without that to better understand the connections.
Is there something I'm missing, or are there any thoughts on what might be going wrong?
Thanks!
Edit
The code is scattered across various functions (I doubt anyone wants to look at ~300 lines), but I picked out the relevant pieces. Right before gzip, self.written is the HTML.
const zipper = require('zlib');

// Writes straight to the underlying socket.
push(value, callback)
{
  this.socket.write(value, callback);
}

let self = this;
zipper.gzip(self.written, function (error, zipped)
{
  self.setHeaders(
  {
    'Content-Encoding': 'gzip',
    'Content-Length': zipped.length
  });
  self.getHeaderString(function (error, headers)
  {
    // headers is a string; zipped is the Buffer returned by zlib.gzip
    self.push(headers + zipped, callback);
  });
});
When I console.log the final response:

Why doesn't a 304 response have a Content-Type header?

I've been playing with express.js, trying to return a simple JSON object, and noticed that even though I explicitly set the Content-Type header to application/json, it is only visible on the first response, when the status code is 200. Every following response with a 304 won't have a Content-Type header.
My code sample:
app.get('/user', function (req, res) {
  res.set('Content-Type', 'application/json');
  res.send([
    { user: "john", email: "john@example.com" },
    { user: "marry", email: "marry@example.com" },
    { user: "dan", email: "dan@example.com" }
  ]);
});
What is the reason for that?
304 Not Modified means that the request contained a conditional header asking the server to respond with the contents of the resource only if the resource has been modified.
Since no content is being returned, the Content-Type header is not sent. This is the recommended behavior for a 304 Not Modified HTTP reply.
From RFC 7232 § 4.1:
The server generating a 304 response MUST generate any of the following header fields that would have been sent in a 200 (OK) response to the same request: Cache-Control, Content-Location, Date, ETag, Expires, and Vary.
Since the goal of a 304 response is to minimize information transfer when the recipient already has one or more cached representations, a sender SHOULD NOT generate representation metadata other than the above listed fields unless said metadata exists for the purpose of guiding cache updates (e.g., Last-Modified might be useful if the response does not have an ETag field).
I don't know anything about express.js, but I would look into what sort of caching is being done.
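To see this in action against the Express handler above, you can replay the request with the ETag that Express generates by default for res.send. A rough sketch, assuming the app is listening on localhost:3000 (both the port and the exact header values are assumptions):

import * as http from 'http';

const get = (
  headers: http.OutgoingHttpHeaders,
  cb: (res: http.IncomingMessage) => void
) => http.get({ host: 'localhost', port: 3000, path: '/user', headers }, cb);

get({}, (first) => {
  first.resume(); // drain the body
  // First response: 200, Content-Type: application/json, plus an ETag.
  console.log(first.statusCode, first.headers['content-type'], first.headers.etag);

  get({ 'If-None-Match': first.headers.etag as string }, (second) => {
    second.resume();
    // Second response: 304 with no body and no Content-Type header.
    console.log(second.statusCode, second.headers['content-type']);
  });
});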

How to interpret HTTP Accept headers?

According to the HTTP/1.1 spec, an Accept header of the following
Accept: text/plain; q=0.5, text/html, text/x-dvi; q=0.8, text/x-c
is interpreted to mean
text/html and text/x-c are the preferred media types, but if they do not exist, then send the text/x-dvi entity, and if that does not exist, send the text/plain entity
Let's change the header to:
Accept: text/html, text/x-c
What should be returned if neither of these is acceptable? E.g. let's pretend that I only support application/json.
Maybe you should respond with a 406 Not Acceptable. That's how I read this.
Or a 415 Unsupported Media Type?
I would opt for a 406 because, in that case and according to the spec, the response SHOULD include a list of alternatives, although it is not clear to me what that list should look like.
"If an Accept header field is present, and if the server cannot send a response which is acceptable according to the combined Accept field value, then the server SHOULD send a 406 (not acceptable) response." -- RFC2616, Section 14.1
You have a choice. You can either reply with 406 and include an "entity" (e.g. HTML or text file) describing the available formats; OR if you are using HTTP 1.1, you can send the format you support even though it wasn't listed in the Accept header.
(see section 10.4.7 of RFC 2616)
"Note: HTTP/1.1 servers are allowed
to return responses which are not
acceptable according to the accept
headers sent in the request. In some
cases, this may even be preferable to
sending a 406 response. User agents
are encouraged to inspect the headers
of an incoming response to determine
if it is acceptable."
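Putting that together, a server that only produces application/json might handle negotiation like this. This is a rough sketch that only does a substring check on Accept; a real implementation would parse media ranges and q-values, and the port and payload are placeholders:

import * as http from 'http';

const server = http.createServer((req, res) => {
  const accept = req.headers.accept ?? '*/*';
  const acceptable =
    accept.includes('application/json') || accept.includes('*/*');

  if (!acceptable) {
    // 406 plus an entity describing the formats the server can produce.
    res.writeHead(406, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ supported: ['application/json'] }));
    return;
  }

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ hello: 'world' }));
});

server.listen(8080);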
