I'm creating zip files on the fly for download. I don't know exactly how big they are going to turn out, but I can give a pretty good guess. I added a Content-Length header for the sake of client-side fluff, since the downloads are pretty big, but having a progress bar isn't essential. https://www.rfc-editor.org/rfc/rfc7230#section-3.3.2 says Content-Length is for the "anticipated size", which can be interpreted as either exact or approximate.
So far, none of the browsers I've tried have a problem with an approximate Content-Length header, but are there any out there that do?
Read it completely!
If you are using HTTP/1.1 and do not use chunked transfer encoding, the Content-Length absolutely needs to be precise; it is needed for message framing.
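If the exact size is not known up front, the usual way out is chunked transfer coding, where the message framing comes from the chunk sizes rather than from Content-Length. Below is a minimal sketch of what that looks like on the wire, written as plain C++ printing to stdout (no particular server framework is assumed, and the zip fragments are placeholders):

#include <cstdio>
#include <string>

// Wrap one piece of data as an HTTP/1.1 chunk: size in hex, CRLF, data, CRLF.
std::string chunk(const std::string& data)
{
    char size[32];
    std::snprintf(size, sizeof(size), "%zX\r\n", data.size());
    return size + data + "\r\n";
}

int main()
{
    // No Content-Length at all; the framing comes from the chunks below.
    std::fputs("HTTP/1.1 200 OK\r\n"
               "Content-Type: application/zip\r\n"
               "Transfer-Encoding: chunked\r\n\r\n", stdout);

    // Each piece of the zip is sent as soon as it is produced.
    std::fputs(chunk("...first bytes of the zip...").c_str(), stdout);
    std::fputs(chunk("...more bytes...").c_str(), stdout);
    std::fputs("0\r\n\r\n", stdout); // zero-length chunk terminates the body
    return 0;
}

The trade-off is that the browser never learns the total size of a chunked response, so it cannot show a percentage-based progress bar.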
I got a file that seems to not have anything readable in it (for a human).
How can I be sure that it doesn't contain anything human-readable? It's way too large to read entirely (maybe with a program that searches for words, or measures entropy, or something like that).
How can I tell whether this file is compressed or encrypted, or both? And is it possible that it uses a proprietary compression scheme, so that I can't distinguish it from encryption?
If I can make sure that it's encrypted, I can stop my work right away; but if it's just encoded/compressed, maybe I can find a way to read it.
(I tried to compress it with the basic Windows archiver and it loses 18% of its size. Does that mean it's not encrypted? Does encryption permit that much compression?)
Yes, it is certainly possible to create a compression format for which all possible sequences of bits are valid. In that case, you would not be able to distinguish the compressed data from random or encrypted data.
I am not aware of a commonly implemented compression format that has that property. You could try all of the decompressors you can find on the data, to see if any of them decompress all of the data without erroring out. You can also try starting at different locations in your data, since there may be some sort of header before the compressed data.
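If you want a quick heuristic before trying decompressors, byte-level entropy is easy to measure: well-encrypted (or already well-compressed) data sits very close to 8 bits per byte, while data that the Windows archiver can shrink by 18% will score noticeably lower. A rough sketch in C++ (the file name is just a placeholder):

#include <cmath>
#include <cstdio>
#include <fstream>
#include <vector>

int main(int argc, char* argv[])
{
    const char* path = (argc > 1) ? argv[1] : "mystery.bin"; // placeholder name
    std::ifstream in(path, std::ios::binary);

    std::vector<unsigned long long> count(256, 0);
    unsigned long long total = 0;
    char c;
    while (in.get(c)) { ++count[static_cast<unsigned char>(c)]; ++total; }

    double entropy = 0.0;
    for (unsigned long long n : count) {
        if (n == 0) continue;
        const double p = static_cast<double>(n) / total;
        entropy -= p * std::log2(p); // Shannon entropy in bits per byte
    }
    std::printf("%s: %.3f bits/byte over %llu bytes\n", path, entropy, total);
    // Close to 8.0  -> looks random (encrypted or already well compressed).
    // Clearly lower -> structured data; worth trying decompressors/decoders.
    return 0;
}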
Online Decryption
If you would like to try decrypting the file, you could simply copy and paste its contents into https://online-toolz.com/tools/text-encryption-decryption.php
That tool can decrypt messages quickly.
Encoder & Decoder
https://www.base64decode.org/
I found this website a while ago; it is trusted and fast, with great reviews.
This method can also help with your request.
I've been trying to update Ross-Gill's Twitter API for REBOL2 to support uploading media. From looking at its source, the REBOL cookbook, the codeconscious site, and other questions here, my understanding is that read/custom is the preferred way to POST data to websites.
However, I haven't been able to find any real documentation on read/custom. For example: Does it support sending multipart/form-data? (I've managed to work around this by manually composing each part, but it doesn't seem to work for all image files on Twitter's end and is a bit of a hack). Does read/custom only return text on an HTTP/1.0 200 OK response? (It appears so, which is problematic when I receive HTTP/1.0 202 Accepted and need to read the resulting data). Is there a reason that read/custom/binary doesn't appear to send binary data correctly without converting the data using to-string?
TL;DR: Is there good documentation on REBOL2's read/custom somewhere? Alternatively, is read/custom only meant for basic POSTs and I should be using ports and handling the HTTP responses manually?
You guessed right: read/custom is meant for simple HTTP posts, handling web form data only (that is why it fails on binary data). There is no official documentation for it, but that is not an issue, as you can access the source code of the HTTP implementation:
probe system/schemes/HTTP
There you can see that the /custom refinement supports two keywords, post and header (for setting custom HTTP headers). It also appears that even if you use both keywords, Content-Type will be forced to application/x-www-form-urlencoded no matter what (which is probably the reason why your binary data gets rejected by the server, as the provided MIME type is wrong).
In order to work around that, you can save the HTTP object, modify its implementation to fit your needs and reload it.
Saving:
save %http-scheme.r system/schemes/HTTP
Reloading:
system/schemes/HTTP: do load %http-scheme.r
If you just disable the hard-coded Content-Type setting in the HTTP code and then provide your own using the header keyword, it should work fine, even with binary data:
read/custom <url> [header [Content-Type: <...>] post <data>]
Hope this helps.
By browsing the source code and playing with some toy examples, I came to the conclusion that Netty currently (as of 5.0.0 alpha2) supports only multipart/form-data, but not multipart/mixed, at least not as specified in RFC 1341 (sec. 7.2). It looks like mixed is supported inside a part of a multipart/form-data message, though.
Is that really the case or am I missing something?
Since I got the very same question, I'll post here what could be the beginning of an answer...
However, the current implementation seems to have 2 limitations:
1) It supports only multipart/form-data. I would like to also be able to use multipart/mixed, which is very similar on the wire (see http://www.w3.org/Protocols/rfc1341/7_2_Multipart.html). I think that the encoder/decoder could be extended to understand multipart/mixed and still create the same kinds of HttpDatas.
Yes, the current codec is focused on multipart/form-data. It should be possible to extend it, or to propose a new one (probably based on it), to enable support for multipart/mixed.
The current codec was made based on user needs (mine in the beginning, others following). Since no one has yet requested support for multipart/mixed, it was not coded, except for the internal multipart/mixed handling.
The reference is RFC1867.
As Netty loves contributions, you are more than welcome to propose yours ;-)
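For reference, the two container types really are close on the wire. Here is a hypothetical illustration in plain C++ (string building only, not Netty code) of how a multipart/form-data body is framed; a multipart/mixed body uses the same boundary framing and differs mainly in the container Content-Type and the per-part headers:

#include <iostream>
#include <string>

int main()
{
    const std::string boundary = "----demo-boundary-42"; // arbitrary example boundary
    std::string body;

    // First part: an ordinary form field.
    body += "--" + boundary + "\r\n"
            "Content-Disposition: form-data; name=\"comment\"\r\n\r\n"
            "hello\r\n";

    // Second part: a file upload.
    body += "--" + boundary + "\r\n"
            "Content-Disposition: form-data; name=\"file\"; filename=\"a.txt\"\r\n"
            "Content-Type: text/plain\r\n\r\n"
            "file contents here\r\n";

    body += "--" + boundary + "--\r\n"; // closing boundary

    // For multipart/mixed the header below would say multipart/mixed instead,
    // and the parts would typically carry plain Content-Type headers rather
    // than form-data Content-Disposition fields.
    std::cout << "Content-Type: multipart/form-data; boundary=" << boundary
              << "\r\n\r\n" << body;
    return 0;
}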
2) It seems that it is only possible to use efficient HttpDatas like FileUpload if you are in multipart/form-data. I would like to be able to add a FileUpload to the request and, in this way, make the contents of the file be the body of the request, without making it a multipart request. I think this could be done by extending the standard post encoder to understand FileUploads.
This could be a bit more complicated, since it has to be done without multipart, which is where the FileUpload class currently lives.
Maybe a good direction could be to switch to ChunkedFile or ChunkedNioFile and to combine it with "your" HttpCodec or in your "HttpHandler" when writing the request body, in order to pass the content through the ChunkedFile.
Hoping this helps you in the right direction...
I have seen the xbuf from G-WAN. I'm not sure when it is not a good fit. Can it be used for integers or floats? When is it not recommended? I am very much inclined to use it as often as possible.
As an application server, G-WAN is expected to generate dynamic contents.
In this case, the server is building a reply served to clients.
Part of these dynamic contents are binary (like pictures) and this is why G-WAN offers a native ultra-fast in-memory GIF, charts, and frame-buffer API. More complex images can be generated with general-purpose libraries like Cairo (used by Internet browsers).
But most dynamic contents are text (like HTML pages, JSON payloads, etc.).
And this is the purpose of the G-WAN xbuffer API, which works as an extended snprintf(), supporting strings, integers, floats, base64, hexdump, binary formatting (3 => "11"), and more.
The loan.c example illustrates very well how relevant, fast, and versatile xbuffers are.
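For a flavour of how it is used from a servlet, here is a minimal sketch. It assumes the stock gwan.h helpers (get_reply(), xbuf_cat(), xbuf_xcat()) as they appear in the examples bundled with G-WAN; check gwan.h for the exact signatures and the extra format specifiers:

#include "gwan.h" // G-WAN servlet API (assumed to be on the include path)

int main(int argc, char *argv[])
{
    xbuf_t *reply = get_reply(argv);        // the server's reply buffer

    xbuf_cat(reply, "<h1>xbuf demo</h1>");  // plain string append
    xbuf_xcat(reply, "<p>%d items, total %.2f</p>", 3, 19.99); // printf-style formatting

    return 200; // HTTP status code handed back to G-WAN
}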
When should you not use it?
When sending an empty reply (HTTP status code 204), or (less likely but still possible) when appending data directly to a previously resized reply buffer. The G-WAN examples show how to do that; look at the fractal.c file.
I need to send certain data to the server in a .zip archive, over an HTTP POST request, MIME-encoded. I take it that means only that I need to specify the MIME type in a request header, but I'm confused as to what I should put in the request's body. So far I can see two ways to do it:
Usually, as I understand it (sorry, I'm not a web coder, so I'm kind of lame with HTTP), a POST request body consists of pairs parameter_name=some+data divided by '&'. Should I do it the same way and write the contents of my file in base64 in one of the parameters? That would also let me provide supplemental parameters.
Or should I just fill the POST body with the contents of my file (in base64, right?)? If so, is there any way to provide additional info about the file?
Is only one of these ways acceptable, or are both? If so, what would be the best practice?
Also, code sample in C++ for Qt would be very-very much appreciated, but totally not necessary :)
The whole key=value body in POST requests is just for when you are sending form data to your server. If you want to POST only the contents of a .zip file, you can just send that as the body of your POST; there is no need to set it up like a form post as you describe. You can set the following headers in the request:
Content-Type: application/zip
Content-Disposition: attachment; filename=myzip.zip
You don't even necessarily have to base64 encode the body, although you should if that's what your server is expecting.
The Content-Disposition header is what you need to describe more about your file upload. You can find some details about it here:
http://en.wikipedia.org/wiki/MIME#Content-Disposition
and here
http://www.ietf.org/rfc/rfc2183.txt
At the server end, you just need to write some code that reads the request body in its entirety (which is straightforward, although YMMV depending on language and framework) and handles it however you want.
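Since you asked about Qt, here is a rough client-side sketch that POSTs the raw zip bytes with the headers above (the URL and file name are placeholders, the reply handling is kept minimal, and the project needs QT += network):

#include <QCoreApplication>
#include <QFile>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QFile zip("archive.zip");               // placeholder file name
    if (!zip.open(QIODevice::ReadOnly))
        return 1;

    QNetworkRequest request(QUrl("https://example.com/upload")); // placeholder URL
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/zip");
    request.setRawHeader("Content-Disposition", "attachment; filename=archive.zip");

    QNetworkAccessManager manager;
    QNetworkReply *reply = manager.post(request, zip.readAll()); // zip bytes as the body
    QObject::connect(reply, SIGNAL(finished()), &app, SLOT(quit()));

    return app.exec();
}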
For a real world example, you might find it useful to look at, say, AtomPub for how this is done:
http://bitworking.org/projects/atom/rfc5023.html