How to force an endpoint that takes files as base64 strings to stream? - retrofit

I have a (sort of) 3rd-party endpoint which accepts images as base64 strings in a JSON payload. Obviously this will lead to timeouts etc.
Is there some magical way (some server config etc.) to turn this into a streamed upload as if it were a file? Or will a regular multipart upload need to be implemented?
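If a file-accepting endpoint is available (or one can be added alongside the base64 one), a regular multipart upload with Retrofit streams the file from disk without ever holding it in memory. A minimal sketch; the base URL, endpoint path, and part name here are hypothetical:

import java.io.File;

import okhttp3.MediaType;
import okhttp3.MultipartBody;
import okhttp3.RequestBody;
import okhttp3.ResponseBody;
import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.http.Multipart;
import retrofit2.http.POST;
import retrofit2.http.Part;

public class StreamedUpload {
    // Hypothetical API; the real path and part name depend on the endpoint.
    interface UploadApi {
        @Multipart
        @POST("images")
        Call<ResponseBody> upload(@Part MultipartBody.Part image);
    }

    public static void main(String[] args) throws Exception {
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("https://api.example.com/") // hypothetical
                .build();
        UploadApi api = retrofit.create(UploadApi.class);

        File file = new File("photo.jpg");
        // A file-backed RequestBody is written to the socket in chunks,
        // so the image is never held in memory as one big base64 string.
        RequestBody body = RequestBody.create(MediaType.parse("image/jpeg"), file);
        MultipartBody.Part part = MultipartBody.Part.createFormData("image", file.getName(), body);

        ResponseBody response = api.upload(part).execute().body();
        System.out.println(response == null ? "no body" : response.string());
    }
}
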

Related

Best way to upload video via a presigned URL to S3?

I'm wondering what the best way is to upload a video to S3 via a presigned URL. I am primarily considering a standard HTTP PUT request, setting video/mp4 as the Content-Type and attaching the video file as the body.
Are there more efficient approaches, such as using a third-party library or compressing the video before sending it via the PUT request?
In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
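For the single-request case, at least the PUT body can be streamed from disk so the whole video is never buffered in memory. A minimal sketch; the presigned URL below is a placeholder (the real one comes from your backend, e.g. generated with the AWS SDK):

import java.io.File;
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PresignedPutUpload {
    public static void main(String[] args) throws Exception {
        // Placeholder presigned URL -- the real one includes the signature params.
        URL url = new URL("https://my-bucket.s3.amazonaws.com/video.mp4?X-Amz-Signature=...");
        File file = new File("video.mp4");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        // Must match the Content-Type the URL was signed with.
        conn.setRequestProperty("Content-Type", "video/mp4");
        // Fixed-length streaming sends the body straight from disk,
        // so the whole video is never buffered in memory.
        conn.setFixedLengthStreamingMode(file.length());

        try (FileInputStream in = new FileInputStream(file);
             OutputStream out = conn.getOutputStream()) {
            byte[] buf = new byte[64 * 1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        System.out.println("S3 responded: " + conn.getResponseCode()); // expect 200
    }
}
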
I had the most success using Uppy for this:
https://uppy.io/docs/aws-s3-multipart/
You will need to provide some backend endpoints though (a sketch of the first one follows the list):
https://uppy.io/docs/aws-s3-multipart/#createMultipartUpload-file
https://uppy.io/docs/aws-s3-multipart/#listParts-file-uploadId-key
https://uppy.io/docs/aws-s3-multipart/#prepareUploadParts-file-partData
https://uppy.io/docs/aws-s3-multipart/#abortMultipartUpload-file-uploadId-key
https://uppy.io/docs/aws-s3-multipart/#completeMultipartUpload-file-uploadId-key-parts
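As a sketch of what one of those backend endpoints might look like, assuming the AWS SDK for Java v1 (the class and return format below are illustrative, not Uppy's):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.InitiateMultipartUploadRequest;
import com.amazonaws.services.s3.model.InitiateMultipartUploadResult;

public class UppyBackend {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    // Counterpart of Uppy's createMultipartUpload(file): start the multipart
    // upload in S3 and return uploadId + key for the client to use.
    public String createMultipartUpload(String bucket, String key) {
        InitiateMultipartUploadRequest req = new InitiateMultipartUploadRequest(bucket, key);
        InitiateMultipartUploadResult res = s3.initiateMultipartUpload(req);
        return "{\"uploadId\":\"" + res.getUploadId() + "\",\"key\":\"" + key + "\"}";
    }
}
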
On the compression part of your question: S3 does not have any compute. It will not modify the bytes you send; it will just store them. If you want to use compression, you need to compress before uploading, upload the archive to the cloud, unzip it there with some compute (EC2, Lambda, etc.), and then put the result to S3.

upload file api with uploadtask in symfony 2.8

We realized that if we want to produce a multipart request containing a 15 GB video file, it is impossible to allocate the memory needed for such a large amount of data; most devices have only 2 or 3 GB of RAM.
It is therefore absolutely necessary to switch to the uploadTask method, which pushes the contents of the file to the server in blocks of the maximum size allowed by the IP packets sent to the server.
This is a POST method. However, it does not carry parameters such as the folder id or the file name, so you need a way to transmit them. The best way is to encode them in the URL.
I proposed an encoding format in the form of a path behind the endpoint of the API, but these two parameters can also be encoded in the classic way in the URL, e.g.:
/api/upload?id=123&filename=video.mp4
From what I read on Stack Overflow, retrieving id and filename is trivial with Symfony. Then all the data received in the body of the POST request can be written raw, directly to a file, without passing through a buffer in server-side memory.
User data must always be streamed, on the mobile side as well as the server side, for uploads as well as downloads. Loading user content into memory is also very dangerous from a security standpoint.
In Symfony, how can I do that?
This goes way beyond Symfony and depends on the web server you are using.
By default, with Apache/nginx and PHP you will receive an already-buffered request, so you cannot stream it to a file.
However, there are solutions; for example, with Apache you can stream requests, see http://hc.apache.org/httpclient-3.x/performance.html#Request_Response_entity_streaming
nginx probably has options for this as well, but I am not familiar with them.
Another option might be WebSockets, see http://en.wikipedia.org/wiki/WebSocket
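The technique itself is framework-agnostic: once the server hands you the raw request stream, you copy it to disk in chunks. A minimal sketch in Java using the JDK's built-in HttpServer rather than Symfony (the query parsing is deliberately naive, and a real handler must sanitize filename against path traversal):

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

import com.sun.net.httpserver.HttpServer;

public class StreamingUploadServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // Handles e.g. POST /api/upload?id=123&filename=video.mp4
        server.createContext("/api/upload", exchange -> {
            String query = exchange.getRequestURI().getQuery(); // "id=123&filename=video.mp4"
            String filename = query.replaceAll(".*filename=([^&]+).*", "$1"); // naive, demo only
            try (InputStream body = exchange.getRequestBody()) {
                // Copy the raw request body to disk chunk by chunk;
                // a 15 GB file never resides in memory.
                Files.copy(body, Path.of("/tmp/" + filename), StandardCopyOption.REPLACE_EXISTING);
            }
            exchange.sendResponseHeaders(204, -1); // 204 No Content, empty body
        });
        server.start();
    }
}
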

JSch SFTP file upload/download - why use the methods that return a stream?

The ChannelSftp class has versions of the get() and put() methods that return nothing, and versions that return an InputStream/OutputStream.
What's the use case for the methods returning streams, reading/writing the files byte by byte, versus the easy get() and put() methods where you specify the source and destination paths and let the library do everything for you?
My guess is that downloading and playing a video/audio file would be one case, but what if you just move files from one server to another? Is there any point in using the streams then?
Here is the documentation:
http://epaul.github.io/jsch-documentation/javadoc/com/jcraft/jsch/ChannelSftp.html#get(java.lang.String,%20java.lang.String)
As with any other I/O interface, the variants with streams are useful when you are not working with files but with in-memory data.
For example, you might have produced content based on user input and want to upload it. You do not need a local copy in a file, so you stream the in-memory data over SFTP.
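A minimal sketch of that case with JSch (host, credentials, and remote path are hypothetical); put(String dst) returns an OutputStream, and whatever you write to it becomes the remote file's contents:

import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class SftpStreamUpload {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "sftp.example.com", 22); // hypothetical host
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // demo only
        session.connect();

        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();

        // Content generated in memory -- no local file needed.
        String report = "generated,on,the,fly\n1,2,3\n";

        // Bytes written to the stream are sent over SFTP as the remote file.
        try (OutputStream out = sftp.put("/upload/report.csv")) {
            out.write(report.getBytes(StandardCharsets.UTF_8));
        }

        sftp.disconnect();
        session.disconnect();
    }
}
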
Streams are also a useful abstraction.
If you are uploading from a file or downloading to a file, use the overloads that take paths. Creating a file stream is unnecessary overhead in this case.

Render HTML from a ZIP stream on client side

This is both a strategy and a technical question. I'm building a web posting mechanism and will need to store a lot of HTML posts (discussions, comments, etc.).
I'm thinking about saving all my HTML posts to the database as ZIP-compressed streams (instead of plain text or XML) in order to save space, and increasing security by encrypting those ZIP data streams, so each post is stored compressed (hopefully close to 90% smaller) and secure. (It does not need to be searchable; I'm going to build the search index myself from the content of each post.)
I want to deliver the ZIP object to the web page/cache and then have the client side unzip the stream and render the HTML it represents.
This is a Microsoft-based MVC web site (C#).
I'm trying to figure out reasons not to do it... other than performance, can anyone pinpoint any other issues with doing something like that?
Also, are there any recommended libraries, or built-in ones, that both the server side and the client side can understand (zip and unzip with an encryption key/password) for better performance?
Thanks in advance.
In normal operation, HTTP already allows the HTML to be sent as a gzipped stream. The web server compresses the data and sets the corresponding header, and the client unzips it transparently.
You simply have to make sure the correct header is set and that the web server does not re-compress the already-compressed stream.
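The questioner's stack is C#/MVC, but the mechanism is the same everywhere; here is a minimal illustration in Java using the JDK's built-in HttpServer (a real server would also check the request's Accept-Encoding before compressing):

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

import com.sun.net.httpserver.HttpServer;

public class GzipHtmlServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/post", exchange -> {
            String html = "<html><body><p>Hello, compressed world</p></body></html>";
            // Tell the client the body is gzip-compressed; the browser
            // decompresses it transparently before rendering.
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
            exchange.getResponseHeaders().set("Content-Encoding", "gzip");
            exchange.sendResponseHeaders(200, 0); // 0 = unknown length, chunked
            try (OutputStream raw = exchange.getResponseBody();
                 GZIPOutputStream gz = new GZIPOutputStream(raw)) {
                gz.write(html.getBytes(StandardCharsets.UTF_8));
            }
        });
        server.start();
    }
}
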
I see a major drawback:
You cannot alter the data. That means you cannot inject your template code or add links between the pages.
I don't think this is a good approach. Store your data however you like and decompress it on the server.

Save file directly to disk in ASP.NET without loading it into memory

I have an ASP.NET web application and I want my users to be able to upload large files. However, some files are very large and use too much memory.
In principle it should be possible to receive the request stream and write it directly to a file stream, removing any need to load the entire file into memory first.
I've tried accessing Request.InputStream and writing it directly to a file. It works, but a test using larger files reveals that Request.InputStream is only available after the entire request has already been loaded into memory.
Can someone tell me an approach I can use to receive a normal Request.InputStream in ASP.NET and directly write it to a file without first loading it into memory?
Note, the file is sent through a normal request in a browser by posting a form with a file field.
(I actually use BlueImp JQuery File Upload but I don't think it's relevant to this question)
The process is called byte serving.
Byte Serving:
Byte serving is the process of sending only a portion of an HTTP/1.1 message from a server to a client. Byte serving begins when an HTTP server advertises its willingness to serve partial requests using the Accept-Ranges response header. A client then requests a specific part of a file from the server using the Range request header.
It seems that IIS and ASP.NET are capable of handling the Accept-Ranges header. There is a Range controller in Microsoft's Git repositories.
Here is an article that may be useful in configuring IIS to handle these requests.
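For illustration, this is what the client side of byte serving looks like; a minimal Java sketch against a hypothetical URL, assuming the server advertises Accept-Ranges: bytes:

import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeRequestDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL; any server advertising "Accept-Ranges: bytes"
        // will honor the Range header below.
        URL url = new URL("https://example.com/large-file.bin");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Range", "bytes=0-1048575"); // first 1 MiB only

        // 206 Partial Content confirms the server honored the range.
        System.out.println("Status: " + conn.getResponseCode());

        try (InputStream in = conn.getInputStream();
             FileOutputStream out = new FileOutputStream("chunk0.bin")) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n); // stream to disk, never buffer the whole body
            }
        }
        conn.disconnect();
    }
}
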
