I am looking for a way to implement file download functionality using gRPC, but I can't find in the documentation how this is done.
What is an optimal way to do this? I want a gRPC server that holds files and a gRPC client that requests a file from the server.
I have looked at the Java examples but could not figure out how to do this. I just started reading about gRPC today.
You must define a message to act as a data container and transfer it to the client in chunks using a server-side stream.
Example proto message and RPC service (the request message and service name here are illustrative):

message DownloadProductImageRequest {
  string id = 1;
}

message DataChunk {
  bytes data = 1;
}

service ProductImageService {
  rpc DownloadProductImage(DownloadProductImageRequest) returns (stream DataChunk) {}
}
In Java you can use a BufferedInputStream to stream the resource to the client in chunks.
Server-side code:
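A rough sketch of what that could look like with the proto above (the class name, file path, and 4 KB buffer size are illustrative, and error handling is minimal):

import com.google.protobuf.ByteString;
import io.grpc.Status;
import io.grpc.stub.StreamObserver;
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ProductImageServiceImpl extends ProductImageServiceGrpc.ProductImageServiceImplBase {
    @Override
    public void downloadProductImage(DownloadProductImageRequest request,
                                     StreamObserver<DataChunk> responseObserver) {
        // Read the file through a BufferedInputStream and send it in 4 KB chunks.
        try (BufferedInputStream in = new BufferedInputStream(
                new FileInputStream("/images/" + request.getId()))) {
            byte[] buffer = new byte[4096];
            int length;
            while ((length = in.read(buffer)) > 0) {
                responseObserver.onNext(DataChunk.newBuilder()
                        .setData(ByteString.copyFrom(buffer, 0, length))
                        .build());
            }
            responseObserver.onCompleted();
        } catch (IOException e) {
            responseObserver.onError(Status.INTERNAL.withCause(e).asRuntimeException());
        }
    }
}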
Client-side code:
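And a matching client sketch that writes the streamed chunks to a local file (host, port, and file names are placeholders):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Iterator;

public class DownloadClient {
    public static void main(String[] args) throws IOException {
        ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 50051)
                .usePlaintext()
                .build();
        ProductImageServiceGrpc.ProductImageServiceBlockingStub stub =
                ProductImageServiceGrpc.newBlockingStub(channel);

        // The blocking stub returns an Iterator over the streamed chunks.
        Iterator<DataChunk> chunks = stub.downloadProductImage(
                DownloadProductImageRequest.newBuilder().setId("logo.png").build());

        try (FileOutputStream out = new FileOutputStream("logo.png")) {
            while (chunks.hasNext()) {
                out.write(chunks.next().getData().toByteArray());
            }
        }
        channel.shutdown();
    }
}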
There are a few possible approaches.
If your file is small enough, you can define a message with a single string field and put the entire text content into that field (see the sketch below).
If your file is too large for that, you will want to chunk it and use the streaming API to send the file chunk by chunk.
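For the first approach, a minimal Java sketch (the FileContent message with its single text field is assumed here for illustration, not something from the question):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class WholeFileExample {
    public static void main(String[] args) throws Exception {
        // Assumes a proto like: message FileContent { string text = 1; }
        String text = new String(Files.readAllBytes(Paths.get("notes.txt")), StandardCharsets.UTF_8);
        FileContent message = FileContent.newBuilder().setText(text).build();
        System.out.println("wrapped " + message.getText().length() + " chars in one message");
    }
}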
Below is a good guide for getting started with gRPC.
https://grpc.io/docs/tutorials/basic/java.html
I have taken the working example from db80 and created a PR with a full example: https://github.com/grpc/grpc-java/pull/5126
Hopefully they will merge it; then everyone can have a working example of this common use case for gRPC.
I want to understand the difference between the SAX writer and the CGI wrapper. I can't find any getting-started information; any suggested content or video link would be much appreciated. Thanks.
The SAX writer is a set of language statements/elements that allow the creation of well-formed XML documents. This XML is output to the location specified by the SET-OUTPUT-DESTINATION method. Output destinations include streams, which might include a classic WebSpeed stream (WEB-STREAM).
The CGI wrapper is more of an approach, with a bunch of (internal) procedures that let you create a fully-formed HTTP response (and read from an incoming HTTP request). This approach should not be used for new web services, even though it still works. In newer versions of OpenEdge the PASOE server provides what are known as WebHandlers, which replace the CGI wrapper approach.
The {&OUT} 'syntax' is really just a preprocessor macro that expands to something like PUT UNFORMATTED STREAM WEB-STREAM - you can see this if you compile your programs with the PREPROCESS option, or use the equivalent command in PDSOE (a right-click option).
I am very new to gRPC and I am refactoring some HTTP handlers to gRPC. Among them I found a handler for uploading a file. The request sends the file as an http.FormFile using HTTP multipart form data.
What I found is that there is a way to upload a file using a chunked request stream. But what I need is to avoid streaming and do it in a stateless manner.
I have searched for a way to solve this but couldn't find one. I would highly appreciate it if someone could give me a proper solution.
TL;DR
gRPC was not designed to handle large file uploads the way you would with HTTP multipart form data uploads. gRPC has a (slightly) arbitrary 4 MB message limit (also see this). In my experience, the proper solution is not to use gRPC for large file uploads. That said, there are a few options you can try.
Changing gRPC Call Options
You can manually override the default 4 MB message limit using call options when you dial the gRPC server (for uploads, the server side has a matching grpc.MaxRecvMsgSize server option). For example, to raise the client's receive limit to 16 MB:
client, err := grpc.Dial("...",
    grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(16*1024*1024)))
Using gRPC Streams
I have needed to use gRPC for file uploads without changing the default message limit by implementing my own chunking package to handle it for me over a gRPC stream. You mentioned you wanted to avoid using a stream; however, I'm providing this as a resource for those who want to avoid changing their gRPC message limit. Using the library, you'll need to wrap your gRPC client. Once a wrapper is created, you can upload anything that is compatible with the io.Reader interface.
err := chunk.UploadFrom(reader, WrapUploadFileClient(client))
I am a total newbie in C#, .NET Core 2, and protocol buffers, but I have to work with those three technologies for a personal project (server/client architecture). I have some questions about serialization/deserialization of multiple messages.
I have already seen this:
https://developers.google.com/protocol-buffers/docs/techniques#streaming
So I know it needs a special technique. After a little Google session I found something about push and pop limits in C++, but I haven't seen docs for C#.
I have another question: does protocol buffers handle reading nicely? My sockets are monitored with select, so when I want to read my messages (using https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/message-parser#class_google_1_1_protobuf_1_1_message_parser_1a110e5d9bc61837e369e5deb093f59161)
I am not sure how protobuf will stop reading - I don't want to read beyond the data available on the socket, because that would make my server block...
Does it manage this? Thank you.
Standalone protobufs aren't delimited in any way - they don't encode their length and have no fixed start or end.
But the API gives you some tools for sending and storing multiple messages - specifically, you can use WriteDelimitedTo() to write multiple length-prefixed protobufs to some output, and then read them back using ParseDelimitedFrom().
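For example, in Java (the C# Google.Protobuf API mirrors these with WriteDelimitedTo and ParseDelimitedFrom); the DataChunk message and file name are just for illustration:

import com.google.protobuf.ByteString;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class DelimitedExample {
    public static void main(String[] args) throws IOException {
        // Write: each call prefixes the message with its varint-encoded length.
        try (FileOutputStream out = new FileOutputStream("messages.bin")) {
            DataChunk.newBuilder().setData(ByteString.copyFromUtf8("first")).build()
                    .writeDelimitedTo(out);
            DataChunk.newBuilder().setData(ByteString.copyFromUtf8("second")).build()
                    .writeDelimitedTo(out);
        }
        // Read: parseDelimitedFrom consumes exactly one length-prefixed message
        // per call and returns null at end of stream.
        try (FileInputStream in = new FileInputStream("messages.bin")) {
            DataChunk chunk;
            while ((chunk = DataChunk.parseDelimitedFrom(in)) != null) {
                System.out.println(chunk.getData().toStringUtf8());
            }
        }
    }
}

One caveat for the select-based setup in the question: a delimited parse blocks until a complete message is available, so you may want to buffer incoming bytes yourself and only parse once a full length-prefixed message has arrived.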
I'm looking for use-cases for using reactive streams within a servlet container (or just an HTTP server).
The Jetty project has started being asked "is Jetty reactive?", and we've noticed the proposal to add reactive streams to Java 9.
So we've started some experiments with using the reactive streams API for async servlet IO, which are interesting enough... but they lack focus because we lack real use-cases to tell us which concerns are most important.
So does anybody have any good use-cases they could share/explain so that we can direct our Jetty experiments to meet their needs? The sort of thing I've imagined is an RS-based database publisher sending objects all the way out on an HTTP response or websocket connection, using Flow.Processors for the conversions along the way.
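To make that imagined pipeline concrete, here is a minimal sketch using the java.util.concurrent.Flow API (the publisher, the item type, and the println standing in for an async response writer are all illustrative):

import java.util.List;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowPipelineSketch {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a reactive database publisher.
        SubmissionPublisher<String> rows = new SubmissionPublisher<>();

        // Stand-in for an async HTTP response writer that applies backpressure:
        // it requests one item at a time and only asks for more once written.
        rows.subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;

            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);
            }
            @Override public void onNext(String row) {
                System.out.println("wrote: " + row); // imagine an async servlet write here
                subscription.request(1);
            }
            @Override public void onError(Throwable t) { t.printStackTrace(); }
            @Override public void onComplete() { System.out.println("response complete"); }
        });

        for (String row : List.of("row-1", "row-2", "row-3")) {
            rows.submit(row);
        }
        rows.close();
        Thread.sleep(500); // give the async consumer time to drain before exit
    }
}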
A viable use case is when consuming the POSTing of multi-part form data, particularly when uploading files.
The Typesafe ConductR project (disclaimer: I'm the Tech Lead for it) receives multi-part form data when a user loads a bundle. We use akka-streams/http.
We read off the first two parts of the stream, as our protocol specifies that they must declare some metadata in order for us to know which node to write the bundles to. After some validation, we then determine the node to write them to and connect the partially consumed stream to it. Thus the node that receives the upload request negotiates which node it is going to write the bundle to, without having to consume the entire stream (which could be 200 MB) and then write it out again.
Writing out multi-part form data is also a great use-case, given that you can stream a file from disk as a source and pass it on to some HTTP endpoint, i.e. the client side of what I describe above.
The benefit of both use-cases is that you minimise the amount of memory needed to move bytes over a network, and you perform file IO only where it is necessary.
I am doing file transfers, but the FileReference API doesn't support file chunking. Has anyone done this before? For example, I would like to be able to upload a 1 GB file from an AIR client to a custom PHP/Java/etc. service.
It seems that all you should have to do is use the upload() routine. The PHP or Java service should be doing the chunking.
var myHugeFile = new air.File('myHugeLocal.file');
myHugeFile.upload(new URLRequest("http://your.website.com/uploadchunker.php"));
There is a much more elaborate example of using FileReference in the Adobe learning area here:
http://www.adobe.com/devnet/air/flex/articles/uploading_air_app_to_server.html
Three options jump out here:
Use an FTP service that supports resumable transfers, assuming Flash supports this as well. Maybe not an option if you want to communicate with a custom service of your own.
Leverage HTTP partial-content header support. Only applicable if AIR allows access to the appropriate HTTP headers (Content-Range & Content-Length). This is what BITS does. Probably a bit harder to implement; see the sketch after this list.
Hand roll your own TCP or UDP protocol exchange. Not for the faint of heart. I'd look in the OSS space before going this route.
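For option 2, a rough server-side sketch in Java using the JDK's built-in HTTP server (the /upload endpoint, file name, and the very naive Content-Range parsing are all illustrative assumptions):

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.RandomAccessFile;
import java.net.InetSocketAddress;

public class ChunkUploadServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/upload", exchange -> {
            // Expect a header like: Content-Range: bytes 0-1048575/1073741824
            String range = exchange.getRequestHeaders().getFirst("Content-Range");
            long offset = Long.parseLong(range.replace("bytes ", "").split("-")[0]);
            try (RandomAccessFile file = new RandomAccessFile("upload.bin", "rw");
                 InputStream in = exchange.getRequestBody()) {
                file.seek(offset); // write this chunk at its declared position
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) > 0) {
                    file.write(buffer, 0, n);
                }
            }
            exchange.sendResponseHeaders(200, -1); // -1 = no response body
        });
        server.start();
    }
}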
I think FileReference does chunk, at least that is what I have observed. Using a tool like Fiddler, you can watch it in action. If you analyze the outgoing headers of a FileReference upload, they are chunked.
If resuming is what you're after, I cannot say how you would go about that with FileReference. I have uploaded small files in generic posts, but that requires the Flash/AIR client to load all the bytes into the app. In AIR, that may or may not crash with a 1 GB file (depends on your system, I guess).