Other ways to transfer PDF bytes as an HttpResponseMessage? - asp.net

I have a function that retrieves PDF bytes from another web service. What I want to do is make the PDF bytes available to other consumers as well, by creating an API call that returns an HttpResponseMessage.
Now, my problem is that I don't think passing it through JSON is possible, because JSON serialization turns the PDF bytes into a string.
Is there any other practical way of passing the PDF, or of making the PDF visible to the requesters?
(Note: saving the PDF file to a specific folder and then returning the URL is prohibited in this specific situation.)

I just solved it. There is a parameter, responseType: 'arraybuffer', which addresses this problem. Sample: $http.post('/api/konto/setReport/pdf', $scope.konta, { responseType: 'arraybuffer' }). See my question and answer on SO: How to display a server side generated PDF stream in javascript sent via HttpMessageResponse Content
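For the server side of the same exchange, a minimal sketch of a Web API action that returns the raw bytes as an HttpResponseMessage could look like the following; the controller name, route, and FetchPdfFromUpstream are placeholders for illustration, not code from the question:
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class PdfController : ApiController
{
    // Hypothetical endpoint; FetchPdfFromUpstream stands in for the call to the
    // other web service that actually produces the PDF bytes.
    [HttpGet]
    public HttpResponseMessage GetPdf(int id)
    {
        byte[] pdfBytes = FetchPdfFromUpstream(id);

        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new ByteArrayContent(pdfBytes)
        };
        response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
        response.Content.Headers.ContentDisposition =
            new ContentDispositionHeaderValue("attachment") { FileName = "document.pdf" };
        return response;
    }

    private byte[] FetchPdfFromUpstream(int id)
    {
        // Placeholder for the existing web-service call.
        throw new System.NotImplementedException();
    }
}
The client then requests this endpoint with responseType: 'arraybuffer' (or reads the body as a Blob), exactly as in the sample above, so the bytes never have to pass through a JSON string.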

Related

Send a File as well as parameters (through JSON) inside one HTTP request

I am creating a server in Go that allows the client to upload a file and then uses a server-side function to parse it. Currently, I am using two separate requests:
1) The first request sends the file the user has uploaded.
2) The second request sends the parameters that the server needs to parse the file.
However, I have realised that, due to the nature of the program, there can be concurrency problems if multiple users use the server at the same time. My solution was to use mutex locks. However, I receive the file, send a response, and then receive the parameters, and it seems that Go cannot send a response back while the mutex is locked. I am thinking about solving this by sending both the file and the parameters in a single HTTP request. Is there a way to do that? Thanks
Sample code (only relevant parts):
Code to send file from client:
handleUpload() {
  const data = new FormData()
  for (var x = 0; x < this.state.selectedFile.length; x++) {
    data.append('myFile', this.state.selectedFile[x])
  }
  var self = this;
  let url = *the appropriate url*
  axios.post(url, data, {})
    .then(res => {
      // other logic
      self.handleParser();
    })
}
Code for handleParser():
handleParser() {
  let parserParameter = {
    SourcePath: location,
    ProjectName: this.state.projectName
  }
  // fetch the response from the server
  let self = this;
  let url = *url*
  fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(parserParameter),
  }).then((response) => {
    if (response.status === 200) {
      // success logic
    }
  }).catch(function (error) {
    console.log("error: ", error);
  });
}
The question is not really about Go or reactjs or any particular software library.
To solve your problem you'd first need to understand how HTTP POST works,
hence I invite you to first read this intro on MDN.
In short:
There are multiple ways to encode the data sent in a POST request.
The way the receiver should deal with this data depends on how it's encoded by the sender.
The sender has to communicate the encoding with its request — usually via the Content-Type header field.
I won't go into the details of the possible encodings; the referenced introductory material covers them and you should do your own research on them, but to recap what's written there, here is some perspective.
Back in the 80s and 90s the web was "static" and the dreaded era of JavaScript-heavy "web apps" had not yet arrived. "Static" means you could not run any code in the client's browser and had to express any communication with the server in terms of plain HTML.
An HTML document had two ways to make the client rendering it send something back to the server: a) embed a URL that included query parameters, which would make the client perform a GET request with those parameters; b) embed an HTML "form" which, when "submitted", would result in a rather more complex POST request carrying the data taken from the filled-in form.
The latter approach was the way to leverage the browser's ability to perform reasonably complex data processing, such as slurping a file selected by the user in a particular form control, encoding it appropriately and sending it to the server along with the rest of the form's data.
There were two ways to encode the form's data; both are covered by the linked article, so please read about them.
The crucial thing to understand about this "static web with forms" approach is how it worked: the server sends an HTML document containing a web form; the browser renders the document; the user fills the form in and clicks the "submit" button; the browser collects the data from the form's controls, reading and encoding the contents of any files selected in controls of type "file", and finally performs an HTTP POST request with all of this encoded to the URL specified by the form. The server would typically respond with another HTML document, and so on.
OK, so then came "web 2.0", and "XHR" (XMLHttpRequest) was invented. It has "XML" in its name because that was the time when XML was perceived by some as a holy grail that would solve any computing problem (which it, of course, failed to do). XHR was designed to send almost arbitrary data payloads; XML and JSON encodings were supported at least.
The crucial thing to understand is that this way to communicate with the server is completely parallel to the original one, and the only thing they share is that they both use HTTP POST requests.
By now you should see the whole picture: contemporary JS libraries allow you to construct and perform any sort of request; they let you create a "web form"-style request, or create a JS object, serialise it to JSON, and send the result in the body of an HTTP POST request.
As you can see, either approach allows you to pass structured data containing multiple distinct pieces to the server, and the way to handle it all is a matter of agreement between the server and the client, that is, the API convention, if you want.
The difference between the approaches is that the web-form style takes care of encoding the contents of the file for you, while if you opt to send your file inside a JSON object, you will need to encode it yourself, for example using base64.
Combined approaches are possible, too.
For instance, you can send the binary data of a file directly as the POST request's body, and submit a set of parameters along with the request by encoding them as query parameters of the URL. Again, it's up to the agreement between the client and the server how the former encodes the data it sends and how the latter decodes it.
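To make the multipart option concrete, here is a minimal client-side sketch. It is in C# rather than the Go/React stack of the question, since the layout of the request is the same regardless of language; the URL, the file path, and the exact field names are assumptions made up for the example:
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class MultipartUploadSketch
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        using (var form = new MultipartFormDataContent())
        {
            // File part: on the server this arrives as the form field "myFile".
            byte[] fileBytes = File.ReadAllBytes("report.nessus"); // placeholder path
            form.Add(new ByteArrayContent(fileBytes), "myFile", "report.nessus");

            // Parameter part: ordinary form fields travel in the same request.
            form.Add(new StringContent("MyProject"), "projectName");

            // A single POST carries both the file and the parameters.
            HttpResponseMessage response = await client.PostAsync("http://localhost:8080/upload", form);
            Console.WriteLine(response.StatusCode);
        }
    }
}
The receiving side then reads the file and the fields out of the one multipart body instead of waiting for a second request, which also removes the need to hold a lock across two requests.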
All in all, I'd recommend taking a pause and educating yourself on the material outlined above, and then having another stab at solving the problem, this time with a reasonably complete understanding of how the machinery works under the hood and of how you intend to wield it.

Streaming a file in Liferay Portlet

I have implemented file downloading in a simple manner:
@ResourceMapping(value = "content")
public void download(ResourceRequest request, ResourceResponse response) throws IOException {
    //...
    SerializableInputStream serializableInputStream = someService.getSerializableInputStream(id_of_some_file);
    response.addProperty(HttpHeaders.CACHE_CONTROL, "max-age=3600, must-revalidate");
    response.setContentType(contentType);
    response.addProperty(HttpHeaders.CONTENT_TYPE, contentType);
    response.addProperty(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename*=UTF-8''"
            + URLEncoder.encode(fileName, "UTF-8"));
    OutputStream outputStream = response.getPortletOutputStream();
    // Copy in 4 KB chunks, writing only the number of bytes actually read;
    // read() returns -1 at end of stream (a return of 0 does not mean EOF).
    byte[] parcel = new byte[4096];
    int bytesRead;
    while ((bytesRead = serializableInputStream.read(parcel)) != -1) {
        outputStream.write(parcel, 0, bytesRead);
    }
    outputStream.flush();
    serializableInputStream.close();
    outputStream.close();
    //...
}
The SerializableInputStream is described here - JavaDocs. It allows an InputStream to be serialized and, for instance, passed over remoting.
I read from the input and write to the output, not all the bytes at once. But unfortunately the portlet isn't "streaming" the contents: the file (e.g. an image) is sent to the browser only after the entire input stream has been read - this is what it looks like. I can see the file being read from the database (in the live logs), but I don't see any "growing" image on the screen.
What am I doing wrong? Is it possible to really stream a file in Liferay 6.0.6 and Spring Portlet MVC?
Where are you doing this? I fear that you're doing it instead of rendering your portlet's HTML (i.e. in the render phase). Typically the portlet content is embedded in an HTML page, so you need the resource phase, which (roughly) behaves like a servlet.
Also, the code you give does not match the actual question you ask: you use a comment //read from input stream (file), write file to os and ask what to do differently in order not to have the full content in memory.
As the comment does not hold anything in memory, and you could loop through reading from the input file while writing to the output stream: what's the underlying question? Do you have problems implementing download streaming in a portal environment, or difficulties (i.e. using too much memory) reading from a file while writing to a stream?
Edit: Thanks for clarifying. Have you tried flushing the stream earlier? You can do that whenever you want, e.g. on every loop iteration (though that might be a bit too much). Also keep in mind that the browser as well as the file itself must handle it in the way you expect: if an image is not encoded "incrementally", a browser might not show it incrementally either.
Have you tried this with huge files as well? It might be that automatic flushing is simply not triggered because your files are too small.
Also, I think that filename*=UTF-8'' looks strange. It might be valid encoding, but I've never seen it.

Render multiple PDFs to the browser with Report Viewer - ASP.NET

Current situation
I have an ASP.NET web application that renders PDFs for users using MS Report Viewer. The PDF is rendered with this method:
Dim pdfByte As Byte()
pdfByte = ReportViewer.LocalReport.Render("PDF", Nothing, mimeType, encoding, extensions, stream, warning)
And it is sent to the browser as an attachment via the Response object:
Response.Clear()
Response.ContentType = mimeType
Response.AddHeader("content-disposition", "attachment; filename=myfile." + extension)
Response.BinaryWrite(pdfByte)
Response.Flush()
Response.End()
This works great! The user's browser gets the PDF as a downloadable attachment.
What I am trying to achieve
Render multiple PDFs and send all of them separately to the user's browser, so the user gets separate PDF documents. It doesn't matter whether they get them all at once or one by one.
The problem
The problem is that after Response.End() the next line of code is not executed. I have tried storing the pdfByte objects in session, looping through them and sending them to the user's browser with the Response object, but after the first PDF is sent it stops.
I have also tried removing Response.End(), thinking the code would keep running, but it still stops after the first PDF is sent.
Please advise any workaround or tips. Thanks!
You cannot send multiple files (as separate entities) in a single HTTP response; the protocol does not support it. However, what you can do is archive all the files together and send that single zip (or whatever format you want) to the client.
You can use libraries such as DotNetZip/SharpZipLib to combine (and compress) the files. Depending on the library's API, you may need to save the PDF files to disk before adding them to the zip file. Also, do not forget to set the content type appropriately when sending the zip file to the client.
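For illustration, here is a rough C# sketch of the zip route. It uses the framework's built-in System.IO.Compression (available from .NET 4.5) rather than the third-party libraries mentioned above, purely to keep the sketch self-contained, and pdfDocuments is a placeholder for however you collect the rendered reports:
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using System.Web;

public static class ZipResponseSketch
{
    // Combines several in-memory PDFs into one zip archive and sends it as a
    // single downloadable response.
    public static void SendPdfsAsZip(HttpResponse response, IDictionary<string, byte[]> pdfDocuments)
    {
        using (var zipStream = new MemoryStream())
        {
            using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Create, leaveOpen: true))
            {
                foreach (var document in pdfDocuments)
                {
                    ZipArchiveEntry entry = archive.CreateEntry(document.Key + ".pdf");
                    using (Stream entryStream = entry.Open())
                    {
                        entryStream.Write(document.Value, 0, document.Value.Length);
                    }
                }
            }

            response.Clear();
            response.ContentType = "application/zip";
            response.AddHeader("Content-Disposition", "attachment; filename=reports.zip");
            response.BinaryWrite(zipStream.ToArray());
            response.Flush();
            response.End();
        }
    }
}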
Yet another alternative is to present the user with a page that has multiple links to download the files. That may mean you either have to store your PDFs for some time so that they can be served later (via the links), or make each link point to a handler that re-runs the report to produce the PDF.
Admittedly the method I'm using doesn't feel very elegant, but here's what I'm doing:
create one IFRAME on the page for each document you want to send to the client (maybe create the IFRAMEs dynamically in server-side code if the number of documents is variable);
create an HttpHandler that generates the PDF documents, depending on a parameter you pass in through the query string, just like you're doing above (a rough sketch follows this answer);
set the src of all IFRAMEs to the URL of the HttpHandler with the appropriate parameters attached.
Of course the HttpHandler needs to implement security logic, if required.
This works quite beautifully: if I want to send 3 documents, I create 3 IFRAMEs, set their src, and the user sees 3 "Save As..." dialogs pop up.
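A minimal C# sketch of such a handler, assuming it is registered as something like ~/PdfHandler.ashx and that GeneratePdf stands in for the actual report-rendering call (both are made-up names, not code from the answer):
using System;
using System.Web;

public class PdfHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // The document to render is identified by a query-string parameter,
        // e.g. PdfHandler.ashx?id=1, PdfHandler.ashx?id=2, ...
        string documentId = context.Request.QueryString["id"];
        byte[] pdfBytes = GeneratePdf(documentId);

        context.Response.Clear();
        context.Response.ContentType = "application/pdf";
        context.Response.AddHeader("Content-Disposition",
            "attachment; filename=document_" + documentId + ".pdf");
        context.Response.BinaryWrite(pdfBytes);
        context.Response.Flush();
    }

    public bool IsReusable { get { return true; } }

    private byte[] GeneratePdf(string documentId)
    {
        // Placeholder: in the real application this would call
        // ReportViewer.LocalReport.Render as shown in the question.
        throw new NotImplementedException();
    }
}
Each IFRAME's src is then pointed at the handler with a different id, and every response triggers its own download.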

File upload and read from database

I am using a file upload mechanism to upload a file for an employee, converting it to a byte[] and storing it in a varbinary(max) column in the database.
Now what I have to do is: if a file has already been uploaded for an employee, simply read it from the table and show the file name. I have only one column to store the file, and it is of type varbinary.
Is it possible to get all the file information from the varbinary field?
If there is another way around this, please let me know.
If you're not storing the filename, you can't retrieve it.
(Unless the file itself contains its filename in which case you'd need to parse the blob's contents.)
If the name of the file (and any other data about the file that's not part of the file's byte data) needs to be used later, then you need to save that data as well. I'd recommend adding a column for the file name, perhaps one for its type (mime type or something like that for properly sending it back to the client's browser, etc.) and maybe even one for size so you don't have to calculate that on the fly for each file (useful when displaying a grid of files and not wanting to touch the large blob field in the query that populates the grid).
Try to stay away from using the file name for system-internal identity purposes. It's fine for allowing the users to search for a file by name, select it, etc. But when actually making the request to the server to display the file it's better to use a simple integer primary key from the table to actually identify it. (On a side note, it's probably a good idea to put a unique constraint on the file name column.)
If you also need help displaying the file to the user, you'll probably want to take the approach that's tried and true for displaying images from a database. Basically it involves having a resource (generally an .aspx page, but could just as well be an HttpHandler instead) which accepts the file ID as a query string parameter and outputs the file.
This resource would have no UI (remove everything from the .aspx except the Page directive) and would manually manipulate the response headers (this is where you'd set the content type from the file's type), write the byte stream to the client, and end the response. From the client's perspective, something like ~/MyContent/MyFile.aspx?fileID=123 would be the file. (You can suggest a file name to the browser for saving purposes in the response headers, which you'd probably want to do with the file's stored name.)
There's no shortage of quick tutorials (some several years old, it's been around for a while) on how to do this with images. Just remember that there's essentially no difference from the server's perspective if it's an image or any other kind of file. All the server needs to do is send the type in the response headers and write the file's bytes to the client. How the client handles the file is up to the browser. In the vast majority of cases, the browser will know what to do (display an image, display via a plugin a PDF, save a .doc, etc.).
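As a rough illustration of that approach, here is a minimal C# generic-handler sketch (the same idea works in an .aspx with an empty markup file). The table and column names (EmployeeFiles, FileName, ContentType, FileData) are assumptions made up for the example; adjust them to your actual schema:
using System;
using System.Data.SqlClient;
using System.Web;

// Hypothetical handler, requested as e.g. ~/MyContent/GetFile.ashx?fileID=123.
public class GetFileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        int fileId = int.Parse(context.Request.QueryString["fileID"]);

        using (var connection = new SqlConnection("...connection string..."))
        using (var command = new SqlCommand(
            "SELECT FileName, ContentType, FileData FROM EmployeeFiles WHERE FileID = @id",
            connection))
        {
            command.Parameters.AddWithValue("@id", fileId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (reader.Read())
                {
                    // Send the stored MIME type and suggest the stored file name.
                    context.Response.Clear();
                    context.Response.ContentType = reader.GetString(1);
                    context.Response.AddHeader("Content-Disposition",
                        "attachment; filename=\"" + reader.GetString(0) + "\"");
                    context.Response.BinaryWrite((byte[])reader["FileData"]);
                }
                else
                {
                    context.Response.StatusCode = 404;
                }
            }
        }
    }

    public bool IsReusable { get { return true; } }
}
From the client's perspective, the handler URL with the file's ID then behaves as the file itself, with the stored name suggested for saving.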

How can I return a pdf from a web request in ASP.NET?

Simply put, I'd like someone to be able to click a link, and get a one-time-use pdf. We have the library to create PDF files, so that's not an issue.
We could generate a link to an aspx page, have that page generate the pdf, save the pdf to the filesystem, and then Response.Redirect to the saved pdf. We'd then somehow have to keep track of the PDF file and clean it up.
Since we don't ever need to keep this data, what I'd like to do instead, if possible, is to have the aspx page generate the pdf, and serve it directly back as a response to the original request. Is this possible?
(In our case, we're using C#, and we want to serve a pdf back, but it seems like any solution would probably work for various .NET languages and returned filetypes.)
Assuming you can get a byte[] representing your PDF:
Response.Clear();
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition",
"attachment;filename=\"FileName.pdf\"");
Response.BinaryWrite(yourPdfAsByteArray);
Response.Flush();
Response.End();
Look at how HTTP works. The client (i.e. the browser) doesn't rely on file extensions; it only needs the server to return some metadata along with the document.
Metadata can be added with Response.AddHeader, and one 'metadata line' consists of a name and a value.
Content-Type is the header you are interested in, and its value is the MIME type of the data (study RFC 1945 for HTTP headers; google for MIME types).
For ordinary aspx pages (HTML, ...) the value is 'text/html' (not quite the whole story, but enough for this example). If you return a JPEG image, it can be named 'image.gif', but as long as you send 'image/jpeg' in Content-Type, it is processed as a JPEG image.
The Content-Type for PDF is 'application/pdf'.
The browser will act according to its default behaviour; for example, with the Adobe plugin it will display the PDF in its window, and if you don't have any plugin for PDF it should download the file, etc.
The Content-Disposition header says what should be done with the data. If you explicitly want the client to 'download' some HTML/PDF/whatever, and not display it by default, the value 'attachment' is what you want. It should have another parameter (as suggested by Justin Niessner), which is used in a case like:
http://server/download.aspx?file=11 -> Content-Disposition: attachment;filename=file.jpg says how the downloaded file should be named by default.
