Chrome browser does not show images generated by HTTP handler - asp.net

Basically I have a web site that renders an HTML preview of some documents (mainly Office). The resulting HTML fragment is included in the page returned by the same web site; the images, however, are returned by an HTTP handler on another site, with links like:
<img width="50" height="50" src="http://portal/Service/GetFile.ashx?id=123&inline=true">
For some reason every browser except Chrome (IE6/7/8, Firefox, Opera, Safari) shows everything just fine; Chrome, however, shows the "broken image" icon for these images. If I choose "Open image in new tab", the image is shown just fine.
Edit: I thought I had solved this issue, but apparently it only works with Fiddler turned on.
I had context.Response.Charset = "utf-8" left in the code, but removing it made no difference.
Headers:
HTTP/1.1 200 OK
Date: Wed, 05 Jan 2011 14:26:57 GMT
Server: Microsoft-IIS/6.0
MicrosoftOfficeWebServer: 5.0_Pub
X-Powered-By: ASP.NET
X-AspNet-Version: 4.0.30319
Transfer-Encoding: chunked
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: image/jpeg
Code:
context.Response.ContentType = file.ContentType;
context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
byte[] buff = new byte[BuffSize];
using (var stream = repository.GetFileContentsAsStream(file.ContentId))
{
    int bytesRead;
    do
    {
        bytesRead = stream.Read(buff, 0, BuffSize);
        if (bytesRead > 0)
        {
            context.Response.OutputStream.Write(buff, 0, bytesRead);
        }
    } while (bytesRead > 0);
}
context.Response.Flush();
context.Response.Close();

I'm pretty sure Chrome requires the Length to be set for images, so try adding the Content-Length header to your response when handling the image.
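If the length is known up front, a minimal sketch of that change (assuming the repository stream is seekable and exposes a usable Length; adjust to however your repository actually reports the size) could look like:
context.Response.ContentType = file.ContentType;
context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
using (var stream = repository.GetFileContentsAsStream(file.ContentId))
{
    // Tell the browser exactly how many bytes to expect.
    context.Response.AddHeader("Content-Length", stream.Length.ToString());
    byte[] buff = new byte[BuffSize];
    int bytesRead;
    while ((bytesRead = stream.Read(buff, 0, BuffSize)) > 0)
    {
        context.Response.OutputStream.Write(buff, 0, bytesRead);
    }
}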

It's the context.Response.Close(); that Chrome doesn't like. Get rid of that line and all will be good. I fought with this for months.
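If you do want an explicit end after flushing, a gentler alternative (a sketch, not from the original code) is to let the pipeline finish instead of tearing the connection down:
context.Response.Flush();
// Response.Close() terminates the connection abruptly, which Chrome may
// report as a broken image; CompleteRequest() ends the request cleanly.
context.ApplicationInstance.CompleteRequest();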

You should add this:
Response.AddHeader("Content-Disposition", "inline;Filename=\"Picture.gif\"");
Response.AddHeader("Content-Length", filesize.ToString());

https://en.wikipedia.org/wiki/Comparison_of_web_browsers#Image_format_support
My issue, after trying all of the above, was simply that Chrome (among others) does not support my image type: TIFF is not supported by Chrome.

Related

HTTP caching: why is browser not checking server at all before presuming cached file is current?

This is about some code I inherited; the intent is clear, but (at least in Firefox and Chrome) it is not behaving as intended.
The idea is clearly to build a PNG based on client-side data and to cache it unless and until that data changes. The intent presumably is that the state of the PNG is preserved regardless of whether or not the client is using cookies, local storage, etc., but at the same time the server does not preserve data about this client.
Client-side JavaScript:
function read_or_write_png(name, value) {
    // WRITE if value is defined, non-null, etc.; READ otherwise
    if (value) {
        // WRITE
        // Use a cookie to convey the new data to the server
        document.cookie = 'bx_png=' + value + '; path=/';
        // bx_png.php generates the image
        // based off of the HTTP cookie and returns it cached
        var img = new Image();
        img.style.visibility = 'hidden';
        img.style.position = 'absolute';
        img.src = 'bx_png.php?name=' + name; // the magic saying "load this".
                                             // 'name' is not consulted server-side,
                                             // it's here just to get uniqueness
                                             // for what is cached.
    } else {
        // READ
        // Kill the cookie so the server should send a 304 header
        document.cookie = 'bx_png=; expires=Mon, 20 Sep 2010 00:00:00 UTC; path=/';
        // load the cached .png
        var img = new Image();
        img.style.visibility = 'hidden';
        img.style.position = 'absolute';
        img.src = 'bx_png.php?name=' + name;
    }
}
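In other words, a call like read_or_write_png('state', '42') is the write path (new cookie, new PNG), while read_or_write_png('state') is the read path that should come from cache (the argument values here are just illustrative).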
Server-side PHP in bx_png.php:
if (!array_key_exists('bx_png', $_COOKIE) || !isset($_COOKIE['bx_png'])) {
    // We don't have a cookie. Client-side code does this on purpose. Force cache.
    header("HTTP/1.1 304 Not Modified");
} else {
    header('Content-Type: image/png');
    header('Last-Modified: Wed, 30 Jun 2010 21:36:48 GMT');
    header('Expires: Tue, 31 Dec 2030 23:30:45 GMT');
    header('Cache-Control: private, max-age=630720000');
    // followed by the content of the PNG
}
This works fine to write the PNG the first time and cache it, but clearly the intention is to be able to call this again, pass a different value for the same name, and have that cached. In practice, once the PNG has been cached, it would appear (via Fiddler) that the server is not called at all. That is, on an attempted read, rather than go to the server and get a 304 back, the browser just takes the content from the cache without ever talking to the server. In and of itself, that part is harmless, but of course what is harmful is that the same thing happens on an attempted write, and the server never has a chance to send back a distinct PNG based on the new value.
Does anyone have any idea how to tweak this to fulfill its apparent intention? Maybe something a bit different in the headers? Maybe some way of clearing the cache from client-side? Maybe something else entirely that I haven't thought of? I'm a very solid developer in terms of both server-side and client-side, but less experienced with trickiness like this around the HTTP protocol as such.
You need to add must-revalidate to your Cache-Control header to tell the browser to do that.
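For concreteness, the write path's Cache-Control header would then read (a sketch of the change, not the inherited code):
Cache-Control: private, max-age=630720000, must-revalidate
Bear in mind that must-revalidate only obliges the browser to recheck once the cached copy is stale, so the twenty-year max-age would also need to come down for it to have any effect.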
Try Cache-Control: no-store, as it fixed this exact same problem for me in Safari/WebKit. (I think Chrome fixed it in the time since your question.)
It's still an open WebKit bug, but they added a fix for this header.

How to debug corrupt zip file generation?

We have a web page that grabs a series of strings from a URL, finds some PDFs associated with those strings, zips them up using DotNetZip, and returns them to the user. The page that does this is very simple; here's the Page_Load:
protected void Page_Load(object sender, EventArgs e)
{
    string[] fileNames = Request.QueryString["requests"].Split(',');
    Response.Clear();
    Response.ClearHeaders();
    Response.ContentType = "application/zip";
    string archiveName = String.Format("MsdsRequest-{0}.zip", DateTime.Now.ToString("yyyy-mm-dd-HHmmss"));
    Response.AddHeader("Content-Disposition", "attachment; filename=\"" + archiveName + "\"");
    using (ZipFile zip = new ZipFile())
    {
        foreach (string fileName in fileNames)
        {
            zip.AddFile(String.Format(SiteSettings.PdfPath + "{0}.pdf", fileName), "");
        }
        zip.Save(Response.OutputStream);
    }
    Response.Flush();
}
(Before you ask, it would be fine if someone put other values in this url...these are not secure files.)
This works fine on my development box. However, when testing on our QA system, it downloads the zipped file, but it is corrupt. No error is thrown, and nothing is logged in the event log.
It may be possible for me to find a way to interactively debug on the QA environment, but since nothing is actually failing by throwing an error (such as if the dll wasn't found, etc.), and it's successfully generating a non-empty (but corrupt) zip file, I'm thinking I'm not going to discover much by stepping through it.
Is it possible that this is some kind of issue where the web server is "helping" me by "fixing" the file in some way?
I looked at the http response headers where it was working on my local box and not working on the qa box, but while they were slightly different I didn't see any smoking gun.
Another idea I rejected: the content length occurred to me as a possibility, since if the Content-Length value were too small, I suppose that would make the file corrupt. But I'm not clear why that would happen, and I don't think that's exactly it: if I zip and download one file I get a small zip, while downloading several files gives me a much larger zip. That, combined with the fact that no errors are being logged, makes me think the zip utility is correctly finding and compressing files and the problem is elsewhere.
Here are the headers, to be complete.
The response header on my development machine (working)
HTTP/1.1 200 OK
Date: Wed, 02 Jan 2013 21:59:31 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Content-Disposition: attachment; filename="MsdsRequest-2013-59-02-165931.zip"
Transfer-Encoding: chunked
Cache-Control: private
Content-Type: application/zip
The response header on the qa machine (not working)
HTTP/1.1 200 OK
Date: Wed, 02 Jan 2013 21:54:37 GMT
Server: Microsoft-IIS/6.0
P3P: CP="NON DSP LAW CUR TAI HIS OUR LEG"
SVR: 06
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Content-Disposition: attachment; filename="MsdsRequest-2013-54-02-165437.zip"
Cache-Control: private
Content-Type: application/zip
Set-Cookie: (cookie junk removed);expires=Wed, 02-Jan-2013 21:56:37 GMT;path=/;httponly
Content-Length: 16969
Not sure how to approach this since nothing is claiming a failure. I feel like this could be a web server configuration issue (since I don't have any better ideas), but am not sure where to look. Is there a tact I can take?
As it is, you are missing an End() on the page right after the Flush():
    ...
        zip.Save(Response.OutputStream);
    }
    Response.Flush();
    Response.End();
}
But this is not the correct way: using a page to send a zip file means IIS probably also gzips the page, and that can cause issues of its own. The correct way is to use a handler, and to avoid extra gzip compression for that handler, either by configuring IIS accordingly or, if you apply gzip compression yourself, by skipping it for that one handler.
A handler, named for example download.ashx, would for your case look like:
public void ProcessRequest(HttpContext context)
{
    string[] fileNames = context.Request.QueryString["requests"].Split(',');
    context.Response.ContentType = "application/zip";
    string archiveName = String.Format("MsdsRequest-{0}.zip", DateTime.Now.ToString("yyyy-mm-dd-HHmmss"));
    context.Response.AddHeader("Content-Disposition", "attachment; filename=\"" + archiveName + "\"");
    // render direct
    context.Response.BufferOutput = false;
    using (ZipFile zip = new ZipFile())
    {
        foreach (string fileName in fileNames)
        {
            zip.AddFile(String.Format(SiteSettings.PdfPath + "{0}.pdf", fileName), "");
        }
        zip.Save(context.Response.OutputStream);
    }
}
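For illustration, assuming the handler is saved as download.ashx (a hypothetical name and location), it is then requested directly, e.g. download.ashx?requests=file1,file2, so no Page lifecycle runs and nothing extra gets appended to the stream.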

How does HTTP file upload work?

When I submit a simple form like this with a file attached:
<form enctype="multipart/form-data" action="http://localhost:3000/upload?upload_progress_id=12344" method="POST">
    <input type="hidden" name="MAX_FILE_SIZE" value="100000" />
    Choose a file to upload: <input name="uploadedfile" type="file" /><br />
    <input type="submit" value="Upload File" />
</form>
How does it send the file internally? Is the file sent as part of the HTTP body as data? In the headers of this request, I don't see anything related to the name of the file.
I would just like to know the internal workings of HTTP when sending a file.
Let's take a look at what happens when you select a file and submit your form (I've truncated the headers for brevity):
POST /upload?upload_progress_id=12344 HTTP/1.1
Host: localhost:3000
Content-Length: 1325
Origin: http://localhost:3000
... other headers ...
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryePkpFF7tjBAqx29L

------WebKitFormBoundaryePkpFF7tjBAqx29L
Content-Disposition: form-data; name="MAX_FILE_SIZE"

100000
------WebKitFormBoundaryePkpFF7tjBAqx29L
Content-Disposition: form-data; name="uploadedfile"; filename="hello.o"
Content-Type: application/x-object

... contents of file goes here ...
------WebKitFormBoundaryePkpFF7tjBAqx29L--
NOTE: each boundary string must be prefixed with an extra --, and the closing boundary string also gets -- appended at the end. The example above already includes this, but it is easy to miss. See the comment by @Andreas below.
Instead of URL encoding the form parameters, the form parameters (including the file data) are sent as sections in a multipart document in the body of the request.
In the example above, you can see the input MAX_FILE_SIZE with the value set in the form, as well as a section containing the file data. The file name is part of the Content-Disposition header.
The full details are here.
How does it send the file internally?
The format is called multipart/form-data, as asked at: What does enctype='multipart/form-data' mean?
I'm going to:
add some more HTML5 references
explain why he is right with a form submit example
HTML5 references
There are three possibilities for enctype:
application/x-www-form-urlencoded (the default)
multipart/form-data (spec points to RFC 2388)
text/plain. This is "not reliably interpretable by computer", so it should never be used in production, and we will not look further into it.
How to generate the examples
Once you see an example of each method, it becomes obvious how they work, and when you should use each one.
You can produce examples using:
nc -l or an ECHO server: HTTP test server accepting GET/POST requests
a user agent like a browser or cURL
Save the form to a minimal .html file:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<title>upload</title>
</head>
<body>
<form action="http://localhost:8000" method="post" enctype="multipart/form-data">
<p><input type="text" name="text1" value="text default">
<p><input type="text" name="text2" value="aωb">
<p><input type="file" name="file1">
<p><input type="file" name="file2">
<p><input type="file" name="file3">
<p><button type="submit">Submit</button>
</form>
</body>
</html>
We set the default text value to aωb, where ω is U+03C9; in UTF-8 the string is the bytes 61 CF 89 62.
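As a quick sanity check (a throwaway C# snippet, not part of the setup), you can confirm those bytes yourself:
using System;
using System.Text;

class Utf8Check
{
    static void Main()
    {
        // "aωb": ω is U+03C9, which UTF-8 encodes as CF 89.
        byte[] bytes = Encoding.UTF8.GetBytes("a\u03C9b");
        Console.WriteLine(BitConverter.ToString(bytes)); // 61-CF-89-62
    }
}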
Create files to upload:
echo 'Content of a.txt.' > a.txt
echo '<!DOCTYPE html><title>Content of a.html.</title>' > a.html
# Binary file containing 4 bytes: 'a', 1, 2 and 'b'.
printf 'a\xCF\x89b' > binary
Run our little echo server:
while true; do printf '' | nc -l localhost 8000; done
Open the HTML on your browser, select the files and click on submit and check the terminal.
nc prints the request received.
Tested on: Ubuntu 14.04.3, nc BSD 1.105, Firefox 40.
multipart/form-data
Firefox sent:
POST / HTTP/1.1
[[ Less interesting headers ... ]]
Content-Type: multipart/form-data; boundary=---------------------------735323031399963166993862150
Content-Length: 834

-----------------------------735323031399963166993862150
Content-Disposition: form-data; name="text1"

text default
-----------------------------735323031399963166993862150
Content-Disposition: form-data; name="text2"

aωb
-----------------------------735323031399963166993862150
Content-Disposition: form-data; name="file1"; filename="a.txt"
Content-Type: text/plain

Content of a.txt.

-----------------------------735323031399963166993862150
Content-Disposition: form-data; name="file2"; filename="a.html"
Content-Type: text/html

<!DOCTYPE html><title>Content of a.html.</title>

-----------------------------735323031399963166993862150
Content-Disposition: form-data; name="file3"; filename="binary"
Content-Type: application/octet-stream

aωb
-----------------------------735323031399963166993862150--
For the binary file and text field, the bytes 61 CF 89 62 (aωb in UTF-8) are sent literally. You could verify that with nc -l localhost 8000 | hd, which says that the bytes:
61 CF 89 62
were sent (61 == 'a' and 62 == 'b').
Therefore it is clear that:
Content-Type: multipart/form-data; boundary=---------------------------735323031399963166993862150 sets the content type to multipart/form-data and says that the fields are separated by the given boundary string.
But note that the:
boundary=---------------------------735323031399963166993862150
has two fewer dashes -- than the actual delimiter
-----------------------------735323031399963166993862150
This is because the standard requires the boundary to start with two dashes --. The other dashes appear to be just how Firefox chose to implement the arbitrary boundary. RFC 7578 clearly mentions that those two leading dashes -- are required:
4.1. "Boundary" Parameter of multipart/form-data
As with other multipart types, the parts are delimited with a
boundary delimiter, constructed using CRLF, "--", and the value of
the "boundary" parameter.
Every field gets some sub-headers before its data: Content-Disposition: form-data;, the field name, the filename (for file fields), followed by the data.
The server reads the data until the next boundary string. The browser must choose a boundary that will not appear in any of the fields, so this is why the boundary may vary between requests.
Because we have the unique boundary, no encoding of the data is necessary: binary data is sent as is.
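To make the mechanics concrete, here is a minimal C# sketch of the server side (an illustration only: it assumes the whole body is already in memory as a string, whereas a real parser must operate on raw bytes per RFC 7578):
using System;

class MultipartSplit
{
    static void Main()
    {
        // Hypothetical boundary and body, echoing the Firefox example above.
        string boundary = "---------------------------735323031399963166993862150";
        string body =
            "--" + boundary + "\r\n" +
            "Content-Disposition: form-data; name=\"text1\"\r\n" +
            "\r\n" +
            "text default\r\n" +
            "--" + boundary + "--\r\n";

        // Parts are delimited by CRLF + "--" + boundary.
        string delimiter = "\r\n--" + boundary;
        string[] parts = ("\r\n" + body).Split(new[] { delimiter }, StringSplitOptions.None);

        // parts[0] is the (empty) preamble; a part starting with "--" is the closing delimiter.
        for (int i = 1; i < parts.Length; i++)
        {
            if (parts[i].StartsWith("--")) break;
            Console.WriteLine("part: " + parts[i].TrimStart('\r', '\n'));
        }
    }
}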
TODO: what is the optimal boundary size (log(N) I bet), and name / running time of the algorithm that finds it? Asked at: https://cs.stackexchange.com/questions/39687/find-the-shortest-sequence-that-is-not-a-sub-sequence-of-a-set-of-sequences
Content-Type is automatically determined by the browser.
How it is determined exactly was asked at: How is mime type of an uploaded file determined by browser?
application/x-www-form-urlencoded
Now change the enctype to application/x-www-form-urlencoded, reload the browser, and resubmit.
Firefox sent:
POST / HTTP/1.1
[[ Less interesting headers ... ]]
Content-Type: application/x-www-form-urlencoded
Content-Length: 51
text1=text+default&text2=a%CF%89b&file1=a.txt&file2=a.html&file3=binary
Clearly the file data was not sent, only the basenames. So this cannot be used for files.
As for the text field, we see that usual printable characters like a and b were sent in one byte, while non-printable ones like 0xCF and 0x89 took up 3 bytes each: %CF%89!
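For instance (an illustrative one-liner, not from the answer), the same percent-encoding can be reproduced in C# with Uri.EscapeDataString:
using System;

class UrlEncodeCheck
{
    static void Main()
    {
        // Percent-encodes each byte of the UTF-8 encoding of ω.
        Console.WriteLine(Uri.EscapeDataString("a\u03C9b")); // a%CF%89b
    }
}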
Comparison
File uploads often contain lots of non-printable characters (e.g. images), while text forms almost never do.
From the examples we have seen that:
multipart/form-data: adds a few bytes of boundary overhead to the message, and must spend some time calculating it, but sends each byte in one byte.
application/x-www-form-urlencoded: has a single byte boundary per field (&), but adds a linear overhead factor of 3x for every non-printable character.
Therefore, even if we could send files with application/x-www-form-urlencoded, we wouldn't want to, because it is so inefficient.
But for printable characters found in text fields, it does not matter and generates less overhead, so we just use it.
Send file as binary content (upload without form or FormData)
In the given answers/examples, the file is (most likely) uploaded with an HTML form or using the FormData API. The file is only a part of the data sent in the request, hence the multipart/form-data Content-Type header.
If you want to send the file as the only content then you can directly add it as the request body and you set the Content-Type header to the MIME type of the file you are sending. The file name can be added in the Content-Disposition header. You can upload like this:
var xmlHttpRequest = new XMLHttpRequest();
var file = ...file handle...
var fileName = ...file name...
var target = ...target...
var mimeType = ...mime type...
xmlHttpRequest.open('POST', target, true);
xmlHttpRequest.setRequestHeader('Content-Type', mimeType);
xmlHttpRequest.setRequestHeader('Content-Disposition', 'attachment; filename="' + fileName + '"');
xmlHttpRequest.send(file);
If you don't (want to) use forms and you are only interested in uploading one single file this is the easiest way to include your file in the request.
Update:
In all modern browsers you can these days also use the fetch API for (binary) upload. The same as mentioned in the example above would then look like this:
const promise = fetch(target, {
    method: 'POST',
    body: file,
    headers: {
        'Content-Type': mimeType,
        'Content-Disposition': `attachment; filename="${fileName}"`,
    },
});
promise.then(
    (response) => { /* ...do something with response... */ },
    (error) => { /* ...handle error... */ },
);
I have this sample Java Code:
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

public class TestClass {
    public static void main(String[] args) throws IOException {
        ServerSocket socket = new ServerSocket(8081);
        Socket accept = socket.accept();
        InputStream inputStream = accept.getInputStream();
        InputStreamReader inputStreamReader = new InputStreamReader(inputStream, StandardCharsets.UTF_8);
        int readChar; // read into an int so the -1 end-of-stream marker survives the comparison
        while ((readChar = inputStreamReader.read()) != -1) {
            System.out.print((char) readChar);
        }
        inputStream.close();
        accept.close();
        System.exit(1);
    }
}
and I have this test.html file:
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>File Upload!</title>
</head>
<body>
    <form method="post" action="http://localhost:8081" enctype="multipart/form-data">
        <input type="file" name="file" id="file">
        <input type="submit">
    </form>
</body>
</html>
and finally the file I will be using for testing purposes, named a.dat, has the following content:
0x39 0x69 0x65
If you interpret the bytes above as ASCII or UTF-8 characters, they actually represent:
9ie
So let's run our Java code, open up test.html in our favorite browser, upload a.dat, submit the form, and see what our server receives:
POST / HTTP/1.1
Host: localhost:8081
Connection: keep-alive
Content-Length: 196
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Origin: null
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary06f6g54NVbSieT6y
DNT: 1
Accept-Encoding: gzip, deflate
Accept-Language: en,en-US;q=0.8,tr;q=0.6
Cookie: JSESSIONID=27D0A0637A0449CF65B3CB20F40048AF

------WebKitFormBoundary06f6g54NVbSieT6y
Content-Disposition: form-data; name="file"; filename="a.dat"
Content-Type: application/octet-stream

9ie
------WebKitFormBoundary06f6g54NVbSieT6y--
Well, I am not surprised to see the characters 9ie, because we told Java to print them treating them as UTF-8 characters. You may as well choose to read them as raw bytes.
Cookie: JSESSIONID=27D0A0637A0449CF65B3CB20F40048AF
is actually the last HTTP header here. After that comes the HTTP body, where the metadata and contents of the file we uploaded can actually be seen.
An HTTP message may have a body of data sent after the header lines. In a response, this is where the requested resource is returned to the client (the most common use of the message body), or perhaps explanatory text if there's an error. In a request, this is where user-entered data or uploaded files are sent to the server.
http://www.tutorialspoint.com/http/http_messages.htm

HTTP Compression: Some external scripts/CSS not decompressing properly some of the time

I am implementing page/resource compression to improve website performance.
I have tried to implement both blowery and wicked HttpCompress but end up getting the same result. This only seems to affect Firefox, I have tested on Chrome and IE.
What happens is that the first time I request the page, all the external resources decompress OK. The 2nd or 3rd time, the page has errors because the resource doesn't seem to be decompressed; I get garbage characters like:
������í½`I%&/mÊ{JõJ×àt¡`$Ø#ìÁÍæìiG#)«*ÊeVe]f
(actually they can't be displayed properly here)
Inspecting the page via firebug displays the response header as:
Cache-Control private
Content-Type text/html; charset=utf-8
Content-Encoding gzip
Server Microsoft-IIS/7.5
X-AspNetMvc-Version 2.0
X-AspNet-Version 2.0.50727
X-Compressed-By HttpCompress
X-Powered-By ASP.NET
Date Fri, 09 Jul 2010 06:51:40 GMT
Content-Length 2622
This clearly states that the resource is compressed with gzip. So something seems to be going wrong on the decompression side, on the client?
I have added the following sections (in the appropriate locations) in the web.config:
<sectionGroup name="blowery.web">
    <section name="httpCompress" type="blowery.Web.HttpCompress.SectionHandler, blowery.Web.HttpCompress"/>
</sectionGroup>

<blowery.web>
    <httpCompress preferredAlgorithm="gzip" compressionLevel="high">
        <excludedMimeTypes>
            <add type="image/jpeg"/>
            <add type="image/png"/>
            <add type="image/gif"/>
        </excludedMimeTypes>
        <excludedPaths>
            <add path="NoCompress.aspx"/>
        </excludedPaths>
    </httpCompress>
</blowery.web>

<add name="CompressionModule" type="blowery.Web.HttpCompress.HttpModule, blowery.web.HttpCompress"/>
Any help?
This is an issue I have faced before, and the problem is that the Content-Length is not correct. Why is it not correct? Because it is probably calculated before the compression.
If you set Content-Length by hand, just remove it and let the module set it if it can.
I note that you use the Blowery compression. This is probably a bug/issue inside Blowery. If you cannot locate and fix it, why not use the Microsoft compression?
@ptutt, if you are on shared IIS, then compression may already be set up there, so one compression is applied on top of the other, and you only need to remove yours. If this is the issue, then the Content-Length is certainly wrong, because after the first compression the second one breaks it.
Check whether your pages are already compressed by default by IIS using this site: https://www.giftofspeed.com/gzip-test/
If they are not compressed by default, you can do it very easily in Global.asax:
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    // Requires: using System.IO; using System.IO.Compression;
    string cTheFile = HttpContext.Current.Request.Path;
    string sExtentionOfThisFile = System.IO.Path.GetExtension(cTheFile);
    if (sExtentionOfThisFile.Equals(".aspx", StringComparison.InvariantCultureIgnoreCase))
    {
        string acceptEncoding = (HttpContext.Current.Request.Headers["Accept-Encoding"] ?? "").ToLower();
        Stream prevUncompressedStream = HttpContext.Current.Response.Filter;
        if (acceptEncoding.Contains("deflate") || acceptEncoding == "*")
        {
            // deflate
            HttpContext.Current.Response.Filter = new DeflateStream(prevUncompressedStream,
                CompressionMode.Compress);
            HttpContext.Current.Response.AppendHeader("Content-Encoding", "deflate");
        }
        else if (acceptEncoding.Contains("gzip"))
        {
            // gzip
            HttpContext.Current.Response.Filter = new GZipStream(prevUncompressedStream,
                CompressionMode.Compress);
            HttpContext.Current.Response.AppendHeader("Content-Encoding", "gzip");
        }
    }
}
Please note, I just wrote this code and have not tested it. My actual code is a little more complicated, so I created a simplified version of it.
Find more examples:
http://www.google.com/search?q=Response.Filter+GZipStream
Reference:
ASP.NET site sometimes freezing up and/or showing odd text at top of the page while loading, on load balanced servers

Java web server and PDF files

I have created my own HTTP server. I need to return a PDF file (generated by Jasper Reports) to the web browser. However, when I read the PDF file and write its contents to the socket, the web browser receives a blank PDF file. When I save this file and compare it to the original, I see that many of the characters have been converted from their original value to 0x3F (which is '?').
When I read the file, my debug output shows that the correct values are read and that the correct values are written to the socket. Can anyone help me?
Here is the code (minus all the debug code) that reads the PDF file:
File f = new File(strFilename);
long len = f.length();
byteBuffPdfData = ByteBuffer.allocate((int) len);
FileInputStream in = new FileInputStream(strFilename);
boolean isEOF = false;
while (!isEOF)
{
    int iValue = in.read();
    if (iValue == -1)
    {
        isEOF = true;
    }
    else
    {
        byteBuffPdfData.put((byte) iValue);
    }
}
Next is the code that writes from the byte buffer to the socket...
printWriter = new PrintWriter(socket.getOutputStream(), true);
printWriter.write(strHttpHeaders);
// Headers:
// HTTP/1.0 200 OK
// Date: Wed, 18 Nov 2009 21:04:36
// Expires: Wed, 18 Nov 2009 21:09:36
// Cache-Control: public
// Content-Type: application/pdf
// Content-Length: 1811
// Connection: keep-alive
//
byteBuffPdfData.rewind();
while (byteBuffPdfData.hasRemaining())
{
    printWriter.print((char) byteBuffPdfData.get());
}
printWriter.flush();
socket.close();
Any help that can be offered is greatly appreciated. I am sure that I need to do something with the character sets but at this point I have tried a million things and nothing seems to work.
John
I'm not sure what language you're writing in here (looks like Java, though), but is it possible something is trying to do a charset conversion (perhaps to or from Unicode characters)? The ? seems reasonable as a 'substitution' for characters that can't be represented in ASCII. (I recognize that you aren't deliberately trying to do any such conversion, but maybe something of the sort is happening in the libraries you're using.)
