How to debug corrupt zip file generation? - asp.net

We have a web page that grabs a series of strings from a URL, finds the PDFs associated with those strings, zips them up using DotNetZip, and returns the archive to the user. The page that does this is very simple - here's the Page_Load:
protected void Page_Load(object sender, EventArgs e)
{
    string[] fileNames = Request.QueryString["requests"].Split(',');
    Response.Clear();
    Response.ClearHeaders();
    Response.ContentType = "application/zip";
    // note: "mm" in this format string is minutes, not months ("MM"),
    // which is why the filenames in the headers below have the minute where the month should be
    string archiveName = String.Format("MsdsRequest-{0}.zip", DateTime.Now.ToString("yyyy-mm-dd-HHmmss"));
    Response.AddHeader("Content-Disposition", "attachment; filename=\"" + archiveName + "\"");
    using (ZipFile zip = new ZipFile())
    {
        foreach (string fileName in fileNames)
        {
            zip.AddFile(String.Format(SiteSettings.PdfPath + "{0}.pdf", fileName), "");
        }
        zip.Save(Response.OutputStream);
    }
    Response.Flush();
}
(Before you ask: it would be fine if someone put other values in this URL; these are not secure files.)
This works fine on my development box. However, when testing on our QA system, the zip file downloads, but it is corrupt. No error is thrown, and nothing is logged in the event log.
It may be possible for me to find a way to debug interactively on the QA environment, but since nothing actually fails by throwing an error (as it would if, say, the DLL weren't found), and it successfully generates a non-empty (but corrupt) zip file, I don't think I'd discover much by stepping through it.
Is it possible that this is some kind of issue where the web server is "helping" me by "fixing" the file in some way?
I compared the HTTP response headers between my local box (working) and the QA box (not working); while they were slightly different, I didn't see any smoking gun.
One other idea I rejected: the Content-Length header occurred to me as a possibility, since a too-small value would presumably truncate and corrupt the file. But I'm not clear why that would happen, and I don't think that's it anyway: if I zip and download one file I get a small zip, while downloading several files gives me a much larger zip. That, combined with the fact that no errors are being logged, makes me think the zip utility is correctly finding and compressing the files and the problem is elsewhere.
Here are the headers, for completeness.
The response header on my development machine (working)
HTTP/1.1 200 OK
Date: Wed, 02 Jan 2013 21:59:31 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Content-Disposition: attachment; filename="MsdsRequest-2013-59-02-165931.zip"
Transfer-Encoding: chunked
Cache-Control: private
Content-Type: application/zip
The response header on the QA machine (not working)
HTTP/1.1 200 OK
Date: Wed, 02 Jan 2013 21:54:37 GMT
Server: Microsoft-IIS/6.0
P3P: CP="NON DSP LAW CUR TAI HIS OUR LEG"
SVR: 06
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Content-Disposition: attachment; filename="MsdsRequest-2013-54-02-165437.zip"
Cache-Control: private
Content-Type: application/zip
Set-Cookie: (cookie junk removed);expires=Wed, 02-Jan-2013 21:56:37 GMT;path=/;httponly
Content-Length: 16969
Not sure how to approach this, since nothing is claiming a failure. I feel like this could be a web server configuration issue (since I don't have any better ideas), but I'm not sure where to look. Is there a tack I can take?

As it is, you have missed calling End() on the page right after the Flush(); without it, the rest of the page lifecycle runs and can append the page's own output after the zip bytes:
...
zip.Save(Response.OutputStream);
}
Response.Flush();
Response.End();
}
But a page is not the correct way to send a zip file; IIS probably also gzips the page output, and that can cause issues as well. The correct way is to use a handler, and to avoid extra gzip compression for that handler: either configure IIS to exclude it (a sketch follows), or, if you apply gzip compression yourself, skip it for that one handler.
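For example, on IIS 7 or later a web.config fragment like the following would exclude the handler's path from compression. This is a sketch only (your headers show IIS 6, where compression is configured through the metabase or IIS Manager instead):
<location path="download.ashx">
  <system.webServer>
    <urlCompression doStaticCompression="false" doDynamicCompression="false" />
  </system.webServer>
</location>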
A handler, named for example download.ashx, would look like this for your case:
public void ProcessRequest(HttpContext context)
{
    string[] fileNames = context.Request.QueryString["requests"].Split(',');
    context.Response.ContentType = "application/zip";
    string archiveName = String.Format("MsdsRequest-{0}.zip", DateTime.Now.ToString("yyyy-mm-dd-HHmmss"));
    context.Response.AddHeader("Content-Disposition", "attachment; filename=\"" + archiveName + "\"");
    // render directly, without buffering
    context.Response.BufferOutput = false;
    using (ZipFile zip = new ZipFile())
    {
        foreach (string fileName in fileNames)
        {
            zip.AddFile(String.Format(SiteSettings.PdfPath + "{0}.pdf", fileName), "");
        }
        zip.Save(context.Response.OutputStream);
    }
}
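For completeness, the download.ashx file wraps that method in a handler class; a minimal sketch (the class name here is an assumption, and IHttpHandler also requires an IsReusable property):
<%@ WebHandler Language="C#" Class="DownloadHandler" %>

public class DownloadHandler : System.Web.IHttpHandler
{
    // ProcessRequest as above
    public bool IsReusable { get { return false; } }
}
The handler is then requested the same way the page was, e.g. download.ashx?requests=name1,name2.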

Related

SOAP UI - Save HTTP request to a GZIP file

I'm using the SoapUI free version for some REST mocking.
I need to persist my HTTP POST request (received already gzip-compressed) to a gzip file.
I have tried different ways to do this; however, after executing the code, when I try to decompress the file manually I get the following error: "The archive is either in unknown format or damaged".
The HTTP POST request has the following header:
Host : 127.0.0.1:8091
Content-Length : 636
User-Agent : Java/1.7.0_07
Connection : keep-alive
Content-Type : application/octet-stream
Accept : text/plain, application/json, application/*+json, */*
Pragma : no-cache
Cache-Control : no-cache
Below are the solutions I have tried:
Solution#1:
byte[] buffer = new byte[1024];
byte[] data = mockRequest.getRequestContent().getBytes();
def path="myfile.gz";
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(path));
bos.write(data);
bos.flush();
bos.close();
Solution#2
byte[] buffer = new byte[1024];
byte[] data = mockRequest.getRawRequestData();
def path="myfile.gz";
BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(path));
bos.write(data);
bos.flush();
bos.close();
Can someone please help and let me know why I cannot decompress the gzip file and how I can do that?
Thanks,
This is Groovy, so you don't need all this Java clutter.
Here's some code that might work:
new File( path) << mockRequest.rawRequestData
EDIT
Ok, based on your comments, for zip files to be copied correctly, you probably need something a little different:
import java.nio.file.*
Files.copy( new ByteArrayInputStream( mockRequest.requestContent.bytes ),
            Paths.get( 'destination.zip' ) )
Tested this with an actual zip file's byte[] as source and it worked. If it does not work for you, then the byte array you're getting from requestContent.bytes just isn't a zip file.

Does Jersey support returning a gzipped byte[]?

Currently I use Response to return an XML file, but the performance is not good when the file is large.
So I would like to know how to return a byte[] (gzipped XML),
such that IE/Firefox can display the XML file from the gzipped byte array.
Before, when I used a servlet, it could show the XML file automatically.
@GET
@Path("/Test/{CustomerId}")
@Produces("application/xml")
public Response getTest() throws IOException {
    return Response.ok().entity(new FileInputStream("CC100_PC.xml")).build();
}
By the way, how does Jersey support init and destroy functions? I want to set up a database connection in init and release it in destroy.
Just add the GZIPContentEncodingFilter to your Jersey app - see http://jersey.java.net/nonav/apidocs/latest/jersey/com/sun/jersey/api/container/filter/GZIPContentEncodingFilter.html
That will automatically compress it using GZIP if the client supports it (which it figures out from the Accept-Encoding HTTP header).
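With Jersey 1.x the filter is typically registered through an init-param on the Jersey servlet in web.xml; a sketch, assuming the usual ServletContainer setup:
<init-param>
  <param-name>com.sun.jersey.spi.container.ContainerResponseFilters</param-name>
  <param-value>com.sun.jersey.api.container.filter.GZIPContentEncodingFilter</param-value>
</init-param>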

Chrome browser does not show images generated by HTTP handler

Basically I have a web site that renders HTML preview of some documents (mainly office). The resulting HTML fragment is included in the page returned by the same web site, however images are returned by HTTP handler from another site with the following links:
<img width="50" height="50" src="http://portal/Service/GetFile.ashx?id=123&inline=true">
For some reason all browsers except Chrome (e.g. IE6/7/8, Firefox, Opera, Safari) show everything just fine, however for these images Chrome shows "broken image" icon. If I choose "Open image in new tab" then the image is shown just fine.
Edit: I thought I had solved this issue, but apparently it only works while Fiddler is running.
I had context.Response.Charset = "utf-8" left in the code, but removing it made no difference.
Headers:
HTTP/1.1 200 OK
Date: Wed, 05 Jan 2011 14:26:57 GMT
Server: Microsoft-IIS/6.0
MicrosoftOfficeWebServer: 5.0_Pub
X-Powered-By: ASP.NET
X-AspNet-Version: 4.0.30319
Transfer-Encoding: chunked
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: image/jpeg
Code:
context.Response.ContentType = file.ContentType;
context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
byte[] buff = new byte[BuffSize]; // BuffSize: a buffer-size constant defined elsewhere
using (var stream = repository.GetFileContentsAsStream(file.ContentId))
{
    int bytesRead;
    do
    {
        bytesRead = stream.Read(buff, 0, BuffSize);
        if (bytesRead > 0)
        {
            context.Response.OutputStream.Write(buff, 0, bytesRead);
        }
    } while (bytesRead > 0);
}
context.Response.Flush();
context.Response.Close();
I'm pretty sure Chrome requires the Length to be set for images, so try adding the Content-Length header to your response when handling the image.
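A minimal sketch of that (reusing the repository and file objects from the question, and assuming it is acceptable to buffer the whole file in memory to learn its size):
byte[] contents;
using (var stream = repository.GetFileContentsAsStream(file.ContentId))
using (var ms = new System.IO.MemoryStream())
{
    stream.CopyTo(ms); // Stream.CopyTo is available from .NET 4, which the X-AspNet-Version header indicates
    contents = ms.ToArray();
}
context.Response.ContentType = file.ContentType;
context.Response.AddHeader("Content-Length", contents.Length.ToString());
context.Response.OutputStream.Write(contents, 0, contents.Length);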
It's the context.Response.Close(); that Chrome doesn't like. Get rid of that line and all will be good. I fought with this for months.
You should add this:
Response.AddHeader("Content-Disposition", "inline;Filename=\"Picture.gif\"");
Response.AddHeader("Content-Length", filesize.ToString());
https://en.wikipedia.org/wiki/Comparison_of_web_browsers#Image_format_support
My issue, after trying all of the above, was simply that Chrome (among other browsers) does not support my image type: TIFF is not supported by Chrome.

HTTP Compression: Some external scripts/CSS not decompressing properly some of the time

I am implementing page/resource compression to improve website performance.
I have tried to implement both the Blowery and Wicked HttpCompress modules but end up with the same result. This only seems to affect Firefox; I have tested on Chrome and IE.
What happens is that the first time I request the page, all the external resources decompress fine. The 2nd or 3rd time, the page has errors because the resources don't seem to be decompressed; I get garbage characters like:
������í½`I%&/mÊ{JõJ×àt¡`$Ø#ìÁÍæìiG#)«*ÊeVe]f
(actually they can't be displayed properly here)
Inspecting the page via Firebug shows the response headers as:
Cache-Control private
Content-Type text/html; charset=utf-8
Content-Encoding gzip
Server Microsoft-IIS/7.5
X-AspNetMvc-Version 2.0
X-AspNet-Version 2.0.50727
X-Compressed-By HttpCompress
X-Powered-By ASP.NET
Date Fri, 09 Jul 2010 06:51:40 GMT
Content-Length 2622
This clearly states that the resource is gzip-compressed. So something seems to be going wrong on the decompression side at the client?
I have added the following sections (in the appropriate locations) in the web.config:
<sectionGroup name="blowery.web">
  <section name="httpCompress" type="blowery.Web.HttpCompress.SectionHandler, blowery.Web.HttpCompress"/>
</sectionGroup>

<blowery.web>
  <httpCompress preferredAlgorithm="gzip" compressionLevel="high">
    <excludedMimeTypes>
      <add type="image/jpeg"/>
      <add type="image/png"/>
      <add type="image/gif"/>
    </excludedMimeTypes>
    <excludedPaths>
      <add path="NoCompress.aspx"/>
    </excludedPaths>
  </httpCompress>
</blowery.web>

<add name="CompressionModule" type="blowery.Web.HttpCompress.HttpModule, blowery.web.HttpCompress"/>
Any help?
This is an issue I have faced before, and the problem is that the Content-Length is not correct. Why is it not correct? Because it is probably calculated before the compression.
If you set Content-Length by hand, remove it and let the module set it if it can.
I note that you use the Blowery compression. This is probably a bug/issue inside Blowery. If you cannot locate and fix it, why not use the Microsoft compression?
@ptutt, if you are on shared IIS, compression may already be enabled, so one compression runs on top of the other, and you only need to remove yours. If this is the issue, then the Content-Length is certainly wrong, because the second compression breaks the output of the first.
Check whether your pages are already compressed by IIS by default using this site: https://www.giftofspeed.com/gzip-test/
If they are not compressed by default, you can do it easily in Global.asax:
// requires using System.IO and System.IO.Compression
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    string cTheFile = HttpContext.Current.Request.Path;
    string sExtentionOfThisFile = System.IO.Path.GetExtension(cTheFile);
    if (sExtentionOfThisFile.Equals(".aspx", StringComparison.InvariantCultureIgnoreCase))
    {
        string acceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (String.IsNullOrEmpty(acceptEncoding))
            return;
        acceptEncoding = acceptEncoding.ToLower();
        // the current response filter, which the compressing stream wraps
        Stream prevUncompressedStream = HttpContext.Current.Response.Filter;
        if (acceptEncoding.Contains("deflate") || acceptEncoding == "*")
        {
            // deflate
            HttpContext.Current.Response.Filter = new DeflateStream(prevUncompressedStream, CompressionMode.Compress);
            HttpContext.Current.Response.AppendHeader("Content-Encoding", "deflate");
        }
        else if (acceptEncoding.Contains("gzip"))
        {
            // gzip
            HttpContext.Current.Response.Filter = new GZipStream(prevUncompressedStream, CompressionMode.Compress);
            HttpContext.Current.Response.AppendHeader("Content-Encoding", "gzip");
        }
    }
}
Please note, I just wrote this code and have not tested it. My actual code is a little more complicated, so I created a simplified version of it.
Find more examples:
http://www.google.com/search?q=Response.Filter+GZipStream
Reference:
ASP.NET site sometimes freezing up and/or showing odd text at top of the page while loading, on load balanced servers

Java web server and PDF files

I have created my own HTTP server. I need to return a PDF file (generated by Jasper Reports) to the web browser. However, when I read the PDF file and write its contents to the socket, the web browser receives a blank PDF file. When I save this file and compare it to the original, I see that many of the characters have been converted from their original value to 0x3F (which is '?').
When I read the file, my debug output shows that the correct values are read and that the correct values are written to the socket. Can anyone help me?
Here is the code (minus all the debug code) that reads the PDF file:
File f = new File(strFilename);
long len = f.length();
byteBuffPdfData = ByteBuffer.allocate((int) len);
FileInputStream in = new FileInputStream(strFilename);
boolean isEOF = false;
while (!isEOF)
{
    int iValue = in.read();
    if (iValue == -1)
    {
        isEOF = true;
    }
    else
    {
        byteBuffPdfData.put((byte) iValue);
    }
}
Next is the code that writes from the byte buffer to the socket...
printWriter = new PrintWriter(socket.getOutputStream(), true);
printWriter.write(strHttpHeaders);
// Headers:
// HTTP/1.0 200 OK
// Date: Wed, 18 Nov 2009 21:04:36
// Expires: Wed, 18 Nov 2009 21:09:36
// Cache-Control: public
// Content-Type: application/pdf
// Content-Length: 1811
// Connection: keep-alive
//
byteBuffPdfData.rewind();
while (byteBuffPdfData.hasRemaining())
{
    printWriter.print((char) byteBuffPdfData.get());
}
printWriter.flush();
socket.close();
Any help that can be offered is greatly appreciated. I am sure that I need to do something with the character sets but at this point I have tried a million things and nothing seems to work.
John
I'm not sure what language you're writing in here (it looks like Java), but is it possible something is doing a charset conversion (perhaps to or from Unicode)? The '?' seems reasonable as a 'substitution' for characters that can't be represented in ASCII. (I recognize that you aren't deliberately doing any such conversion, but something of the sort may be happening in the libraries you're using; PrintWriter, for instance, encodes characters using a text charset rather than writing raw bytes.)
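That substitution is easy to reproduce. Here is a minimal illustration of the failure mode (in C#, matching the rest of this page; the byte values are made up): round-tripping binary data through a 7-bit text encoding turns every byte above 0x7F into 0x3F ('?').
byte[] original = { 0x25, 0x50, 0x44, 0x46, 0xE2, 0xE3, 0xCF, 0xD3 }; // "%PDF" followed by binary bytes
char[] asChars = new char[original.Length];
for (int i = 0; i < original.Length; i++)
{
    asChars[i] = (char)original[i]; // binary data treated as text
}
byte[] roundTripped = System.Text.Encoding.ASCII.GetBytes(new string(asChars));
// roundTripped is 0x25, 0x50, 0x44, 0x46, 0x3F, 0x3F, 0x3F, 0x3F: the high bytes became '?'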
