Extracting zip from stream with DotNetZip - asp.net

public void ZipExtract(Stream inputStream, string outputDirectory)
{
    using (ZipFile zip = ZipFile.Read(inputStream))
    {
        Directory.CreateDirectory(outputDirectory);
        zip.ExtractSelectedEntries("name=*.jpg,*.jpeg,*.png,*.gif,*.bmp", " ", outputDirectory,
            ExtractExistingFileAction.OverwriteSilently);
    }
}
[HttpPost]
public ContentResult Uploadify(HttpPostedFileBase filedata)
{
    var path = Server.MapPath(@"~/Files");
    var filePath = Path.Combine(path, filedata.FileName);
    if (filedata.FileName.EndsWith(".zip"))
    {
        ZipExtract(Request.InputStream, path);
    }
    filedata.SaveAs(filePath);
    _db.Photos.Add(new Photo
    {
        Filename = filedata.FileName
    });
    _db.SaveChanges();
    return new ContentResult { Content = "1" };
}
I'm trying to read a zip archive from a stream and extract its files. I get the following exception on the line "using (ZipFile zip = ZipFile.Read(inputStream))": ZipEntry::ReadDirEntry(): Bad signature (0xC618F879) at position 0x0000EE19
Any ideas how to handle this exception?

The error is occurring because the stream you are trying to read is not a valid zip bytestream. In most cases, Request.InputStream will not represent a zip file. It will represent an HTTP message, which will look like this:
POST /path/for/your/app.aspx HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.2; ...)
Content-Type: application/x-www-form-urlencoded
Content-Length: 11132
...more stuff here...
I think what you are doing is trying to read that message as if it were a zip file. That's not gonna work. The file content is actually embedded in the "... more stuff here..." part.
To work toward solving this, I suggest you work in smaller steps.
First, get the file upload to work, saving the content of the uploaded file to a filesystem file on the server. Then, on the server, try to open the file as a zipfile. If it works, then you should be able to replace the file saving portion, with ZipFile.Read(). If you cannot open the file that you saved, then it means that the file that you saved is not a zip file. Either it is incomplete, or, more likely, it includes extraneous data, like the HTTP headers.
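For instance, the "save, then verify" step might look like this (a rough sketch inside your action, reusing the names from your controller):
var savedPath = Path.Combine(Server.MapPath(@"~/Files"), filedata.FileName);
filedata.SaveAs(savedPath);

// If this throws, the saved bytes are not a valid zip
// (incomplete, or wrapped in extraneous data such as HTTP headers).
using (ZipFile zip = ZipFile.Read(savedPath))
{
    int entryCount = zip.Entries.Count;  // reaching here means it parsed as a zip
}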
If you have trouble successfully uploading a binary file like a zip file, first work on uploading a text file. You can more easily verify the upload of a text file on the server, by simply opening the uploaded content in a text editor, and checking that it contains exactly the content of the file that was uploaded from the client. Once you have this working, move to a binary file. Then you can move to a full streaming approach, using DotNetZip to read the stream. Once you get to this point, there should be no need to save the file to the filesystem, before reading it as a zip file, but you may want to save it anyway, for other reasons.
To help, you may want to use Fiddler2, the HTTP debugging proxy. Install it on the browser machine, turn it on, and it will show you the messages that get sent from the browser to the ASP.NET application on the server. You'll see that a file upload contains more than just the bare file data.
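Once the saved file checks out, the streaming version is a small change to your action: hand DotNetZip the uploaded file's own stream rather than the raw request. A minimal sketch (assuming Uploadify posts a standard multipart form, so the HttpPostedFileBase parameter is populated):
[HttpPost]
public ContentResult Uploadify(HttpPostedFileBase filedata)
{
    var path = Server.MapPath(@"~/Files");
    Directory.CreateDirectory(path);
    if (filedata.FileName.EndsWith(".zip", StringComparison.OrdinalIgnoreCase))
    {
        // filedata.InputStream holds only the file's bytes;
        // Request.InputStream holds the entire HTTP request body.
        ZipExtract(filedata.InputStream, path);
    }
    else
    {
        filedata.SaveAs(Path.Combine(path, filedata.FileName));
    }
    return new ContentResult { Content = "1" };
}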

A more robust alternative could be ICSharpCode's SharpZipLib: http://www.sharpdevelop.net/OpenSource/SharpZipLib/Default.aspx

Related

@PutChild Upload file with milton webdav in Mac Finder failed

I'm using milton, and my upload code is as follows:
@PutChild
@Transactional
public FileContentItem uploadFile(FolderContentItem parent, String name, byte[] bytes) {
    String traceId = UuidGenUtil.createUuid();
    try {
        QUERY_LOGGER.info("[uploadFile][NetdiskController],action=Request, name={}, size={},traceId={}", name, bytes.length, traceId);
        // ... (rest of the method elided)
On Windows I can upload files successfully, but with Mac Finder the length of bytes is always 0, and the error is as follows:
The Finder can't complete the operation because some data in "Shot.png" can't be read or written (Error code -36)
Anyone know why? Thanks
Update: I tried the ForkLift WebDAV client on the Mac and it could upload files successfully
The problem is that Mac Finder first sends a request to create the new file, without any bytes.
After that it calls LOCK, which is not available at DAV Level 1; that's why you get a bad response from the server and the Finder stops uploading the file. This method is only available at DAV Level 2, so you have to get an enterprise license of milton to make it work.
After locking the object, Finder uploads the file.
Afterwards, it calls the UNLOCK method.
So if you want to use Mac Finder for WebDAV with milton, you have several options:
Get the trial enterprise license and look into this example: https://github.com/miltonio/milton2/tree/master/examples/milton-anno-ref
Implement these methods yourself following the WebDAV specs
Mock it - extend MiltonFilter or look into MyOwnServlet in the example, and in the doFilter/service method write something like this:
// mock method, do not use it in production!
HttpServletRequest request = (HttpServletRequest) req;
HttpServletResponse response = (HttpServletResponse) resp;
if (request.getMethod().equals("LOCK")) {
    response.setStatus(200);
    response.addHeader("Lock-Token", "<opaquelocktoken:e71d4fae-5dec-22d6-fea5-00a0c91e6be4>");
} else if (request.getMethod().equals("UNLOCK")) {
    response.setStatus(204);
} else {
    doMiltonProcessing((HttpServletRequest) req, (HttpServletResponse) resp);
}
I've checked that this code works in the example linked above: in web.xml, route the method handling to MyOwnServlet, disable authentication in init by implementing an empty security manager, and set the controller packages to scan to "com.mycompany".
P.S. To build the example project I had to delete the milton client dependency from the pom.xml file.

IIS 7.0+ HTTP PUT Completes, but No File Saved

I'm struggling to figure out what exactly is happening. I am using GdPicture to save a scanned document through JavaScript, using their COM+ code and source project as my starting ground. Long story short, their function issues an HTTP PUT command specifying the file name to be saved.
When I execute the command I see that the request is getting to my server, and it even has the appropriate content size to include the PDF document. I even get a 200 response back to my browser, no errors or anything... yet the PDF doesn't get saved. Is that because PUT isn't the right way to do this? I don't have the option to POST the file because the transfer is wrapped in GdPicture's API.
With that said, I have done the following:
Ensured that IIS_IUSRS group has write permissions to the "Upload" virtual directory
Added a handler that specifically allows the PUT verb for "*.pdf"
Removed the StaticFileHandler for the "Upload" virtual directory
I apologize for the links, but I don't have 10 rep points yet
PUT Request from FIDDLER
Response
Edit:
More information about GdPicture, I have already contacted them and their function is not the problem. The implementation is as simple as
var status = oGdViewer.SaveDocumentToPDF_2("http://domain.com/Annotation/Upload/" + FileName, "user", "pass");
Thanks!
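One thing to check: IIS will not write the body of a PUT request to disk on its own; something on the server (the WebDAV module or a custom handler) has to read the request body and save it, otherwise you can get a 200 with nothing persisted. As a rough sketch only (the class name and save path here are hypothetical, not from GdPicture), a handler mapped to the PUT verb for *.pdf could look like this:
using System.IO;
using System.Web;

// Hypothetical handler; register it for the PUT verb on *.pdf in web.config.
public class PdfPutHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        if (context.Request.HttpMethod != "PUT")
        {
            context.Response.StatusCode = 405; // Method Not Allowed
            return;
        }
        // Save the raw request body under the file name from the URL.
        string fileName = Path.GetFileName(context.Request.Path);
        string savePath = context.Server.MapPath("~/Annotation/Upload/" + fileName);
        using (FileStream output = File.Create(savePath))
        {
            context.Request.InputStream.CopyTo(output);
        }
        context.Response.StatusCode = 201; // Created
    }
}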

File upload in servlet corrupts the file

I am uploading a file using the Valums uploader, with a servlet on the server side. The file type is application/pdf. The code is:
String filename= request.getHeader("X-File-Name");
InputStream is = request.getInputStream();
File tmp = File.createTempFile(filename, "");
tmp.deleteOnExit();
FileOutputStream fos = new FileOutputStream(tmp);
IOUtils.copy(is, fos);
byte[] bytes = new byte[(int) tmp.length()];
is.read(bytes);
Now these bytes are getting stored in the database as a longblob. But it seems that the inputStream in the above code is adding some more data to the file, and that's why the file data is getting corrupted. I downloaded the same data as a PDF file and found that both the original uploaded file and the downloaded file have the same size, but when the downloaded file is opened in Acrobat, it reports "File is corrupted". For the upload request I used only the file input, so there is no chance of other input params being in the inputStream. Also, the bytes array in the above code is passed as-is for download. Why is the data getting corrupted?
Your problem might be the data you are reading. Note that by the time you call is.read(bytes), IOUtils.copy has already consumed the request stream to the end, so that read returns -1 and the byte array stays zero-filled - which would give you a blob of the right size but corrupt content. Read the bytes back from the temp file instead, and check read lengths in general, since read is not guaranteed to fill the buffer. I had a similar problem and posted on this issue link
Java: Binary File Upload using Restlet + Apache Commons FileUpload
Hope this helps

Split zip file using DotNetZip Library

I'm using the DotNetZip library to create a zip file of about 100 MB. I'm saving the zip file directly to the Response.OutputStream:
Response.Clear();
// no buffering - allows large zip files to download as they are zipped
Response.BufferOutput = false;
String ReadmeText = "Dynamic content for a readme file...\n" +
    DateTime.Now.ToString("G");
string archiveName = String.Format("archive-{0}.zip",
    DateTime.Now.ToString("yyyy-MMM-dd-HHmmss"));
Response.ContentType = "application/zip";
Response.AddHeader("content-disposition", "attachment; filename=" + archiveName);
using (ZipFile zip = new ZipFile())
{
    // add a file entry into the zip, using content from a string
    zip.AddFileFromString("Readme.txt", "", ReadmeText);
    // add the set of files to the zip
    zip.AddFiles(filesToInclude, "files");
    // compress and write the output to OutputStream
    zip.Save(Response.OutputStream);
}
Response.Close();
What I need is to split this 100 MB file into sections of about 20 MB each and provide the download facility to the user. How can I achieve this?
Your question is sort of independent of the ZIP aspect. Basically it seems you want to make available for download a large file of 100 MB or more, and you want to do it in parts. Some options:
Save it to a regular file, then transmit it in parts. The client would have to make a distinct download request for each of the N parts, selecting the appropriate section of the file via the HTTP Range header. The server would have to be set up to serve ZIP files with the appropriate MIME type etc.
Save it to a split (spanned) zip file, which implies N different files; see the sketch after this list. The client would then make an HTTP GET for each of the distinct files. The server would have to be set up to serve .zip, .z00, .z01, etc. I'm not sure if built-in OS tools handle split zip files appropriately.
Save the file as one large blob, and have the client use BITS or some other restartable download facility.
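For the split (spanned) option, DotNetZip can produce the segments for you via the MaxOutputSegmentSize property. Note that segmented archives must be saved to the filesystem rather than to Response.OutputStream, so you would write them to disk first and then serve the resulting files. A rough sketch (paths and sizes are illustrative):
using (ZipFile zip = new ZipFile())
{
    zip.AddFiles(filesToInclude, "files");

    // Split the archive into ~20 MB segments:
    // archive.zip plus archive.z01, archive.z02, ...
    zip.MaxOutputSegmentSize = 20 * 1024 * 1024;

    // Segmented saves require a file path, not a stream.
    zip.Save(Server.MapPath("~/Downloads/archive.zip"));
}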

ASP.NET Webservice corrupts uploaded file

I have a webservice through which I can upload documents to our ASP.NET web site.
The problem is when I upload PDF & word documents, they get corrupted when I try to open them. Text documents always upload fine.
What is even stranger is that these files upload fine on my development machine, but when I try to upload them to our demo site, they get corrupted.
Any ideas?
my code is of the format:
WebServicesSoapClient proxy = new WebServicesSoapClient();
byte[] data = GetFileByteStream("C:\\temp\\sample.pdf");
string response = proxy.UploadDocument("james", "password",
    orderId, "Sample.pdf", data, true);
Are your PDF files larger than 4 MB? That is the default maximum request length for ASP.NET. You can override that setting in your web.config with:
<httpRuntime maxRequestLength="8192" />
However, be aware that this will increase your memory usage on your server - by default asp.net will cache the entire request in memory.
Also, I'm not entirely certain this is the problem in your case, since normally exceeding the request length would cause an exception to be thrown, not silent file corruption.
see also http://support.microsoft.com/default.aspx?scid=kb;EN-US;295626
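One more thing worth ruling out: if GetFileByteStream hand-rolls a read loop, a classic cause of "uploads fine here, corrupt there" with binary files is ignoring the return value of Stream.Read, which is not guaranteed to fill the buffer. Since that helper is yours, this is only a guess, but the simplest safe version would be:
using System.IO;

static byte[] GetFileByteStream(string path)
{
    // File.ReadAllBytes handles partial reads internally,
    // so the array always matches the file contents exactly.
    return File.ReadAllBytes(path);
}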
