How to decompress zlib files created with ByteArray.compress? - apache-flex

I work on a Flex application that creates compressed files and uploads them to a server. The files are created with the ByteArray.compress method, which uses zlib compression. I can decompress them using the Python API on the server, but I prefer to keep the files compressed there. I want to be able to download and decompress the files later; however, WinZip and WinRAR fail to decompress them. When I google for a zlib utility, I only find the zlib DLL library. I need a simple application for Windows (and/or Linux) that is capable of decompressing zlib files.

So, zlib compression will certainly compress the data down, but it doesn't include the file headers that make it a "ZIP" file that can be opened with apps like Windows Explorer, WinZip, or WinRAR.
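If you just need to inflate the data yourself, any runtime with zlib support can read these files directly. As a minimal sketch in Java (file names are illustrative), since InflaterInputStream understands the zlib format that ByteArray.compress produces:
import java.io.*;
import java.util.zip.InflaterInputStream;

public class ZlibDecompress {
    public static void main(String[] args) throws IOException {
        // Inflate a zlib-wrapped file back to its original bytes
        try (InputStream in = new InflaterInputStream(new FileInputStream("data.bin"));
             OutputStream out = new FileOutputStream("data.decompressed")) {
            in.transferTo(out); // copies the inflated stream (Java 9+)
        }
    }
}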
Adobe has some documents that explain how to READ a zip file, including information about the header. If you want to WRITE a zip file, just use this information to write out the header with the data.
Good luck!

Related

How to use LibTiff.NET Tiff2Pdf in .NET 6

I want to provide support to convert single-page and multi-page tiff files into PDFs. There is an executable in Bit Miracle's LibTiff.NET called Tiff2Pdf.
How do I use Tiff2Pdf in my application to convert tiff data stream (not a file) into a pdf data stream (not a file)?
I do not know if there is an API exposed because the documentation only lists Tiff2Pdf as a tool. I also do not see any examples in the examples folder using it in a programmatic way to determine if it can handle data streams or how to use it in my own program.
libtiff tools expect a filename, so the test runs shown below simply convert a file X.tif to various destinations; the first uses the default behaviour:
tiff2pdf x.tif
and we can see it writes the tiff2pdf output stream to the console (standard output), but without a destination to write to it effectively fails, since the output has nowhere to go. However, on a second run we can redirect it:
tiff2pdf x.tif > a.pdf
or alternately specify a destination
tiff2pdf -o b.pdf x.tif
So in order to use these tools, we need a file system to receive the output file. The destination folder or file can be on a memory file system (a RAM drive or folder), so nothing has to touch a physical disk. You need to set that up first.
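Alternatively, your own program can capture the tool's standard output directly instead of going through a destination file. A rough sketch of that redirect idea in Java (the input must still be a file on disk, and the same pattern applies to System.Diagnostics.Process in .NET):
import java.io.*;

public class Tiff2PdfRunner {
    // Run tiff2pdf and return the PDF bytes it prints to standard output
    public static byte[] convert(File tiff) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("tiff2pdf", tiff.getAbsolutePath()).start();
        byte[] pdf = p.getInputStream().readAllBytes(); // the redirected PDF stream
        if (p.waitFor() != 0) {
            throw new IOException("tiff2pdf exited with code " + p.exitValue());
        }
        return pdf;
    }
}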
NuGet is a package manager that simply bundles the library, and as I don't use .NET you're a bit out on a limb, since Bit Miracle are not offering free support (hence pointing you at Stack Overflow, a very common tech-support PLOY: Pass Liability Over Yonder). However, looking at https://github.com/BitMiracle/libtiff.net/tree/master/Samples
they suggest in-memory processing in some sample names, such as https://github.com/BitMiracle/libtiff.net/tree/master/Samples/ConvertToSingleStripInMemory, so perhaps you can get more ideas there?

SFTP polling using java

My scenario as follows:
One Java program is uploading arbitrary files to an SFTP location.
My requirement is that as soon as a file is uploaded by that program, I need to download it using Java. The files can be around 100 MB in size. I am searching for a Java API that helps with this. I don't know the names of the files in advance, but I can match them with a regular expression. The same file can be uploaded by the other program periodically. Since the file size is large, I need to wait until the file is completely uploaded before downloading it.
I used JSch to download files, but I can't work out how to poll using JSch.
Polling
All you can do is keep listing the remote directory periodically until you find a new file. There's no better way with SFTP. For that you obviously use ChannelSftp.ls().
Regarding selecting files matching a certain pattern, see:
JSch ChannelSftp.ls - pass match patterns in java
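Putting it together, a minimal polling sketch with JSch (host, credentials, paths, and the poll interval are all illustrative, and in real code you would verify the host key):
import com.jcraft.jsch.*;
import java.util.*;

public class SftpPoller {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        Session session = jsch.getSession("user", "sftp.example.com", 22);
        session.setPassword("secret");
        session.setConfig("StrictHostKeyChecking", "no"); // demo only
        session.connect();
        ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
        sftp.connect();

        Set<String> seen = new HashSet<>();
        while (true) {
            // ls() accepts a glob pattern to match file names
            @SuppressWarnings("unchecked")
            Vector<ChannelSftp.LsEntry> entries = sftp.ls("/upload/*.dat");
            for (ChannelSftp.LsEntry e : entries) {
                if (seen.add(e.getFilename())) {
                    sftp.get("/upload/" + e.getFilename(), "/local/" + e.getFilename());
                }
            }
            Thread.sleep(60_000); // poll once a minute
        }
    }
}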
Waiting until the upload is complete
Again, there's no support for this in widespread implementations of SFTP.
For details, see my answer at:
SFTP file lock mechanism.
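A common workaround, since the protocol itself won't tell you, is to treat a file as complete once its size stops changing between polls; for example, a helper added to the polling sketch above (the sampling interval is arbitrary):
// Treat the remote file as fully uploaded once its size has stopped
// changing between two consecutive samples.
static boolean uploadLooksComplete(ChannelSftp sftp, String path) throws Exception {
    long before = sftp.stat(path).getSize();
    Thread.sleep(10_000); // pause between the two size samples
    return sftp.stat(path).getSize() == before;
}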

Missing files while decrypting PGP encrypted tar archive

I am having trouble with encrypting/decrypting a tar archive using the Bouncy Castle OpenPGP library.
I'm using TarArchiveOutputStream to add files to a tar archive and Bouncy Castle OpenPGP to encrypt the archive. Afterwards I am using Kleopatra to manually decrypt the file using the option "Input file is an archive; unpack with: TAR(PGP compatible)".
After unpacking the archive, all files except one are lost, and the one remaining file has all of its contents removed. (This also happens with other decryption programs.)
I have already confirmed that the tar archive contains all the files before it is encrypted. I have also tried decrypting with that option unchecked, and then the archive contains all the files. My question is why it doesn't work with that option checked, since the input file is indeed an archive, so it makes sense to check it.
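For reference, the archive is written roughly like this (a minimal commons-compress sketch with illustrative names; a missing closeArchiveEntry() or finish() call is a classic cause of truncated or empty entries):
import java.io.*;
import java.nio.file.*;
import org.apache.commons.compress.archivers.tar.*;

public class TarWriter {
    static byte[] tarFiles(File... files) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (TarArchiveOutputStream tar = new TarArchiveOutputStream(bytes)) {
            for (File f : files) {
                tar.putArchiveEntry(new TarArchiveEntry(f, f.getName()));
                Files.copy(f.toPath(), tar);  // write the entry's contents
                tar.closeArchiveEntry();      // must be called after every entry
            }
            tar.finish();                     // writes the end-of-archive blocks
        }
        return bytes.toByteArray();
    }
}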
What I have also tried:
Using another library to make the tar file (JTar)
Comparing a manually made tar file to the generated one. The main difference I saw was that the manually made one was smaller (22 KB vs 30 KB) while containing the same files.
I am open to suggestions.
Thanks!

Checking uploaded pdf for virus in ASP.NET

So I did some research on checking an uploaded PDF for viruses, and I found these two solutions:
Save the file to the hard disk, let the antivirus quarantine/delete it if it was infected, then check if the file still exists on the disk.
Use an antivirus that supports calling it through .net and scan the file
What I am thinking of doing instead is reading the uploaded PDF file stream with something like iTextSharp and then writing out a new file after stripping any macros.
One of the benefits would be making sure that the uploaded file really is a PDF, since it will be parsed by iTextSharp, but would it also protect against viruses?
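For the first solution, the check itself is straightforward; here is a rough sketch of the idea in Java (the question is ASP.NET, but the pattern is language-agnostic; the temp location and delay are illustrative):
import java.nio.file.*;

public class AvCheck {
    // Write the upload to disk, give the on-access scanner time to react,
    // then check whether the file survived.
    static boolean survivesAvScan(byte[] upload) throws Exception {
        Path tmp = Files.createTempFile("upload-", ".pdf");
        Files.write(tmp, upload);
        Thread.sleep(5_000); // wait for the antivirus to quarantine/delete it
        boolean stillThere = Files.exists(tmp);
        Files.deleteIfExists(tmp);
        return stillThere;
    }
}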

Determine file compression type

I backed up a large number of files to S3 from a PC before switching to a Mac several months ago. Several months later, I'm now trying to open the files and have realized they were all compressed by the S3 GUI tool I used, so I cannot open them.
I can't remember what program I used to upload the files, and standard decompression commands from the command line are not working, e.g.:
unzip
bunzip2
tar -zxvf
How can I determine what the compression type is of the file? Alternatively, what other decompression techniques can I try?
PS - I know the files are not corrupted, because I tested downloading and opening them back when I originally uploaded them to S3.
You can use Universal Extractor (open source) to determine compression types.
Here is a link: http://legroom.net/software/uniextract/
The little downside is that it looks at the file extension first, but I managed to change the extension myself for an unknown file and it works almost always, e.g. .rar or .exe etc.
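Alternatively, you can identify the format from the file's leading magic bytes rather than its extension; the Unix file command does exactly this and is worth trying on a Mac. A small sketch covering common formats (the signature list is abbreviated):
import java.io.*;
import java.util.*;

public class CompressionSniffer {
    // Well-known magic numbers of common compression formats
    private static final Map<String, byte[]> MAGIC = new LinkedHashMap<>();
    static {
        MAGIC.put("gzip",  new byte[]{(byte) 0x1F, (byte) 0x8B});
        MAGIC.put("zip",   new byte[]{'P', 'K'});
        MAGIC.put("bzip2", new byte[]{'B', 'Z', 'h'});
        MAGIC.put("xz",    new byte[]{(byte) 0xFD, '7', 'z', 'X', 'Z', 0});
        MAGIC.put("7z",    new byte[]{'7', 'z', (byte) 0xBC, (byte) 0xAF, 0x27, 0x1C});
    }

    public static String sniff(File f) throws IOException {
        byte[] head = new byte[8];
        try (InputStream in = new FileInputStream(f)) {
            in.read(head); // only the first few bytes are needed
        }
        for (Map.Entry<String, byte[]> m : MAGIC.entrySet()) {
            if (Arrays.equals(Arrays.copyOf(head, m.getValue().length), m.getValue())) {
                return m.getKey();
            }
        }
        return "unknown";
    }
}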
EDIT:
I found a huge list of archive programs; maybe one of them will work? It's ridiculously big:
http://www.maximumcompression.com/data/summary_mf.php
http://www.maximumcompression.com/index.html
