Determine file compression type - unzip

I backed up a large number of files to S3 from a PC before switching to a Mac. Several months later, I'm now trying to open the files and have realized they were all compressed by the S3 GUI tool I used, so I cannot open them.
I can't remember what program I used to upload the files, and the standard decompression commands from the command line are not working, e.g.:
unzip
bunzip2
tar -zxvf
How can I determine the compression type of the files? Alternatively, what other decompression techniques can I try?
PS - I know the files are not corrupted, because I tested downloading and opening them back when I originally uploaded them to S3.

You can use Universal Extractor (open source) to determine compression types.
Here is a link: http://legroom.net/software/uniextract/
The small downside is that it looks at the file extension first, but I change the extension myself for an unknown file (e.g. to .rar or .exe) and it works almost always.
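If you prefer the command line, the file utility (standard on macOS and Linux, though not mentioned above) identifies most archive and compression formats from the magic bytes at the start of the file rather than from the extension. The filename below is just a placeholder:
file mystery-backup
# typical output: "gzip compressed data", "Zip archive data", "bzip2 compressed data", ...
Once you know the format, you can pick the matching decompressor (gunzip, unzip, bunzip2, and so on).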
EDIT:
I found a huge list of archive programs; maybe one of them will work? It's ridiculously big:
http://www.maximumcompression.com/data/summary_mf.php
http://www.maximumcompression.com/index.html

Related

How to use LibTiff.NET Tiff2Pdf in .NET 6

I want to provide support to convert single-page and multi-page tiff files into PDFs. There is an executable in Bit Miracle's LibTiff.NET called Tiff2Pdf.
How do I use Tiff2Pdf in my application to convert tiff data stream (not a file) into a pdf data stream (not a file)?
I do not know if there is an API exposed because the documentation only lists Tiff2Pdf as a tool. I also do not see any examples in the examples folder using it in a programmatic way to determine if it can handle data streams or how to use it in my own program.
The libtiff tools expect a filename, so the runs shown below simply convert x.tif to various destinations. The first uses the default invocation:
tiff2pdf x.tif
and we can see it writes the PDF stream to the console (standard output), because it was not given a destination to write to. On a second run, we can redirect that stream to a file:
tiff2pdf x.tif > a.pdf
or, alternatively, specify a destination explicitly:
tiff2pdf -o b.pdf x.tif
So, in order to use these tools, we need a file system to receive the output file. The destination folder/file can be on a memory file system (RAM disk) drive or folder.
You need to set that up first.
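As a minimal sketch of that setup on Linux (assuming a tmpfs mount is acceptable as the memory file system; the path and size are examples):
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=64m tmpfs /mnt/ramdisk   # RAM-backed scratch area
tiff2pdf -o /mnt/ramdisk/x.pdf x.tif                 # tiff2pdf still sees an ordinary path
The output never touches the physical disk, but tiff2pdf is still handed the filename it expects.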
NuGet is a package manager that simply bundles the library, and since I don't use .NET you're a bit out on a limb, as Bit Miracle does not offer free support (hence pointing you at Stack Overflow, a very common tech-support ploy: Pass Liability Over Yonder). However, looking at https://github.com/BitMiracle/libtiff.net/tree/master/Samples
some of the sample names suggest in-memory handling, such as https://github.com/BitMiracle/libtiff.net/tree/master/Samples/ConvertToSingleStripInMemory ; perhaps you can get more ideas there?

What are alternatives to saving a file with a really long filename?

I have an unarchiver that takes in an archive name and a directory name, and dumps all files from that archive into that directory. There are no other command-line options. However, someone zipped a file into the archive I am trying to decompress with roughly 500 characters in its filename, and the program now fails when it hits that file (practically all file systems limit filenames to about 255 characters). What alternative do I have, short of changing the source code and recompiling the unarchiver?
It seems I would have to mount something as a directory which takes the files the unarchiver is writing and dumps them elsewhere, possibly even as one big file. This something should not report write failures, even if some write really did fail. Is this possible?
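As an aside, on a POSIX system you can check the actual per-filename limit of a given file system with getconf (the mount point below is a placeholder):
getconf NAME_MAX /mnt/target    # commonly prints 255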

Best method for dealing with Unix compressed files (.Z) in IDL?

I'm working on some code in IDL that retrieves data files through FTP that are Unix compressed (.Z) files. I know IDL can work with .gz compressed files via the /compress keyword; however, it doesn't seem capable of handling .Z compression.
What are my options for working with these files? The files I'm downloading come from another institution, so I have no control over the compression being used. Downloading and decompressing the files manually before running the code is an absolute last resort: I don't always know in advance which files I need from the FTP site, so the code grabs the ones it needs, based on the parameters, in real time.
I'm currently running on Windows 7, but once the code is finished it will also be used on a Unix system (a computer cluster).
You can use SPAWN, as you note in your comment (assuming you can find an equivalent of the Unix uncompress command that runs on Windows), or for higher speed you can use an external C function with CALL_EXTERNAL to do the decompression. Just by coincidence, I posted an answer on Stack Exchange the other day with just such a C function to decompress .Z files here.
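For reference, a minimal sketch of the command you could hand to SPAWN (gzip can decompress the LZW-compressed .Z format even though it cannot create it; the filename is a placeholder):
gzip -dc datafile.Z > datafile    # or, on a Unix system: uncompress datafile.Z
On Windows, a gzip build such as the one shipped with Cygwin should behave the same way.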

IExpress - Disable Compression

Does anybody know if there is a way to configure IExpress (presumably via the SED file) not to compress the files it builds into an installer package? The reason for this is that the files I'm packaging are already compressed (except for setup.exe, which is very small), and the extra compression only adds to the build time without saving any additional space.
I have seen on this SED Overview that there are some options to control compression type. I have tried various configurations, but none of them seem to make a difference. The IExpress build process uses the Microsoft makecab utility, and it doesn't appear to pass the correct parameters to makecab when the SED file specifies NONE for CompressionType.
According to MSDN there is a way to disable compression in cabinet files. I just need to figure out how to tell IExpress to do it.
As an aside, another motivation for disabling this compression is that I've noticed Microsoft Security Essentials seems to take particular interest in IExpress Packages. It appears to decompress them to scan the contents whenever the file is copied, which can take a significant amount of time on a 100MB package. I was thinking that the scanning might go quicker if it didn't have to decompress the package first.
I built a .sed file with IExpress, then added
Compress=0
just before the line InsideCompressed=0. Seems to work!
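For reference, the relevant lines of the [Options] section in the .sed file then look roughly like this (other options omitted; your generated file will contain many more entries):
Compress=0
InsideCompressed=0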

How do I download efficiently with rsync?

A couple of questions related to one theme: downloading efficiently with Rsync.
Currently, I move files from an 'upload' folder onto a local server using rsync. Files to be moved are often dumped there, and I regularly run rsync so the files don't build up. I use '--remove-source-files' to remove files that have been transferred.
1) The '--delete' options that remove destination files come in several variants that let you choose when the files are removed. Something similar would be handy for '--remove-source-files', since it seems that, by default, rsync only removes the source files after all files have been transferred, rather than after each file. Other than writing a script to make rsync transfer the files one by one (a sketch of such a loop is given below), is there a better way to do this?
2) On the same problem: if a large (single) file is transferred, it can only be deleted after the whole thing has been successfully moved. It strikes me that I might be able to use 'split' to break the file into smaller chunks, allowing each chunk to be deleted as the file downloads; is there a better way to do this?
Thanks.
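For what it's worth, here is a minimal sketch of the one-by-one loop mentioned in question 1 (the destination is a placeholder; adjust the rsync flags to match your setup):
cd /path/to/upload
for f in *; do
  # transfer one item at a time; each file is removed from the upload
  # folder as soon as its own transfer succeeds, not after the whole batch
  rsync -a --remove-source-files "$f" user@server:/path/to/destination/
done
And for question 2, the round trip with split would look roughly like this (the chunk size is an example):
split -b 100M bigfile bigfile.part.    # creates bigfile.part.aa, bigfile.part.ab, ...
cat bigfile.part.* > bigfile           # reassemble on the receiving side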
