Does anybody know if there is a way to configure IExpress (presumably via the SED file) to not compress the files it builds into an installer package? The reason for this is the files I'm packaging are already compressed (except for setup.exe, which is very small), and the extra compression only adds to the build time without saving any additional space.
I have seen on this SED Overview that there are some options to control compression type. I have tried various configurations, but none of them seem to make a difference. The IExpress build process uses the Microsoft makecab utility, and it doesn't appear to pass the correct parameters to makecab when the SED file specifies NONE for CompressionType.
According to MSDN there is a way to disable compression in cabinet files. I just need to figure out how to tell IExpress to do it.
As an aside, another motivation for disabling this compression is that I've noticed Microsoft Security Essentials seems to take particular interest in IExpress Packages. It appears to decompress them to scan the contents whenever the file is copied, which can take a significant amount of time on a 100MB package. I was thinking that the scanning might go quicker if it didn't have to decompress the package first.
I built a .sed file with IExpress, then added
Compress=0
just before the line InsideCompressed=0. Seems to work!
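For reference, the relevant part of the generated .sed file ends up looking roughly like this - the surrounding keys vary with the options chosen in the IExpress wizard, so treat it only as an illustrative excerpt:

[Options]
PackagePurpose=InstallApp
HideExtractAnimation=0
UseLongFileName=0
Compress=0
InsideCompressed=0
CAB_FixedSize=0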
I want to provide support to convert single-page and multi-page tiff files into PDFs. There is an executable in Bit Miracle's LibTiff.NET called Tiff2Pdf.
How do I use Tiff2Pdf in my application to convert tiff data stream (not a file) into a pdf data stream (not a file)?
I do not know if there is an API exposed, because the documentation only lists Tiff2Pdf as a tool. I also do not see any examples in the examples folder that use it programmatically, so I can't tell whether it can handle data streams or how to use it in my own program.
The libtiff tools expect a filename, so the runs shown below simply take x.tif and send it to various destinations; the first is the default:
tiff2pdf x.tif
We can see it writes the PDF file stream to the console (standard output), but without a directory to write to that output is effectively lost. However, on a second run we can redirect it:
tiff2pdf x.tif > a.pdf
or alternatively specify a destination:
tiff2pdf -o b.pdf x.tif
So in order to use these tools we need a file system to receive the file objects. The destination folder/file can be on a memory file system drive or folder, so you need to set that up first.
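If it helps to see the redirection idea as code, here is a rough Python sketch (not .NET - in .NET the equivalent would be System.Diagnostics.Process with RedirectStandardOutput). It assumes tiff2pdf is on the PATH; the input still has to land in a file somewhere, but the resulting PDF stays in memory:

import os
import subprocess
import tempfile

def tiff_bytes_to_pdf_bytes(tiff_data):
    # tiff2pdf wants a filename, so park the incoming stream in a temp file
    with tempfile.NamedTemporaryFile(suffix='.tif', delete=False) as tmp:
        tmp.write(tiff_data)
        tmp_name = tmp.name
    try:
        # with no -o option the PDF goes to standard output, which we capture
        result = subprocess.run(['tiff2pdf', tmp_name],
                                check=True, capture_output=True)
        return result.stdout
    finally:
        os.remove(tmp_name)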
NuGet is a package manager that simply bundles the lib, and as I don't use .NET you're a bit out on a limb, since Bit Miracle are not offering free support (hence pointing you at Stack Overflow, a very common tech support ploy: Pass Liability Over Yonder). However, looking at https://github.com/BitMiracle/libtiff.net/tree/master/Samples
some of the sample names suggest in-memory handling, such as https://github.com/BitMiracle/libtiff.net/tree/master/Samples/ConvertToSingleStripInMemory, so perhaps you can get more ideas there?
I'm working on some code in IDL that retrieves data files through FTP that are Unix compressed (.Z) files. I know IDL can work with .gz compressed files with the /compress keyword however it doesn't seem capable of playing nicely with the .Z compression.
What are my options for working with these files? The files I am downloading come from another institution, so I have no control over the compression being used. Downloading and decompressing the files manually before running the code is an absolute last resort, since I don't always know in advance which files I need from the FTP site; the code grabs the ones needed based on the parameters in real time, so a manual step would make things a lot more difficult.
I'm currently running on Windows 7 but once the code is finished it will be used on a Unix system as well (computer cluster).
You can use SPAWN as you note in your comment (assuming you can find an equivalent of the Unix uncompress command that runs on Windows), or for higher speed you can use an external C function with CALL_EXTERNAL to do the decompression. Just by coincidence, I posted an answer on stackexchange the other day with just such a C function to decompress .Z files here.
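For example, the SPAWN route could look something like this in IDL (assuming a gzip on the PATH that understands the LZW .Z format, which it normally does; on the Unix cluster plain uncompress would also work):

zfile = 'datafile.dat.Z'
SPAWN, 'gzip -d ' + zfile          ; leaves datafile.dat where the .Z file was
; ...then read datafile.dat with whatever routine you normally use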
I backed up a large number of files to S3 from a PC before switching to a Mac several months ago. Now, several months later, I'm trying to open the files and have realized they were all compressed by the S3 GUI tool I used, so I cannot open them.
I can't remember what program I used to upload the files, and the standard decompression commands from the command line are not working, e.g.,
unzip
bunzip2
tar -zxvf
How can I determine what the compression type is of the file? Alternatively, what other decompression techniques can I try?
PS - I know the files are not corrupted because I tested downloading and opening them back when I originally uploaded to S3.
You can use Universal Extractor (open source) to determine compression types.
Here is a link: http://legroom.net/software/uniextract/
The little downside is that it looks at the file extension first, but I managed to change the extension myself on an unknown file and it works almost always, e.g. .rar or .exe etc.
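As a script-friendly alternative, the format can usually be identified from the file's first few bytes (its magic number) rather than from the extension - on a Mac the built-in file command does exactly this (file mybackup.dat). A rough Python sketch along the same lines, with mybackup.dat standing in for one of your downloaded files:

import binascii

# first bytes of a few common archive/compression formats
SIGNATURES = {
    b'\x1f\x8b': 'gzip',
    b'\x1f\x9d': 'Unix compress (.Z)',
    b'BZh': 'bzip2',
    b'PK\x03\x04': 'zip',
    b'\xfd7zXZ\x00': 'xz',
    b'Rar!': 'rar',
    b'7z\xbc\xaf\x27\x1c': '7-Zip',
}

def sniff(path):
    with open(path, 'rb') as f:
        head = f.read(8)
    for magic, name in SIGNATURES.items():
        if head.startswith(magic):
            return name
    return 'unknown (first bytes: %s)' % binascii.hexlify(head).decode()

print(sniff('mybackup.dat'))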
EDIT:
I found a huge list of archive programs; maybe one of them will work? It's ridiculously big:
http://www.maximumcompression.com/data/summary_mf.php
http://www.maximumcompression.com/index.html
I'm using the xz zipping utility on a PBS cluster; I've just realised that the time I've allowed for my zipping jobs won't be long enough, and so would like to restart them (and then, presumably, I'll need to include the .xz that has already been created in the new archive file?). Is it safe to kill the jobs, or is this likely to corrupt the .xz files that have already been created?
I am not sure about the implications of using xz in a cluster, but in general killing an xz process (or any decent compression utility) should only affect the file being compressed at the time the process terminates. More specifically:
Any output files from input files that have already been compressed should not be affected. The resulting .xz compressed files should remain perfectly usable.
Any input files that have not been processed yet should not be altered at all.
The input file that was being compressed at the time of termination should not be affected.
Provided that the process is terminated using the SIGTERM signal, rather than a signal that cannot be caught like SIGKILL, xz should clean up after itself before exiting. More specifically, it should not leave any partial output files around.
If xz is killed violently, the worst that should (as opposed to might) happen is for a partially compressed file to remain on the disk, right alongside its corresponding input file. You may want to ensure that such files are cleaned up properly - a good way is to have xz work in a separate directory from the actual storage area and move files in and out for compression.
That said, depending on the importance of the compressed data, you may still want to incorporate measures to detect and deal with any corrupt files. There can be a lot of pathological situations where things do not happen as they are supposed to...
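One easy check for the archives that did finish is xz's built-in test mode, e.g. run from the directory holding them:

for f in *.xz; do xz -t "$f" || echo "corrupt or incomplete: $f"; done

xz -t decompresses each file in memory and reports errors without writing anything, so it is a cheap way to confirm the existing .xz files are intact before restarting the jobs.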
Files in var/blobstorage can be listed and sorted by size via Unix commands, which puts the big files at the top of the list. How can I identify which IDs/paths in a Plone site these files belong to?
There is no 'supported' way to do this. You could probably write a script to inspect the ZODB storage, but it'd be complicated. If you want to find the biggest files in your Plone site, you're probably better off writing a script that runs in Plone and using it to search (using portal_catalog) for all File objects (or whatever content type is most likely to have big files) and calling get_size() on each one. That should return the (cached) size, and you can then delete whatever you want to clean up.
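A rough sketch of such a script, assuming the Plone site object is bound to portal (for example in a bin/instance debug session) - adjust portal_type if your big files live in a different content type:

catalog = portal.portal_catalog
sizes = []
for brain in catalog(portal_type='File'):
    obj = brain.getObject()   # wakes the object, so this can be slow on a big site
    sizes.append((obj.get_size(), brain.getPath()))

# biggest files first
for size, path in sorted(sizes, reverse=True)[:20]:
    print('%10d  %s' % (size, path))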