Missing files while decrypting a PGP-encrypted tar archive

I am having trouble encrypting and decrypting a tar archive using the Bouncy Castle OpenPGP library.
I'm using TarArchiveOutputStream to add files to a tar archive and Bouncy Castle OpenPGP to encrypt the archive. Afterwards I use Kleopatra to manually decrypt the file with the option "Input file is an archive; unpack with: TAR (PGP compatible)".
After unpacking the archive, all files except one are lost, and the one remaining file has all of its contents removed. (This also happens with other decryption programs.)
I have already confirmed that the tar archive contains all the files before it is encrypted. I have also tried decrypting with that option unchecked, and then the archive contains all the files. My question is why it doesn't work with that option checked, since the input file is indeed an archive, so it seems sensible to check it.
What I have also tried:
Using another library (JTar) to create the tar file
Comparing a manually created tar file to the generated one. The main difference I saw was that the manually created one was smaller (22 KB vs. 30 KB) while containing the same files.
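One more check that might narrow this down, assuming the GnuPG command-line tools and tar are available (file names here are hypothetical):

# 1. List the tar before encryption to confirm it is complete:
tar -tvf archive.tar
# 2. Decrypt with the GnuPG command line instead of Kleopatra:
gpg --output decrypted.tar --decrypt archive.tar.gpg
# 3. List the decrypted tar; if everything shows up here, the problem
#    is in Kleopatra's unpack option rather than in the Java code:
tar -tvf decrypted.tar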
I am open to suggestions.
Thanks!

Related

What are alternatives to saving a file with a really long filename?

I have an unarchiver that takes an archive name and a directory name, and dumps all files from that archive into that directory. There are no other command-line options. However, someone put a file with a 500-odd-character filename into the archive I am trying to decompress, and the program now fails when it hits that file (practically all file systems limit filename length to around 255 characters). What alternative do I have, short of changing the source code and recompiling the unarchiver?
What I have in mind is mounting something as a directory that would take the files the unarchiver is writing and dump them elsewhere, possibly even as one big file. This something should not report failures, even if some write really did fail. Is this possible?

Unix command to remove a file so that it can't be retrieved at any cost

The rm command removes only the reference, not the actual data on the disk, which means the data can be retrieved later. Is there any command that deletes both the reference and the data at the same time?
It really depends on what you need.
If you need to reclaim the storage space without waiting for all the processes that hold the file open to close it or die, invoke truncate -s 0 FILENAME to free the data, and then remove the file with a regular rm FILENAME. The two operations will not be atomic, though, and programs that have already opened the file can fail in various ways (including a segmentation fault, if they have mapped portions of the file into memory). However, given that you intend to delete both the file and its contents, there is no general way to prevent programs that depend on those contents from failing.
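For example, a non-atomic two-step removal might look like this (FILENAME is a placeholder):

# See which processes still hold the file open (optional sanity check):
lsof FILENAME
# Free the data blocks immediately; readers of the file now see it empty:
truncate -s 0 FILENAME
# Remove the directory entry; note the two steps are not atomic:
rm FILENAME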
If your goal is for the data not to be retrievable by forensic analysis after removal, use a command such as shred, which is specifically designed to overwrite the data before removing the file. Pay close attention to the limitations of such tools before trusting them to reliably destroy sensitive data.
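A typical shred invocation (GNU coreutils; FILENAME is again a placeholder) might be:

# Overwrite the contents 3 times, add a final pass of zeros,
# then truncate and remove the file:
shred -n 3 -z -u FILENAME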
If, as your comment suggests, you are on OS X, you can use srm to do "secure removals".
SRM(1)                                                        SRM(1)

NAME
       srm - securely remove files or directories

SYNOPSIS
       srm [OPTION]... FILE...

DESCRIPTION
       srm removes each specified file by overwriting, renaming, and
       truncating it before unlinking. This prevents other people from
       undeleting or recovering any information about the file from
       the command line.
The full manpage is available online.
Alternatively, shred is available in GNU coreutils, which you can easily install on OS X using Homebrew with the command
brew install coreutils

In AutoIt, what is the FileInstall limitation?

I am creating a zip file of up to 4 GB and using FileInstall(), but I'm not able to extract it, and I'm also not able to create a single .exe file.
FileInstall("myPath\myfile.zip","DestinationFolderName")
Is there a limitation regarding size for the FileInstall() function?
FileInstall() is not going to unzip your zip archive, no matter what the size is; it will only place the zip at the destination. Its sole purpose is to include files in the compiled AutoIt script.
Also, if you use a folder name as the destination, you need a trailing backslash (\).
I highly recommend reading the entry for FileInstall() in the help docs.

Non-proprietary directory encryption

We store measurement results in directories. Each directory has a meta.xml file, which describes common properties of the result, and several data files. This result has to be encrypted.
I would dream of a solution like this:
We can use ZIP, TAR, or a similar format to pack the directory into a single file
[optional] We can extend the archive header with our own MIME type (for MIME recognition without file extensions)
We can use the encryption algorithm defined in the archive standard (e.g. ZIP) to encrypt/decrypt our result
We can extract single files from the archive without decrypting the whole file (there are 100 MB files, but most of the time I'm only interested in the meta.xml)
We can use regular tools (7-Zip, WinZip, zip on Unix) to access the encrypted file
[optional] We can use more than one key to encrypt our result file
Is this solution realizable? Are there open-source libraries that do the job? Which encryption algorithm should we use?
Best regards!
The use of AES encryption in zip files is supported by PKZip, WinZip, and 7-Zip; it is specified in the PKWare zip appnote and well described here: Encryption Specification AE-1 and AE-2. Unfortunately, neither Info-ZIP zip nor unzip (the tools you find on Unixish systems) currently supports it. 7-Zip is open source. Note that the original zip "encryption" hardly deserves the name and should be avoided at all costs. The standardized AES encryption is strong, usable, and relatively widely supported.
Update:
I just noticed another part of your question. Each zip entry can be separately encrypted with a different password, and in fact you can mix unencrypted entries as well in the same zip file.
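As a rough illustration of how this could look with the 7-Zip command-line tool (archive and directory names here are made up; check your 7-Zip version's documentation for the exact switches):

# Pack the result directory into a zip with AES-256 entry encryption:
7z a -tzip -mem=AES256 -pSECRET result.zip result_dir/
# List the archive; zip entry names stay visible even though contents are encrypted:
7z l result.zip
# Extract only the metadata file without decrypting everything else:
7z e result.zip result_dir/meta.xml -pSECRET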

Which archiving utility should I use in Ubuntu?

I am a Mac/Ubuntu user. I have folders such as "AWK", "awk", "awk_tip", and "awk_notes". I need to archive them, but the variety of utilities confuses me. I had a look at tar, cpio, and pax, but Git has also started to fascinate me. I occasionally need encryption and backups.
Please list the pros and cons of the different archiving utilities.
Tar, cpio, and pax are ancient Unix utilities. For instance, tar (probably the most common of these) was originally intended for making backups on tapes, hence the name (tar = tape archive).
The most commonly used archive formats today are:
tar (in Unix/Linux environments)
tar.gz or tgz (a gzip-compressed tar file)
zip (in Windows environments)
If you want just one simple tool, take zip. It works right out of the box on most platforms, and archives can be password-protected (although the protection is technically weak).
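For example, with the Info-ZIP zip found on most platforms (this uses the traditional, weak zip encryption mentioned above; folder names taken from the question):

# Recursively zip the folders and prompt for a password:
zip -er awk_stuff.zip AWK awk awk_tip awk_notes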
If you need stronger protection (encryption), check out TrueCrypt. It is very good.
Under what OS/toolchain are you working? This might limit the range of available solutions. Your name suggests Unix, but which one? Also, do you need portability or not?
The standard Linux solution (at least to a newbie like me) would be to tar the folders and compress them with gzip or bzip2, then encrypt the result with GnuPG if you really have to (encrypting awk tutorials seems like overkill to me). You can also use full-fledged backup solutions such as Bacula, or sync to a different location with rsync (perhaps to a backup server?).
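A minimal sketch of that approach, using the folder names from the question (gpg -c does symmetric, passphrase-based encryption):

# Pack and compress the folders into one file:
tar czf awk_backup.tar.gz AWK awk awk_tip awk_notes
# Encrypt it with a passphrase; produces awk_backup.tar.gz.gpg:
gpg -c awk_backup.tar.gz
# Later: decrypt and unpack in one pipeline.
gpg -d awk_backup.tar.gz.gpg | tar xz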
If you're backing up directories from an ext2/ext3 filesystem, you may want to consider using dump. Some nice features:
it can back up a directory or a whole partition
it saves permissions and timestamps
it allows you to do incremental backups
it can compress (gzip or bzip2)
it will automatically split the archive into multiple parts based on a size limit if you want
it will back up over a network or to a tape as well as to a file
It doesn't support encryption, but you can always encrypt the dump files afterwards.
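A sketch of that workflow (paths are hypothetical; -0 is a full backup, -1 an incremental one, -z enables compression, and the gpg step is there because dump itself has no encryption):

# Level-0 (full), compressed dump of /home to a file:
dump -0 -z -f /backup/home.0.dump /home
# Level-1 (incremental) dump later; records changes since the level 0:
dump -1 -z -f /backup/home.1.dump /home
# Encrypt the dump file afterwards with a passphrase:
gpg -c /backup/home.0.dump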
