GnuPG decrypt multiple files - encryption

I need to decrypt multiple files, in my batch file I have
--decrypt-files c:\PGP\unprocessed\*.pgp
but my script doesn't work. I receive
gpg: can't open c:\PGP\unprocessed*.pgp
instead, and I don't know why. --decrypt c:\PGP\unprocessed\filename.pgp works fine.
Another question is how to use --output when decrypting multiple files? When I try to combine the two options, I receive an error message indicating that --output doesn't work with this command.

For multiple files, the important options are:
--multifile --decrypt
This works in CMD:
gpg --pinentry-mode=loopback --passphrase-file "C:\key.txt" --batch --ignore-mdc-error --skip-verify --multifile --decrypt "C:\\files\*.pgp"

The Windows command line is very limited in several ways; one is the lack of reasonable globbing: it does not expand ...\*.pgp to the actual files in that folder. Use a more capable shell (PowerShell, or install one of the shells from the Unix world, such as Bash via Cygwin). Solutions for sticking with cmd.exe would be to pass the filenames through stdin (something like dir /b *.pgp | gpg --decrypt-files) or to write a loop over all *.pgp files and decrypt them individually, as sketched below.
The latter would also help with the second part of your problem: --output can only name a single output file, so it does not work when multiple input files are passed.
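A minimal sketch of such a loop for a batch file (use a single % instead of %% at an interactive prompt; deriving each output name by dropping the .pgp extension is an assumption, not something the question specifies):
@echo off
rem Decrypt every .pgp in the folder one at a time, so --output can name each result.
rem %%~dpnF expands to the input's drive, path and name without the .pgp extension.
for %%F in (C:\PGP\unprocessed\*.pgp) do (
    gpg --output "%%~dpnF" --decrypt "%%F"
)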

Related

How to make a groovy script which uploads a file to JFrog's artifactory

I'm trying to write a simple Groovy script which deploys a text file into my Artifactory. I read the REST API documentation in order to understand how to write the script, but I've seen so many vastly different versions online that I'm confused.
I want it to be a simple Groovy script using the REST API and curl.
This is what JFrog suggests on their website:
curl -u myUser:myP455w0rd! -X PUT "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/file.txt" -T Desktop/myNewFile.txt
And it might work perfectly, but I don't understand each part here, and I don't know if I can simply integrate this into a Groovy script as is or if some adjustments are needed.
I'm a beginner in this field and I would love any help!
Thanks in advance
As you are using the '-T' flag, it is not required to also use '-X PUT'.
Also, '-T' allows you to not specify the file name on the destination; so, for example, your path will be "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/" and the file name will be the same as it is at the origin.
The full command will look like this:
curl -u user:password -T Desktop/myNewFile.txt "http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/"
Now, just to be on the safe side: you are going to have the file name and the path on the destination as variables, right?
The -T flag should only be used for uploading files, so don't take it as a given that you can replace every '-X PUT' with '-T'; but for this specific case of uploading a file, it is possible.
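A minimal shell sketch with the pieces held in variables as suggested (the variable names are placeholders; in a Groovy script the same argument list can be handed to ['curl', ...].execute()):
# Hypothetical: upload a local file to a destination directory; the trailing
# slash makes Artifactory reuse the origin file name.
ART_USER=myUser
ART_PASS='myP455w0rd!'
LOCAL_FILE=Desktop/myNewFile.txt
DEST_DIR="http://localhost:8081/artifactory/my-repository/my/new/artifact/directory/"
curl -u "$ART_USER:$ART_PASS" -T "$LOCAL_FILE" "$DEST_DIR"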

PGP Encryption Fails on Large Files

I'm in this weird situation.
I'm trying to encrypt this 11GB file, which has ~42 million rows in it, using PGP with an RSA armored public key.
Here are the commands I used:
Import key:
gpg --import ~/underwood/keys/my_pub_4096_RSA_key.asc
PGP encryption:
gpg -r "underwood@publickey.com" -o /usr/local/encrrypted-file/encrypted-11GB-file.txt.pgp --armor --encrypt /usr/local/file-to-encrrypt/this-is-a-11GB-file.txt
Issue:
The file size of /usr/local/encrrypted-file/encrypted-11GB-file.txt.pgp is 4GB and the row count is only 8M. I'm not sure what happened here. The command completed successfully after 3 minutes without errors.
Questions:
How do I further investigate this issue?
Is there a cap on the gpg command's file size? This command works perfectly fine with a 500MB file.
How do I achieve full encryption of the 11GB file?
One solution I can think of off the top of my head is to chunk the 11GB into 500MB files and do this. But the problem here is that I'm not allowed to chunk the file.
Please let me know if there is a better solution to this.
See the Unix split command to split a binary file into pieces.
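A minimal sketch of that approach (file and recipient names are taken from the question; the chunk size is an assumption). Incidentally, an output that stops at exactly 4GB is often a filesystem file-size limit (FAT32 caps files at 4GB) rather than a gpg limit, which is worth checking first:
# Hypothetical: split into 500MB pieces and encrypt each piece separately.
split -b 500M /usr/local/file-to-encrrypt/this-is-a-11GB-file.txt part_
for f in part_*; do
    gpg -r "underwood@publickey.com" --armor --encrypt "$f"
done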

Accurev binaries and recursive keep

My problem is in two parts:
My team and I are using Test Design Studio to write .vbs files in an AccuRev workspace. The problem is that AccuRev recognizes them as binaries instead of text/ptext files... which causes problems when merging. Is there a setting in AccuRev I can change to force it to recognize .vbs files as text/ptext?
For all those binaries that are already in the stream, I need a solution to convert them all into text/ptext. I've given up on the client UI, because it would mean going into the workspace explorer, walking every single folder one by one, and keeping those binaries. Then I thought of the commands. I tried
accurev keep -c "keep ptext" -n -E ptext -R target_folder
accurev keep -c "keep ptext" -n -E ptext -R .
But I get a "No Element Selected" error. That's because the "-n" flag is required for recursion, but it means non-modified files are ignored... and most of my files are backed and not modified... otherwise I can't even select the directory for keeping (it reports "can't keep a directory"). I could create a file list, but that would take as long as manually keeping all the files one by one. I also tried working directly in the stream (since it has an empty stream above it, it lists all its files as outgoing), but I do not have the keep option in the stream. Is there an easy way to convert all files in a stream/workspace to text/ptext?
Yes, you will need to enable a pre-create trigger using the elem_type.pl script found in "accurev install dir/examples" on your server. Inside the elem_type.pl file, you will see the directions for setting up this trigger.
Yes, run the following command to generate a list of all the files in your workspace:
accurev stat -a -ffl > list.txt
Then run this command to convert the files to ptext:
accurev keep -c "ptext conversion" -E ptext -l list.txt
Then you can promote those files.
Check the files with a hex editor to see if there are any non-ASCII characters.
If there's binary content in the file, AccuRev will see those files as binary.
Overwrite the keep, as jstanley suggested, to change the type.
On the add, use accurev add -E ptext -c "your favorite comment" file.vbs

Reading a vi encrypted file programmatically

I have a vi-encrypted text file storing the login details of a DB. Now, in my shell script, I want to get at the content, say grep DB_NAME login_details.txt.
How do I pass the vi encryption password to the grep command?
I believe everything is explained on this vim wikia page: http://vim.wikia.com/wiki/Encryption.
Read it in full (it warns you about when the file is actually encrypted, notes that viminfo should be unset, etc.).
In particular, it shows how to find out the encryption method in use, with :setlocal cm? ("show encryption method for the current file"), and how to change it with :setlocal cm=... too.
But this is not interactive "per se"... you can, however, use the command-line equivalent to have Vim do this from the command line (which can then be used in a script), adding commands to print just the relevant line(s).
If you meant vi instead of Vim, you need to specify which OS it is on, and look at "vi encryption: what algorithm has been used?".
That page shows two solutions depending on the OS used. (And I'm quite sure there is a way to do the equivalent "on the fly", i.e. without having the decrypted file on disk... look for mcrypt -d --force ..., without specifying a destination file so the output has to go to stdout. You need --force, otherwise mcrypt refuses to write to stdout.)
thing=$(echo '1,$p' | vim -es --cmd "set key=${password}" "${filename}" | grep needle)
This loads Vim with the password and file read in from variables that you set previously somehow, dumps the entire file contents to stdout, and then greps for the string "needle"; if it exists, the match is stored in the $thing variable.
This example is bad practice; you should use existing tooling to accomplish secure decryption.
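As a sketch of that advice, assuming the credentials could be re-stored with gpg instead of vi's built-in encryption (the filenames and passphrase file are placeholders; the flags mirror the gpg invocation earlier on this page):
# Hypothetical: keep the credentials gpg-encrypted, decrypt to stdout, grep;
# the plaintext never touches the disk.
gpg --batch --quiet --pinentry-mode=loopback --passphrase-file /path/to/passfile \
    --decrypt login_details.txt.gpg | grep DB_NAME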

Add last n lines of files to tar/zip

I need to regularly send a collection of log files that can grow quite large, so I would like to send only the last n lines of each of the files.
for example:
/usr/local/data_store1/file.txt (500 lines)
/usr/local/data_store2/file.txt (800 lines)
Given a file with a list of needed files named files.txt, I would like to create an archive (tar or zip) with the last 100 lines of each of those files.
I can do this by creating a separate directory structure with the tail-ed files, but that seems like a waste of resources when there's probably some piping magic that can accomplish it. The full directory structure must also be preserved, since files can have the same names in different directories.
I would like the solution to be a shell script if possible, but Perl (without added modules) is also acceptable (this is for Solaris machines that don't have Ruby/Python/etc. installed on them).
You could try
tail -n 10 your_file.txt | while read -r line; do zip /tmp/a.zip "$line"; done
where a.zip is the zip file and 10 is n or
tail -n 10 your_file.txt | xargs tar -czvf test.tar.gz --
for tar.gz
You are focusing on a specific implementation instead of looking at the bigger picture.
If the final goal is to have an exact copy of the files on the target machine while minimizing the amount of data transferred, what you should use is rsync, which automatically sends only the parts of the files that have changed, and can also compress while sending and decompress while receiving.
Running rsync doesn't need any more daemons on the target machine than the standard sshd one, and to set up automatic transfers without passwords you just need to use public key authentication.
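A minimal sketch of that approach (host and destination path are placeholders):
# Hypothetical: -a preserves the directory tree and metadata, -z compresses
# in transit; only changed portions of the files are sent.
rsync -az /usr/local/data_store1 /usr/local/data_store2 user@target:/backup/logs/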
There is no piping magic for that; you will have to create the folder structure you want and zip that.
mkdir tmp
for i in /usr/local/*/file.txt; do
    # recreate the directory tree under tmp/ (${i:1} strips the leading slash; bash)
    mkdir -p "$(dirname "tmp/${i:1}")"
    tail -n 100 "$i" > "tmp/${i:1}"
done
zip -r zipfile tmp/*
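Since the question supplies the list of files in files.txt, here is a sketch of the same idea driven by that list (the archive name is a placeholder):
# Hypothetical: read each path from files.txt, keep its last 100 lines
# under tmp/ with the directory structure intact, then tar the result.
mkdir tmp
while IFS= read -r f; do
    mkdir -p "tmp$(dirname "$f")"
    tail -n 100 "$f" > "tmp$f"
done < files.txt
tar -cf lastlines.tar tmp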
Use logrotate.
Have a look inside /etc/logrotate.d for examples.
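As a sketch, a hypothetical /etc/logrotate.d entry for the paths in the question (the rotation schedule is an assumption):
/usr/local/data_store*/file.txt {
    weekly
    rotate 4
    compress
}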
Why not put your log files in SCM?
Your receiver creates a repository on their machine, from which they retrieve the files by checking them out.
You send the files just by committing them. Only the diff will be transmitted.
