Converting .asc back to .gpg - encryption

I normally wouldn't ask, but I can't find the answer on SO or Google.
I'm using a library that accepts base64-encoded PGP public keys in string format. However, one of the components I need to convert has a key in .asc format. How can I convert it back to binary PGP or base64?
Sorry if it's a really dumb question; I'm new to the PGP toolsets.
Thanks a lot!
Shane

Turns out it's really simple:
gpg --dearmor file.asc
Source:
https://lists.gnupg.org/pipermail/gnupg-devel/2011-October/026253.html

If you want to avoid a final filename of .asc.gpg, you can use the -o option of gpg, like so:
gpg -o foobar.gpg --dearmor foobar.asc
(I don't recommend using gpg --dearmor < file.asc > file.gpg, because if gpg fails or does not exist, file.gpg will still be created as an empty file, which leads to unwanted side effects when working with automation.)
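If the library ultimately wants a base64 string rather than a binary file, the dearmored key can be re-encoded with base64. A self-contained sketch, with placeholder bytes standing in for the real output of gpg -o key.gpg --dearmor key.asc (base64 -w0 is GNU coreutils; omit -w0 on BSD/macOS):

```shell
# Placeholder bytes standing in for a dearmored binary key (key.gpg).
printf '\231\001\015' > key.gpg
# Encode the binary key as a single-line base64 string for the library.
base64 -w0 key.gpg > key.b64
cat key.b64
# prints: mQEN
```

(Not coincidentally, real armored public keys usually begin with mQ..., since the armor body is itself base64 of the binary packets.)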

Related

How to convert an EBCDIC file to an ASCII file using either Unix or Informatica PowerCenter?

Can you please let me know the approach to convert an EBCDIC file to an ASCII file using Unix or Informatica?
I have searched Google but found no clue; an expert says it can be done through PowerExchange, but I'm not sure about it.
Below is a sample file for your reference. Some files may come in fixed-width and some in delimited format, since we have multiple source applications which generate the files.
Thanks in advance for your help; I have been searching Google for the past several days.
You can use a command like:
iconv -f EBCDIC -t ASCII filename >output_filename
(the exact EBCDIC variant name accepted here, e.g. EBCDIC-US or IBM-1047, depends on your data and your iconv build; see iconv -l)
or with dd
dd conv=ascii if=filename of=output_filename
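A quick way to sanity-check the dd route: its ascii and ebcdic conversion tables are inverses for ordinary printable text, so a round trip should reproduce the input. A sketch:

```shell
# ASCII -> EBCDIC -> ASCII round trip; 2>/dev/null hides dd's transfer stats.
printf 'HELLO WORLD' | dd conv=ebcdic 2>/dev/null | dd conv=ascii 2>/dev/null
# prints: HELLO WORLD
```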

Replace Â with space in a file

In my file, somehow Â is getting added. I am not sure what it is or how it is getting added.
12345A 210 CBCDEM
I want to remove this character from the file. I tried a basic sed command to remove it, but was unsuccessful:
sed -i -e 's/\Â//g'
I also read that dos2unix would do the job, but unfortunately that also didn't work. Assuming it was a hex character, I also tried to remove it using its hex value, sed -i 's/\xc2//g', but that didn't work either.
I really want to understand what this character is and how it is getting added. Moreover, is there a possible way to delete all such characters in a file?
Adding encoding details:
file test.txt
test.txt: ISO-8859 text
echo $LANG
en_US.UTF-8
OS details:
uname -a
Linux vm-testmachine-001 3.10.0-693.11.1.el7.x86_64 #1 SMP Fri Oct 27 05:39:05 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
Regards.
It looks like you have an encoding mismatch between the program that writes the file (in some part of ISO-8859) and the program reading the file (assuming it to be UTF-8). This is a textbook use-case for iconv. In fact the sample in the man-page is almost exactly applicable to your case:
iconv -f iso-8859-1 -t utf-8 test.txt
iconv is a fairly standard program on almost every Unix distribution I have seen, so you should not have any issues here.
Based on the fact that you appear to be writing with English as your primary language, you are probably looking for iso-8859-1, which is quite popular apparently.
If that does not fix your issue, you probably need to find the proper encoding for the output of your database. You can run
iconv -l
to get a list of encodings available for iconv, and use the one that works for you. Keep in mind that the output of file saying ISO-8859 text is not absolute. There is no way to distinguish things like pure ASCII and UTF-8 in many cases. If I am not mistaken, file uses heuristics based on frequencies of character codes in the file to determine the encoding. It is quite liable to make a mistake if the sample is small and/or ambiguous.
If you want to save the output of iconv and your version supports the -o flag, you can use it. Otherwise, use redirection, but carefully:
TMP=$(mktemp)
iconv -f iso-8859-1 -t utf-8 test.txt > "$TMP" && mv "$TMP" test.txt
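If you only want to strip the stray bytes rather than convert the whole file, tr in the C locale works byte-wise. This is a blunt sketch that assumes the 0xC2 bytes carry no meaning, unlike the iconv conversion, which preserves them as real characters. It also suggests why the sed attempt failed: in a UTF-8 locale, sed matches characters rather than raw bytes, so forcing LC_ALL=C there may help too.

```shell
# Build a sample containing a stray 0xC2 byte (octal 302), then delete it.
printf '12345A\302 210 CBCDEM\n' > test.txt
LC_ALL=C tr -d '\302' < test.txt    # LC_ALL=C makes tr operate on bytes
# prints: 12345A 210 CBCDEM
```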

unix diff to file

I'm having a little trouble getting the output of diff to write to file. I have a new and old version of a .strings file and I want to be able to write the diff between these two files to a .strings.diff file.
Here's where I am right now:
diff -u -a -B $PROJECT_DIR/new/Localizable.strings $PROJECT_DIR/old/Localizable.strings >> $PROJECT_DIR/diff/Localizable.strings.diff
fgrep + $PROJECT_DIR/diff/Localizable.strings.diff > $PROJECT_DIR/diff/Localizable.txt
The result of the diff command writes to Localizable.strings.diff without any issues, but Localizable.strings.diff appears to be a binary file. Is there any way to output the diff to a UTF-8 encoded file instead?
Note that I'm trying to just get the additions using fgrep in my second command. If there's an easier way to do this, please let me know.
Thanks,
Sean
First, you probably need to identify the encoding of the Localizable.strings files. This might be done in a manner described by How to find encoding of a file in Unix via script(s), for example.
Then you probably need to convert the Localizable.strings files to UTF-8 with a tool like iconv, using commands something like:
iconv -f x -t UTF-8 $PROJECT_DIR/new/Localizable.strings >Localizable.strings.new.utf8
iconv -f x -t UTF-8 $PROJECT_DIR/old/Localizable.strings >Localizable.strings.old.utf8
Where x is the actual encoding in a form recognized by iconv. You can use iconv --list to show all the encodings it knows about.
Then you can probably diff without having to use -a:
diff -u -B Localizable.strings.old.utf8 Localizable.strings.new.utf8 >Localizable.strings.diff.utf8
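To extract just the additions from the unified diff, a grep filter is usually simpler than fgrep +; the second grep drops the +++ file-header line, which also starts with +. A self-contained sketch with throwaway files standing in for the Localizable.strings pair:

```shell
printf 'a\nb\n' > old.txt
printf 'a\nc\n' > new.txt
# Keep lines starting with '+', but not the '+++' file header.
diff -u old.txt new.txt | grep '^+' | grep -v '^+++'
# prints: +c
```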

How can I convert a dictionary file (.dic) with an affix file (.aff) to create a list of words?

I'm looking at a dictionary file (".dic") and its associated ".aff" file. What I'm trying to do is combine the rules in the ".aff" file with the words in the ".dic" file to create a global list of all words contained within the dictionary file.
The documentation behind these files is difficult to find. Does anyone know of a resource that I can learn from?
Is there any code out there that will already do this (am I duplicating an effort that I don't need to)?
thanks!
According to Pillowcase, here is an example of usage:
# Download dictionary
wget -O ./dic/es_ES.aff "https://raw.githubusercontent.com/sbosio/rla-es/master/source-code/hispalabras-0.1/hispalabras/es_ES.aff"
wget -O ./dic/es_ES.dic "https://raw.githubusercontent.com/sbosio/rla-es/master/source-code/hispalabras-0.1/hispalabras/es_ES.dic"
# Compile program
wget -O ./dic/unmunch.cxx "https://raw.githubusercontent.com/hunspell/hunspell/master/src/tools/unmunch.cxx"
wget -O ./dic/unmunch.h "https://raw.githubusercontent.com/hunspell/hunspell/master/src/tools/unmunch.h"
g++ -o ./dic/unmunch ./dic/unmunch.cxx
# Generate dictionary
./dic/unmunch ./dic/es_ES.dic ./dic/es_ES.aff 2> /dev/null > ./dic/es_ES.txt.bk
sort ./dic/es_ES.txt.bk > ./dic/es_ES.txt # Optional
rm ./dic/es_ES.txt.bk # Optional
You need a utility called unmunch to apply the .aff rules to the .dic file.
These could be Hunspell dictionary files. Unfortunately, the command to create a "global" or unmunched wordlist only fully supports simple .aff and .dic files.
From the documentation:
unmunch: list all recognized words of a MySpell dictionary
Syntax:
unmunch dic_file affix_file
Try it and see what happens. For generating all wordforms for one word only, look here.

paste without temporary files in Unix

I'm trying to use the Unix command paste, which is like a column-appending form of cat, and came across a puzzle I've never known how to solve in Unix.
How can you use the outputs of two different programs as the input for another program (without using temporary files)?
Currently I do this, with temporary files:
./progA > tmpA
./progB > tmpB
paste tmpA tmpB
This seems to come up relatively frequently for me, but I can't figure out how to use the output from two different programs (progA and progB) as input to another without using temporary files (tmpA and tmpB).
For commands like paste, simply using paste $(./progA) $(./progB) (in bash notation) won't do the trick, because paste expects file names as arguments and reads from files or stdin, not from argument strings.
The reason I'm wary of the temporary files is that I don't want to have jobs running in parallel to cause problems by using the same file; ensuring a unique file name is sometimes difficult.
I'm currently using bash, but would be curious to see solutions for any Unix shell.
And most importantly, am I even approaching the problem in the correct way?
Cheers!
You do not need temp files under bash; try this:
paste <(./progA) <(./progB)
See "Process Substitution" in the Bash manual.
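A concrete run, with printf standing in for progA and progB (each <(...) shows up to paste as a readable pseudo-file such as /dev/fd/63):

```shell
paste <(printf '1\n2\n3\n') <(printf 'a\nb\nc\n')
# prints (tab-separated):
# 1	a
# 2	b
# 3	c
```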
Use named pipes (FIFOs) like this:
mkfifo fA
mkfifo fB
progA > fA &
progB > fB &
paste fA fB
rm fA fB
Process substitution in bash does a similar thing transparently, so use this only if you have a different shell.
Holy moly, I recently found out that in some instances, you can get your process substitution to work if you set the following inside of a bash script (should you need to):
set +o posix
http://www.linuxjournal.com/content/shell-process-redirection
From the link:
"Process substitution is not a POSIX compliant feature and so it may have to be enabled via: set +o posix"
I was stuck for many hours, until I had done this. Here's hoping that this additional tidbit will help.
This works in any POSIX shell, but paste needs explicit - operands to build columns from standard input, and the result only matches the two-column output when the pairing of lines is what you want:
{
progA
progB
} | paste - -
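For example, with - operands and printf standing in for the two programs, paste pairs consecutive lines of the combined stream:

```shell
{ printf 'A1\nA2\n'; printf 'B1\nB2\n'; } | paste - -
# prints (tab-separated):
# A1	A2
# B1	B2
```

So this differs from the process-substitution form: A1 lands next to A2, not next to B1, unless each program prints exactly one line.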
