Get thumbprint of currently imported certificate - x509certificate

I need to import a certificate and get its thumbprint. I tried cmdlet + pipe, like this:
Import-Certificate <parameters> | Get-ItemProperty -Name Thumbprint
But I get the error Get-ItemProperty : Cannot use interface. The IPropertyCmdletProvider interface is not supported by this provider. I get this error any time I try to use the Get-ItemProperty cmdlet.
And I don't know how I would use this further in the script, even if it went through. Because if I put a variable assignment in front of it, like $Thumbprint = Import-Certificate <parameters> | Get-ItemProperty -Name Thumbprint, I suppose it wouldn't really import the certificate to the store, but just store it in the variable, right?
I need to use PowerShell v4.

I believe you want to use Select-Object instead of Get-ItemProperty:
Import-Certificate <parameters> | Select-Object Thumbprint
But as you guessed, Import-Certificate imports the certificate into the local certificate store. If you want to view a certificate without installing it anywhere, use this syntax:
New-Object Security.Cryptography.X509Certificates.X509Certificate2 $path | Select-Object Thumbprint
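As for the variable question: assigning the pipeline to a variable does not skip the import. Import-Certificate still writes the certificate to the store; the variable just receives the imported certificate object, whose Thumbprint property you can read directly. A minimal sketch, assuming the PKI module's Import-Certificate and a placeholder file name and store location:
# Imports the certificate into LocalMachine\My AND captures the result object
$cert = Import-Certificate -FilePath .\cert.cer -CertStoreLocation Cert:\LocalMachine\My
# The imported object exposes the thumbprint directly
$Thumbprint = $cert.Thumbprint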


How to prevent specific files from an R package from being pushed/uploaded to GitHub? [duplicate]

I have some files in my repository, and one contains a secret Adafruit key. I want to use Git to store my repository, but I don't want to publish the key.
What's the best way to keep the key secret, without having to blank it out every time I commit and push something?
Depending on what you're trying to achieve, you could choose one of these methods:
keep the file in the tree managed by Git but ignore it with an entry in .gitignore,
keep the file content in an environment variable,
don't use a file with the key at all; keep the key content elsewhere (external systems like HashiCorp's Vault, a database, the cloud (iffy, I wouldn't recommend that), etc.)
The first approach is easy and doesn't require much work, but you still have the problem of passing the secret key, in a secure manner, to every location where you'd use the same repository. The second approach requires slightly more work and has the same drawback as the first one.
The third certainly requires more work than the first and second, but could lead to a setup that's really secure.
Which one is right depends highly on the requirements of the project.
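For instance, the first two methods might look like this (the file and variable names are hypothetical):
# method 1: keep the key file out of commits with a .gitignore entry
echo "adafruit.key" >> .gitignore

# method 2: keep the key in an environment variable instead of a file,
# set in your shell profile or CI settings and never committed
export ADAFRUIT_KEY="..."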
Basically, the best strategy security-wise is not to store keys, passwords, and in general any vulnerable information inside the source control system. If that's the goal, there are many different approaches:
"Supply" this kind of information at runtime and keep it somewhere else:
./runMyApp.sh -db.password=
Use specialized tools (like, for example, Vault by HashiCorp) to manage secrets
Encode the secret value offline and store only the encoded value in Git. Without the secret key used for decoding, this encoded value alone is useless. Decode the value again at runtime, using some kind of shared-key infrastructure or an asymmetric key pair; in the latter case, you can use a public key for encoding and a private key for decoding. A sketch follows.
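A rough sketch of that last idea using plain gpg (the file name and recipient are hypothetical, not part of the original answer):
# offline: encrypt the secret so only the ciphertext enters Git
gpg --encrypt --recipient ops@example.com db-password.txt
git add db-password.txt.gpg

# at runtime, on a machine that holds the private key, decode it again
DB_PASSWORD=$(gpg --quiet --decrypt db-password.txt.gpg)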
I want to use Git to store my repository, but I don't want to publish the key.
For something as critical as a secret key, I would use a dedicated keyring infrastructure located outside the development environment, optionally coupled to a secret passphrase.
Aside from this case, I personally use submodules for this. Check out:
git submodule
In particular, I declare a global Git repository, in which I declare in turn another Git repository that will contain the actual project that will go public. This enables me to store at top level everything that is related to the given project but not necessarily relevant to it, and that is not to be published. This could be, for instance, all my drafts, automation scripts, worknotes, project specifications, tests, bug reports, etc.
Among all the advantages this facility provides, we can highlight the fact that you can declare as a submodule an already existing repository, whether that repository is located inside or outside the parent one.
And what's really interesting with this is that both the main repository and the submodules remain distinct Git repositories that can still be configured independently. This means that you don't even need your parent repository to have its remote servers configured.
Working this way, you get all the benefits of a versioning system wherever you work, while still ensuring that you'll never accidentally push something that is not stored inside the public submodule.
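A minimal sketch of that layout (the repository names and URL are hypothetical):
# the private top-level repository holds drafts, scripts, worknotes, ...
git init project-private
cd project-private

# the public project lives in its own repository, declared as a submodule
git submodule add https://example.com/me/project-public.git public

# the parent repository needs no remote at all; only pushes made from
# inside public/ can ever reach the public server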
If you don't have too many secrets to manage, and you do want to keep the secrets in version control, you can make the parent repository private. Mine contains two folders: a secrets folder, and a Git submodule for the public repository (in another folder). I use Ansible Vault to encrypt anything in the secrets folder, and a bash script to hand out the decrypted contents, loading those secrets as environment variables so that the secrets file always stays encrypted.
Ansible Vault can encrypt and decrypt an environment variable; I wrap it in a bash script that provides these functions, like so:
testsecret=$(echo 'this is a test secret' | ./scripts/ansible-encrypt.sh --vault-id $vault_key --encrypt)
result=$(./scripts/ansible-encrypt.sh --vault-id $vault_key --decrypt $testsecret)
echo $result
testsecret here is the encrypted base64 result, and it can be stored safely in a text file. Later you can source that file to keep the encrypted result in memory, and finally, when you need to use the secret, you can decrypt it with ./scripts/ansible-encrypt.sh --vault-id $vault_key --decrypt $testsecret.
The bash script referenced above (ansible-encrypt.sh) is shown below. It wraps the ansible-vault functions so that encrypted variables can be stored in base64, which avoids some problems that would otherwise occur with encoding.
#!/bin/bash
# This script encrypts an input hidden from the shell and base64-encodes it so it can be stored as an environment variable.
# Optionally it can also decrypt an environment variable.

vault_id_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing vault_id_func option: '--${opt}', value: '${val}'" >&2;
    fi
    vault_key="${val}"
}

secret_name=secret
secret_name_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing secret_name option: '--${opt}', value: '${val}'" >&2;
    fi
    secret_name="${val}"
}

decrypt=false
decrypt_func () {
    if [[ "$verbose" == true ]]; then
        echo "Parsing decrypt option: '--${opt}', value: '${val}'" >&2;
    fi
    decrypt=true
    encrypted_secret="${val}"
}

IFS='
'
optspec=":hv-:t:"
encrypt=false

parse_opts () {
    local OPTIND
    OPTIND=0
    while getopts "$optspec" optchar; do
        case "${optchar}" in
            -)
                case "${OPTARG}" in
                    vault-id)
                        val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
                        opt="${OPTARG}"
                        vault_id_func
                        ;;
                    vault-id=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        vault_id_func
                        ;;
                    secret-name)
                        val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
                        opt="${OPTARG}"
                        secret_name_func
                        ;;
                    secret-name=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        secret_name_func
                        ;;
                    decrypt)
                        val="${!OPTIND}"; OPTIND=$(( $OPTIND + 1 ))
                        opt="${OPTARG}"
                        decrypt_func
                        ;;
                    decrypt=*)
                        val=${OPTARG#*=}
                        opt=${OPTARG%=$val}
                        decrypt_func
                        ;;
                    encrypt)
                        encrypt=true
                        ;;
                    *)
                        if [ "$OPTERR" = 1 ] && [ "${optspec:0:1}" != ":" ]; then
                            echo "Unknown option --${OPTARG}" >&2
                        fi
                        ;;
                esac;;
            h)
                help
                ;;
            *)
                if [ "$OPTERR" != 1 ] || [ "${optspec:0:1}" = ":" ]; then
                    echo "Non-option argument: '-${OPTARG}'" >&2
                fi
                ;;
        esac
    done
}

parse_opts "$@"

if [[ "$encrypt" = true ]]; then
    read -s -p "Enter the string to encrypt: `echo $'\n> '`";
    secret=$(echo -n "$REPLY" | ansible-vault encrypt_string --vault-id $vault_key --stdin-name $secret_name | base64 -w 0)
    unset REPLY
    echo $secret
elif [[ "$decrypt" = true ]]; then
    result=$(echo $encrypted_secret | base64 -d | /snap/bin/yq r - "$secret_name" | ansible-vault decrypt --vault-id $vault_key)
    echo $result
else
    # if no arg is passed to encrypt or decrypt, then we assume the function will decrypt the firehawksecret env var
    encrypted_secret="${firehawksecret}"
    result=$(echo $encrypted_secret | base64 -d | /snap/bin/yq r - "$secret_name" | ansible-vault decrypt --vault-id $vault_key)
    echo $result
fi
Storing encrypted values as environment variables is much more secure than decrypting something at rest and leaving the plaintext result lying around; plaintext at rest is extremely easy for any process to siphon off.
If you only wish to share the code with others and not the secrets, you can use a Git template for the parent private repo structure, so that others can inherit that structure but use their own secrets. This also allows CI to pick up everything you would need for your tests.
Alternatively, if you don't want your secrets in version control, you can simply use .gitignore on a containing folder that will house your secrets.
Personally, this makes me nervous; it's still possible for user error to result in publicly committed secrets. Since those files remain under the root of a public repo, any number of embarrassing things could go wrong with that approach.
For Django:
add another file called secrets.py (or whatever you want), and another called .gitignore
add secrets.py to .gitignore
paste the secret key into the secrets.py file and import it in the settings file using
from foldername.secrets import *
This worked for me.

OpenStack multi-node setup doesn't show VM images on Dashboard

I'm new to OpenStack and I used DevStack to configure a multi-node dev environment, currently composed of a controller and two nodes.
I followed the official documentation and used the development version of DevStack from the official Git repo. The controller was set up on a fresh Ubuntu Server 16.04.
I automated all the steps described in the docs using some scripts I made available here.
The issue is that my registered VM images don't appear on the Dashboard. The image page is just empty. When I install a single-node setup, everything works fine.
When I run openstack image list or glance image-list, the image registered during the installation process is listed as below, but it doesn't appear at the Dashboard.
+--------------------+--------------------------+--------+
| ID                 | Name                     | Status |
+--------------------+--------------------------+--------+
| f1db310f-56d6-4f38 | cirros-0.3.5-x86_64-disk | active |
+--------------------+--------------------------+--------+
$ openstack --version
openstack 3.16.1
$ glance --version
glance 2.12.1
I've googled a lot but got no clue.
Is there any special configuration to make images available in multi-node setup?
Thanks.
UPDATE 1
I tried to set the image as shared using
glance image-update --visibility shared f1db310f-56d6-4f38-b5da-11a714203478, then tried to add it to all listed projects (openstack project list) using the command openstack image add project image_name project_name, but it doesn't work either.
UPDATE 2
I've included the command source /opt/stack/devstack/openrc admin admin inside my ~/.profile file so that all environment variables are set. It defines the username and project name as admin, but I've already tried to use the default demo project and demo username.
All env variables defined by the script are shown below.
declare -x OS_AUTH_TYPE="password"
declare -x OS_AUTH_URL="http://10.105.0.40/identity"
declare -x OS_AUTH_VERSION="3"
declare -x OS_CACERT=""
declare -x OS_DOMAIN_NAME="Default"
declare -x OS_IDENTITY_API_VERSION="3"
declare -x OS_PASSWORD="stack"
declare -x OS_PROJECT_DOMAIN_ID="default"
declare -x OS_PROJECT_NAME="admin"
declare -x OS_REGION_NAME="RegionOne"
declare -x OS_TENANT_NAME="admin"
declare -x OS_USERNAME="admin"
declare -x OS_USER_DOMAIN_ID="default"
declare -x OS_USER_DOMAIN_NAME="Default"
declare -x OS_VOLUME_API_VERSION="3"
When I type openstack domain list I get the domain list below.
+---------+---------+---------+--------------------+
| ID      | Name    | Enabled | Description        |
+---------+---------+---------+--------------------+
| default | Default | True    | The default domain |
+---------+---------+---------+--------------------+
As the env variables show, the domain is set as the default one.
After reviewing the whole installation process, I found that the issue
was due to an incorrect floating IP range defined inside the local.conf file.
The FLOATING_RANGE variable in that file must be defined as a subnet of the node network. For instance, my controller IP is 10.105.0.40/24, while the floating IP range is 10.105.0.128/25.
I just forgot to change the FLOATING_RANGE variable (I was using the default value as shown here).
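For reference, a minimal local.conf fragment with those values would look like this (only the lines relevant to this issue are shown):
[[local|localrc]]
# controller address, on the 10.105.0.0/24 node network
HOST_IP=10.105.0.40
# floating IPs must be a subnet of that network
FLOATING_RANGE=10.105.0.128/25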

"scp | openssl" -- both need password

Is there a way to sensibly do this:
scp user@host:/path/to/file /dev/tty | openssl [options] | less
without creating a file, and without having to supply either password directly in arguments?
The problem is that both ask for a password, but the order in which they start (and therefore also the order in which they ask for the password) is undefined, so I'm not able to supply either password.
It would be OK to first finish scp and then start openssl, but without a temporary file.
Although not especially nice, this seems to work:
mkfifo pipe && {
    scp user@host:/path/to/file pipe | openssl [options] -in pipe | less
    #                           ^
    #                           note the pipe
    rm pipe
}

How to auto-trust a gpg public key?

I am trying to add my GPG public key as part of our appliance installation process. The purpose is to encrypt any important files, like logs, before the admin pulls them onto his local machine through the admin portal, and then decrypt them using the private key.
The plan is to export the public key into a file and have the appliance installation process import it using the gpg --import command. But I realized the key needs to be trusted/signed before doing any encryption.
How do I make this key trusted, without any human intervention, at installation time?
Btw, our appliance OS is an Ubuntu VM and we use Kickstart to automate.
Thanks in advance for all the help.
Your question is really "How do I encrypt to a key without gpg balking at the fact that the key is untrusted?"
One answer is that you could sign the key:
gpg --edit-key YOUR_RECIPIENT
sign
yes
save
The other is that you could tell gpg to go ahead and trust:
gpg --encrypt --recipient YOUR_RECIPIENT --trust-model always YOUR_FILE
Coincidentally, I have a similar situation to the OP's - I'm trying to use public/private keys to sign and encrypt firmware for different embedded devices. Since no answer yet shows how to add trust to a key you already have imported, here is my answer.
After creating and testing the keys on a test machine, I exported them as ascii:
$ gpg --export -a <hex_key_id> > public_key.asc
$ gpg --export-secret-keys -a <hex_key_id> > private_key.asc
Then secure-copied and imported them to the build server:
$ gpg --import public_key.asc
$ gpg --import private_key.asc
Important: add trust
Now edit the key to add ultimate trust:
$ gpg --edit-key <user@here.com>
At the gpg> prompt, type trust, then type 5 for ultimate trust, then y to confirm, then quit.
Now test it with a test file:
$ gpg --sign --encrypt --yes --batch --status-fd 1 --recipient "recipient" --output testfile.gpg testfile.txt
which reports
...
[GNUPG:] END_ENCRYPTION
Without adding trust, I get various errors (not limited to the following):
gpg: There is no assurance this key belongs to the named user
gpg: testfile.bin: sign+encrypt failed: Unusable public key
There's an easier way to tell GPG to trust all of its keys by using the --trust-model option:
gpg -a --encrypt -r <recipient key name> --trust-model always
From the man page:
--trust-model pgp|classic|direct|always|auto
       Set what trust model GnuPG should follow. The models are:

       always  Skip key validation and assume that used keys are
               always fully trusted. You generally won't use this
               unless you are using some external validation scheme.
               This option also suppresses the "[uncertain]" tag
               printed with signature checks when there is no
               evidence that the user ID is bound to the key. Note
               that this trust model still does not allow the use
               of expired, revoked, or disabled keys.
Add trusted-key 0x0123456789ABCDEF to your ~/.gnupg/gpg.conf, replacing the key id. This is equivalent to ultimately trusting this key, which means that certifications done by it will be accepted as valid. Just marking this key as valid without trusting it is harder; it requires either a signature or switching the trust model to direct. If you are sure to only import valid keys, you can simply mark all keys as valid by adding trust-model always. In the latter case, ensure that you disable automatic key retrieval (not enabled by default).
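As a sketch, the corresponding ~/.gnupg/gpg.conf entries would be (using the placeholder key id from above):
# ~/.gnupg/gpg.conf
# ultimately trust this specific key
trusted-key 0x0123456789ABCDEF

# or, alternatively, treat every key in the keyring as valid
# trust-model always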
This worked for me:
Trying to encrypt a file responds with this:
gpg -e --yes -r <uid> <filename>
It is NOT certain that the key belongs to the person named
in the user ID. If you *really* know what you are doing,
you may answer the next question with yes.
Use this key anyway? (y/N)
That causes my shell script to fail.
So I:
$ gpg --edit-key <uid>
gpg> trust
Please decide how far you trust this user to correctly verify other
users' keys (by looking at passports, checking fingerprints from
different sources, etc.)
1 = I don't know or won't say
2 = I do NOT trust
3 = I trust marginally
4 = I trust fully
5 = I trust ultimately
m = back to the main menu
Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y
Please note that the shown key validity is not necessarily correct
unless you restart the program.
gpg> quit
Now the encrypt works properly.
Based on @tersmitten's article and a bit of trial and error, I ended up with the following command line to trust all keys in a given keyring without user interaction. I use it for keys used with both StackExchange Blackbox and hiera-eyaml-gpg:
# The "-E" makes this work with both GNU sed and OS X sed
gpg --list-keys --fingerprint --with-colons |
sed -E -n -e 's/^fpr:::::::::([0-9A-F]+):$/\1:6:/p' |
gpg --import-ownertrust
Personally, I prefer a solution which stores the results in the trustdb file itself, rather than depending on the user environment outside the shared Git repo.
Here's a trick I've figured out for automation of GnuPG key management; hint: heredoc + --command-fd 0 is like magic. Below is an abridged version of one of the scripts that was written to aid in automation with GnuPG.
#!/usr/bin/env bash
## First argument should be a file path or key id
Var_gnupg_import_key="${1}"
## Second argument should be an integer
Var_gnupg_import_key_trust="${2:-1}"
## Point to preferred default key server
Var_gnupg_key_server="${3:-hkp://keys.gnupg.net}"

Func_import_gnupg_key_edit_trust(){
    _gnupg_import_key="${1:-${Var_gnupg_import_key}}"
    gpg --no-tty --command-fd 0 --edit-key ${_gnupg_import_key} <<EOF
trust
${Var_gnupg_import_key_trust}
quit
EOF
}

Func_import_gnupg_key(){
    _gnupg_import_key="${1:-${Var_gnupg_import_key}}"
    if [ -f "${_gnupg_import_key}" ]; then
        echo "# ${0##*/} reports: importing key file [${_gnupg_import_key}]"
        gpg --no-tty --command-fd 0 --import ${_gnupg_import_key} <<EOF
trust
${Var_gnupg_import_key_trust}
quit
EOF
    else
        _grep_string='not found on keyserver'
        gpg --dry-run --batch --search-keys ${_gnupg_import_key} --keyserver ${Var_gnupg_key_server} | grep -qE "${_grep_string}"
        _exit_status=$?
        if [ "${_exit_status}" != "0" ]; then
            _key_fingerprint="$(gpg --no-tty --batch --dry-run --search-keys ${_gnupg_import_key} | awk '/key /{print $5}' | tail -n1)"
            _key_fingerprint="${_key_fingerprint//,/}"
            if [ "${#_key_fingerprint}" != "0" ]; then
                echo "# ${0##*/} reports: importing key [${_key_fingerprint}] from keyserver [${Var_gnupg_key_server}]"
                gpg --keyserver ${Var_gnupg_key_server} --recv-keys ${_key_fingerprint}
                Func_import_gnupg_key_edit_trust "${_gnupg_import_key}"
            else
                echo "# ${0##*/} reports: error no public key [${_gnupg_import_key}] as file or on key server [${Var_gnupg_key_server}]"
            fi
        else
            echo "# ${0##*/} reports: error no public key [${_gnupg_import_key}] as file or on key server [${Var_gnupg_key_server}]"
        fi
    fi
}

if [ "${#Var_gnupg_import_key}" != "0" ]; then
    Func_import_gnupg_key "${Var_gnupg_import_key}"
else
    echo "# ${0##*/} needs a key to import."
    exit 1
fi
Run with script_name.sh 'path/to/key' '1' or script_name.sh 'key-id' '1' to import a key and assign it a trust value of 1, or set all values with script_name.sh 'path/to/key' '1' 'hkp://preferred.key.server'.
Encryption should now proceed without complaint, but even if it doesn't, the following --always-trust option should allow encryption despite the complaint:
gpg --no-tty --batch --always-trust -e some_file -r some_recipient -o some_file.gpg
If you wish to see this in action, check the Travis-CI build logs and how the helper script GnuPG_Gen_Key.sh is used for both generating and importing keys in the same operation... version two of this helper script will be much cleaner and more modifiable, but it's a good starting point.
This one-liner updates the trustdb with the ownertrust values from STDIN, by extracting the fingerprint into the format required by the --import-ownertrust flag.
This flag, as detailed in the gpg man page, should be used "in case of a severely damaged trustdb and/or if you have a recent backup of the ownertrust values" to re-create the trustdb.
gpg --list-keys --fingerprint \
| grep ^pub -A 1 \
| tail -1 \
| tr -d ' ' \
| awk 'BEGIN { FS = "\n" } ; { print $1":6:" }' \
| gpg --import-ownertrust
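For completeness, --import-ownertrust pairs with --export-ownertrust; a sketch of the backup/restore round trip the man page alludes to (the file name otrust.txt is arbitrary):
# back up the current ownertrust values
gpg --export-ownertrust > otrust.txt

# later, e.g. after re-creating a damaged trustdb, restore them
gpg --import-ownertrust < otrust.txt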
One way to trust imported gpg keys:
gpg --import <user-id.keyfile>
fpr=`gpg --with-colons --fingerprint <user-id> |awk -F: '$1 == "fpr" {print$10; exit}'`
gpg --export-ownertrust && echo $fpr:6: |gpg --import-ownertrust
Here, I assume that you import a key with the <user-id> from <user-id.keyfile>. The second line only extracts the fingerprint; you can drop it if you know the fingerprint beforehand.
I think I figured out a way to do this.
I used gpg --export-ownertrust to export my trust db into a text file, then removed all of my keys from it except the public key I needed to push. Then I imported my public key and the edited ownertrust file onto the server. This seems to be working.
Now I am having trouble implementing these steps in the Kickstart file :-(
With PowerShell, here is how to trust john.doe@foo.bar (adapted from @tersmitten's blog post):
(gpg --fingerprint john.doe@foo.bar | out-string) -match 'fingerprint = (.+)'
$fingerprint = $Matches[1] -replace '\s'
"${fingerprint}:6:" | gpg --import-ownertrust
Note: using cinst gpg4win-vanilla
There is a way to auto-trust a key using --edit-key, but without getting into the interactive shell (so it can be automated in a script). Below is a sample for Windows:
(echo trust &echo 5 &echo y &echo quit) | gpg --command-fd 0 --edit-key your@email.com
Unix based:
echo -e "5\ny\n" | gpg --homedir . --command-fd 0 --expert --edit-key user@example.com trust;
For more info, read this post. It goes into detail in case you are creating more than one key.
I used the following script to import a key:
#!/bin/bash

function usage() {
    cat >&2 <<EOF
Usage: $0 path_of_private_key
Example: gpg_import.sh ~/.ssh/my_gpg_private.key
Import gpg key with trust.
EOF
    exit 1
}

[[ $# -lt 1 ]] && usage

KEY_PATH=$1
KEY_ID=$(gpg --list-packets ${KEY_PATH} | awk '/keyid:/{ print $2 }' | head -1)
gpg --import ${KEY_PATH}
(echo trust; echo 5; echo y; echo quit) | gpg --command-fd 0 --edit-key $KEY_ID
I am using Windows with Gpg4win 4.0.2 installed.
Open Kleopatra (the GUI) -> Certificates -> Right Click -> Certify. Once the certificate has been certified, this message will not show up anymore.
Try this:
(echo trust; echo 5; echo y; echo quit; echo save) | gpg --homedir 'gpgDirectory/' --batch --command-fd 0 --edit-key 'yourKey'
--homedir: not required

error out to logger

I use the logger command to log messages to /var/log/messages. But how do I use logger to capture the standard output and standard error messages? Something like this does not work:
grep `date +'%y%m%d'` /var/log/mysqld.log | sed 's/^/computer /' | logger 2> logger
You're confusing redirection to a process (pipes, or |) with redirection to a file (>).
You need to redirect stderr to stdout by using 2>&1, and then pipe (|) to your logger process.
e.g.
grep .... 2>&1 | logger
This assumes you're using sh or a variant. The syntax is different for csh. It's worth having a look at this excerpt from Unix Power Tools for more info (especially as your previous question seems strongly related).
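Applied to the command from the question, that would look like this (logger's -t option just sets the syslog tag; the tag name here is arbitrary):
# send both stdout and stderr of grep through the pipeline into syslog
grep `date +'%y%m%d'` /var/log/mysqld.log 2>&1 | sed 's/^/computer /' | logger -t mysql-check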
