OpenPGP error while using Meta Discovery - nginx

I'm currently testing the CoreOS container runtime rkt and have recreated a scenario for signing and distributing images via Meta Discovery, based on the following guide. When I try to run a self-signed image using Meta Discovery, I get the following error output:
rkt: using image from local store for image name coreos.com/rkt/stage1-coreos:0.16.0
rkt: searching for app image rocket-example.eu/hellorocket
rkt: remote fetching from URL "https://rocket-example.eu/images/hellorocket.aci"
prefix: "rocket-example.eu/hellorocket"
key: "https://rocket-example.eu/pubkeys.gpg"
gpg key fingerprint is: 993C 033A 1556 CCDF 4321 EB17 8192 E9F7 DBD1 49AE
subkey fingerprint: 02BB E974 02CF 0676 28C8 424C DFB3 FED2 080B 7D76
RXXXX XXXXX (ACI signing key) <rXXXX.XXXXX@XXXXX.XX-XXXXX.de>
Key "https://rocket-example.eu/pubkeys.gpg" already in the keystore
rkt: downloading signature from https://rocket-example.eu/images/hellorocket.aci
Downloading signature: 0 B/1.75 MB
Downloading signature: 3.83 KB/1.75 MB
Downloading signature: 1.75 MB/1.75 MB
run: openpgp: invalid data: tag byte does not have MSB set
I'm using a VM running Ubuntu 15.10, rkt 0.16.0 and GnuPG 2.0.23. The images are provided by a local nginx server.
The generated signature file hellorocket.aci.asc looks like this:
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2
iQEcBAABAgAGBQJWqPPqAAoJEJtfmFGWacfBx0gH/i1EVAs2HJm7rOpp0WqbamFa
kC6vH1qs8Rvcagpkcar5ZAZFhC1oQVnF7oB7mvU4Ich3BOS0bBXCgef39oGxVXD6
HrHDB1FX1Q4hFMCnJgFNR4isPaaGy9Hm0uNjE8QxPWBtLgYW3zp5EwBRz3uRizQ7
+BY5Bm+cBIICENKcweTwIXlVgEFk8eFSnMyJ7NP56LbHbZWbb6gFywmz/5A4yJPJ
Qit/iT+FwSfU+xBMpNc2KEux46DfmfpBMippBtMh8wba7Unrjig3oV2Phyqe+UOL
Z6zJjg7dJiAxj7NOwzQRscUyXqmN1yXCF5Tj5ldOwMHXqdXVBw5/KzoTzk1Kl4w=
=9lM+
-----END PGP SIGNATURE-----
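The "tag byte does not have MSB set" message comes from OpenPGP packet parsing and means that whatever was downloaded as the signature is not valid OpenPGP data at all. Notably, the log above reports downloading the signature from the .aci URL itself (which may just be how it was pasted), and the 1.75 MB "signature" download matches an image far better than the small armored file shown. One way to check what the web server actually serves for the signature URL is a quick manual fetch; a minimal sketch using the host and file names from the log (adjust paths as needed):

# fetch the signature rkt would fetch and inspect it
curl -L -o /tmp/hellorocket.aci.asc https://rocket-example.eu/images/hellorocket.aci.asc
file /tmp/hellorocket.aci.asc
# a valid armored detached signature parses cleanly into OpenPGP packets
gpg --list-packets /tmp/hellorocket.aci.asc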

Related

Kibana Server not allowing remote access

I've edited my kibana.yml config file to allow remote access, using the DHCP IP address assigned by my router over a bridged adapter connection.
Kibana does not seem to accept connections on the assigned IP and port (see the config sketch after the log below).
[root@localhost bin]# ./kibana --allow-root
^C^C log [14:36:15.000] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:15.025] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:15.084] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.
^C[root@localhost bin]# ./kibana --allow-root &
[1] 2499
[root@localhost bin]# log [14:36:23.872] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:23.878] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:23.960] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.
log [14:36:24.133] [info][plugins-system] Setting up [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:24.394] [warning][config][plugins][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
log [14:36:24.395] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
log [14:36:24.433] [warning][config][encryptedSavedObjects][plugins] Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To be able to decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml
log [14:36:24.439] [warning][ingestManager][plugins] Fleet APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.561] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in kibana.yml
log [14:36:24.563] [warning][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011
OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'.
log [14:36:24.575] [warning][actions][actions][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.596] [warning][alerting][alerts][plugins][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.785] [info][monitoring][monitoring][plugins] config sourced from: production cluster
log [14:36:25.067] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
log [14:36:25.409] [info][savedobjects-service] Starting saved objects migrations
log [14:36:25.976] [info][plugins-system] Starting [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:25.978] [info][plugins][taskManager][taskManager] TaskManager is identified by the Kibana UUID: dbda794a-41a8-4223-b66f-b4fed95353db
log [14:36:26.302] [info][crossClusterReplication][plugins] Your basic license does not support crossClusterReplication. Please upgrade your license.
log [14:36:26.339] [info][plugins][watcher] Your basic license does not support watcher. Please upgrade your license.
log [14:36:26.340] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
[2021-01-16T09:36:26,422][INFO ][o.e.c.m.MetadataIndexTemplateService] [localhost.localdomain] adding template [.management-beats] for index patterns [.management-beats]
log [14:36:27.290] [info][listening] Server running at http://10.0.0.137:5601
log [14:36:28.153] [info][server][Kibana][http] http server running at http://10.0.0.137:5601
log [14:36:28.157] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.181] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.182] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.183] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [64])
log [14:36:28.184] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [4])
log [14:36:28.973] [warning][plugins][reporting] Enabling the Chromium sandbox provides an additional layer of protection.
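For reference, the address Kibana listens on is controlled by server.host in kibana.yml; a minimal sketch (the values are illustrative assumptions, not taken from the post):

# kibana.yml -- listen on all interfaces instead of localhost only
# (0.0.0.0 is an assumption; use the bridged adapter's IP to restrict it)
server.host: "0.0.0.0"
server.port: 5601

The log above shows the server coming up on http://10.0.0.137:5601, so if remote connections still fail, a host firewall blocking port 5601 (e.g. firewalld on CentOS, which the reporting warning suggests this machine runs) is worth ruling out as well.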

How to read variables from a file decrypted at build time using Google Cloud Build and Google Cloud KMS

I am following this tutorial for getting encrypted keys into my cloudbuild YAML file.
I am trying to understand how I am supposed to "use the decrypted ... file in the workspace directory" in the subsequent steps of my YAML file.
My cloudbuild step where I decrypt the keys file is as follows:
- name: gcr.io/cloud-builders/gcloud
  args: ['kms', 'decrypt', '--ciphertext-file=<encrypted_file>', '--plaintext-file=<decrypted_file>', '--location=<location>', '--keyring=<keyring>', '--key=<key>']
The tutorial is not clear on how to do this, and I cannot find anything on the Internet related to it.
Any help is much appreciated.
Thanks.
When you encrypt your content with gcloud kms encrypt, you can write the output to a file in your workspace, for example:
# replace with your values
gcloud kms encrypt \
  --location=global \
  --keyring=my-kr \
  --key=my-key \
  --plaintext-file=./data-to-encrypt \
  --ciphertext-file=./encrypted-data
Where ./data-to-encrypt is a file on disk that contains your plaintext secret and ./encrypted-data is the destination path on disk where the encrypted ciphertext should be written.
When working directly with the API, the interaction looks like this:
plaintext -> kms(encrypt) -> ciphertext
However, when working with gcloud, it looks like this:
plaintext-file -> gcloud(read) -> kms(encrypt) -> ciphertext -> gcloud(write)
When you invoke Cloud Build, it effectively gets a tarball of your application, minus any files specified in a .gcloudignore. That means ./encrypted-data will be available on the filesystem inside the container step:
steps:
# decrypt the value in ./encrypted-data into ./my-secret
- name: gcr.io/cloud-builders/gcloud
  args:
  - kms
  - decrypt
  - --location=global
  - --keyring=my-kr
  - --key=my-key
  - --ciphertext-file=./encrypted-data
  - --plaintext-file=./my-secret

# use the decrypted secret in a subsequent step
- name: gcr.io/my-project/my-image
  args:
  - my-app start --secret=./my-secret
At present, the only way to share data between steps in Cloud Build is with files, but all build steps have the same shared filesystem.
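If the goal is to read the decrypted file into a variable rather than pass it around as a file, one option is to override a step's entrypoint and read it in a shell. A sketch, not from the tutorial (the bash entrypoint override and the deploy.sh command are illustrative assumptions):

- name: gcr.io/cloud-builders/gcloud
  entrypoint: bash
  args:
  - -c
  - |
    # read the decrypted secret into a shell variable, local to this step
    MY_SECRET="$(cat ./my-secret)"
    ./deploy.sh --secret="$MY_SECRET"

Note the variable only exists within that step; as stated above, files are the only way to carry data across steps.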

Service Fabric Local Cluster fails to get certificate(s) private key(s)

Every Service Fabric application I attempt to run that uses one or more SecretsCertificate instances fails to launch in my local Service Fabric cluster, with the following error on the Node > Application in SF Explorer:
Error event: SourceId='System.Hosting', Property='Activation:1.0'.
There was an error during activation. Failed to configure certificate permissions. Error E_FAIL.
Service Fabric also logs a few relevant items into the Event Viewer > Applications and Services Logs > Microsoft-Service Fabric > Admin section:
CryptAcquireCertificatePrivateKey failed. Error:0x8009200b
Can't get private key filename for certificate. Error: 0x8009200b
All tries to get private key filename failed.
Failed to get the Certificate's private key.
Thumbprint:4XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXC. Error: E_FAIL
Failed to get private key file. x509FindValue: 4XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXC, x509StoreName: My, findType: FindByThumbprint, Error E_FAIL
ACLing private key filename for thumbprint 4XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXC. ErrorCode=E_FAIL
ConfigureCertificateACLs: error=E_FAIL
I have removed and reinstalled the certificate (which is confirmed to work in multiple other developers' local Service Fabric cluster development environments), and set the private key to have explicit full control permissions for the NETWORK SERVICE user on my computer, which didn't help.
I have followed the instructions in this answer which actually prints out the private key details correctly despite SF local cluster not being able to access it.
I have reinstalled Microsoft Service Fabric SDK, and Microsoft Visual Studio 2017 which also didn't resolve this problem.
All attempts to recreate this error in C# and PowerShell have been fruitless, yet the Service Fabric service doesn't seem to be able to access private keys from my cert store.
Edit: Further progress, no solution.
I am able to successfully decrypt data using the PowerShell Invoke-ServiceFabricDecryptText cmdlet, yet the SF Local Cluster still has the same error.
I determined that the file specified in the certificate's metadata (from the previously referenced SO answer) PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName doesn't exist on my disk at the path C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys\ or any neighboring paths. Has anyone seen this before?
As discussed in the comments, the issue is related to how the (self-signed) certificate is created. When using PowerShell to create your certs, make sure to specify the legacy (non-CNG) provider:

So when I specified -Provider "Microsoft Enhanced Cryptographic Provider v1.0" for the New-SelfSignedCertificate command to create a cert, it works.
Source: https://github.com/Azure/service-fabric-issues/issues/235#issuecomment-292667379
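Putting that together, a minimal sketch of creating a dev cert with the legacy provider (the DNS name and store location are illustrative assumptions):

New-SelfSignedCertificate -DnsName "myapp.local" `
  -CertStoreLocation Cert:\LocalMachine\My `
  -Provider "Microsoft Enhanced Cryptographic Provider v1.0"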
An alternative, in case you can't or don't want to use a self-signed certificate, is to "remove" the CNG storage of the private key (which is the part that Service Fabric can't yet handle).
The steps outlined in this article show how to convert a CNG cert to a non-CNG one:
https://blog.davidchristiansen.com/2016/05/521/
Extract your public key and full certificate chain from your PFX file:
openssl pkcs12 -in "yourcertificate.pfx" -nokeys -out "yourcertificate.cer" -passin "pass:password"
Extract the CNG private key:
openssl pkcs12 -in "yourcertificate.pfx" -nocerts -out "yourcertificate.pem" -nodes -passin "pass:password" -passout "pass:password"
Convert the private key to RSA format:
openssl rsa -inform PEM -in "yourcertificate.pem" -out "yourcertificate.rsa" -passin "pass:password" -passout "pass:password"
Merge the public keys with the RSA private key into a new PFX file:
openssl pkcs12 -export -in "yourcertificate.cer" -inkey "yourcertificate.rsa" -out "yourcertificate-converted.pfx" -passin "pass:password" -passout "pass:password"
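The converted PFX can then be imported into the machine store, for example (a sketch; "password" is the placeholder used above):

certutil -f -p password -importPFX "yourcertificate-converted.pfx"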

disk encryption escrow files on centos via kickstart

I'm trying to automate centos installs via PXE and kickstart with encrypted filesystems. In case we mislay the passphrase we want to use escrow files and encrypt them using the public key attached to an x509 certificate obtained from a web server. The relevant line in the kickstart file is
logvol /home --fstype ext4 --name=lv02 --vgname=vg01 --size=1 --grow --encrypted --escrowcert=http://10.0.2.2:8080/escrow.crt --passphrase=XXXX --backuppassphrase
Leaving the cert PEM-encoded on the web server rather than DER doesn't seem to matter; either works up to a point.
The filesystem is created and encrypted using the supplied passphrase and can be opened on reboot with no issues. Two escrow files are produced as expected, and using the NSS database containing the private key together with the first escrow file, I obtain what I think is the passphrase, but it doesn't unlock the disk. For example:
# volume_key --secrets -d /tmp/nss e04a93fc-555b-430b-a962-1cdf921e320f-escrow
Data encryption key:	817E65AC37C1EC802E3663322BFE818D47BDD477678482E78986C25731B343C221CC1D2505EA8D76FBB50C5C5E98B28CAD440349DC0842407B46B8F116E50B34
I assume the string from 817 to B34 is the passphrase but using it in a cryptsetup command does not work.
[root@mypxetest ~]# cryptsetup -v status home
/dev/mapper/home is inactive.
Command failed with code 19.
[root@mypxetest ~]# cryptsetup luksOpen /dev/rootvg01/lv02 home
Enter passphrase for /dev/rootvg01/lv02:
No key available with this passphrase.
Enter passphrase for /dev/rootvg01/lv02:
When prompted I paste in the long hex string but get the "No key available" message. However, if I use the passphrase specified in the kickstart file or the one from the backup escrow file, the disk unlocks.
# volume_key --secrets -d /tmp/nss e04a93fc-555b-430b-a962-1cdf921e320f-escrow-backup-passphrase
Passphrase:	QII.q-ImgpN-0oy0Y-RC5qa
Then using the string QII.q-ImgpN-0oy0Y-RC5qa in the cryptsetup command works.
Has anyone any idea what I'm missing? Why don't both escrow files work?
I've done some more reading: the file ending in escrow is not an alternative passphrase for the LUKS volume; it contains the volume's encryption key (encrypted, of course). Once decrypted, the long string is the data encryption key itself, and there's a clue in the rest of the volume_key output which I confess I didn't read very well.
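In other words, the escrowed secret is the LUKS master key, so it belongs in cryptsetup's master-key options rather than at the passphrase prompt. A rough sketch (an assumption based on cryptsetup's options, not something from the original post):

# convert the recovered hex key to binary
echo 817E65AC37C1EC802E3663322BFE818D47BDD477678482E78986C25731B343C221CC1D2505EA8D76FBB50C5C5E98B28CAD440349DC0842407B46B8F116E50B34 | xxd -r -p > /tmp/mk.bin
# open the volume with the master key instead of a passphrase
cryptsetup luksOpen --master-key-file /tmp/mk.bin /dev/rootvg01/lv02 home
# or enrol a new passphrase using the master key
cryptsetup luksAddKey --master-key-file /tmp/mk.bin /dev/rootvg01/lv02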

"unable to load Private Key" error when try to open openssl private key file on mac

How can I open a private key created on a Linux server from a Mac?
Some context : I'm using a local script called mup to deploy a Meteor app which requires the openssl private key.
I created the openssl private key on a linux ubuntu server I'm deploying to.
I am deploying from my Mac OS 10.9.5.
The mup script throws this error :
-----------------------------------STDERR-----------------------------------
Trying to initialize SSL contexts with your certificates
Error loading rsa private key
-----------------------------------STDOUT-----------------------------------
So, the local Mac can't open or access the private key.
This command works on the ubuntu server where the key was created :
openssl rsa -in private-key.nopass.key -check
However, if I run that same command on my local Mac on the same file (which I copied and pasted from the terminal into Sublime Text, with normal settings), the local Mac throws this error:
unable to load Private Key
... routines:PEM_read_bio:no start line:pem_lib.c:701:Expecting: ANY PRIVATE KEY
So, I'm assuming the mup error has something to do with this.
On the local Mac the openssl version is OpenSSL 1.0.2f 28 Jan 2016.
On the remote linux server the openssl version is OpenSSL 1.0.1f 6 Jan 2014.
So, the good folks at namecheap.com support helped me with this question. Turns out I was missing one dash! Haha.
This (4 dashes):
----BEGIN RSA PRIVATE KEY-----
Should have been this (5 dashes):
-----BEGIN RSA PRIVATE KEY-----
The takeaway: count your dashes when manually copying/pasting these files! It's far too easy to mistake four dashes for five.
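A quick way to catch this kind of copy/paste damage before a deploy script trips over it (a sketch; assumes the key file name used earlier):

# the header must have exactly five dashes on each side
head -n 1 private-key.nopass.key
# and the key should parse cleanly on its own
openssl rsa -in private-key.nopass.key -check -noout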
