I've edited my kibana.yml config file to allow remote access, using the DHCP IP address assigned by my router over a bridged connection on my network adapter. Kibana does not seem to accept connections on the assigned IP and port.
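For reference, the relevant part of my kibana.yml looks roughly like this (illustrative values; the address matches the one in the logs below):

# kibana.yml -- illustrative values
server.port: 5601
server.host: "10.0.0.137"   # bridged-adapter DHCP address; "0.0.0.0" would listen on all interfaces

Starting Kibana produces the following output: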
[root@localhost bin]# ./kibana --allow-root
^C^C log [14:36:15.000] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:15.025] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:15.084] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.
^C[root@localhost bin]# ./kibana --allow-root &
[1] 2499
[root@localhost bin]# log [14:36:23.872] [info][plugins-service] Plugin "visTypeXy" is disabled.
log [14:36:23.878] [info][plugins-service] Plugin "auditTrail" is disabled.
log [14:36:23.960] [warning][config][deprecation] Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.
log [14:36:24.133] [info][plugins-system] Setting up [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:24.394] [warning][config][plugins][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
log [14:36:24.395] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
log [14:36:24.433] [warning][config][encryptedSavedObjects][plugins] Generating a random key for xpack.encryptedSavedObjects.encryptionKey. To be able to decrypt encrypted saved objects attributes after restart, please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml
log [14:36:24.439] [warning][ingestManager][plugins] Fleet APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.561] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in kibana.yml
log [14:36:24.563] [warning][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011 OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'.
log [14:36:24.575] [warning][actions][actions][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.596] [warning][alerting][alerts][plugins][plugins] APIs are disabled due to the Encrypted Saved Objects plugin using an ephemeral encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in kibana.yml.
log [14:36:24.785] [info][monitoring][monitoring][plugins] config sourced from: production cluster
log [14:36:25.067] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
log [14:36:25.409] [info][savedobjects-service] Starting saved objects migrations
log [14:36:25.976] [info][plugins-system] Starting [96] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,newsfeed,mapsLegacy,kibanaLegacy,translations,share,legacyExport,embeddable,uiActionsEnhanced,expressions,data,home,observability,cloud,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,dashboard,visualizations,visTypeVega,visTypeTimelion,timelion,features,upgradeAssistant,security,snapshotRestore,enterpriseSearch,encryptedSavedObjects,ingestManager,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboardMode,beatsManagement,transform,ingestPipelines,maps,licenseManagement,graph,dataEnhanced,visTypeTable,visTypeMarkdown,tileMap,regionMap,inputControlVis,visualize,esUiShared,charts,lens,visTypeVislib,visTypeTimeseries,rollup,visTypeTagcloud,visTypeMetric,watcher,discover,discoverEnhanced,savedObjectsManagement,spaces,reporting,lists,eventLog,actions,case,alerts,stackAlerts,triggersActionsUi,ml,securitySolution,infra,monitoring,logstash,apm,uptime,bfetch,canvas]
log [14:36:25.978] [info][plugins][taskManager][taskManager] TaskManager is identified by the Kibana UUID: dbda794a-41a8-4223-b66f-b4fed95353db
log [14:36:26.302] [info][crossClusterReplication][plugins] Your basic license does not support crossClusterReplication. Please upgrade your license.
log [14:36:26.339] [info][plugins][watcher] Your basic license does not support watcher. Please upgrade your license.
log [14:36:26.340] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
[2021-01-16T09:36:26,422][INFO ][o.e.c.m.MetadataIndexTemplateService] [localhost.localdomain] adding template [.management-beats] for index patterns [.management-beats]
log [14:36:27.290] [info][listening] Server running at http://10.0.0.137:5601
log [14:36:28.153] [info][server][Kibana][http] http server running at http://10.0.0.137:5601
log [14:36:28.157] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.181] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.182] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [4])
log [14:36:28.183] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [64])
log [14:36:28.184] [error][data][elasticsearch] [version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [4])
log [14:36:28.973] [warning][plugins][reporting] Enabling the Chromium sandbox provides an additional layer of protection.
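The log shows Kibana listening on http://10.0.0.137:5601, yet from another machine on the same network I would expect a check like this to succeed, and it does not (hypothetical client-side test):

# run from a remote machine on the same LAN
curl -v http://10.0.0.137:5601/api/status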
I am following this tutorial for getting encrypted keys into my cloudbuild YAML file.
I am trying to understand how I am supposed to "use the decrypted ... file in the workspace directory" in the subsequent steps of my YAML file.
My cloudbuild step where I decrypt the keys file is as follows:
- name: gcr.io/cloud-builders/gcloud
  args: ['kms', 'decrypt', '--ciphertext-file=<encrypted_file>', '--plaintext-file=<decrypted_file>', '--location=<location>', '--keyring=<keyring>', '--key=<key>']
The tutorial is not clear on how to do this and I cannot find anything on the Internet related to this.
Any help is much appreciated.
Thanks.
When you encrypt your content with gcloud kms encrypt, you can write the output to a file in your workspace, for example:
# replace with your values
gcloud kms encrypt \
  --location=global \
  --keyring=my-kr \
  --key=my-key \
  --plaintext-file=./data-to-encrypt \
  --ciphertext-file=./encrypted-data
Where ./data-to-encrypt is a file on disk that contains your plaintext secret and ./encrypted-data is the destination path on disk where the encrypted ciphertext should be written.
When working directly with the API, the interaction looks like this:
plaintext -> kms(encrypt) -> ciphertext
However, when working with gcloud, it looks like this:
plaintext-file -> gcloud(read) -> kms(encrypt) -> ciphertext -> gcloud(write)
When you invoke Cloud Build, it effectively gets a tarball of your application, minus any files specified in a .gcloudignore. That means ./encrypted-data will be available on the filesystem inside the container step:
steps:
# decrypt ./encrypted-data into ./my-secret
- name: gcr.io/cloud-builders/gcloud
  args:
  - kms
  - decrypt
  - --location=global
  - --keyring=my-kr
  - --key=my-key
  - --ciphertext-file=./encrypted-data
  - --plaintext-file=./my-secret

# use the decrypted secret file in a subsequent step
- name: gcr.io/my-project/my-image
  args:
  - my-app
  - start
  - --secret=./my-secret
At present, the only way to share data between steps in Cloud Build is with files, but all build steps mount the same shared /workspace filesystem, so a file written by one step is visible to every step that follows.
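To make that concrete, here is a minimal sketch of file-based sharing between steps (the step contents and filename are illustrative only, not from the tutorial):

steps:
# step 1 writes a file into the shared /workspace volume
- name: gcr.io/cloud-builders/gcloud
  entrypoint: bash
  args: ['-c', 'echo "hello" > /workspace/shared.txt']

# step 2 reads the file written by step 1
- name: gcr.io/cloud-builders/gcloud
  entrypoint: bash
  args: ['-c', 'cat /workspace/shared.txt']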
For every Service Fabric application I attempt to run that uses one or more SecretsCertificate instances, the application fails to launch in my local Service Fabric cluster with the following error under Node > Application in Service Fabric Explorer:
Error event: SourceId='System.Hosting', Property='Activation:1.0'.
There was an error during activation.Failed to configure certificate permissions. Error E_FAIL.
Service Fabric also logs a few relevant items into the Event Viewer > Applications and Services Logs > Microsoft-Service Fabric > Admin section:
CryptAcquireCertificatePrivateKey failed. Error:0x8009200b
Can't get private key filename for certificate. Error: 0x8009200b
All tries to get private key filename failed.
Failed to get the Certificate's private key.
Thumbprint:4XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXC. Error: E_FAIL
Failed to get private key file. x509FindValue: 4XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXC, x509StoreName: My, findType: FindByThumbprint, Error E_FAIL
ACLing private key filename for thumbprint 4XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXC. ErrorCode=E_FAIL
ConfigureCertificateACLs: error=E_FAIL
I have removed and reinstalled the certificate (which is confirmed to work in several other developers' local Service Fabric development environments), and I have given the NETWORK SERVICE user explicit Full Control permissions on the private key, which didn't help.
I have followed the instructions in this answer, which print out the private key details correctly even though the local SF cluster cannot access the key.
I have reinstalled Microsoft Service Fabric SDK, and Microsoft Visual Studio 2017 which also didn't resolve this problem.
All attempts to recreate this error in C# and PowerShell have been fruitless, yet the Service Fabric service doesn't seem to be able to access private keys from my cert store.
Edit: Further progress, no solution.
I am able to successfully decrypt data using the PowerShell Invoke-ServiceFabricDecryptText cmdlet, yet the SF Local Cluster still has the same error.
I determined that the file specified in the certificate's metadata (from the previously referenced SO answer), PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName, doesn't exist on my disk at the path C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys\ or any neighboring paths; the snippet below shows how I read that property. Has anyone seen this before?
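Roughly, for a machine-store certificate (the thumbprint here is a placeholder):

# PowerShell: read the key container name for a cert in the machine store
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq "<thumbprint>" }
$cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName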
As discussed in the comments, the issue is related to how the (self-signed) certificate is created. When using PowerShell to create your certs, make sure to specify the provider:

So when I specified -Provider "Microsoft Enhanced Cryptographic Provider v1.0" for the New-SelfSignedCertificate command to create a cert, it works.
Source: https://github.com/Azure/service-fabric-issues/issues/235#issuecomment-292667379
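A minimal sketch of such a command (the subject name and store location are placeholders, not from the linked issue):

# PowerShell: create a non-CNG self-signed cert that Service Fabric can ACL
New-SelfSignedCertificate -DnsName "mycluster.local" `
  -CertStoreLocation "Cert:\LocalMachine\My" `
  -KeyExportPolicy Exportable `
  -Provider "Microsoft Enhanced Cryptographic Provider v1.0"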
An alternative, in case you can't or don't want to use a self-signed certificate, is to "remove" the CNG storage of the private key (which is the part that Service Fabric can't yet handle).
The steps outlined in this article show how to convert a CNG cert to a non-CNG one:
https://blog.davidchristiansen.com/2016/05/521/
Extract your public key and full certificate chain from your PFX file
openssl pkcs12 -in "yourcertificate.pfx" -nokeys -out "yourcertificate.cer"
-passin "pass:password"
Extract the CNG private key
openssl pkcs12 -in "yourcertificate.pfx" -nocerts –out “yourcertificate.pem"
-nodes -passin "pass:password" -passout "pass:password"
Convert the private key to RSA format
openssl rsa -inform PEM -in "yourcertificate.pem" -out "yourcertificate.rsa" \
  -passin "pass:password" -passout "pass:password"
Merge public keys with RSA private key to a new PFX file
openssl pkcs12 -export -in "yourcertificate.cer" -inkey "yourcertificate.rsa" \
  -out "yourcertificate-converted.pfx" \
  -passin "pass:password" -passout "pass:password"