I am trying to configure a remote GitHub repo as the Salt fileserver root, but authentication with the pub/priv keypair keeps failing. I have also specified the location of the keys in the /etc/salt/master file.
Below are the logs I am getting:
2018-11-05 01:48:32,197 [salt.utils.gitfs :1574][ERROR ][21391] Error occurred fetching gitfs remote 'git@[github-endpoint].git': failed to start SSH session: Unable to exchange encryption keys
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/utils/gitfs.py", line 1552, in _fetch
fetch_results = origin.fetch(**fetch_kwargs)
File "/usr/lib64/python2.7/site-packages/pygit2/remote.py", line 405, in fetch
File "/usr/lib64/python2.7/site-packages/pygit2/errors.py", line 64, in check_error
GitError: failed to start SSH session: Unable to exchange encryption keys
I have checked the keypair and the connection to the GitHub endpoint.
I am able to sync the repo manually on the server.
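For example, the keypair can be tested against GitHub directly, outside Salt (the key path here is a placeholder for wherever your key lives):
ssh -i /root/.ssh/gitfs_ssh -T git@github.com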
I ran into the same issue and finally solved it with the following steps:
I created a new SSH key: ssh-keygen -f gitfs_ssh -C 'test@example.com'
Then I read that an empty line at the end of the private key can be fatal for libssh2, so I removed the empty lines at the bottom of the file (added by ssh-keygen at creation time), and the new key began to work.
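For reference, a pygit2 remote with per-remote key settings in /etc/salt/master looks roughly like this (repo URL and key paths are placeholders):
gitfs_provider: pygit2
gitfs_remotes:
  - git@github.com:example/salt-states.git:
    - pubkey: /root/.ssh/gitfs_ssh.pub
    - privkey: /root/.ssh/gitfs_ssh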
I followed this procedure for attaching an EFS file system to instances created using Elastic Beanstalk:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-mount-efs-volumes/
But the logs of Elastic Beanstalk are showing following error:
[Instance: i-06593*****] Command failed on instance. Return code: 1 Output: (TRUNCATED)...fs ... mount -t efs -o tls fs-d9****:/ /efs Failed to resolve "fs-d9****.efs.us-east-1.amazonaws.com" - check that your file system ID is correct. See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail. ERROR: Mount command failed!. command 01_mount in .ebextensions/storage-efs-mountfilesystem.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I have masked the EFS ID with **** for security.
Based on the comments, the solution was to create a new EFS file system instead of reusing the original one.
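For context, the knowledge-center config passes the file system ID in through environment options, so a new file system means updating that value. Roughly (the ID and mount point are hypothetical):
option_settings:
  aws:elasticbeanstalk:application:environment:
    FILE_SYSTEM_ID: 'fs-0123abcd'
    MOUNT_DIRECTORY: '/efs'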
Cloud Composer is failing a task because it cannot read a log file; it complains about incorrect encoding.
Here's the log that appears in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I tried viewing the file in the Google Cloud console, and it also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I am able to download the file via gsutil.
When I view the downloaded file, it seems to have text overlaid on other text.
I can't show the entire file, but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
The #-#{} pieces seem to sit "on top of" the typical log lines.
I faced the same problem. In my case, the problem was that I had removed the google_cloud_default connection that was being used to retrieve the logs.
Check the configuration and look for the connection name.
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection have the right permissions to access the GCS bucket.
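If the connection was deleted outright, it can be recreated. A sketch using the Airflow 1.10-era CLI (the key path is a placeholder):
airflow connections -a --conn_id google_cloud_default --conn_type google_cloud_platform \
  --conn_extra '{"extra__google_cloud_platform__key_path": "/path/to/key.json", "extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform"}'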
I'm having a similar problem with viewing logs in GCP Cloud Composer. It doesn't appear to prevent the failing DAG's tasks from running, though. It looks like a permissions error between the GKE cluster and the storage bucket where the log files are kept.
You can still view the logs by going into your cluster's storage bucket, in the same directory as your dags/ folder, where you should also see a logs/ folder.
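For example, with gsutil (the bucket name is a placeholder; the DAG and task names match the log layout above):
gsutil ls gs://<composer-bucket>/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/
gsutil cat gs://<composer-bucket>/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log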
Your Helm chart should set up a global env var:
- name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
  value: "google-cloud-platform://"
Then you should deploy a Dockerfile with the root account only (not the airflow account). Additionally, set up your Helm uid and gid as:
uid: 50000 #airflow user
gid: 50000 #airflow group
Then upgrade the Helm chart with the new config.
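A sketch of that upgrade, assuming a release named airflow and a local values.yaml holding the settings above (the chart reference is a placeholder for whichever Airflow chart you deployed):
helm upgrade airflow <airflow-chart> -f values.yaml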
*** Unable to read remote log from gs://bucket
1) I found the solution after assigning the proper roles to the service account.
2) Add the SA key (JSON or txt) to the connection configured in remote_log_conn_id = google_cloud_default.
3) Restart the Airflow scheduler and webserver.
4) Re-run the DAGs in Airflow.
You can then find the logs in the GCS bucket where they are configured.
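A sketch of the role assignment with gcloud (project ID, service account, and the exact role are placeholders; the worker SA needs at least read/write access to the log bucket):
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:composer-sa@my-project.iam.gserviceaccount.com \
  --role roles/storage.objectAdmin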
While joining the UAT Corda Network and running the required initial registration, the Corda node was shut down before the CSR completed. https://uat.network.r3.com/pages/joining/joining.html
The CSR has since been approved, but the Corda node does not have the correct certificates yet. When trying to start the node, it throws an exception for missing certificates.
[ERROR] 2019-07-26T16:47:51,099Z [main] internal.NodeStartupLogging.invoke - Exception during node startup: One or more keyStores (identity or TLS) or trustStore not found.
Please either copy your existing keys and certificates from another node, or if you don't have one yet, fill out the config file and run corda.jar initial-registration.
Read more at: https://docs.corda.net/permissioning.html [errorCode=16fn52g, moreInformationAt=https://errors.corda.net/ENT/4.1/16fn52g] {}
java.lang.IllegalArgumentException: One or more keyStores (identity or TLS) or trustStore not found. Please either copy your existing keys and certificates from another node, or if you don't have one yet, fill out the config file and run corda.jar initial-registration.
Read more at: https://docs.corda.net/permissioning.html
How can the CSR polling be completed?
Initial registration can be rerun and will resume polling based on the CSR id that is located in the certificates directory as certificate-request-id.txt. Rerun the same command used to start the CSR.
java -jar <CORDA JAR FILE> --initial-registration --network-root-truststore-password <TRUST STORE PASSWORD>
I am trying to get all documents from Firestore using the function below.
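The function itself isn't reproduced in this post; a minimal Go sketch of fetching every document, assuming the cloud.google.com/go/firestore client and hypothetical project and collection names:
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/firestore"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	// "my-project" and "items" are hypothetical; credentials are picked up
	// from GOOGLE_APPLICATION_CREDENTIALS, e.g. /app/credentials.json.
	client, err := firestore.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	iter := client.Collection("items").Documents(ctx)
	defer iter.Stop()
	for {
		doc, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err) // the gRPC handshake error below surfaces here
		}
		fmt.Println(doc.Ref.ID, doc.Data())
	}
}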
The credentials are stored in an encrypted file in a GCP Cloud Source repository.
I decrypted the configuration in the Cloud Build trigger and set an ENV in the Dockerfile pointing to the file. I verified the file is present with RUN ls /app/credentials.json.
The error I get in the application log:
rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"
This error is the result of an HTTPS failure where the certificate cannot be verified. The Alpine base image is missing a package that provides root certificates. Currently the Cloud Run quickstart is missing this for at least the Go language.
Assuming this is your problem, add the following to the final stage of your Dockerfile:
RUN apk add --no-cache ca-certificates
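For context, the line belongs in the final (runtime) stage. A minimal multi-stage sketch, with the Go version, build flags, and credential path as assumptions:
FROM golang:1.12 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /server

FROM alpine:3.10
# Root certificates so TLS handshakes (Firestore gRPC included) can be verified.
RUN apk add --no-cache ca-certificates
COPY --from=build /server /server
# Matches the setup described above: the decrypted key file baked into the image.
COPY credentials.json /app/credentials.json
ENV GOOGLE_APPLICATION_CREDENTIALS=/app/credentials.json
CMD ["/server"]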
I am attempting to use after_success in a Travis CI build to deploy files to a remote server using SFTP. However, I am getting errors that prevent the upload from succeeding.
SFTP command and resulting error message:
$ sftp -b upload_sftp -i upload_key -P 2222 $sftp_user
Host key verification failed.
Couldn't read packet: Connection reset by peer
The SFTP batch file upload_sftp contains various put commands.
As the "Host key verification failed" message hints, you need to add your server's keys to the known_hosts file, as documented in the Travis CI Documentation.
Adding the following to .travis.yml uses ssh-keyscan:
addons:
  ssh_known_hosts: git.example.com
Alternatively, the known_hosts file can be appended to directly:
install:
- echo 'KEY' >> $HOME/.ssh/known_hosts
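If the host key isn't at hand, ssh-keyscan can fetch it at build time; a sketch assuming the same host and the non-standard port 2222 from the sftp command above:
install:
- ssh-keyscan -p 2222 -H git.example.com >> $HOME/.ssh/known_hosts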