Cloudify blueprint upload import error - cloudify

We have an OpenStack deployment, and we chose to deploy Cloudify Manager using the image option; we are now using the paid version of the manager image. When we tried to upload an OpenStack blueprint from the CLI:
cfy blueprints upload -b vm -p cloudify-nodecellar-example-master/openstack-blueprint.yaml
we got the following error output on the Cloudify Manager:
20/12/2017 11:45:21 [INFO] [manager_rest.server] InvalidBlueprintError: Invalid blueprint - Failed to resolve the following urls: {u'file:///opt/manager/resources/spec/cloudify/4.3.dev1/types.yaml': "Import failed: Unable to open import url file:///opt/manager/resources/spec/cloudify/4.3.dev1/types.yaml; "}. In addition, failed to resolve the original import url - Import failed: Unable to open import url http://www.getcloudify.org/spec/cloudify/4.3.dev1/types.yaml; HTTPConnectionPool(host='www.getcloudify.org', port=80): Max retries exceeded with url: /spec/cloudify/4.3.dev1/types.yaml (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
in: /opt/manager/resources/openstack-blueprint-b03206ec-1bde-4595-8cc0-93de5510f777/openstack-blueprint.yaml
in line: 7, column: 0
path: imports
value: ['http://www.getcloudify.org/spec/cloudify/4.3.dev1/types.yaml', 'http://www.getcloudify.org/spec/openstack-plugin/2.0.1/plugin.yaml', 'http://www.getcloudify.org/spec/diamond-plugin/1.3.6/plugin.yaml', 'types/nodecellar.yaml', 'types/openstack-types.yaml']
20/12/2017 11:45:23 [INFO] [manager_rest.server] Authenticated user:

It seems like there are a few issues here. The log shows that the manager can neither reach www.getcloudify.org (a DNS failure: "Temporary failure in name resolution") nor find a local copy of types.yaml under /opt/manager/resources/spec/cloudify/4.3.dev1/, so the blueprint imports cannot be resolved. The best way to get started is to follow the step-by-step instructions in the Cloudify getting-started documentation.
If you have any further questions, please feel free to ask them here or in our user group.
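A quick way to confirm the root cause from the log above, assuming you can SSH into the manager VM (the URL and path come straight from the error message):
# Run these on the Cloudify Manager host.
# Can the manager resolve and reach getcloudify.org?
curl -I http://www.getcloudify.org/spec/cloudify/4.3.dev1/types.yaml
# Does the locally bundled spec (the fallback the resolver tried) exist?
ls -l /opt/manager/resources/spec/cloudify/4.3.dev1/types.yaml
If both fail, fixing DNS/outbound access on the manager, or restoring the local spec files, should let the imports resolve.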

Related

Issues with persistent volume on DigitalOcean Kubernetes cluster

Just created a managed 2-node Kubernetes (ver. 1.22.8) cluster on DigitalOcean (DO).
After installing WordPress using the Bitnami Helm chart, and then installing a WP plugin, the site became unreachable.
Looking into the DO K8s dashboard, in the Deployments section the wordpress deployment shows the following errors:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
AttachVolume.Attach failed for volume "pvc-c859847e-f250-4e71-9ed3-63c92cc01f50" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
MountVolume.MountDevice failed for volume "pvc-c859847e-f250-4e71-9ed3-63c92cc01f50" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-c859847e-f250-4e71-9ed3-63c92cc01f50' output: "mke2fs 1.45.5 (07-Jan-2020)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-c859847e-f250-4e71-9ed3-63c92cc01f50 does not exist and no size was specified.\n"
Readiness probe failed: HTTP probe failed with statuscode: 404
As I'm quite new to K8s, I don't really know how to troubleshoot this.
Any help would be much appreciated.
UPDATE
I was able to reproduce the error and found what triggers it.
The Bitnami WordPress chart installs several WP plugins by default. As soon as I try to delete them, the error shows up and the persistent volume gets corrupted...
Is this maybe a bug, or is it standard behavior?
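A minimal troubleshooting sketch for this kind of unbound/attach failure; the claim name and namespace below are placeholders you would replace with your own:
# Check whether the claims are Bound or Pending:
kubectl get pvc -A
# Look at the events on the problematic claim and in its namespace:
kubectl describe pvc <claim-name> -n <namespace>
kubectl get events -n <namespace> --sort-by=.lastTimestamp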

Task fails due to not being able to read log file

Cloud Composer is failing a task because it is not able to read a log file; it complains about incorrect encoding.
Here's the log that appears in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I tried viewing the file in the Google Cloud console and it also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I am able to download the file via gsutil.
When I view the file, it seems to have text overlapping other text.
I can't show the entire file, but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
Here the #-#{} pieces seem to be "on top of" the typical log lines.
I faced the same problem. In my case the issue was that I had removed the google_cloud_default connection that was being used to retrieve the logs.
Check the configuration and look for the connection name.
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection have the right permissions to access the GCS bucket.
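A quick way to verify both points; the service account and bucket below are placeholders, and the CLI syntax is the Airflow 1.10 one used by Composer at the time:
# Confirm the connection still exists:
airflow connections --list | grep google_cloud_default
# Make sure its service account can read the log bucket:
gsutil iam ch serviceAccount:my-composer-sa@my-project.iam.gserviceaccount.com:roles/storage.objectViewer gs://bucket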
I'm having a similar problem with viewing logs in GCP Cloud Composer, although it doesn't appear to be preventing the failing DAG task from running. It looks like a permissions issue between the GKE cluster and the storage bucket where the log files are kept.
You can still view the logs by going into your environment's storage bucket; in the same directory as your dags/ folder you should also see a logs/ folder.
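For example, something like this should list and fetch the same log directly (the bucket name is a placeholder for your environment's bucket):
gsutil ls gs://us-central1-my-env-bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/
gsutil cat gs://us-central1-my-env-bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log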
Your Helm chart should set up a global env var:
- name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
  value: "google-cloud-platform://"
Then you should deploy a Dockerfile with the root account only (not the airflow account); additionally, set up your Helm uid and gid as:
uid: 50000 #airflow user
gid: 50000 #airflow group
Then upgrade the Helm chart with the new config.
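A sketch of that last step; the release name, chart reference and values file are placeholders for your own setup:
helm upgrade my-airflow ./my-airflow-chart -f values.yaml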
*** Unable to read remote log from gs://bucket
1) I found the solution after assigning the right roles to the service account.
2) The service-account key (JSON) has to be added and configured on the connection referenced by remote_log_conn_id = google_cloud_default.
3) Restart the Airflow scheduler and webserver.
4) Restart the DAGs in Airflow.
After that you can find the logs in the GCS bucket where remote logging is configured.
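For reference, the relevant remote-logging settings usually look like this in airflow.cfg (the bucket is a placeholder; in Airflow 1.10 they live under [core]):
[core]
remote_logging = True
remote_base_log_folder = gs://bucket/logs
remote_log_conn_id = google_cloud_default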

after installing devstack http://server-ip:5000 not accessible

I followed the https://www.theurbanpenguin.com/installing-devstack-on-ubuntu-16-04/ tutorial to install DevStack (Queens release) on my Ubuntu 16.04 server.
After the installation was done I ran the following commands:
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=<password>
export OS_AUTH_URL=http://server-ip:5000/v2.0
openstack image create --public --disk-format qcow2 --container-format bare --file /home/cse3/ubuntu_images/ubuntu-14.04-server-cloudimg-amd64-disk1.img ubuntu
But whenever I open http://server-ip:5000/v2.0 in my browser, I get an "unable to connect" error.
When I create an image from the command line, I get the following message:
Failed to discover available identity versions when contacting http://server-ip:5000/v2.0. Attempting to parse version from URL.
Unable to establish connection to http://server-ip:5000/v2.0/tokens: HTTPConnectionPool(host='server-ip', port=5000): Max retries exceeded with url: /v2.0/tokens (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f84ebecabd0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Can anyone suggest what steps I should follow to resolve this error?
After installing DevStack, you should be able to view the OpenStack dashboard at http://server-ip if server-ip is a public IP. The AUTH_URL is what you use to authenticate against the API when you are using the SDK or a client library, and that is actually how the dashboard (Horizon) works with the Keystone identity service.
If server-ip is not a public IP, you need to set up a proxy/port forwarding between your server and your browser.
It's because the identity API changed from
export OS_AUTH_URL=http://server-ip:5000/v2.0
to
export OS_AUTH_URL=http://server-ip/identity
You can find more details in the OpenStack documentation.
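A minimal sketch of a credential set against the v3 identity endpoint, assuming DevStack's default domain names (replace server-ip and the password with your own):
export OS_AUTH_URL=http://server-ip/identity
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=admin
export OS_PASSWORD=<password>
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
# Verify authentication works:
openstack token issue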
Check that your httpd is running:
systemctl status httpd
If it is exited or not started, enable and start it:
systemctl enable httpd
systemctl start httpd

bootstrap cloudify3.4 error occurred

I installed Cloudify 3.4 according to the Cloudify docs. When I installed the manager, I executed:
# cfy bootstrap --install-plugins -p openstack-manager-blueprint.yaml -i openstack-manager-blueprint-inputs.yaml
an error occurred:
[ERROR] Workflow failed: Task failed 'fabric_plugin.tasks.run_script' -> Timed out trying to connect to 192.168.17.15 (tried 5 times)
I have already built an external network 192.168.17.0/24 and I have already installed:
cloudify_docker_plugin-1.3.2-py27-none-linux_x86_64-Ubuntu-trusty.wgn
cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-centos-Core.wgn
cloudify_fabric_plugin-1.4.1-py27-none-linux_x86_64-redhat-Maipo.wgn
cloudify_host_pool_plugin-1.4-py27-none-linux_x86_64-centos-Core.wgn
cloudify_openstack_plugin-1.4-py27-none-linux_x86_64-redhat-Maipo.wgn
So, how can I solve this error? Thank you to everyone who helps!
It seems that you can't connect to the manager.
Please make sure that you have an SSH connection from the CLI machine to the manager.
Since you are bootstrapping an OpenStack manager, you should make sure the manager has an external (floating) IP if the CLI is outside of OpenStack, or that the CLI is on the same network if it is on OpenStack.
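A quick connectivity check from the CLI machine; the key path and user are placeholders for the values in your manager blueprint inputs, and the IP is the one from the error:
ssh -i ~/.ssh/cloudify-manager-key.pem centos@192.168.17.15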

~/.ssh/id_rsa.pub not found error while installing capistrano as ansible playbook

I am trying to install https://github.com/roots/bedrock-ansible to get a Bedrock deployment (http://roots.io/wordpress-stack/) running.
When I run "vagrant up", after some time I get the error:
TASK: [capistrano-setup | Setup deploy group] *********************************
skipping: [default]
TASK: [capistrano-setup | Setup deploy user] **********************************
skipping: [default]
TASK: [capistrano-setup | Adding public key to server] ************************
fatal: [default] => could not locate file in lookup: ~/.ssh/id_rsa.pub
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/Users/johannes/site.retry
default : ok=46 changed=16 unreachable=1 failed=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I do not have a clue how I can fix this. Do you have an idea?
It seems the role is trying to find your local public key. It should be at the location given in the error message, '~/.ssh/id_rsa.pub', but it isn't. So either you don't have one, or you keep it in another location.
If you're not familiar with generating SSH keys you probably don't have one. I personally like the GitHub help page for this: https://help.github.com/articles/generating-ssh-keys/
(you only have to perform steps 1 and 2).
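In short, generating a key pair looks like this (the email is just a label; accept the default path so the key ends up at ~/.ssh/id_rsa.pub where the role expects it):
ssh-keygen -t rsa -b 4096 -C "you@example.com"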
If you do have SSH keys, but in a different location, the capistrano-install role in bedrock uses some variables:
deploy_user: deploy
deploy_keys:
- "~/.ssh/id_rsa.pub"
So you can set (multiple) public key files in the deploy_keys list and they will be added to the deploy_user's authorized keys.
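For example, if your keys live somewhere else, you could override the variable with your own paths (these are placeholders):
deploy_keys:
  - "~/.ssh/my_work_key.pub"
  - "~/.ssh/another_key.pub"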
All this is needed because Capistrano will use the deploy user to connect to the remote server later. http://blakesmith.me/2010/02/08/understanding-public-key-private-key-concepts.html
