ReadTimeoutError(HTTPConnectionPool(host='172.16.0.5', port='9200'): Read timed out. (read timeout=10))) Executing teardown due to failed bootstrap - cloudify

I am trying to bootstrap Cloudify on OpenStack via the Cloudify CLI. My base OS is an Ubuntu 16.04 VM (100 GB disk, 8 GB RAM) running OpenStack, and I am trying to bootstrap onto a CentOS 7 machine (40 GB disk, 5 GB RAM) on that OpenStack.
It installed most of the services, such as Riemann, AMQP InfluxDB, RabbitMQ, Elasticsearch, and Logstash. I checked that Elasticsearch is listening on port 9200, but the bootstrap then exited with the error below:
2017-05-21 10:56:21 LOG <manager> [sanity_472c7.create] INFO: Preparing fabric environment...
2017-05-21 10:56:21 LOG <manager> [sanity_472c7.create] INFO: Environment prepared successfully
2017-05-21 10:56:21 LOG <manager> [sanity_472c7.create] INFO: Uploading key ~/.ssh/cloudify-manager-kp.pem...
2017-05-21 10:56:35 CFY <manager> [sanity_472c7.create] Task succeeded 'fabric_plugin.tasks.run_task'
2017-05-21 10:56:37 CFY <manager> [sanity_472c7] Configuring node
2017-05-21 10:56:37 CFY <manager> [sanity_472c7->manager_configuration_eb78a|postconfigure] Sending task 'script_runner.tasks.run'
2017-05-21 10:56:37 CFY <manager> [sanity_472c7->manager_configuration_eb78a|postconfigure] Task started 'script_runner.tasks.run'
2017-05-21 10:56:37 CFY <manager> [sanity_472c7->manager_configuration_eb78a|postconfigure] Task succeeded 'script_runner.tasks.run'
2017-05-21 10:56:39 CFY <manager> [sanity_472c7] Starting node
2017-05-21 10:56:39 CFY <manager> [sanity_472c7.start] Sending task 'fabric_plugin.tasks.run_script'
2017-05-21 10:56:39 CFY <manager> [sanity_472c7.start] Task started 'fabric_plugin.tasks.run_script'
2017-05-21 10:56:39 LOG <manager> [sanity_472c7.start] INFO: Preparing fabric environment...
2017-05-21 10:56:39 LOG <manager> [sanity_472c7.start] INFO: Environment prepared successfully
2017-05-21 10:57:26 LOG <manager> [sanity_472c7.start] INFO: Saving sanity input configuration to /opt/cloudify/sanity/node_properties/properties.json
2017-05-21 10:57:51 CFY <manager> [sanity_472c7.start] Task succeeded 'fabric_plugin.tasks.run_script'
2017-05-21 10:57:53 CFY <manager> 'install' workflow execution succeeded
[172.24.4.11] put: /home/osboxes/.ssh/cloudify-agent-kp.pem -> /root/.ssh/agent_key.pem
Bootstrap failed! (500: Internal error occurred in manager REST server - ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host='172.16.0.5', port='9200'): Read timed out. (read timeout=10)))
Executing teardown due to failed bootstrap...
2017-05-21 10:58:17 CFY <manager> Starting 'uninstall' workflow execution
2017-05-21 10:58:17 CFY <manager> [webui_23eb4] Stopping node
Please let me know what I am missing.

You did not mention which Cloudify version you are trying to bootstrap; from the log it looks like 3.4, so I'll refer to that version.
It looks like you have a connection issue in the manager's sanity test.
It also looks like you are using VirtualBox, and this may explain the connection issue: the VirtualBox VM is not configured to route calls on port 9200.
If you are indeed using VirtualBox, you should forward all the relevant ports to the VM.
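For example, if the VM uses NAT networking, a VirtualBox port-forwarding rule for 9200 can be added like this (the VM name "openstack-vm" and the rule name are illustrative; modifyvm requires the VM to be powered off):
VBoxManage modifyvm "openstack-vm" --natpf1 "elasticsearch,tcp,,9200,,9200"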
It might also be a good idea to check whether your Ubuntu VM has internet access; without it you cannot perform the sanity tests.
In any case, you can skip the sanity tests and bootstrap the manager without them by commenting out the sanity node in the manager blueprint, as sketched below.
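A minimal sketch of what that looks like in the blueprint's node_templates section (the node name and type shown here are illustrative; use whatever appears in your blueprint file):
node_templates:
  # ...other nodes unchanged...
  # sanity:                       # comment out the entire sanity node
  #   type: manager.nodes.Sanity  # type name is illustrative
  #   ...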

Related

artifactory 6.8.7 won't start as can't connect to access server

Since upgrading to 6.8.7 using the RPM on RHEL 7, systemctl start artifactory fails.
Looking in the log, it is failing at this point:
2019-03-16 09:50:28,952 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:593) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2019-03-16 09:50:29,379 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:353) - Waiting for access server...
2019-03-16 09:50:30,625 [art-init] [WARN ] (o.j.a.c.AccessClientHttpException:41) - Unrecognized ErrorsModel by Access. Original message: Failed on executing /api/v1/system/ping, with response: Not Found
2019-03-16 09:50:30,634 [art-init] [ERROR] (o.a.s.a.AccessServiceImpl:364) - Could not ping access server: {}
org.jfrog.access.client.AccessClientHttpException: HTTP response status 404:Failed on executing /api/v1/system/ping, with response: Not Found
Previously we would get:
2019-03-13 09:56:06,293 [art-init] [INFO ] (o.a.s.a.ArtifactoryAccessClientConfigStore:593) - Using Access Server URL: http://localhost:8040/access (bundled) source: detected
2019-03-13 09:56:06,787 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:353) - Waiting for access server...
2019-03-13 09:56:24,068 [art-init] [INFO ] (o.a.s.a.AccessServiceImpl:360) - Got response from Access server after 17280 ms, continuing.
Any suggestions on debugging whether this access server has started?
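One direct check, based on the Access URL and ping endpoint shown in the log above, is to curl the endpoint and see whether it answers at all, and with what status:
curl -v http://localhost:8040/access/api/v1/system/ping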
Further to this, I found logs showing that when it worked it used to start a jar file:
2019-03-08 09:19:11,609 [localhost-startStop-2] [INFO ] (o.j.a.AccessApplication:48) - Starting AccessApplication v4.1.48 on hostname.nexor.co.uk with PID 5913 (/opt/jfrog/artifactory/tomcat/webapps/access/WEB-INF/lib/access-application-4.1.48.jar started by artifactory in /)
Now when I look, I find /opt/jfrog/artifactory/tomcat/webapps/access/ is empty, so there is no jar file to run.
The RPM did deliver an access.war file, and that is there:
$ ls -l /opt/jfrog/artifactory/webapps
total 104692
-rwxrwxr-x. 1 root root 51099759 Mar 14 12:14 access.war
-rwxrwxr-x. 1 root root 56099348 Mar 14 12:14 artifactory.war
Is there some manual step I can run to expand this war file to get the jar? (As you can guess, I am not up on my Java apps.)
Eventually I got it working by deleting the empty /opt/jfrog/artifactory/tomcat/webapps/access directory; on restart a new one containing the required jar files was created.
Not sure why this happened, but that got it working for me.
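A sketch of those recovery steps, assuming the default RPM layout:
systemctl stop artifactory
rm -rf /opt/jfrog/artifactory/tomcat/webapps/access   # remove the empty exploded directory
systemctl start artifactory                           # Tomcat re-explodes access.war on startup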
I had a similar problem on CentOS 7; the solution was to downgrade the newly updated Java packages by running this command:
yum downgrade java-1.8.0*
After that, restart Artifactory:
systemctl restart artifactory
Try changing the port number in your tomcat/conf/server.xml from 8081 to a different, unused port. Then restart the Artifactory service so the change takes effect.
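For reference, the port is set on the HTTP Connector element in that file; a generic Tomcat entry looks like the following (the attributes in Artifactory's bundled server.xml may differ, and 8082 is just an example of an unused port):
<Connector port="8082" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"/>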

Airflow 1.9.0 is queuing but tasks are not running

Airflow stopped running tasks all of a sudden. All of the below are running:
airflow scheduler
airflow webserver
airflow worker
Web UI message:
All dependencies are met but the task instance is not running. In most
cases this just means that the task will probably be scheduled soon
unless:
- The scheduler is down or under heavy load
If this task instance does not start soon please contact your Airflow
administrator for assistance.
The scheduler seems to be stuck in a loop, repeating the messages below. The web UI shows tasks in the queued state. I tried restarting the scheduler, but it didn't help.
[2018-11-17 22:03:45,809] {{jobs.py:1607}} DEBUG - Starting Loop...
[2018-11-17 22:03:45,809] {{jobs.py:1627}} INFO - Heartbeating the process manager
[2018-11-17 22:03:45,810] {{jobs.py:1662}} INFO - Heartbeating the executor
[2018-11-17 22:03:45,810] {{base_executor.py:103}} DEBUG - 124 running task instances
[2018-11-17 22:03:45,810] {{base_executor.py:104}} DEBUG - 0 in queue
[2018-11-17 22:03:45,810] {{base_executor.py:105}} DEBUG - 76 open slots
[2018-11-17 22:03:45,810] {{base_executor.py:132}} DEBUG - Calling the <class 'airflow.executors.celery_executor.CeleryExecutor'> sync method
[2018-11-17 22:03:45,810] {{celery_executor.py:80}} DEBUG - Inquiring about 124 celery task(s)
Airflow setup:
apache-airflow[celery, redis, all]==1.9.0
I also checked these posts, but they didn't help me:
Airflow 1.9.0 is queuing but not launching tasks
Airflow tasks get stuck at "queued" status and never gets running
Problem solved. This affects builds created on or after 2018-11-15: it turns out apache-airflow[celery, redis, all]==1.9.0 pulls in the latest redis-py (3.0.1), which does not work with celery 4.2.1.
The solution is to pin redis-py to 2.10.6:
redis==2.10.6
apache-airflow[celery, all]==1.9.0
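A one-line way to apply the same pin is to install both packages together so pip cannot resolve redis 3.x (the same requirements as above, just as a command):
pip install 'redis==2.10.6' 'apache-airflow[celery,all]==1.9.0'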

eucalyptus-cloud.service 4.4.4 constantly crashing with JVM memory errors

I'm trying to work through the manual installation guide. When I get as far as registering an admin account for the console (euare-accountcreate) or registering services (euserv-register-service), the eucalyptus-cloud service inconsistently crashes with the same JVM memory error. For example, a few times I've been able to register the UFS or create the admin account, but when I move on to the next step of the install it fails, and checking the service status shows that it has crashed. I don't have experience with Java memory errors and could really use some help understanding what's going on and how to investigate this type of error in general:
[root@cloud ~]# euserv-describe-services ufs-10.0.0.2
euserv-describe-services: error: connection error (('Connection aborted.', BadStatusLine("''",)))
[root@cloud ~]# systemctl status eucalyptus-cloud -l
● eucalyptus-cloud.service - Eucalyptus cloud web services
   Loaded: loaded (/usr/lib/systemd/system/eucalyptus-cloud.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2018-10-10 08:45:09 EDT; 4s ago
  Process: 45951 ExecStart=/usr/sbin/eucalyptus-cloud $CLOUD_OPTS (code=exited, status=1/FAILURE)
 Main PID: 45951 (code=exited, status=1/FAILURE)
   CGroup: /system.slice/eucalyptus-cloud.service
           ├─46206 /usr/bin/postgres -D /var/lib/eucalyptus/db/data -h0.0.0.0 -p8777
           ├─46207 postgres: logger process
           ├─46209 postgres: checkpointer process
           ├─46210 postgres: writer process
           ├─46211 postgres: wal writer process
           ├─46212 postgres: autovacuum launcher process
           └─46213 postgres: stats collector process
Oct 10 08:45:08 cloud eucalyptus-cloud[45951]: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f496534b000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)
Oct 10 08:45:08 cloud eucalyptus-cloud[45951]: #
Oct 10 08:45:08 cloud eucalyptus-cloud[45951]: # There is insufficient memory for the Java Runtime Environment to continue.
Oct 10 08:45:08 cloud eucalyptus-cloud[45951]: # Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
Oct 10 08:45:08 cloud eucalyptus-cloud[45951]: # An error report file with more information is saved as:
Oct 10 08:45:08 cloud eucalyptus-cloud[45951]: # /tmp/hs_err_pid45954.log
Oct 10 08:45:09 cloud eucalyptus-cloud[45951]: 2018-10-10 08:45:09 45951 ERROR 0574 Service exit with a return value of 1.
Oct 10 08:45:09 cloud systemd[1]: eucalyptus-cloud.service: main process exited, code=exited, status=1/FAILURE
Oct 10 08:45:09 cloud systemd[1]: Unit eucalyptus-cloud.service entered failed state.
Oct 10 08:45:09 cloud systemd[1]: eucalyptus-cloud.service failed.
Edit to include some version info:
[root@cloud ~]# cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)
[root@cloud ~]# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
EDIT to note my point of confusion: this system has only 25G of RAM in use and 223G free, so I need help understanding how Java is running out of memory.
Try this fix. The failure is on an mmap call rather than on total RAM, so even with plenty of free memory the process can be hitting the kernel's limit on memory map areas (vm.max_map_count):
echo 1999999 > /proc/sys/vm/max_map_count
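Note that writing to /proc only lasts until reboot; to persist the setting you can use the standard sysctl mechanism (the drop-in file name below is illustrative):
echo 'vm.max_map_count = 1999999' >> /etc/sysctl.d/99-eucalyptus.conf
sysctl --system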

Could not find a version that satisfies the requirement pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)

I just followed the OpenStack Rally quick start guide to create a tempest verifier with Rally v0.9.1 in an OpenStack Ocata/stable deployment. The command failed:
(rally-15.1.2) root@infra1-utility-container-f31faeb0:~/.rally/verification# rally verify create-verifier --type tempest --name tempest-verifier
2017-05-21 07:53:13.410 11422 INFO rally.api [-] Creating verifier 'tempest-verifier'.
2017-05-21 07:53:13.528 11422 INFO rally.verification.manager [-] Cloning verifier repo from https://git.openstack.org/openstack/tempest.
2017-05-21 07:53:37.174 11422 INFO rally.verification.manager [-] Creating virtual environment. It may take a few minutes.
2017-05-21 07:53:42.323 11422 ERROR rally.verification.utils [-] Failed cmd: '['pip', 'install', '-e', './']'
2017-05-21 07:53:42.324 11422 ERROR rally.verification.utils [-] Error output: 'Obtaining file:///root/.rally/verification/verifier-091a49ab-1241-40a3-bc9b-531d7f091e37/repo
Collecting pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)
Could not find a version that satisfies the requirement pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178) (from versions: 1.10.0)
No matching distribution found for pbr!=2.1.0,>=2.0.0 (from tempest==16.0.1.dev178)
'
Command failed, please check log for more info
Since the currently installed version of pbr is 2.0.0, I'm not sure why the pbr installation failed.
(rally-15.1.2) root@infra1-utility-container-f31faeb0:~/.rally/verification# pip freeze|grep pbr
pbr==2.0.0
The question is: how do I adjust the requirement checking for pbr? Or is it possible to choose an older version of tempest?
Thanks.
Solved.
After uploading the two missing Python packages, os_testr-0.8.2-py2-none-any.whl and testrepository-0.0.19.tar.gz, into the local repo (an LXC container created by openstack-ansible), the Tempest plugin finally installed.
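On the other part of the question, choosing an older tempest: Rally can check out a specific git tag or branch when creating the verifier, so something like the following should pin an older release (flag support depends on your Rally version, so treat this as a sketch):
rally verify create-verifier --type tempest --name tempest-verifier --version 15.0.0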

Cloudify 3.3.1 simple-manager bootstrap fails with http 504 / filename argument expected

I'm trying to bootstrap a Cloudify manager using the simple-manager-blueprint from the cloudify-manager repo, following the instructions here.
I am running the bootstrap process from Ubuntu 16, attempting to bootstrap onto an already-existing CentOS 7 VM (KVM) hosted remotely.
The error I get during the bootstrap process is:
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager$ cfy init -r
Initialization completed successfully
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager$ cfy --version
Cloudify CLI 3.3.1
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager$ cfy bootstrap -p ./cloudify-manager-blueprints-3.3.1/simple-manager-blueprint.yaml -i ./cloudify-manager-blueprints-3.3.1/simple-manager-blueprint-inputs.yaml
executing bootstrap validation
2016-06-10 13:03:38 CFY <manager> Starting 'execute_operation' workflow execution
2016-06-10 13:03:38 CFY <manager> [rabbitmq_b88e8] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [python_runtime_89bdd] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [rest_service_61510] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [amqp_influx_2f816] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [manager_host_d688e] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [influxdb_98fd6] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [logstash_39e85] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [manager_configuration_0d9ca] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [mgmt_worker_f0d02] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [riemann_20a3e] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [java_runtime_c9a1c] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [elasticsearch_b1536] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [nginx_db289] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [webui_9c064] Starting operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [rabbitmq_b88e8] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [python_runtime_89bdd] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [manager_configuration_0d9ca] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [mgmt_worker_f0d02] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [nginx_db289] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [rest_service_61510] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [manager_host_d688e] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [riemann_20a3e] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [influxdb_98fd6] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [logstash_39e85] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [amqp_influx_2f816] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [webui_9c064] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [elasticsearch_b1536] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> [java_runtime_c9a1c] Finished operation cloudify.interfaces.validation.creation
2016-06-10 13:03:38 CFY <manager> 'execute_operation' workflow execution succeeded
bootstrap validation completed successfully
executing bootstrap
Inputs ./cloudify-manager-blueprints-3.3.1/simple-manager-blueprint-inputs.yaml
Inputs <cloudify.workflows.local._Environment object at 0x7fc76b458a10>
2016-06-10 13:03:45 CFY <manager> Starting 'install' workflow execution
2016-06-10 13:03:45 CFY <manager> [manager_host_cd1f8] Creating node
2016-06-10 13:03:45 CFY <manager> [manager_host_cd1f8] Configuring node
2016-06-10 13:03:45 CFY <manager> [manager_host_cd1f8] Starting node
2016-06-10 13:03:46 CFY <manager> [java_runtime_e2b0d] Creating node
2016-06-10 13:03:46 CFY <manager> [manager_configuration_baa5a] Creating node
2016-06-10 13:03:46 CFY <manager> [python_runtime_a24d5] Creating node
2016-06-10 13:03:46 CFY <manager> [rabbitmq_2656a] Creating node
2016-06-10 13:03:46 CFY <manager> [influxdb_720e7] Creating node
2016-06-10 13:03:46 CFY <manager> [manager_configuration_baa5a.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [python_runtime_a24d5.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [influxdb_720e7.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [rabbitmq_2656a.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [java_runtime_e2b0d.create] Sending task 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 CFY <manager> [manager_configuration_baa5a.create] Task started 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:46 LOG <manager> [manager_configuration_baa5a.create] INFO: preparing fabric environment...
2016-06-10 13:03:46 LOG <manager> [manager_configuration_baa5a.create] INFO: Fabric env: {u'always_use_pty': True, u'key_filename': u'/home/k/.ssh/id_rsa.pub', u'user': u'cloudify', u'host_string': u'10.124.129.42'}
2016-06-10 13:03:46 LOG <manager> [manager_configuration_baa5a.create] INFO: environment prepared successfully
[10.124.129.42] put: /tmp/tmppt9dtd-configure_manager.sh -> /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
[10.124.129.42] put: <file obj> -> /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63
[10.124.129.42] run: source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
[10.124.129.42] out: Traceback (most recent call last):
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 130, in <module>
[10.124.129.42] out: main()
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 119, in main
[10.124.129.42] out: args.timeout)
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 78, in client_req
[10.124.129.42] out: response = request_method(socket_url, request, timeout)
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 59, in http_client_req
[10.124.129.42] out: timeout=timeout)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
[10.124.129.42] out: return opener.open(url, data, timeout)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 437, in open
[10.124.129.42] out: response = meth(req, response)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
[10.124.129.42] out: 'http', request, response, code, msg, hdrs)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 475, in error
[10.124.129.42] out: return self._call_chain(*args)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
[10.124.129.42] out: result = func(*args)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
[10.124.129.42] out: raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
[10.124.129.42] out: urllib2.HTTPError: HTTP Error 504: Gateway Time-out
[10.124.129.42] out: /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63: line 3: .: filename argument required
[10.124.129.42] out: .: usage: . filename [arguments]
[10.124.129.42] out:
Fatal error: run() received nonzero return code 2 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63"
Aborting.
2016-06-10 13:03:47 LOG <manager> [manager_configuration_baa5a.create] ERROR: Exception raised on operation [fabric_plugin.tasks.run_script] invocation
Traceback (most recent call last):
File "/home/k/cfyenv/local/lib/python2.7/site-packages/cloudify/decorators.py", line 122, in wrapper
result = func(*args, **kwargs)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric_plugin/tasks.py", line 214, in run_script
remote_env_script_path, command))
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/network.py", line 639, in host_prompting_wrapper
return func(*args, **kwargs)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/operations.py", line 1042, in run
shell_escape=shell_escape)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/operations.py", line 932, in _run_command
error(message=msg, stdout=out, stderr=err)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/utils.py", line 327, in error
return func(message)
File "/home/k/cfyenv/local/lib/python2.7/site-packages/fabric/utils.py", line 32, in abort
raise env.abort_exception(msg)
FabricTaskError: run() received nonzero return code 2 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63"
2016-06-10 13:03:47 CFY <manager> [manager_configuration_baa5a.create] Task failed 'fabric_plugin.tasks.run_script' -> run() received nonzero return code 2 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmppt9dtd-configure_manager.sh-7MH6NQ63 && /tmp/cloudify-ctx/scripts/tmppt9dtd-configure_manager.sh-7MH6NQ63" [attempt 1/6]
2016-06-10 13:03:47 CFY <manager> [python_runtime_a24d5.create] Task started 'fabric_plugin.tasks.run_script'
2016-06-10 13:03:47 LOG <manager> [python_runtime_a24d5.create] INFO: preparing fabric environment...
2016-06-10 13:03:47 LOG <manager> [python_runtime_a24d5.create] INFO: Fabric env: {u'always_use_pty': True, u'key_filename': u'/home/k/.ssh/id_rsa.pub', u'hide': u'running', u'user': u'cloudify', u'host_string': u'10.124.129.42'}
2016-06-10 13:03:47 LOG <manager> [python_runtime_a24d5.create] INFO: environment prepared successfully
[10.124.129.42] put: /tmp/tmpmndvAt-create.sh -> /tmp/cloudify-ctx/scripts/tmpmndvAt-create.sh-F7IX8WT9
[10.124.129.42] put: <file obj> -> /tmp/cloudify-ctx/scripts/env-tmpmndvAt-create.sh-F7IX8WT9
[10.124.129.42] run: source /tmp/cloudify-ctx/scripts/env-tmpmndvAt-create.sh-F7IX8WT9 && /tmp/cloudify-ctx/scripts/tmpmndvAt-create.sh-F7IX8WT9
[10.124.129.42] out: Traceback (most recent call last):
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 130, in <module>
[10.124.129.42] out: main()
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 119, in main
[10.124.129.42] out: args.timeout)
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 78, in client_req
[10.124.129.42] out: response = request_method(socket_url, request, timeout)
[10.124.129.42] out: File "/tmp/cloudify-ctx/ctx", line 59, in http_client_req
[10.124.129.42] out: timeout=timeout)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
[10.124.129.42] out: return opener.open(url, data, timeout)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 437, in open
[10.124.129.42] out: response = meth(req, response)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
[10.124.129.42] out: 'http', request, response, code, msg, hdrs)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 475, in error
[10.124.129.42] out: return self._call_chain(*args)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
[10.124.129.42] out: result = func(*args)
[10.124.129.42] out: File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
[10.124.129.42] out: raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
[10.124.129.42] out: urllib2.HTTPError: HTTP Error 504: Gateway Time-out
[10.124.129.42] out: /tmp/cloudify-ctx/scripts/tmpmndvAt-create.sh-F7IX8WT9: line 3: .: filename argument required
[10.124.129.42] out: .: usage: . filename [arguments]
[10.124.129.42] out:
Fatal error: run() received nonzero return code 2 while executing!
^C
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager$ ^C
As far as I can tell, the bootstrap scripts expect something on the target manager host to be listening over HTTP and it isn't there, but of course I could be way off track, as I'm new to Cloudify.
I've made only minimal changes to the blueprint inputs:
(cfyenv) k@ubuntu1:~/cloudify/cloudify-manager/cloudify-manager-blueprints-3.3.1$ cat ./simple-manager-blueprint-inputs.yaml
#############################
# Provider specific Inputs
#############################
# The public IP of the manager to which the CLI will connect.
public_ip: '<my target hosts ip>'
# The manager's private IP address. This is the address which will be used by the
# application hosts to connect to the Manager's fileserver and message broker.
private_ip: '<my target hosts ip>'
# SSH user used to connect to the manager
ssh_user: 'cloudify'
# SSH key path used to connect to the manager
ssh_key_filename: '/home/k/.ssh/id_rsa.pub'
# This is the user with which the Manager will try to connect to the application hosts.
agents_user: 'cloudify'
#resources_prefix: ''
#############################
# Security Settings
#############################
# Cloudify REST security is disabled by default. To enable security, set to true.
# Note: If security is disabled, the other security inputs are irrelevant.
#security_enabled: false
# Enabling SSL limits communication with the server to SSL only.
# NOTE: If enabled, the certificate and private key files must reside in resources/ssl.
#ssl_enabled: false
# Username and password of the Cloudify administrator.
# This user will also be included in the simple userstore repository if the
# simple userstore implementation is used.
admin_username: 'admin'
admin_password: '<my admin password>'
#insecure_endpoints_disabled: false
#############################
# Agent Packages
#############################
# The key names must be in the format: distro_release_agent (e.g. ubuntu_trusty_agent)
# as the key is what's used to name the file, which later allows our
# agent installer to identify it for your distro and release automatically.
# Note that the windows agent key name MUST be `cloudify_windows_agent`
agent_package_urls:
#  ubuntu_trusty_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/Ubuntu-trusty-agent_3.3.1-sp-b310.tar.gz
#  ubuntu_precise_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/Ubuntu-precise-agent_3.3.1-sp-b310.tar.gz
  centos_7x_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/centos-Core-agent_3.3.1-sp-b310.tar.gz
#  centos_6x_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/centos-Final-agent_3.3.1-sp-b310.tar.gz
#  redhat_7x_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/redhat-Maipo-agent_3.3.1-sp-b310.tar.gz
#  redhat_6x_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/redhat-Santiago-agent_3.3.1-sp-b310.tar.gz
#  cloudify_windows_agent: http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-windows-agent_3.3.1-sp-b310.exe
#############################
# Cloudify Modules
#############################
# Note that you can replace rpm urls with names of packages as long as they're available in your default yum repository.
# That is, as long as they provide the exact same version of that module.
rest_service_rpm_source_url: 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-rest-service-3.3.1-sp_b310.x86_64.rpm'
management_worker_rpm_source_url: 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-management-worker-3.3.1-sp_b310.x86_64.rpm'
amqpinflux_rpm_source_url: 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-amqp-influx-3.3.1-sp_b310.x86_64.rpm'
cloudify_resources_url: 'https://github.com/cloudify-cosmo/cloudify-manager/archive/3.3.1.tar.gz'
webui_source_url: 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify-ui-3.3.1-sp-b310.tgz'
# This is a Cloudify specific redistribution of Grafana.
grafana_source_url: http://repository.cloudifysource.org/org/cloudify3/components/grafana-1.9.0.tgz
#############################
# External Components
#############################
# Note that you can replace rpm urls with names of packages as long as they're available in your default yum repository.
# That is, as long as they provide the exact same version of that module.
pip_source_rpm_url: http://repository.cloudifysource.org/org/cloudify3/components/python-pip-7.1.0-1.el7.noarch.rpm
java_source_url: http://repository.cloudifysource.org/org/cloudify3/components/jre1.8.0_45-1.8.0_45-fcs.x86_64.rpm
# RabbitMQ Distribution of Erlang
erlang_source_url: http://repository.cloudifysource.org/org/cloudify3/components/erlang-17.4-1.el6.x86_64.rpm
rabbitmq_source_url: http://repository.cloudifysource.org/org/cloudify3/components/rabbitmq-server-3.5.3-1.noarch.rpm
elasticsearch_source_url: http://repository.cloudifysource.org/org/cloudify3/components/elasticsearch-1.6.0.noarch.rpm
elasticsearch_curator_rpm_source_url: http://repository.cloudifysource.org/org/cloudify3/components/elasticsearch-curator-3.2.3-1.x86_64.rpm
logstash_source_url: http://repository.cloudifysource.org/org/cloudify3/components/logstash-1.5.0-1.noarch.rpm
nginx_source_url: http://repository.cloudifysource.org/org/cloudify3/components/nginx-1.8.0-1.el7.ngx.x86_64.rpm
influxdb_source_url: http://repository.cloudifysource.org/org/cloudify3/components/influxdb-0.8.8-1.x86_64.rpm
riemann_source_url: http://repository.cloudifysource.org/org/cloudify3/components/riemann-0.2.6-1.noarch.rpm
# A RabbitMQ Client for Riemann
langohr_source_url: http://repository.cloudifysource.org/org/cloudify3/components/langohr.jar
# Riemann's default daemonizer
daemonize_source_url: http://repository.cloudifysource.org/org/cloudify3/components/daemonize-1.7.3-7.el7.x86_64.rpm
nodejs_source_url: http://repository.cloudifysource.org/org/cloudify3/components/node-v0.10.35-linux-x64.tar.gz
#############################
# RabbitMQ Configuration
#############################
# Sets the username/password to use for clients such as celery
# to connect to the rabbitmq broker.
# It is recommended that you set both the username and password
# to something reasonably secure.
rabbitmq_username: 'cloudify'
rabbitmq_password: '<my rabbit password>'
# Enable SSL for RabbitMQ. If this is set to true then the public and private
# certs must be supplied (`rabbitmq_cert_private`, `rabbitmq_cert_public` inputs).
#rabbitmq_ssl_enabled: false
# The private certificate for RabbitMQ to use for SSL. This must be PEM formatted.
# It is expected to begin with a line containing 'PRIVATE KEY' in the middle.
#rabbitmq_cert_private: ''
# The public certificate for RabbitMQ to use for SSL. This does not need to be signed by any CA,
# as it will be deployed and explicitly used for all other components.
# It may be self-signed. It must be PEM formatted.
# It is expected to begin with a line of dashes with 'BEGIN CERTIFICATE' in the middle.
# If an external endpoint is used, this must be the public certificate associated with the private
# certificate that has already been configured for use by that rabbit endpoint.
#rabbitmq_cert_public: ''
# Allows to define the message-ttl for the different types of queues (in milliseconds).
# These are not used if `rabbitmq_endpoint_ip` is provided.
# https://www.rabbitmq.com/ttl.html
rabbitmq_events_queue_message_ttl: 60000
rabbitmq_logs_queue_message_ttl: 60000
rabbitmq_metrics_queue_message_ttl: 60000
# This will set the queue length limit. Note that while new messages
# will be queued in RabbitMQ, old messages will be deleted once the
# limit is reached!
# These are not used if `rabbitmq_endpoint_ip` is provided.
# Note this is NOT the message byte length!
# https://www.rabbitmq.com/maxlength.html
rabbitmq_events_queue_length_limit: 1000000
rabbitmq_logs_queue_length_limit: 1000000
rabbitmq_metrics_queue_length_limit: 1000000
# RabbitMQ File Descriptors Limit
rabbitmq_fd_limit: 102400
# You can configure an external endpoint of a RabbitMQ Cluster to use
# instead of the built in one.
# If one is provided, the built in RabbitMQ cluster will not run.
# Also note that your external cluster must be preconfigured with any
# user name/pass and SSL certs if you plan on using RabbitMQ's security
# features.
#rabbitmq_endpoint_ip: ''
#############################
# Elasticsearch Configuration
#############################
# bootstrap.mlockall is set to true by default.
# This allows to set the heapsize for your cluster.
# https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
#elasticsearch_heap_size: 2g
# This allows to provide any JAVA_OPTS to Elasticsearch.
#elasticsearch_java_opts: ''
# The index for events will be named `logstash-YYYY.mm.dd`.
# A new index corresponding with today's date will be added each day.
# Elasticsearch Curator is used to rotate the indices on a daily basis
# via a cronjob. This allows to determine the number of days to keep.
#elasticsearch_index_rotation_interval: 7
# You can configure an external endpoint of an Elasticsearch Cluster to use
# instead of the built in one. The built in Elasticsearch cluster will not run.
# You need to provide an IP (defaults to localhost) and Port (defaults to 9200) of your Elasticsearch Cluster.
#elasticsearch_endpoint_ip: ''
#elasticsearch_endpoint_port: 9200
#############################
# InfluxDB Configuration
#############################
# You can configure an external endpoint of an InfluxDB Cluster to use
# instead of the built in one.
# If one is provided, the built in InfluxDB cluster will not run.
# Note that the port is currently not configurable and must remain 8086.
# Also note that the database username and password are hardcoded to root:root.
#influxdb_endpoint_ip: ''
#############################
# Offline Resources Upload
#############################
# You can configure a set of resources to upload at bootstrap. These resources
# will reside on the manager and enable offline deployment. `dsl_resources`
# should contain any resource needed in the parsing process (i.e. plugin.yaml files)
# and any plugin archive should be compiled using the designated wagon tool
# which can be found at: http://github.com/cloudify-cosmo/wagon.
# The path should be passed to plugin_resources. Any resource your
# blueprint might need, could be uploaded using this mechanism.
#dsl_resources:
# - {'source_path': 'http://www.getcloudify.org/spec/fabric-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/fabric-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/script-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/script-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/diamond-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/diamond-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/aws-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/aws-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/openstack-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/openstack-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/tosca-vcloud-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/tosca-vcloud-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/vsphere-plugin/1.3.1/plugin.yaml', 'destination_path': '/spec/vsphere-plugin/1.3.1/plugin.yaml'}
# - {'source_path': 'http://www.getcloudify.org/spec/cloudify/3.3.1/types.yaml', 'destination_path': '/spec/cloudify/3.3.1/types.yaml'}
# The plugins you would like to use in your applications should be added here.
# By default, the Diamond, Fabric and relevant IaaS plugins are provided.
# Note that you can upload plugins post-bootstrap via the `cfy plugins upload`
# command.
plugin_resources:
#  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py27-none-linux_x86_64-redhat-Maipo.wgn'
  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
#  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py26-none-linux_x86_64-centos-Final.wgn'
#  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py27-none-linux_x86_64-Ubuntu-precise.wgn'
#  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_diamond_plugin-1.3.1-py27-none-linux_x86_64-Ubuntu-trusty.wgn'
  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_fabric_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
#  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_aws_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_openstack_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
#  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_vcloud_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
#  - 'http://repository.cloudifysource.org/org/cloudify3/3.3.1/sp-RELEASE/cloudify_vsphere_plugin-1.3.1-py27-none-linux_x86_64-centos-Core.wgn'
I'm kinda lost even knowing where to start troubleshooting. Any assistance very gratefully received.
K.
Have you looked at the document on offline installation? It should address the scenario where you need to work behind a firewall or a proxy.
