Cloudify nodecellar: Task failed 'script_runner.tasks.run' -> RecoverableError('ProcessException: ',)

When I try to install nodecellar with Cloudify, I get the following error:
2015-07-13T17:31:03 LOG <nodecellar> [mongod_a50aa.configure] ERROR: Exception raised on operation [script_runner.tasks.run] invocation
Traceback (most recent call last):
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
result = func(*args, **kwargs)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 58, in run
return process_execution(script_func, script_path, ctx, process)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 74, in process_execution
script_func(script_path, ctx, process)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 143, in execute
stderr_consumer.buffer.getvalue())
How can I fix this problem?

This exception is raised by the Cloudify Script Plugin when a script it ran exits with a non-zero exit code; that exit code is the source of the error.
The script that returned the non-zero code is the one mapped to the configure operation on the mongod node. Which script that is depends on the version of the Nodecellar blueprint you are using.
I can't give a more detailed answer without knowing the specific blueprint version, which Cloudify version you have installed, your provider (local, Vagrant, OpenStack, AWS), and your OS (Ubuntu, CentOS, etc.).
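As a rough illustration of what the plugin is checking for (my own sketch, not the plugin's actual source), the mapped script is executed and any non-zero exit code becomes the ProcessException you see; the script path below is a placeholder for whatever your blueprint maps to mongod's configure operation:

import subprocess

# Placeholder path -- substitute the script your blueprint maps to mongod's configure operation.
proc = subprocess.Popen(['bash', 'scripts/mongo/configure.sh'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
if proc.returncode != 0:
    # This is the condition the Script Plugin wraps into ProcessException / RecoverableError.
    print('script failed with exit code %s:' % proc.returncode)
    print(err.decode('utf-8', 'replace'))

Running the failing script by hand like this (or checking its stderr in the Cloudify logs) usually points at the real cause.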

Related

Facing (2006, "Lost connection to MySQL server at 'reading initial communication packet', system error: 0") in cloud composer

I am facing this issue:
(2006, "Lost connection to MySQL server at 'reading initial communication packet', system error: 0")
on Cloud Composer (composer-1.16.5-airflow-1.10.14). It is an intermittent issue. We have tried cleaning our Airflow metadata and modifying the code (for example, replacing Variable.get() with Jinja templates) to reduce the load on the database, but we still hit this issue daily. We also restarted the scheduler, but the problem started occurring again after two days. The CPU and memory usage graphs for the Airflow database in Composer monitoring are flat, yet the SQL database goes into an unhealthy state after a while.
The whole error message is:
Traceback (most recent call last):
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
return fn()
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 364, in connect
return _ConnectionFairy._checkout(self)
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 778, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 495, in checkout
rec = pool._do_get()
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/pool/impl.py", line 241, in _do_get
return self._create_connection()
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 309, in _create_connection
return _ConnectionRecord(self)
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 440, in __init__
self.__connect(first_connect_check=True)
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 661, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
with_traceback=exc_tb,
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/pool/base.py", line 656, in __connect
connection = pool._invoke_creator(self)
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
return dialect.connect(*cargs, **cparams)
File "/opt/python3.6/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 493, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/opt/python3.6/lib/python3.6/site-packages/MySQLdb/__init__.py", line 85, in Connect
return Connection(*args, **kwargs)
File "/opt/python3.6/lib/python3.6/site-packages/MySQLdb/connections.py", line 208, in __init__
super(Connection, self).__init__(*args, **kwargs2)
_mysql_exceptions.OperationalError: (2006, "Lost connection to MySQL server at 'reading initial communication packet', system error: 0")
There could be multiple reasons; the error itself is very general, which leaves a lot of different possibilities for what could be going wrong. Known causes:
Connections are blocked by firewall rules.
This can also temporarily happen while an instance is being restarted.
Generic GKE failures because nodes with airflow-sqlproxy are overloaded.
Since it's an intermittent issue, we can be fairly sure connections are not being blocked by firewall rules. You might have to check whether any instances have been restarted. Lastly, to avoid generic GKE failures, you can upgrade your machine types to allocate more resources.
Also, as I already mentioned in the comments, you are using an old version of Composer that has been out of support since May 2022. It is always better to upgrade your Composer environment to a version that is still supported by Google.
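For reference, the kind of change described in the question (replacing Variable.get() with a Jinja template so the metadata database is not hit every time the DAG file is parsed) looks roughly like this; the DAG, task, and variable names are placeholders:

from datetime import datetime
from airflow import DAG
from airflow.models import Variable
from airflow.operators.bash_operator import BashOperator  # Airflow 1.10.x import path

dag = DAG('variable_demo', start_date=datetime(2021, 1, 1), schedule_interval=None)

# Parse-time lookup: Variable.get() opens a metadata-DB connection every time the
# scheduler parses this file, which adds load across many DAGs.
parse_time = BashOperator(
    task_id='uses_variable_get',
    bash_command='echo ' + Variable.get('my_var', default_var=''),
    dag=dag)

# Run-time lookup: the Jinja template is only rendered when the task executes,
# so parsing the file does not touch the database.
run_time = BashOperator(
    task_id='uses_jinja_template',
    bash_command='echo {{ var.value.my_var }}',
    dag=dag)

This alone will not fix an unhealthy Cloud SQL instance, but it does reduce the number of connections the schedulers and workers open against it.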

Airflow SQSSensor message filtering

Given the JSON below:
{ "Model" : "level1" }
What is the right combination of message_filtering_match_values and message_filtering_config values? I tried the following, but it fails:
model_operator = SQSSensor(
    task_id='model_operator',
    dag=dag,
    sqs_queue='https://sqs.somewhere/somequeue.fifo',
    aws_conn_id='aws_default',
    message_filtering='jsonpath',
    message_filtering_config='Model[*]',
    message_filtering_match_values=['level1'],
    mode='reschedule')
Error message is:
Broken DAG: [/usr/local/airflow/dags/test_dag.py] Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/airflow/utils/decorators.py", line 94, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 414, in __init__
"arguments were:\n**kwargs: {k}".format(c=self.__class__.__name__, k=kwargs, t=task_id),
airflow.exceptions.AirflowException: Invalid arguments were passed to SQSSensor (task_id: model_operator). Invalid arguments were:
**kwargs: {'message_filtering': 'jsonpath', 'message_filtering_config': 'Model[*]', 'message_filtering_match_values': ['level1']}
The message_filtering / message_filtering_config / message_filtering_match_values parameters were added recently in a PR and released in Amazon provider version 2.2.0.
From the traceback we can see that these parameters are not recognized by the operator, which means you are running an older version of the Amazon provider.
You should upgrade the Amazon provider to the latest version.
pip install apache-airflow-providers-amazon --upgrade
It's also recommended to read the documentation about constraint files.
You didn't mention which Airflow version you are running, nor which version of the Amazon provider, so be sure to read the changelogs if you are upgrading a major version.
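If you want to sanity-check the jsonpath expression locally before redeploying, the provider's jsonpath filtering is built on the jsonpath_ng library (pip install jsonpath-ng if it is not already present), so a quick offline test looks like this; the message body is the one from the question:

import json
from jsonpath_ng import parse

message_body = json.loads('{ "Model" : "level1" }')
matches = [m.value for m in parse('Model').find(message_body)]
print(matches)  # ['level1'] -> compared against message_filtering_match_values

Note that this test uses the plain path Model; whether Model[*] also yields a match depends on how jsonpath_ng handles the wildcard on a scalar value, so it is worth checking both forms this way.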

Eucalyptus 4.4.4 Eucaconsole 502 Bad Gateway / WebOb Version Conflict

I've completed a manual installation of Eucalyptus 4.4.4, but when I try to use a web browser to reach the eucaconsole (running on the same host as the CLC/UFS) I get a 502 Bad Gateway error.
I'm focusing on this error in eucaconsole.log. What does it mean, and how can I update WebOb?
pkg_resources.VersionConflict: (WebOb 1.2.3 (/usr/lib/python2.7/site-packages), Requirement.parse('WebOb>=1.3.1'))
Eucaconsole_startup.log:
Traceback (most recent call last):
File "/bin/eucaconsole", line 106, in <module>
daemonize(start_console)
File "/bin/eucaconsole", line 61, in daemonize
func()
File "/bin/eucaconsole", line 73, in start_console
load_entry_point('pyramid', 'console_scripts', 'pserve')(args)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 378, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2566, in load_entry_point
return ep.load()
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2259, in load
if require: self.require(env, installer)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2272, in require
working_set.resolve(self.dist.requires(self.extras),env,installer)))
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 630, in resolve
raise VersionConflict(dist,req) # XXX put more info here
pkg_resources.VersionConflict: (WebOb 1.2.3 (/usr/lib/python2.7/site-packages), Requirement.parse('WebOb>=1.3.1'))
I've edited this post to remove info and focus on the clear WebOb version error.
This issue was solved below. Adding a note that the downlevel python-webob version is actually a requirement of the midonet client installation, so if you expect to run VPCMIDO and have the midonet gateway on your CLC, you'll have to run your eucaconsole elsewhere.
This was resolved by removing an older python-webob package to ensure that the newer python-webob1.4 package from epel was used.
Related issue in github:
https://github.com/Corymbia/eucalyptus/issues/124
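Once the old package is removed, a quick check from a Python shell on the CLC host should confirm which WebOb distribution pkg_resources now resolves (just a diagnostic sketch, not part of the documented fix):

import pkg_resources

# Raises the same VersionConflict seen in eucaconsole_startup.log if the old
# WebOb 1.2.3 egg is still the one being picked up.
pkg_resources.require('WebOb>=1.3.1')
dist = pkg_resources.get_distribution('WebOb')
print('WebOb %s from %s' % (dist.version, dist.location))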
SELinux problem. Run the following on your CLC/UFS machine:
setsebool -P httpd_can_network_connect 1
It's better to flush your iptables during installation.

Robot framework, Sikuli hello_world demo script is failing?

I have installed Robot Framework 2.8.7 on a Solaris server and added the Sikuli library to it. When I try to run the "Hello world" demo script, I get the following error.
bash-3.2# pybot /robot/robotframework-SikuliLibrary-master/demo/hello_world/testsuite_sikuli_demo.txt
[ WARN ] Test get_keyword_names failed! Connecting remote server at http://127.0.0.1:42821/ failed: <Fault 0: 'Failed to invoke method get_keyword_names in class org.robotframework.remoteserver.servlet.ServerMethods: java.lang.RuntimeException'>
[ ERROR ] Error in file '/robot/robotframework-SikuliLibrary-master/demo/hello_world/testsuite_sikuli_demo.txt': Initializing test library 'SikuliLibrary' with no arguments failed: Failed to get_keyword_names!
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/SikuliLibrary/sikuli.py", line 41, in __init__
self.remote = self._connect_remote_library()
File "/usr/lib/python2.7/site-packages/SikuliLibrary/sikuli.py", line 138, in _connect_remote_library
self._test_get_keyword_names(remote)
File "/usr/lib/python2.7/site-packages/SikuliLibrary/sikuli.py", line 155, in _test_get_keyword_names
raise RuntimeError('Failed to get_keyword_names!')
I have done the same setup on a Windows machine and it works fine. The Python version used on Solaris is 2.6. Can you let me know how to resolve this?
Thanks

saltstack dotnet install fails on windows minion

I'm trying to install applications on a Windows Server 2012 R2 minion; specifically, I'm interested in the MS management and .NET frameworks. I can install apps like WinSCP and Firefox successfully, so the basic setup works.
Install of .NET (dotnet.sls) gives me this:
# salt 'minion3' pkg.install dotnet
minion3:
----------
dotnet:
Unable to locate package dotnet
The .sls points to the MS download site, where the actual file can be downloaded.
On the minion side I've got:
2016-04-13 11:41:27 [salt.loaded.int.module.cmdmod][INFO ] Executing command 'Powershell -NonInteractive "Import-Module ServerManager"' in directory 'C:\\Windows\\system32\\config\\systemprofile'
2016-04-13 11:41:28 [salt.loaded.int.module.win_pkg][ERROR ] Unable to locate package dotnet
And asking for the available versions (pkg.available_version dotnet) gives me:
minion3:
The minion function caused an exception: Traceback (most recent call last):
File "c:\salt\bin\lib\site-packages\salt\minion.py", line 1071, in _thread_return
return_data = func(*args, **kwargs)
File "c:\salt\bin\lib\site-packages\salt\modules\win_pkg.py", line 103, in latest_version
latest_available = _get_latest_pkg_version(pkg_info)
File "c:\salt\bin\lib\site-packages\salt\modules\win_pkg.py", line 1088, in _get_latest_pkg_version
return sorted(pkginfo, cmp=_reverse_cmp_pkg_versions).pop()
IndexError: pop from empty list
None of the other state files I've tried give the errors above.
So, what is going on, and how do I correct it?
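For what it's worth, the traceback boils down to the pattern sketched below (my paraphrase, not Salt's actual code): win_pkg gathers the versions defined for the requested package in the winrepo data, and since nothing there matches 'dotnet', the list is empty and .pop() fails:

# Paraphrase of what win_pkg's _get_latest_pkg_version ends up doing here.
pkginfo = []             # no winrepo definition matched the package name 'dotnet'
sorted(pkginfo).pop()    # IndexError: pop from empty list

So both errors point the same way: as far as the minion is concerned, the winrepo data it is using contains no package definition named dotnet, unlike the winscp and firefox packages that install fine.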

Resources