Plone test-runner errors

Can anyone please explain what could be wrong here?
I have a fresh Plone 4.1.4 installation via buildout and a fresh out-of-the-box Plone site (no work has been done on the site). Running the full test suite with ./bin/test --all (just out of curiosity) gives lots of errors like the following:
Mik#S-linux:/Plone414/PLONE414/zinstance> ./bin/test --all
./bin/test:239: DeprecationWarning: zope.testing.testrunner is deprecated in favour of zope.testrunner.
/Plone414/PLONE414/buildout-cache/eggs/zope.testing-3.9.7-py2.6.egg/zope/testing/testrunner/formatter.py:28: DeprecationWarning: zope.testing.exceptions is deprecated in favour of zope.testrunner.exceptions
from zope.testing.exceptions import DocTestFailureException
Running Testing.ZopeTestCase.layer.ZopeLite tests:
Set up Testing.ZopeTestCase.layer.ZopeLite in 0.071 seconds.
Running:
8/44 (18.2%)
Failure in test testDateTime (Products.DocFinderTab.tests.testAnalyse.TestAnalyse)
Traceback (most recent call last):
File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
testMethod()
File "/Plone414/PLONE414/buildout-cache/eggs/Products.DocFinderTab-1.0.5-py2.6.egg/Products/DocFinderTab/tests/testAnalyse.py", line 198, in testDateTime
self.assertEqual(self.ob.getdoc('_DateTime').Type(), 'DateTime')
File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 350, in failUnlessEqual
(msg or '%r != %r' % (first, second))
AssertionError: 'DateTime instance' != 'DateTime'
Ran 44 tests with 1 failures and 0 errors in 1.376 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
Tear down Testing.ZopeTestCase.layer.ZopeLite in 0.000 seconds.
Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Running:
2/47 (4.3%)
Failure in test test_search_modules (plone.reload.tests.test_code.TestSearch)
Traceback (most recent call last):
File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
testMethod()
File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 33, in test_search_modules
self.assertTrue(found)
File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 325, in failUnless
if not expr: raise self.failureException, msg
AssertionError
5/47 (10.6%)
Error in test test_check_mod_times_change (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
testMethod()
File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 82, in test_check_mod_times_change
our_entry = MOD_TIMES[our_package]
KeyError: '/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/__init__.pyc'
8/47 (17.0%)
Failure in test test_get_mod_times (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
testMethod()
File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 70, in test_get_mod_times
self.assertTrue(our_package in times)
File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 325, in failUnless
if not expr: raise self.failureException, msg
AssertionError
10/47 (21.3%)
Error in test test_reload_code_change (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
testMethod()
File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 98, in test_reload_code_change
our_entry = MOD_TIMES[our_package]
KeyError: '/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/__init__.pyc'
Ran 47 tests with 2 failures and 2 errors in 0.102 seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Total: 91 tests, 3 failures, 2 errors in 1.682 seconds.

This isn't a supported way to run the tests. Some of the tests for the components of Plone change global state and then do not clean up after themselves, causing failures in later tests that depend on that state being a certain way. The environment we use to develop Plone, buildout.coredev, uses the plone.recipe.alltests buildout recipe to set up a script that can run all the tests successfully by isolating some packages from others.
This is of course not ideal, but it's a pragmatic solution until someone does the work to find and solve the test isolation problems.
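For reference, a minimal sketch of that kind of setup (the part name and egg list here are illustrative assumptions, not the actual buildout.coredev configuration; see the plone.recipe.alltests documentation for the full set of options):
[buildout]
parts = alltests

[alltests]
recipe = plone.recipe.alltests
eggs = ${instance:eggs}
Running the generated bin/alltests script then executes the test suites in isolated groups rather than in one flat run.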

Related

PETScVector could not be imported

I am trying to use MPI in OpenMDAO. I get the following error. It seems to come from this. But I don't understand what is causing this error or how to fix it.
Traceback (most recent call last):
File "/home/users/RBMDO/Power_Subsystem.py", line 494, in <module>
prob.setup()
File "/home/users/.conda/envs/my-fastenv/lib/python3.8/site-packages/openmdao/core/problem.py", line 874, in setup
raise ValueError(self.msginfo +
ValueError: Problem: Attempting to run in parallel under MPI but PETScVector could not be imported.
(The same traceback is printed once by each of the four MPI processes.)
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
I verified that MPI and PETSc are working in my environment: I ran the small petsc4py script given here, and it runs successfully and gives the expected result.
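A quick way to get closer to what OpenMDAO itself does at setup time is to attempt the same import under mpirun. This is a minimal sketch; the check_petsc.py file name is mine, and the openmdao.vectors.petsc_vector module path matches recent OpenMDAO releases but may differ in other versions:
# check_petsc.py -- run as: mpirun -n 2 python check_petsc.py
from mpi4py import MPI

try:
    # the import OpenMDAO attempts when it detects an MPI run
    from openmdao.vectors.petsc_vector import PETScVector
    print("rank %d: PETScVector imported OK" % MPI.COMM_WORLD.rank)
except ImportError as err:
    print("rank %d: import failed: %s" % (MPI.COMM_WORLD.rank, err))
If the import fails only under mpirun, the problem is likely in how the environment is set up on the compute node rather than in the packages themselves.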
For completeness, I am running a MINLP optimization with AMIEGO and my job script is below.
#SBATCH -N 1
#SBATCH --ntasks-per-node=4
#SBATCH -c 1
#SBATCH -p batch
#SBATCH -C skylake
print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
module purge || print_error_and_exit "No 'module' command"
module load lang/Anaconda3/2020.02
module load mpi/OpenMPI/3.1.4-GCC-8.3.0
conda activate my-fastenv
mpirun -n $SLURM_NTASKS python /RBMDO/Power_Subsystem.py
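As an aside, since this script mixes a module-provided OpenMPI with a conda environment, it may be worth confirming that the mpi4py inside the environment was built against the same MPI the job launches with. A quick check, assuming mpi4py is importable:
# mpi_check.py -- compare the output with the OpenMPI version from 'module load'
from mpi4py import MPI
print(MPI.Get_library_version())
If this reports a different MPI than mpi/OpenMPI/3.1.4-GCC-8.3.0, that mismatch alone can break petsc4py under mpirun.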
Any idea what might be causing this error?

AttributeError: module 'select' has no attribute 'poll'

I'm running eventlet.monkey_patch() while trying to spin up a Flask server that uses Flask-SocketIO. This is the traceback:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/alhasan/MeetupPoint/venv/lib/python3.6/site-packages/werkzeug/serving.py", line 777, in inner
srv.serve_forever()
File "/home/alhasan/MeetupPoint/venv/lib/python3.6/site-packages/werkzeug/serving.py", line 612, in serve_forever
HTTPServer.serve_forever(self)
File "/usr/lib64/python3.6/socketserver.py", line 232, in serve_forever
with _ServerSelector() as selector:
File "/usr/lib64/python3.6/selectors.py", line 348, in __init__
self._poll = select.poll()
AttributeError: module 'select' has no attribute 'poll'
I tried using monkey_patch because I had previously encountered the following error:
RuntimeError: You need to use the eventlet server. See the Deployment section of the documentation for more information.
I have eventlet installed.
...
eventlet==0.23.0
Flask==0.12.2
Flask-Migrate==2.1.1
Flask-Script==2.0.6
Flask-SocketIO==3.0.1
...
Is there a fix for this?
My initial problem was that my server returns bad requests every time I try to emit a message from the client, but the other direction works. Would really appreciate any kind of solution. :)
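For context, eventlet's monkey patching deliberately replaces the standard select module, and eventlet's green version has no poll(), which is exactly what the AttributeError above reflects: werkzeug's threaded dev server still tries to use select.poll. The usual pattern is to patch before any other import and start the app through Flask-SocketIO so that eventlet's server is used instead of werkzeug's. A minimal sketch, assuming a standard Flask-SocketIO app:
# app.py -- patch first, before anything else imports select/socket/threading
import eventlet
eventlet.monkey_patch()

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, async_mode='eventlet')

if __name__ == '__main__':
    # socketio.run() starts eventlet's WSGI server rather than werkzeug's
    socketio.run(app, host='127.0.0.1', port=5000)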

iprof and iprof_totals profiling error

I get this error after running:
openmdao iprof x.py
or
openmdao iprof_totals x.py
in my terminal. Any idea why this happens? Is there a simple sample where iprof works smoothly?
Traceback (most recent call last):
File "/home/user/miniconda3/bin/openmdao", line 11, in
sys.exit(openmdao_cmd())
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/utils/om.py", line 403, in openmdao_cmd
options.executor(options)
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/devtools/iprofile.py", line 373, in _iprof_totals_exec
_iprof_py_file(options)
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/devtools/iprofile.py", line 429, in _iprof_py_file
_finalize_profile()
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/devtools/iprofile.py", line 183, in _finalize_profile
qfile, qclass, qname = find_qualified_name(filename, int(line), cache, full=False)
File "/home/user/miniconda3/lib/python3.6/site-packages/openmdao/devtools/iprof_utils.py", line 73, in find_qualified_name
with open(filename, 'Ur') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'packages/openmdao/jacobians/jacobian.py'
I looked into this and there is currently a bug when using 'openmdao iprof' and 'openmdao iprof_totals' on a version of OpenMDAO that was not installed from an OpenMDAO repository using 'pip install -e'. I put a story in our bug tracker to fix it.
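Until that fix lands, a workaround consistent with the above would be to profile against a source checkout installed in editable mode, so the file paths the profiler records actually exist on disk (a sketch, assuming the standard OpenMDAO GitHub repository):
git clone https://github.com/OpenMDAO/OpenMDAO.git
cd OpenMDAO
pip install -e .
openmdao iprof x.py        # or: openmdao iprof_totals x.py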

Cloudify nodecellar: Task failed 'script_runner.tasks.run' -> RecoverableError('ProcessException: ',)

When I try to install nodecellar with Cloudify, I get the following error:
2015-07-13T17:31:03 LOG <nodecellar> [mongod_a50aa.configure] ERROR: Exception raised on operation [script_runner.tasks.run] invocation
Traceback (most recent call last):
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
result = func(*args, **kwargs)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 58, in run
return process_execution(script_func, script_path, ctx, process)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 74, in process_execution
script_func(script_path, ctx, process)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 143, in execute
stderr_consumer.buffer.getvalue())
How can I fix this problem?
This exception is raised by the Cloudify Script Plugin when a script you ran exits with a non-zero error code. Here is the source of that error.
The script that returned the non-zero code is the one mapped to the configure operation on the mongod node. Which script that is depends on the version of the Nodecellar blueprint you are using.
I can't give a more detailed answer without knowing the specific blueprint version, which Cloudify version you have installed, details about your provider (local, Vagrant, OpenStack, AWS), and the OS (Ubuntu, CentOS, etc.).
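For orientation, this is roughly how a blueprint maps a script to the configure operation that failed here. A hypothetical excerpt in the Cloudify DSL; the node type and script path are illustrative, not taken from the actual Nodecellar blueprint:
node_templates:
  mongod:
    type: cloudify.nodes.DBMS
    interfaces:
      cloudify.interfaces.lifecycle:
        # the script the error points at is whatever is mapped here
        configure: scripts/mongo/install-mongo.sh
Whatever script is mapped there exited with a non-zero code; its stdout/stderr in the operation log is the place to look for the real cause.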

3.3 -> 4.1 migration fails, busted RAMCache AttributeError: 'RAMCache' object has no attribute '_cacheId'

After a 3.3 -> 4.1 migration I get this exception on the resulting page:
File "/fast/buildout-cache/eggs/plone.app.viewletmanager-2.0.2-py2.6.egg/plone/app/viewletmanager/manager.py", line 85, in render
return u'\n'.join([viewlet.render() for viewlet in self.viewlets])
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/volatile.py", line 281, in replacement
cached_value = cache.get(key, _marker)
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/ram.py", line 124, in get
return self.__getitem__(key)
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/ram.py", line 166, in __getitem__
MARKER)
File "/fast/buildout-cache/eggs/zope.ramcache-1.0-py2.6.egg/zope/ramcache/ram.py", line 107, in query
s = self._getStorage()
File "/fast/buildout-cache/eggs/zope.ramcache-1.0-py2.6.egg/zope/ramcache/ram.py", line 122, in _getStorage
cacheId = self._cacheId
AttributeError: 'RAMCache' object has no attribute '_cacheId'
It looks like the RAMCache object is in an invalid state.
Before this, I also see the following in the logs:
2012-06-21 16:42:54 INFO plone.app.upgrade Ran upgrade step: Miscellaneous
2012-06-21 16:42:54 INFO plone.app.upgrade End of upgrade path, migration has finished
2012-06-21 16:42:54 INFO plone.app.upgrade Your Plone instance is now up-to-date.
2012-06-21 16:43:02 ERROR txn.4553572352 Error in tpc_abort() on manager <Connection at 10be48490>
Traceback (most recent call last):
File "/fast/buildout-cache/eggs/transaction-1.1.1-py2.6.egg/transaction/_transaction.py", line 484, in _cleanup
rm.tpc_abort(self)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZODB/Connection.py", line 730, in tpc_abort
self._storage.tpc_abort(transaction)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/ClientStorage.py", line 1157, in tpc_abort
self._server.tpc_abort(id(txn))
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/ServerStub.py", line 255, in tpc_abort
self.rpc.call('tpc_abort', id)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/zrpc/connection.py", line 768, in call
raise inst # error raised by server
OSError: [Errno 2] No such file or directory: '/fast/xxx-2012/var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x07/0xdc/0x55/0x00FWigqp.tmp-'
2012-06-21 16:43:03 ERROR Zope.SiteErrorLog 1340286183.10.000607291180815 http://localhost:9666/xxx/##plone-upgrade
Traceback (innermost last):
Module ZPublisher.Publish, line 134, in publish
Module Zope2.App.startup, line 301, in commit
Module transaction._manager, line 89, in commit
Module transaction._transaction, line 329, in commit
Module transaction._transaction, line 446, in _commitResources
Module ZODB.Connection, line 781, in tpc_vote
Module ZEO.ClientStorage, line 1098, in tpc_vote
Module ZEO.ClientStorage, line 929, in _check_serials
IOError: [Errno 2] No such file or directory: '/fast/xxx-2012/var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x07/0xdd/0xca/0x009kWNYQ.tmp-'
Why might this happen?
Any pointers on how to reinitialize the RAMCache object?
RAMCache is first referenced by FaviconViewlet, which uses the memoize decorator, and that is what triggers this error.
Well, your migration obviously did not complete successfully, based on the traceback. So I would focus on figuring out why it failed, rather than working around things like the broken RAMCache, which are likely a result of the migration not having completed.
The traceback indicates that it broke while trying to abort the transaction, so you'll probably need to do some debugging to determine what caused the abort, since that's not indicated in the logs you pasted.
