I am trying to use MPI in OpenMDAO. I get the following error, which appears to be raised by the PETScVector import check in prob.setup(), but I don't understand what is causing it or how to fix it.
Traceback (most recent call last):
File "/home/users/RBMDO/Power_Subsystem.py", line 494, in <module>
prob.setup()
File "/home/users/.conda/envs/my-fastenv/lib/python3.8/site-packages/openmdao/core/problem.py", line 874, in setup
raise ValueError(self.msginfo +
ValueError: Problem: Attempting to run in parallel under MPI but PETScVector could not be imported.
(the same traceback is printed by each of the four MPI ranks)
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
I verified that MPI and petsc4py are working in my environment by running the small petsc4py script given here. It runs successfully and gives the expected result.
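For reference, that check looks roughly like the following (a minimal sketch, not necessarily the exact script from the link):

# Minimal petsc4py sanity check: create a distributed vector and sum it on every rank.
from mpi4py import MPI
from petsc4py import PETSc

comm = MPI.COMM_WORLD
x = PETSc.Vec().create(comm=comm)
x.setSizes(10)        # global size, distributed over the ranks
x.setFromOptions()
x.set(1.0)
print(f"rank {comm.Get_rank()}: sum = {x.sum()}")   # expect 10.0 on every rank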
For completeness, I am running a MINLP optimization with AMIEGO and my job script is below.
#SBATCH -N 1
#SBATCH --ntasks-per-node=4
#SBATCH -c 1
#SBATCH -p batch
#SBATCH -C skylake
print_error_and_exit() { echo "***ERROR*** $*"; exit 1; }
module purge || print_error_and_exit "No 'module' command"
module load lang/Anaconda3/2020.02
module load mpi/OpenMPI/3.1.4-GCC-8.3.0
conda activate my-fastenv
mpirun -n $SLURM_NTASKS python /RBMDO/Power_Subsystem.py
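As an extra check, the same mpirun line can be pointed at a tiny script like the one below (a sketch) to confirm which MPI library the mpi4py in the conda environment is actually linked against:

# Print the MPI implementation each rank is using, to compare with the loaded OpenMPI module.
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()
print(f"rank {rank}: {MPI.Get_library_version().splitlines()[0]}")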
Any idea what might be causing this error?
Related
I recently compiled a Python 3 file using PyInstaller. When I tried to run
./main
it failed with the following message:
Fatal Python error: (pygame parachute) Segmentation Fault
Traceback (most recent call last):
File "pygame/pkgdata.py", line 67, in getResource
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/_MEIkMQ7na/pygame/freesansbold.ttf'
Aborted (core dumped)
I do use the pygame module.
Try this (use ; instead of : as the separator between source and destination if you are on Windows):
pyinstaller -F --add-data="<PATH_OF_FILE_IN_YOUR_ENV>/pygame/freesansbold.ttf:pygame" main.py
Basically you need to find freesansbold.ttf in your virtual environment and explicitly add it to the bundle so pygame can load it at runtime.
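If you are not sure where that file lives, a quick way to print its path (a small sketch) is:

# Print the location of the font file bundled with the installed pygame package.
import os
import pygame

print(os.path.join(os.path.dirname(pygame.__file__), "freesansbold.ttf"))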
I'm running eventlet.monkey_patch() while trying to spin up a Flask server that uses Flask-SocketIO. This is the traceback:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/alhasan/MeetupPoint/venv/lib/python3.6/site-packages/werkzeug/serving.py", line 777, in inner
srv.serve_forever()
File "/home/alhasan/MeetupPoint/venv/lib/python3.6/site-packages/werkzeug/serving.py", line 612, in serve_forever
HTTPServer.serve_forever(self)
File "/usr/lib64/python3.6/socketserver.py", line 232, in serve_forever
with _ServerSelector() as selector:
File "/usr/lib64/python3.6/selectors.py", line 348, in __init__
self._poll = select.poll()
AttributeError: module 'select' has no attribute 'poll'
I tried using monkey_patch, as previously I encountered the following error:
RuntimeError: You need to use the eventlet server. See the Deployment section of the documentation for more information.
I have eventlet installed.
...
eventlet==0.23.0
Flask==0.12.2
Flask-Migrate==2.1.1
Flask-Script==2.0.6
Flask-SocketIO==3.0.1
...
Is there a fix for this?
My initial problem was that my server returns bad requests every time I try to emit a message from the client, but the other direction works. Would really appreciate any kind of solution. :)
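For reference, a minimal version of what I am trying to do looks roughly like this (a sketch, not my exact code):

import eventlet
eventlet.monkey_patch()  # patch the standard library before anything else is imported

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    # socketio.run() picks the eventlet server when eventlet is installed;
    # the plain Werkzeug dev server is what raises the select.poll error above.
    socketio.run(app)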
I am trying to install the AutoIt library to use it with Robot Framework, but I keep encountering this error:
Don't think we need to unregister the old one...
%SYSTEMROOT%\system32\regsvr32.exe /S C:\xxx xxx\Python27\Lib\site-packages\AutoItLibrary\lib\AutoItX3.dll
Traceback (most recent call last):
File "setup.py", line 70, in <module>
subprocess.check_call(cmd, shell=True)
File "C:\Python27\lib\subprocess.py", line 504, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '%SYSTEMROOT%\system32\regsvr32.exe /S C:\xxx xxx\Python27\Lib\site-packages\AutoItLibrary\lib\AutoItX3.dll' returned non-zero exit status 1
I have tried all the solutions found so far on Stack Overflow:
Run cmd as Administrator
Set HOMEDRIVE environment variable
Manually run the command %SYSTEMROOT%\system32\regsvr32.exe /S C:\Python27\Lib\site-packages\AutoItLibrary\lib\AutoItX3.dll (which throws no error)
Use the fixed setup.py script from https://github.com/qitaos/robotframeworkautoitlibrary/issues/30 (which throws this error: subprocess.CalledProcessError: Command 'python "C:\xxx xxx\Python27\Lib\site-packages\win32com\client\makepy.py" "C:\xxx xxx\Python27\Lib\site-packages\AutoItLibrary\lib\AutoItX3.dll"' returned non-zero exit status 1)
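The makepy call from that last item can also be re-run by hand to see the underlying COM error directly (a rough sketch; adjust the paths to your install):

# Re-run the COM wrapper generation for AutoItX3.dll so the real error output is visible.
import subprocess
import sys

subprocess.check_call([
    sys.executable,
    r"C:\Python27\Lib\site-packages\win32com\client\makepy.py",
    r"C:\Python27\Lib\site-packages\AutoItLibrary\lib\AutoItX3.dll",
])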
Does anyone have other solutions?
(If not, I will go for SikuliX, even though AutoIt is the best for Windows automation.)
Thanks!
When I try to install nodecellar with Cloudify, I am getting the following error:
2015-07-13T17:31:03 LOG <nodecellar> [mongod_a50aa.configure] ERROR: Exception raised on operation [script_runner.tasks.run] invocation
Traceback (most recent call last):
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/cloudify/decorators.py", line 125, in wrapper
result = func(*args, **kwargs)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 58, in run
return process_execution(script_func, script_path, ctx, process)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 74, in process_execution
script_func(script_path, ctx, process)
File "/root/cloudify.host_dba5c/env/local/lib/python2.7/site-packages/script_runner/tasks.py", line 143, in execute
stderr_consumer.buffer.getvalue())
How can I fix this problem?
This exception is raised by the Cloudify Script Plugin when a script you ran exits with a non-zero exit code; as the traceback shows, it is raised from script_runner/tasks.py.
The script that returned the non-zero code is the one mapped to the configure operation of the mongod node. Which script that is depends on the version of the Nodecellar blueprint you are using.
I can't give a more detailed answer without information about the specific blueprint version, which Cloudify version you have installed, your provider (local, Vagrant, OpenStack, AWS), and OS (Ubuntu, CentOS, etc.).
Can anyone please explain what could be wrong?
I have a fresh Plone 4.1.4 installation via buildout and a fresh out-of-the-box Plone site created (no work has been done on the site). After running the ./bin/test --all test suite (just out of curiosity), it gives lots of the following errors:
Mik#S-linux:/Plone414/PLONE414/zinstance>
./bin/test --all
./bin/test:239: DeprecationWarning: zope.testing.testrunner is deprecated in favour of zope.testrunner.
/Plone414/PLONE414/buildout-cache/eggs/zope.testing-3.9.7-py2.6.egg/zope/testing/testrunner/formatter.py:28: DeprecationWarning: zope.testing.exceptions is deprecated in favour of zope.testrunner.exceptions
  from zope.testing.exceptions import DocTestFailureException
Running Testing.ZopeTestCase.layer.ZopeLite tests:
  Set up Testing.ZopeTestCase.layer.ZopeLite in 0.071 seconds.
  Running:
    8/44 (18.2%)
Failure in test testDateTime (Products.DocFinderTab.tests.testAnalyse.TestAnalyse)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/Products.DocFinderTab-1.0.5-py2.6.egg/Products/DocFinderTab/tests/testAnalyse.py", line 198, in testDateTime
    self.assertEqual(self.ob.getdoc('_DateTime').Type(), 'DateTime')
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 350, in failUnlessEqual
    (msg or '%r != %r' % (first, second))
AssertionError: 'DateTime instance' != 'DateTime'
  Ran 44 tests with 1 failures and 0 errors in 1.376 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
  Tear down Testing.ZopeTestCase.layer.ZopeLite in 0.000 seconds.
  Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
  Running:
    2/47 (4.3%)
Failure in test test_search_modules (plone.reload.tests.test_code.TestSearch)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 33, in test_search_modules
    self.assertTrue(found)
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 325, in failUnless
    if not expr: raise self.failureException, msg
AssertionError
    5/47 (10.6%)
Error in test test_check_mod_times_change (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 82, in test_check_mod_times_change
    our_entry = MOD_TIMES[our_package]
KeyError: '/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/__init__.pyc'
    8/47 (17.0%)
Failure in test test_get_mod_times (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 70, in test_get_mod_times
    self.assertTrue(our_package in times)
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 325, in failUnless
    if not expr: raise self.failureException, msg
AssertionError
    10/47 (21.3%)
Error in test test_reload_code_change (plone.reload.tests.test_code.TestTimes)
Traceback (most recent call last):
  File "/Plone414/PLONE414/Python-2.6/lib/python2.6/unittest.py", line 279, in run
    testMethod()
  File "/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/tests/test_code.py", line 98, in test_reload_code_change
    our_entry = MOD_TIMES[our_package]
KeyError: '/Plone414/PLONE414/buildout-cache/eggs/plone.reload-2.0-py2.6.egg/plone/reload/__init__.pyc'
  Ran 47 tests with 2 failures and 2 errors in 0.102 seconds.
Tearing down left over layers:
  Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Total: 91 tests, 3 failures, 2 errors in 1.682 seconds.
This isn't a supported way to run the tests. Some of the tests for Plone's components change global state and do not clean up after themselves, causing failures in tests that run later and depend on that state being a certain way. The environment we use to develop Plone, buildout.coredev, uses the plone.recipe.alltests buildout recipe to set up a script that can run all the tests successfully by isolating some packages from others.
This is of course not ideal, but it's a pragmatic solution until someone does the work to find and solve the test isolation problems.