I am using Pyomo 5.1.1 (CPython 3.6.0 on Linux 3.16.0-4-amd64) under Python 3.6, and I get an error message when I build an Expression in a model using a summation.
Here is a minimal example:
from pyomo.environ import *
from pyomo.opt import SolverFactory
model=ConcreteModel()
model.H=RangeSet(0,23)
model.x=Var(model.H)
E=summation(model.x)
I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/b67777/anaconda3/lib/python3.6/site-packages/pyomo/core/base/util.py", line 86, in summation
ans += item
File "/home/b67777/anaconda3/lib/python3.6/site-packages/pyomo/core/base/numvalue.py", line 537, in __iadd__
return generate_expression(_iadd,self,other)
File "/home/b67777/anaconda3/lib/python3.6/site-packages/pyomo/core/base/expr_coopr3.py", line 977, in generate_expression
_self = _generate_expression__clone_if_needed(_self, 1)
File "/home/b67777/anaconda3/lib/python3.6/site-packages/pyomo/core/base/expr_coopr3.py", line 918, in _generate_expression__clone_if_needed
% ( getrefcount(obj) - UNREFERENCED_EXPR_COUNT, ))
RuntimeError: Expression entered generate_expression() with too few references (0<0); this is indicative of a SERIOUS ERROR in the expression reuse detection scheme.
"
I get the same error if I replace the summation with a loop over h in model.H.
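For reference, the loop version is equivalent to this sketch (reusing the model from the example above) and raises the same RuntimeError:

# incremental sum over the index set
E = 0
for h in model.H:
    E = E + model.x[h]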
This bug has also been reported here: https://github.com/Pyomo/pyomo/issues/103
Is there a simple way to fix this, other than stepping back to an older version of Python?
Thank you very much for your help,
Paulin
Pyomo only supports Python 2.6, 2.7, 3.3, 3.4, and 3.5.
Python 3.6 changed the internal call stack, which invalidated the internal "magic numbers" that Pyomo uses for detecting when an expression is being pointed to by extra variables (i.e., it is potentially being reused in multiple expressions - something that is not allowed by the Pyomo expression trees). The developers are working on a fix (in the pyomo4-expressions branch). Until that fix is merged back into master and released, the only alternative is to install one of the supported Python versions.
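Concretely, the detection compares sys.getrefcount() of a new expression against a hard-coded baseline; roughly like this illustrative sketch (not Pyomo's actual code, and the baseline value here is made up):

import sys

UNREFERENCED_EXPR_COUNT = 9  # hypothetical baseline for one Python build

def clone_if_needed(expr):
    # If anything beyond the expression machinery holds a reference,
    # the expression may already be in use and must be cloned first.
    extra_refs = sys.getrefcount(expr) - UNREFERENCED_EXPR_COUNT
    if extra_refs > 0:
        return expr.clone()
    return expr

Python 3.6's new calling convention changed the number of transient references on the call stack, so the baseline no longer matched and the check reported the impossible reference count seen in the error above.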
Update [3 April 17]: The fix was merged back into master on 7 March 2017. Pyomo should support Python 3.6 in the next patch release.
Update [14 May 17]: Pyomo 5.2 has been released, which provides full support for Python 3.6.
I have just upgraded my Ubuntu, and now I get this error in Spyder:
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
Traceback (most recent call last):
File "/usr/bin/spyder", line 33, in <module>
sys.exit(load_entry_point('spyder==4.2.1', 'gui_scripts', 'spyder')())
File "/usr/lib/python3/dist-packages/spyder/app/start.py", line 213, in main
mainwindow.main(options, args)
File "/usr/lib/python3/dist-packages/spyder/app/mainwindow.py", line 3624, in main
mainwindow = create_window(app, splash, options, args)
File "/usr/lib/python3/dist-packages/spyder/app/mainwindow.py", line 3482, in create_window
main.setup()
File "/usr/lib/python3/dist-packages/spyder/app/mainwindow.py", line 803, in setup
self.completions = CompletionManager(self)
File "/usr/lib/python3/dist-packages/spyder/plugins/completion/plugin.py", line 97, in __init__
plugin_client = Plugin(self.main)
File "/usr/lib/python3/dist-packages/spyder/plugins/completion/kite/plugin.py", line 50, in __init__
self.installer = KiteInstallerDialog(
File "/usr/lib/python3/dist-packages/spyder/plugins/completion/kite/widgets/install.py", line 287, in __init__
self._integration_widget = KiteIntegrationInfo(self)
File "/usr/lib/python3/dist-packages/spyder/plugins/completion/kite/widgets/install.py", line 58, in __init__
image = image.scaled(image_width, image_height, Qt.KeepAspectRatio,
TypeError: arguments did not match any overloaded call:
scaled(self, int, int, aspectRatioMode: Qt.AspectRatioMode = Qt.IgnoreAspectRatio, transformMode: Qt.TransformationMode = Qt.FastTransformation): argument 1 has unexpected type 'float'
scaled(self, QSize, aspectRatioMode: Qt.AspectRatioMode = Qt.IgnoreAspectRatio, transformMode: Qt.TransformationMode = Qt.FastTransformation): argument 1 has unexpected type 'float'
All the solutions I have found deal with specific applications developed by different users, not with problems related to the Ubuntu upgrade.
As a quick workaround, comment out the following lines in
/usr/lib/python3/dist-packages/spyder/plugins/completion/kite/widgets/install.py:
Line 58:
#image = image.scaled(image_width, image_height, Qt.KeepAspectRatio, Qt.SmoothTransformation)
Line 143:
#install_gif.setScaledSize(QSize(image_width, image_height))
Lines 244-247:
#copilot_label.setPixmap(
#    copilot_image.scaled(image_width, image_height,
#                         Qt.KeepAspectRatio,
#                         Qt.SmoothTransformation))
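Alternatively, since the underlying problem is that Python 3.10 dropped the implicit float-to-int conversion these PyQt5 calls relied on, casting the sizes to int should also work. A minimal sketch of the pattern (my own suggestion, not the official Spyder patch):

from PyQt5.QtCore import Qt
from PyQt5.QtGui import QImage

image = QImage(200, 100, QImage.Format_RGB32)
image_width, image_height = 150.0, 75.0  # floats, as Spyder computes them

# Passing floats raises TypeError on Python 3.10+; cast explicitly:
image = image.scaled(int(image_width), int(image_height),
                     Qt.KeepAspectRatio, Qt.SmoothTransformation)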
(Spyder maintainer here) The Spyder package provided by Ubuntu 22.04 (4.2.1, released in December 2020) is broken with the Python version that comes with it (3.10).
However, this error is fixed in Spyder 5.3.0 (released in March 2022) and later versions. So, to solve this problem, please uninstall the Spyder that comes with Ubuntu and install it with pip in a virtualenv, as explained in our documentation.
I had the exact same problem and did a search on the following two items:
"spyder" + "mainwindow.py, line 3624"  # i.e., a piece of the error message
I found bug report #16571 on Spyder's GitHub, "TypeError in Tour with Python 3.10", which identified Python 3.10 as the culprit.
On Debian, using aptitude, I downgraded Python 3 from 3.10 to 3.9, launched Spyder again, and... problem solved!
I am hitting an error with some big modules like matplotlib. For example, with this code:
import importlib
obj = importlib.import_module('matplotlib')
obj_entries = obj.__dict__
Between runs, the length of obj_entries varies, from 108 up to the expected 157 entries; in particular, pyplot and some other submodules can be missing. It works reliably when I step through manually in the debugger, computing a len() right after extracting the dict, but run normally it does not work well.
The following error occurs:
RuntimeError: dictionary changed size during iteration
python-BaseException
I am using a clean Python 3.10 on Windows; swapping versions changes nothing at all.
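The failing pattern reduces to iterating the module's dict while matplotlib lazily inserts submodules; iterating over a snapshot avoids the crash (a minimal sketch of what I mean):

import importlib

obj = importlib.import_module('matplotlib')

# for key in obj.__dict__: ...   # can raise RuntimeError mid-iteration
for key in list(obj.__dict__):   # iterating a snapshot of the keys is safe
    print(key)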
During these attempts I found some interesting behavior. Calling repr() on the module before reading its dict helps. But if the module object is passed between classes as a variable, lazy importing seems more likely to happen: not all names show up, while the command-line interpreter does the opposite and returns what is expected. So this chunk of code helps bypass the behavior...
Note: I am using pkgutil.iter_modules(some_path) to enumerate modules in pkgutil's internal ModuleInfo form.
import pkgutil
import importlib.util

some_path = None  # a list of package paths to scan; None means sys.path

for module_info in pkgutil.iter_modules(some_path):
    name = module_info.name
    finder = module_info.module_finder
    spec = finder.find_spec(name)
    module_obj = importlib.util.module_from_spec(spec)
    loader = module_obj.__loader__
    loader.exec_module(module_obj)  # force a real (non-lazy) import
I am still unfamiliar with the internals of the import machinery, so it would be helpful to receive some links to a more detailed (spot-on) explanation.
I am trying an MXNet tutorial at http://mxnet.io/tutorials/embedded/wine_detector.html (section "Running the Model") on a Raspberry Pi 3 using Python 3.4, specifically the script inception_predict.py. I managed to fix a couple of issues but am getting stumped at this error:
>>> import inception_predict
[23:43:37] src/nnvm/legacy_json_util.cc:190: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
[23:43:37] src/nnvm/legacy_json_util.cc:198: Symbol successfully upgraded!
>>> predict_from_url("http://imgur.com/HzafyBA")
Traceback (most recent call last):
File "", line 1, in
NameError: name 'predict_from_url' is not defined
The function predict_from_url is defined in the imported file inception_predict.py (as mentioned in the tutorial), so why is Python telling me it is not defined?
What am I doing wrong?
The tutorial has a few errors that you need to fix to make it run:
add time to the import list in inception_predict.py
...
import cv2, os, urllib, time
...
use a URL that you can actually download directly (use your favorite image search engine to find one)
call the function by its fully qualified name:
inception_predict.predict_from_url("https://media.mnn.com/assets/images/2017/01/cow-in-pasture.jpg.838x0_q80.jpg")
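The last point is just Python namespacing: import inception_predict binds only the module name, so the function must be qualified, or imported directly into the current namespace; an equivalent sketch:

from inception_predict import predict_from_url

# The bare name is now defined in this namespace
predict_from_url("https://media.mnn.com/assets/images/2017/01/cow-in-pasture.jpg.838x0_q80.jpg")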
After these small changes you will see something like this:
pre-processed image in 0.27312707901
MKL Build:20170209
forward pass in 0.131096124649
probability=0.784963, class=n02403003 ox
probability=0.099463, class=n03868242 oxcart
probability=0.035585, class=n03967562 plow, plough
probability=0.033620, class=n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis
probability=0.015443, class=n02412080 ram, tup
[(0.78496253, 'n02403003 ox'), (0.09946309, 'n03868242 oxcart'), (0.035584591, 'n03967562 plow, plough'), (0.033620458, 'n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis'), (0.015442736, 'n02412080 ram, tup')]
The OpenMDAO problem I'm running is quite complicated, so I don't think it would be helpful to post the entire script. However, the basic setup is that my problem root is a ParallelFDGroup (not actually finite differencing for now--just running the problem once) that contains a few normal components as well as a parallel group. The parallel group is responsible for running 56 instances of an external code (one component per instance of the code). Strangely, when I run the problem with 4-8 processors, everything seems to work fine (it sometimes even works with 10-12 processors). But when I try to use more processors (20+), I fairly consistently get the errors below. Two tracebacks are produced:
Traceback (most recent call last):
File "opt_5mw.py", line 216, in <module>
top.setup() #call setup
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/problem.py", line 644, in setup
self.root._setup_vectors(param_owners, impl=self._impl, alloc_derivs=alloc_derivs)
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/group.py", line 476, in _setup_vectors
self._u_size_lists = self.unknowns._get_flattened_sizes()
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/petsc_impl.py", line 204, in _get_flattened_sizes
return self.comm.allgather(sizes)
File "MPI/Comm.pyx", line 1291, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109194)
File "MPI/msgpickle.pxi", line 746, in mpi4py.MPI.PyMPI_allgather (src/mpi4py.MPI.c:48575)
mpi4py.MPI.Exception: MPI_ERR_IN_STATUS: error code in status
Traceback (most recent call last):
File "opt_5mw.py", line 216, in <module>
top.setup() #call setup
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/problem.py", line 644, in setup
self.root._setup_vectors(param_owners, impl=self._impl, alloc_derivs=alloc_derivs)
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/group.py", line 476, in _setup_vectors
self._u_size_lists = self.unknowns._get_flattened_sizes()
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/petsc_impl.py", line 204, in _get_flattened_sizes
return self.comm.allgather(sizes)
File "MPI/Comm.pyx", line 1291, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109194)
File "MPI/msgpickle.pxi", line 749, in mpi4py.MPI.PyMPI_allgather (src/mpi4py.MPI.c:48609)
File "MPI/msgpickle.pxi", line 191, in mpi4py.MPI.Pickle.loadv (src/mpi4py.MPI.c:41957)
File "MPI/msgpickle.pxi", line 143, in mpi4py.MPI.Pickle.load (src/mpi4py.MPI.c:41248)
cPickle.BadPickleGet: 65
I am running under Ubuntu with OpenMDAO 1.7.3. I have tried running with both mpirun.openmpi (OpenRTE) 1.4.3 and mpirun (Open MPI) 1.4.3 and have gotten the same result in each case.
I found this post, which seems to suggest that something is wrong with the MPI installation. But if that were the case, it strikes me as strange that the problem would run with a small number of processors but not with a larger number. I can also run a relatively simple OpenMDAO problem (no external codes) with 32 processors without incident.
Because the traceback references OpenMDAO unknowns, I wondered if there are limitations on the size of OpenMDAO unknowns. In my case, each external code component has a few dozen array outputs that can be up to 50,000-60,000 elements each. Might that be problematic? Each external code component also reads the same set of input files. Could that be an issue as well? I have tried to ensure that read and write access is defined properly but perhaps that's not enough.
Any suggestions about what might be the culprit in this situation are appreciated.
EDIT: I should add that I have tried running the problem without actually running the external codes (i.e. the components in the parallel group are called and set up but the external subprocesses are never actually created) and the problem persists.
EDIT2: I have done some more debugging on this issue and thought I should share the little that I have discovered. If I strip the problem down to only the parallel group containing the external code instances, the problem persists. However, if I reduce the components in the parallel group to basically nothing--just a print function for setup and for solve_nonlinear--then the problem can successfully "run" with a large number of processors. I started adding setup lines back in one by one to see what would create problems. I ran into issues when trying to add many large unknowns to the components. I can actually still add just a single large unknown--for example, this works:
self.add_output('BigOutput', shape=[100000])
But when I try to add too many large outputs like below, I get errors:
for i in range(100):
    outputname = 'BigOutput{0}'.format(i)
    self.add_output(outputname, shape=[100000])
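For context, the stripped-down test component is roughly this shape (an illustrative sketch of the OpenMDAO 1.x pieces described above, not my actual code):

from openmdao.api import Component

class BigOutputComp(Component):
    # Stand-in for one external-code component: many large unknowns.
    def __init__(self):
        super(BigOutputComp, self).__init__()
        for i in range(100):
            self.add_output('BigOutput{0}'.format(i), shape=[100000])

    def solve_nonlinear(self, params, unknowns, resids):
        print('running %s' % self.pathname)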
Sometimes I just get a general segmentation violation error from PETSc. Other times I get a fairly lengthy traceback that is too long to post here--I'll post just the beginning in case it provides any helpful clues:
*** glibc detected *** python2.7: free(): invalid pointer: 0x00007f21204f5010 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7da26)[0x7f2285f0ca26]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(sqlite3_free+0x4f)[0x7f2269b7754f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x1cbbc)[0x7f2269b87bbc]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x54d6c)[0x7f2269bbfd6c]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x9d31f)[0x7f2269c0831f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(sqlite3_step+0x1bf)[0x7f2269be261f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/_sqlite3.so(pysqlite_step+0x2d)[0x7f2269e4306d]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/_sqlite3.so(_pysqlite_query_execute+0x661)[0x7f2269e404b1]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8942)[0x7f2286c6a5a2]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x86c3)[0x7f2286c6a323]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x86c3)[0x7f2286c6a323]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7f2286c6b1ce]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x797e1)[0x7f2286be67e1]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7f2286bb6dc3]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x5c54f)[0x7f2286bc954f]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7f2286bb6dc3]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x43)[0x7f2286c60d63]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x136652)[0x7f2286ca3652]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f2286957e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f2285f8236d]
======= Memory map: ========
00400000-00401000 r-xp 00000000 08:03 9706352 /home/austinherrema/miniconda2/bin/python2.7
00600000-00601000 rw-p 00000000 08:03 9706352 /home/austinherrema/miniconda2/bin/python2.7
00aca000-113891000 rw-p 00000000 00:00 0 [heap]
7f21107d6000-7f2241957000 rw-p 00000000 00:00 0
etc...
It's hard to guess what's going on here, but if it works for a small number of processors and not on larger ones, one guess is that the issue shows up when you use more than one node and data has to get transferred across the network. I have seen bad MPI compilations that behaved this way: things would work as long as I kept the job to one node, but broke on more than one.
The traceback shows that you're not even getting through setup, so it's not likely to be anything in your external code or any other component's run method.
If you're running on a cluster, are you compiling your own MPI? You usually need to compile with very specific options/libraries for any kind of HPC library. But most HPC systems provide modules you can load that have MPI pre-compiled.
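One way to test that theory (a minimal sketch, independent of OpenMDAO) is to run an allgather of a pickled payload across more ranks than fit on one node, since that mirrors the comm.allgather call that fails in _get_flattened_sizes:

# mpi_allgather_test.py -- run as: mpirun -np 32 python mpi_allgather_test.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
# lowercase allgather pickles its argument, like the failing OpenMDAO call
payload = {'rank': comm.rank, 'sizes': list(range(1000))}
gathered = comm.allgather(payload)
if comm.rank == 0:
    print('allgather OK across %d ranks' % comm.size)

If this fails as soon as the job spans multiple nodes, the MPI build is the prime suspect.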
I'm trying to use the rmagic extension for the IPython notebook, using Python 2.7.6 via Enthought Canopy.
When I try the following example:
import numpy as np
import pylab
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
pylab.scatter(X, Y)
%Rpush X Y
%R lm(Y~X)$coef
I get an error:
AttributeError Traceback (most recent call last)
<ipython-input-7-96dff2c70ba0> in <module>()
1 get_ipython().magic(u'Rpush X Y')
----> 2 get_ipython().magic(u'R lm(Y~X)$coef')
…
/Users/hrob/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/IPython/extensions/rmagic.pyc in eval(self, line)
212 res = ro.r("withVisible({%s})" % line)
213 value = res[0] #value (R object)
--> 214 visible = ro.conversion.ri2py(res[1])[0] #visible (boolean)
215 except (ri.RRuntimeError, ValueError) as exception:
216 warning_or_other_msg = self.flush() # otherwise next return seems to have copy of error
AttributeError: 'module' object has no attribute 'ri2py'
I can't find anyone else who has had the same problem, and I don't know enough to solve it myself. There is indeed no definition of ri2py in conversion.py.
I initially had Anaconda installed and was running the notebook through that, with exactly the same results.
rpy2 (version 2.4.0) installed successfully, but when I test it I get one expected failure, as follows:
python -m 'rpy2.robjects.tests.__init__'
…
testNewWithTranslation (testFunction.SignatureTranslatedFunctionTestCase) ... expected failure
I don't know if that's related.
Can anyone suggest what the problem might be and how I might fix it? Are the versions of Python, R, etc. that I'm using compatible, or do I need to re-install/update something?
Are you using %load_ext rmagic?
If so, try using %load_ext rpy2.ipython instead.
This is one of the new features in version 2.4.0.
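With that change, the original example becomes (the same code as above; only the extension name differs):

%load_ext rpy2.ipython

import numpy as np
X = np.array([0, 1, 2, 3, 4])
Y = np.array([3, 5, 4, 6, 7])

%Rpush X Y
%R lm(Y~X)$coef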