It shows up with some big modules such as matplotlib. For example, with this code:
import importlib
obj = importlib.import_module('matplotlib')
obj_entries = obj.__dict__
Between runs the length of obj_entries can vary, from 108 up to the expected 157 entries. In particular pyplot can be missing, as can some other submodules.
It works reliably in manual debug mode with a len() statement right after extracting the dict, but when run normally it does not work well.
This error occurs:
RuntimeError: dictionary changed size during iteration
python-BaseException
I am using plain Python 3.10 on Windows; switching versions changes nothing at all.
During some attempts I noticed a few interesting things:
calling repr() on the module object before building the dict helps;
but if the module object is passed between classes as a variable, lazy importing seems more likely to happen. So far the evidence is that not all names show up, while the command-line interpreter does the opposite and returns what I expect. So this chunk of code helps to bypass the behaviour...
Note: I use pkgutil.iter_modules(some_path) to enumerate modules as pkgutil ModuleInfo records.
import importlib.util
import pkgutil

# one record yielded by pkgutil.iter_modules(some_path)
module_info: pkgutil.ModuleInfo

name = module_info.name
finder = module_info.module_finder            # typically an importlib FileFinder
spec = finder.find_spec(name)
module_obj = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module_obj)           # run the module body to populate it
I am still unfamiliar with the internals of the import machinery, so it would be helpful to receive some links to a more detailed (and on-point) explanation.
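For completeness: the RuntimeError itself can be avoided by iterating over a snapshot of the dict, although that does not explain why the number of entries varies. A minimal sketch (the filtering is only a placeholder):
import importlib

obj = importlib.import_module('matplotlib')
# snapshot the dict so that imports finishing in the background cannot
# resize it while it is being iterated
obj_entries = dict(obj.__dict__)
names = [name for name in obj_entries if not name.startswith('_')]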
I have a simple Chapel program to test the NetCDF module:
use NetCDF;
use NetCDF.C_NetCDF;
var f: int = ncopen("ppt2020_08_20.nc", NC_WRITE);
var status: int = nc_close(f);
and when I compile with:
chpl -I/usr/include -L/usr/lib/x86_64-linux-gnu -lnetcdf hello.chpl
it produces a list of errors about SysCTypes:
$CHPL_HOME/modules/packages/NetCDF.chpl:57: error: 'c_int' undeclared (first use this function)
$CHPL_HOME/modules/packages/NetCDF.chpl:77: error: 'c_char' undeclared (first use this function)
...
Does anyone see what my error is? I tried adding use SysCTypes; to my program, but that didn't seem to have any effect.
Sorry for the delayed response and for this bad behavior. This is a bug that's crept into the NetCDF module which seems not to have been caught by Chapel's nightly testing. To work around it, edit $CHPL_HOME/modules/packages/NetCDF.chpl, adding the line:
public use SysCTypes, SysBasic;
within the declaration of the C_NetCDF module (around line 50 in my copy of the sources). If you would consider filing this bug as an issue on the Chapel GitHub issue tracker, that would be great as well, though we'll try to get this fixed in the next release in any case.
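Roughly, the edited section of NetCDF.chpl would then look like this (only a sketch; the existing declarations of the C_NetCDF module are elided):
module C_NetCDF {
  public use SysCTypes, SysBasic;  // workaround: added line
  // ... existing extern declarations of the C NetCDF API ...
}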
With that change, your program almost compiles for me, except that nc_close() takes a c_int argument rather than a Chapel int. You could either lean on Chapel's type inference to cause this to happen:
var f = ncopen("ppt2020_08_20.nc", NC_WRITE);
or explicitly declare f to be of type c_int:
var f: c_int = ncopen("ppt2020_08_20.nc", NC_WRITE);
And then as one final note, I believe you should be able to drop the -lnetcdf from your chpl command-line as using the NetCDF module should cause this requirement to automatically be added.
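That is, a command line along these lines should suffice (keeping your include and library paths):
chpl -I/usr/include -L/usr/lib/x86_64-linux-gnu hello.chpl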
Thanks for bringing this bug to our attention!
I am a beginner to programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who may not know OpenFOAM, it is a program written in C++, but it does not require C++ programming, just properly defining the variables in the needed files.
In one of my first tries I made a very simple model, but since I wanted to check it very thoroughly I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by this I mean I'd rather find a solution than repeat the simulation).
I executed the solver reactingFoam using 4 processors in parallel with
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of error.
The problem is that reconstructPar works perfectly, but when I then try to view the results with paraFoam this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that this can happen when some files are empty that are not supposed to be, but I have not found such a problem.
My reactions file has not been modified from the tutorial and has always worked.
edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always contains a header entry named FoamFile, which is read in as a Foam::Istream. An example from a typical system/controlDict file can be seen below:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
During the construction of the dictionary header, if this entry is absent, OpenFOAM ceases its operation by raising the error message you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The benefit of the header is presumably that it contributes to OpenFOAM's abstraction mechanisms, which would be difficult to realise otherwise.
As mentioned in the comments, adding the header entity almost always solves this problem.
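For your case, the header of constant/reactions would presumably look like this (a sketch; location and object follow from the file's path and name):
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "constant";
    object      reactions;
}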
I would like to know how in the Julia language, I can determine if a file.jl is run as script, such as in the call:
bash$ julia file.jl
Only in that case should it start a function main, for example. That way I could use include("file.jl") without actually executing the function.
To be specific, I am looking for something similar to what has already been answered for Python:
def main():
    # does something
    pass

if __name__ == '__main__':
    main()
Edit:
To be more specific, the method Base.isinteractive (see here) does not solve the problem when using include("file.jl") from within a non-interactive (e.g. script) environment.
The global constant PROGRAM_FILE contains the script name passed to Julia from the command line (it does not change when include is called).
On the other hand, the @__FILE__ macro gives you the name of the file in which it appears.
For instance, if you have the following two files:
a.jl
println(PROGRAM_FILE)
println(@__FILE__)
include("b.jl")
b.jl
println(PROGRAM_FILE)
println(@__FILE__)
You have the following behavior:
$ julia a.jl
a.jl
D:\a.jl
a.jl
D:\b.jl
$ julia b.jl
b.jl
D:\b.jl
In summary:
PROGRAM_FILE tells you the name of the file that Julia was started with;
@__FILE__ tells you in which file the macro was actually called.
tl;dr version:
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
Explanation:
There seems to be some confusion. Python and Julia work very differently in terms of their "modules" (even though the two use the same term, in principle they are different).
In Python, a source file is either a module or a script, depending on how you chose to "load" / "run" it: the boilerplate exists to detect the environment in which the source code was run, by querying the __name__ of the embedding module at the time of execution. E.g. if you have a file called mymodule.py and you import it normally, then within the module definition the variable __name__ automatically gets set to the value mymodule; but if you run it as a standalone script (effectively "dumping" the code into the "main" module), the __name__ variable is that of the global scope, namely __main__. This difference gives you the ability to detect how a Python file was run, so you can act slightly differently in each case, and this is exactly what the boilerplate does.
In Julia, however, a module is defined explicitly as code. Running a file that contains a module declaration will load that module regardless of whether you did using or include; however, in the former case the module will not be reloaded if it is already in the workspace, whereas in the latter case it is as if you had "redefined" it.
Modules can have initialisation code via the special __init__() function, whose job is to only run the first time a module is loaded (e.g. when imported via a using statement). So one thing you could do is have a standalone script, which you could either include directly to run as a standalone script, or include it within the scope of a module definition, and have it detect the presence of module-specific variables such that it behaves differently in each case. But it would still have to be a standalone file, separate from the main module definition.
If you want the module to do stuff that the standalone script shouldn't, this is easy: you just have something like this:
module MyModule
    __init__() = nothing  # do module-specific initialisation stuff here
    include("MyModule_Implementation.jl")
end
If you want the reverse situation, you need a way to detect whether you're running inside the module or not. You could do this, e.g. by detecting the presence of a suitable __init__() function, belonging to that particular module. For example:
### in file "MyModule.jl"
module MyModule
    export fun1, fun2;
    __init__() = print("Initialising module ...");
    include("MyModuleImplementation.jl");
end

### in file "MyModuleImplementation.jl"
fun1(a,b) = a + b;
fun2(a,b) = a * b;

main() = print("Demo of fun1 and fun2. \n" *
               " fun1(1,2) = $(fun1(1,2)) \n" *
               " fun2(1,2) = $(fun2(1,2)) \n");

if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
    main()
end
If MyModule is loaded as a module, the main function in MyModuleImplementation.jl will not run.
If you run MyModuleImplementation.jl as a standalone script, the main function will run.
So this is a way to achieve something close to the effect you want; but it is very different from running a module-defining file as either a module or a standalone script. I don't think you can simply "strip" the module instruction from the code and run the module's "contents" in that manner in Julia.
The answer is available at the official Julia docs FAQ. I am copy/pasting it here because this question comes up as the first hit on some search engines. It would be nice if people found the answer on the first-hit site.
How do I check if the current file is being run as the main script?
When a file is run as the main script using julia file.jl, one might want to activate extra functionality like command line argument handling. A way to determine that a file is run in this fashion is to check whether abspath(PROGRAM_FILE) == @__FILE__ is true.
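In code, the check wraps the entry point like this (a minimal sketch, assuming a main function is defined in the same file):
if abspath(PROGRAM_FILE) == @__FILE__
    main()
end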
I'm working with Bokeh 0.12.2 in a Jupyter notebook and it frequently throws exceptions about "Models must be owned by only a single document":
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-23-f50ac7abda5e> in <module>()
2 ea.legend.label_text_font_size = '10pt'
3
----> 4 show(column([co2, co, nox, o3]))
C:\Users\pokeeffe\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\io.py in show(obj, browser, new, notebook_handle)
308 '''
309 if obj not in _state.document.roots:
--> 310 _state.document.add_root(obj)
311 return _show_with_state(obj, _state, browser, new, notebook_handle=notebook_handle)
312
C:\Users\pokeeffe\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\document.py in add_root(self, model)
443 self._roots.append(model)
444 finally:
--> 445 self._pop_all_models_freeze()
446 self._trigger_on_change(RootAddedEvent(self, model))
447
C:\Users\pokeeffe\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\document.py in _pop_all_models_freeze(self)
343 self._all_models_freeze_count -= 1
344 if self._all_models_freeze_count == 0:
--> 345 self._recompute_all_models()
346
347 def _invalidate_all_models(self):
C:\Users\pokeeffe\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\document.py in _recompute_all_models(self)
367 d._detach_document()
368 for a in to_attach:
--> 369 a._attach_document(self)
370 self._all_models = recomputed
371 self._all_models_by_name = recomputed_by_name
C:\Users\pokeeffe\AppData\Local\Continuum\Anaconda3\lib\site-packages\bokeh\model.py in _attach_document(self, doc)
89 '''This should only be called by the Document implementation to set the document field'''
90 if self._document is not None and self._document is not doc:
---> 91 raise RuntimeError("Models must be owned by only a single document, %r is already in a doc" % (self))
92 doc.theme.apply_to_model(self)
93 self._document = doc
RuntimeError: Models must be owned by only a single document, <bokeh.models.tickers.DaysTicker object at 0x00000000042540B8> is already in a doc
The trigger is always calling show(...) (although never the first time after kernel start-up, only subsequent calls).
Based on the docs, I thought reset_output() would return my notebook to an operable state, but the exception persists. Through trial and error, I've determined it's necessary to also re-define everything being passed to show(). That makes interactive work cumbersome and error-prone.
[Ref]:
reset_output(state=None)
Clear the default state of all output modes.
Returns: None
Am I right about reset_output() -- is it supposed to resolve the situation causing this exception?
Else, how do I avoid this kind of exception?
It may be because of conflicting objects that have the same name; you need to create completely new objects every time.
It seems it can be fixed by giving each plot its own, freshly created source object.
Like this:
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure

source1 = ColumnDataSource(df)   # a fresh source object for the first figure
p1 = figure()
p1.circle('A', 'B', source=source1)

source2 = ColumnDataSource(df)   # and another one for the second
p2 = figure()
p2.circle('C', 'D', source=source2)

sourceN = ColumnDataSource(df)   # ... and so on for every further figure
pN = figure()
pN.circle('X', 'Y', source=sourceN)
I've been working in a jupyterlab notebook iterating on visualizations of a large amount of data with bokeh, holoviews, and panel, and have been running into this issue periodically.
Here are a couple of additional things that may help. Note that p is used below as the conventional Bokeh name for the figure. I am posting on this old thread because it was the top result in my Google search for the error message.
Try clearing the document (found in docs):
from bokeh.io import curdoc
curdoc().clear()
I observed that panel was able to display a bokeh object even when bokeh show would not.
import panel as pn
pn.extension()
pn.pane.Bokeh(p)
Digging into how panel is able to display an object even when bokeh is not, I noticed this function, which fixed the problem for me:
import panel as pn
pn.io.model.remove_root(p)
If you don't have panel installed, here is the source code from above:
from bokeh.models import Model
for model in p.select({'type': Model}):
    prev_doc = model.document
    model._document = None
    if prev_doc:
        prev_doc.remove_root(model)
Hopefully this helps someone, or future me.
I ran into this error message when using file_html from bokeh.embed, after I upgraded to Bokeh version 1.01. Downgrading again to Bokeh version 0.12.16 solved it (pip install bokeh==0.12.16).
Not sure why.
This solution works without upgrading or downgrading packages:
from bokeh.io import output_notebook, reset_output, show

try:
    reset_output()
    output_notebook()
    show(p)
except Exception:
    output_notebook()
    show(p)
Solution provided here :
https://github.com/bokeh/bokeh/issues/8579
Try using curdoc().add_root() after creating each plot. add_root will add that model as a root of the current Document, making sure that each Model is added to a single Document:
layout = column([plot])        # column() comes from bokeh.layouts
curdoc().add_root(layout)      # make the layout a root of the current Document
curdoc().title = doc_title     # add a title to the doc
show(layout)
Note: column(the list of plots) can be replaced with any object which inherits from the Model class.
Refer to the link for more details on add_root and bokeh Documents: https://docs.bokeh.org/en/latest/docs/reference/document.html?highlight=add_root#bokeh.document.document.Document.add_root
column_data_source = ColumnDataSource(dataframe)
After each notebook cell that used the column data source, we simply have to create it again; it seems the same ColumnDataSource cannot be reused many times in the same code.
This is a bit of a hack, but it worked for me when I faced the same error.
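A rough illustration of what that means in a notebook (df and the column names 'x' and 'y' are placeholders):
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, show

# Cell 1
source = ColumnDataSource(df)
p1 = figure()
p1.circle('x', 'y', source=source)
show(p1)

# Cell 2: build a fresh ColumnDataSource instead of reusing the one above
source = ColumnDataSource(df)
p2 = figure()
p2.circle('x', 'y', source=source)
show(p2)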
I want to use R's mathematical functions as provided in libRmath from OCaml. I successfully installed the library via brew tap homebrew/science && brew install --with-librmath-only r. I end up with a .dylib in /usr/local/lib and a .h in /usr/local/include. Following the OCaml ctypes tutorial, I do this in utop:
#require "ctypes.foreign";;
open Ctypes;;
open Foreign;;
let test_pow = foreign "pow_di" (float @-> int @-> returning float);;
this complains that it can't find the symbol. What am I doing wrong? Do I need to open the dynamic library first? Set some environment variables? After googling, I also did this:
nm -gU /usr/local/lib/libRmath.dylib
which gives a bunch of symbols all with a leading underscore including 00000000000013ff T _R_pow_di. In the header file, pow_di is defined via some #define directive from _R_pow_di. I did try variations of the name like "R_pow_di" etc.
Edit: I tried compiling a simple C program that uses Rmath in Xcode. After manually adding /usr/local/include to the include path, Xcode can find the header file Rmath.h. However, inside the header file there is an include of R_ext/Boolean.h, which does not seem to exist. This error is flagged by Xcode and compilation stops.
Noob alert: this may be totally obvious to a C programmer...
In order to use an external library you still need to link against it. There are at least two ways: either link using the compiler, or link even more dynamically using dlopen.
For the first method, use the following command (as a first approximation):
ocamlbuild -pkg ctypes.foreign -lflags -cclib,-lRmath yourapp.native
under the premise that your code is in a file called yourapp.ml.
The second method is to use the ctypes interface to dlopen to open the library. Using the correct types and the correct symbol name for the C function, this goes like this:
let library = Dl.dlopen ~filename:"libRmath.dylib" ~flags:[]
let test_pow = foreign ~from:library "R_pow_di" (double @-> int @-> returning double)
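As a quick check (assuming R_pow_di keeps its usual meaning of x raised to an integer power n), the following should print 1024:
let () = Printf.printf "%f\n" (test_pow 2.0 10)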