IFC to Cypher via Python and IfcOpenShell - graph

I want to convert an IFC file to a graph database to extract adjacency and accessibility of the spaces in the IFC model.
I wanted to use Neo4j, and as part of this job I need to generate Cypher code from the IFC file.
I found this code but when I run it, I encounter the error below:
AttributeError Traceback (most recent call last)
<ipython-input-1-b13704b3ee10> in <module>
20 typeDict = IfcTypeDict()
21
---> 22 assert typeDict['IfcWall'] == ('GlobalId', 'OwnerHistory', 'Name', 'Description', 'ObjectType', 'ObjectPlacement', 'Representation', 'Tag')
23
24 nodes = []
<ipython-input-1-b13704b3ee10> in __missing__(self, key)
15 class IfcTypeDict(dict):
16 def __missing__(self, key):
---> 17 value = self[key] = ifcopenshell.create_entity(key).wrapped_data.get_attribute_names()
18 return value
19
AttributeError: 'str' object has no attribute 'get_attribute_names'
Can anyone help me with this, or suggest another way to perform this task?
Any help would be greatly appreciated.
Regards.

It seems this script was created for IfcOpenShell v0.5 compiled for IFC2x3, while you are using IfcOpenShell v0.6, which is unfortunately not backwards-compatible. You could either try to use v0.5 or update the script to the v0.6 API.
If you use v0.5, be aware that this version was compiled for a specific IFC version. I believe the published packages are for IFC2x3, so it will not work with IFC4 files. You could compile for IFC4, but then you would lose IFC2x3 support. The assertion wouldn't work anymore, because IFC4 walls have one more attribute, PredefinedType:
assert typeDict["IfcWall"] == ('GlobalId', 'OwnerHistory', 'Name', 'Description', 'ObjectType', 'ObjectPlacement', 'Representation', 'Tag', 'PredefinedType')
Alternatively, if using v0.6, you will have to change more of the script, perhaps as in this updated Gist. There was another issue further down, and you might encounter more. If in doubt, try to contact the original author, or use the script as inspiration to write your own conversion.
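As a starting point for the v0.6 route, the attribute names can be read from the schema metadata instead of from a throwaway entity instance. The following is only a sketch: the wrapper calls (schema_by_name, declaration_by_name, all_attributes) reflect my reading of the v0.6 API, so verify them against your installed version:

import ifcopenshell.ifcopenshell_wrapper as wrapper

class IfcTypeDict(dict):
    # Caches the attribute names of each IFC entity type, keyed by type name.
    def __init__(self, schema_name="IFC4"):
        super().__init__()
        self.schema = wrapper.schema_by_name(schema_name)

    def __missing__(self, key):
        decl = self.schema.declaration_by_name(key)
        value = self[key] = tuple(attr.name() for attr in decl.all_attributes())
        return value

typeDict = IfcTypeDict("IFC4")
# In IFC4 the tuple ends with 'PredefinedType'; with "IFC2X3" it would not.
print(typeDict["IfcWall"])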

Related

How to avoid a RuntimeError when calling __dict__ on a module?

This appears in some big modules like matplotlib. For example, the expression:
import importlib
obj = importlib.import_module('matplotlib')
obj_entries = obj.__dict__
Between runs, the len of obj_entries can vary, from 108 up to the expected 157 entries. In particular, pyplot, like some other submodules, can be missing.
It works stably during manual debugging, with a len computed right after the dict extraction, but in automated runs it does not work well.
This error occurs:
RuntimeError: dictionary changed size during iteration
python-BaseException
I am using plain Python 3.10 on Windows; swapping versions changes nothing at all.
During some attempts, some interesting behavior was found: calling repr() on the module before reading its dict helps.
But when the module is passed between classes as a variable, lazy importing seems more likely to happen. For now the evidence is that not all names show up, while the command-line interpreter does the opposite and returns what is expected. So this chunk of code helps bypass the behavior...
Note: I use pkgutil.iter_modules(some_path) to observe modules in pkgutil's internal ModuleInfo form.
import importlib.util
import pkgutil

# some_path as in the note above
for module_info in pkgutil.iter_modules(some_path):
    name = module_info.name
    finder = module_info.module_finder
    spec = finder.find_spec(name)
    module_obj = importlib.util.module_from_spec(spec)
    loader = module_obj.__loader__
    loader.exec_module(module_obj)
I am still unfamiliar with the internals of the import machinery, so it would be helpful to receive some links to a more detailed (spot-on) explanation.
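For the RuntimeError itself, a common workaround (a sketch, not necessarily the canonical fix) is to snapshot the module dict before iterating, so that a lazy import mutating the live __dict__ mid-iteration cannot invalidate the iterator:

import importlib

obj = importlib.import_module('matplotlib')

# Copy first: iterating over the snapshot stays valid even if attribute
# access or a lazy submodule import adds entries to the real __dict__.
obj_entries = dict(obj.__dict__)
for name, value in obj_entries.items():
    pass  # inspect each top-level name here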

Bulk upsert in gremlin_python fails with "TypeError: 'GraphTraversal' object is not callable"

I am new to Gremlin and trying to perform a bulk upsert into Neptune DB with gremlin_python.
I found this solution on Google Groups:
l = [
  [name:'josh',age:29,country:'usa'],
  [name:'bar',age:24,country:'usa']];
g.inject(l).
  unfold().as('properties').
  select('name').as('pName').
  coalesce(V().has('name', where(eq('pName'))),
           addV('person')).as('vertex').
  sideEffect(select('properties').
             unfold().as('kv').
             select('vertex').
             property(select('kv').by(Column.keys), select('kv').by(Column.values)))
And tried to adapt it for gremlin_python like this:
l = [
  {'name':'josh','age':29,'country':'usa'},
  {'name':'bar','age':24,'country':'usa'}];
g.inject(l).\
  unfold().as_('properties').\
  select('name').as_('pName').\
  coalesce(__.V().has('name', __.where(__.eq('pName'))),
           addV('person')).as_('vertex').\
  sideEffect(select('properties').\
             unfold().as_('kv').\
             select('vertex').\
             property(select('kv').by(Column.keys), select('kv').by(Column.values)))
and I get the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-162-c262a63ad82e> in <module>
8 unfold().as_('properties').\
9 select('name').as_('pName').\
---> 10 coalesce(__.V().has('name', __.where(__.eq('pName'))),
11 addV('person')).as_('vertex').\
12 sideEffect(select('properties').\
TypeError: 'GraphTraversal' object is not callable
I assume the code adaptation might be wrong.
Can anyone give me a hint about what is going on here?
The part __.eq('pName') should be statics.eq('pName').
After
from gremlin_python import statics
statics.load_statics(globals())
The part __.eq('pName') can be just eq('pName').
See: https://tinkerpop.apache.org/docs/current/reference/#gremlin-python-imports
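Putting this together, a minimal sketch of the corrected adaptation (the endpoint is a placeholder; I spell the predicate explicitly as P.eq and prefix the anonymous traversals with __ instead of relying on load_statics, and add iterate() so the traversal actually executes):

from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.traversal import Column, P

# Placeholder endpoint; replace with your Neptune cluster address.
conn = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')
g = traversal().withRemote(conn)

people = [
    {'name': 'josh', 'age': 29, 'country': 'usa'},
    {'name': 'bar', 'age': 24, 'country': 'usa'},
]

# Upsert: find a vertex whose name matches the map's name, create a
# 'person' vertex if absent, then copy every key/value pair onto it.
(g.inject(people).
    unfold().as_('properties').
    select('name').as_('pName').
    coalesce(__.V().has('name', __.where(P.eq('pName'))),
             __.addV('person')).as_('vertex').
    sideEffect(__.select('properties').
               unfold().as_('kv').
               select('vertex').
               property(__.select('kv').by(Column.keys),
                        __.select('kv').by(Column.values))).
    iterate())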
Since I use AWS Neptune DB, I ended up applying Neptune Python utils to do the bulk upsert.
It is faster than the solution we were discussing, but be careful about data types and mappings when you use it (I had an issue with BigInts).
Here is the library and documentation:
https://github.com/awslabs/amazon-neptune-tools/tree/master/neptune-python-utils

SysCTypes errors when using NetCDF.chpl?

I have a simple Chapel program to test the NetCDF module:
use NetCDF;
use NetCDF.C_NetCDF;
var f: int = ncopen("ppt2020_08_20.nc", NC_WRITE);
var status: int = nc_close(f);
and when I compile with:
chpl -I/usr/include -L/usr/lib/x86_64-linux-gnu -lnetcdf hello.chpl
it produces a list of errors about SysCTypes:
$CHPL_HOME/modules/packages/NetCDF.chpl:57: error: 'c_int' undeclared (first use this function)
$CHPL_HOME/modules/packages/NetCDF.chpl:77: error: 'c_char' undeclared (first use this function)
...
Does anyone see what my error is? I tried adding use SysCTypes; to my program, but that didn't seem to have any effect.
Sorry for the delayed response and for this bad behavior. This is a bug that has crept into the NetCDF module and seems not to have been caught by Chapel's nightly testing. To work around it, edit $CHPL_HOME/modules/packages/NetCDF.chpl, adding the line:
public use SysCTypes, SysBasic;
within the declaration of the C_NetCDF module (around line 50 in my copy of the sources). If you would consider filing this bug as an issue on the Chapel GitHub issue tracker, that would be great as well, though we'll try to get this fixed in the next release in any case.
With that change, your program almost compiles for me, except that nc_close() takes a c_int argument rather than a Chapel int. You could either lean on Chapel's type inference to cause this to happen:
var f = ncopen("ppt2020_08_20.nc", NC_WRITE);
or explicitly declare f to be of type c_int:
var f: c_int = ncopen("ppt2020_08_20.nc", NC_WRITE);
And then, as one final note, I believe you should be able to drop the -lnetcdf from your chpl command line, as using the NetCDF module should cause this requirement to be added automatically.
Thanks for bringing this bug to our attention!

First token could not be read or is not the keyword 'FoamFile' in OpenFOAM

I am a beginner at programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who may not know OpenFOAM: it is a program built in C++, but it does not require C++ programming, just properly defining the variables in the required files.
In one of my first tries I made a very simple model, but since I wanted to check it very thoroughly I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by this I mean I'd rather find a solution than repeat the simulation).
I executed the solver reactingFoam on 4 processors in parallel using:
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of an error.
The problem is that reconstructPar works perfectly, but when I then try to view the results with paraFoam, this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that this can happen when files that are not supposed to be empty are empty, but I have not found that problem.
My 'reactions' file has not been modified from the tutorial and has always worked.
Edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always contains a header entry named FoamFile. An example from a typical system/controlDict file can be seen below:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      controlDict;
}
When constructing a dictionary, if this header is absent, OpenFOAM stops by raising the error message you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The benefit of the header is presumably that it supports OpenFOAM's abstraction mechanisms, which would be difficult otherwise.
As mentioned in the comments, adding the header entry almost always solves this problem.
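In this case that means making sure mypath/constant/reactions begins with such a header. A sketch of what it could look like (class, location, and object here are my assumptions for a reactions dictionary; compare with an unmodified tutorial copy):

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "constant";
    object      reactions;
}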

OpenMDAO 1.7.3 error with unicode variables in Python 2

In the file openmdao/core/problem.py, on lines such as 1619 and 1638, it checks whether a variable is a string by using:
isinstance(inp, str)
However, this returns False if inp is unicode in Python 2, and eventually causes the program to raise an exception. In Python 2, the correct check is:
isinstance(inp, basestring)
I understand that basestring is not available in Python 3, but there are several ways to write Python 2/3 compatible code. Can this be fixed?
Feel free to submit a pull request, but please add a test that checks the new functionality.
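For reference, one common Python 2/3 compatible pattern (similar to what the six library exposes as string_types; the helper name here is just illustrative):

import sys

if sys.version_info[0] >= 3:
    string_types = (str,)
else:
    string_types = (basestring,)  # basestring exists only in Python 2

def is_string(obj):
    # True for str on Python 3; True for both str and unicode on Python 2.
    return isinstance(obj, string_types)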
