I am relatively new to Python and have developed an application for myself that I would like to turn into an executable. It uses multiple external libraries such as PyQt5, sqlite3, and openpyxl. I am using PyInstaller to create the .exe, but the build fails with a "ModuleNotFoundError" for xml.parsers.expat. I've tried a few ways to include it in my package, but to no avail.
Here is a minimal reproducible example:
from decimal import Decimal
import openpyxl # xl libraries
wb = openpyxl.load_workbook(r'C:\Users\jmstr\OneDrive\Documents\finproxlc.xlsx')
myint = Decimal("27")
print("hello")
Any ideas?
Here is my PyInstaller command: pyinstaller C:\Users\usr1\IdeaProjects\GUIProjects\finpro.py -p C:\Users\usr1\venv\Lib\site-packages\openpyxl --hidden-import openpyxl --additional-hooks-dir C:\Users\usr1\venv\Lib\site-packages\openpyxl --collect-all xml.parsers.expat
Output of the PyInstaller run:
['C:\Users\usr1\IdeaProjects\GUIProjects', 'C:\Users\usr1\venv\Lib\site-packages\openpyxl']
Traceback (most recent call last):
File "c:\users\usr1\appdata\local\programs\python\python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\usr1\appdata\local\programs\python\python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\usr1\AppData\Local\Programs\Python\Python39\Scripts\pyinstaller.exe\__main__.py", line 7, in <module>
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\__main__.py", line 194, in _console_script_run
run()
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\__main__.py", line 180, in run
run_build(pyi_config, spec_file, **vars(args))
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\__main__.py", line 61, in run_build
PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\building\build_main.py", line 977, in main
build(specfile, distpath, workpath, clean_build)
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\building\build_main.py", line 899, in build
exec(code, spec_namespace)
File "C:\Users\usr1\finpro.spec", line 14, in <module>
a = Analysis(
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\building\build_main.py", line 379, in __init__
self.hookspath += discover_hook_directories()
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\isolated\_parent.py", line 404, in wrapped
return call(function, *args, **kwargs)
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\isolated\_parent.py", line 373, in call
return isolated.call(function, *args, **kwargs)
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\isolated\_parent.py", line 311, in call
raise RuntimeError(f"Child process call to {function.__name__}() failed with:\n" + output)
RuntimeError: Child process call to discover_hook_directories() failed with:
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\isolated\_child.py", line 63, in run_next_command
output = function(*args, **kwargs)
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\PyInstaller\building\build_main.py", line 107, in discover_hook_directories
import pkg_resources
File "c:\users\usr1\appdata\local\programs\python\python39\lib\site-packages\pkg_resources\__init__.py", line 35, in <module>
import plistlib
File "c:\users\usr1\appdata\local\programs\python\python39\lib\plistlib.py", line 61, in <module>
from xml.parsers.expat import ParserCreate
ModuleNotFoundError: No module named 'xml.parsers'
It seems the installation of openpyxl was specific to my IDE (IntelliJ), and I could not figure out how to change the spec file to get PyInstaller to pick it up, but running this command resolved the issue:
python -m pip install openpyxl
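To check whether this kind of environment mismatch is the problem, you can confirm that the interpreter PyInstaller runs under can import both the standard-library module and openpyxl; a minimal check (assuming you invoke it with the same Python that owns the pyinstaller script):
python -c "import xml.parsers.expat, openpyxl; print(openpyxl.__file__)"
If that import fails, PyInstaller's own analysis step fails the same way, regardless of which --hidden-import flags are passed.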
I followed the steps from Running Airflow on Ubuntu 20.04 (TypeError: required field "type_ignores" missing from Module), and received the following:
(airflow-uGvev7QO) root@testing2:/opt/airflow# airflow db init
DB: sqlite:////root/airflow/airflow.db
[2021-03-30 21:17:43,978] {db.py:674} INFO - Creating tables
/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py:3454 SAWarning: relationship 'DagRun.serialized_dag' will copy column serialized_dag.dag_id to column dag_run.dag_id, which conflicts with relationship(s): 'TaskInstance.dag_run' (copies task_instance.dag_id to dag_run.dag_id), 'DagRun.task_instances' (copies task_instance.dag_id to dag_run.dag_id). If this is not the intention, consider if these relationships should be linked with back_populates, or if viewonly=True should be applied to one or more if they are read-only. For the less common case that foreign key constraints are partially overlapping, the orm.foreign() annotation can be used to isolate the columns that should be written towards. The 'overlaps' parameter may be used to remove this warning.
/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/sqlalchemy/orm/relationships.py:3454 SAWarning: relationship 'SerializedDagModel.dag_runs' will copy column serialized_dag.dag_id to column dag_run.dag_id, which conflicts with relationship(s): 'TaskInstance.dag_run' (copies task_instance.dag_id to dag_run.dag_id), 'DagRun.task_instances' (copies task_instance.dag_id to dag_run.dag_id). If this is not the intention, consider if these relationships should be linked with back_populates, or if viewonly=True should be applied to one or more if they are read-only. For the less common case that foreign key constraints are partially overlapping, the orm.foreign() annotation can be used to isolate the columns that should be written towards. The 'overlaps' parameter may be used to remove this warning.
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Traceback (most recent call last):
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/bin/airflow", line 8, in <module>
sys.exit(main())
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/__main__.py", line 40, in main
args.func(args)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/cli/commands/db_command.py", line 31, in initdb
db.initdb()
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/utils/db.py", line 549, in initdb
upgradedb()
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/utils/db.py", line 684, in upgradedb
command.upgrade(config, 'heads')
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/command.py", line 294, in upgrade
script.run_env()
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/script/base.py", line 490, in run_env
util.load_python_file(self.dir, "env.py")
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/util/pyfiles.py", line 97, in load_python_file
module = load_module_py(module_id, path)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/util/compat.py", line 182, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/migrations/env.py", line 108, in <module>
run_migrations_online()
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/migrations/env.py", line 102, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/runtime/environment.py", line 813, in run_migrations
self.get_context().run_migrations(**kw)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/runtime/migration.py", line 548, in run_migrations
for step in self._migrations_fn(heads, self):
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/command.py", line 283, in upgrade
return script._upgrade_revs(revision, rev)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/script/base.py", line 365, in _upgrade_revs
revs = list(revs)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/script/revision.py", line 916, in _iterate_revisions
uppers = util.dedupe_tuple(self.get_revisions(upper))
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/script/revision.py", line 457, in get_revisions
resolved_id, branch_label = self._resolve_revision_number(id_)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/script/revision.py", line 640, in _resolve_revision_number
self._revision_map
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/util/langhelpers.py", line 234, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/script/revision.py", line 156, in _revision_map
for revision in self._generator():
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/script/base.py", line 115, in _load_revisions
script = Script._from_filename(self, vers, file_)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/script/base.py", line 904, in _from_filename
module = util.load_python_file(dir_, filename)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/util/pyfiles.py", line 97, in load_python_file
module = load_module_py(module_id, path)
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/alembic/util/compat.py", line 182, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/migrations/versions/2c6edca13270_resource_based_permissions.py", line 29, in <module>
from airflow.www.app import create_app
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/www/app.py", line 38, in <module>
from airflow.www.extensions.init_views import (
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/www/extensions/init_views.py", line 29, in <module>
from airflow.www.views import lazy_add_provider_discovered_options_to_connection_form
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/www/views.py", line 96, in <module>
from airflow.www import auth, utils as wwwutils
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/airflow/www/utils.py", line 27, in <module>
from flask_appbuilder.models.sqla.interface import SQLAInterface
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/flask_appbuilder/models/sqla/interface.py", line 16, in <module>
from sqlalchemy_utils.types.uuid import UUIDType
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/sqlalchemy_utils/__init__.py", line 1, in <module>
from .aggregates import aggregated # noqa
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/sqlalchemy_utils/aggregates.py", line 372, in <module>
from .functions.orm import get_column_key
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/sqlalchemy_utils/functions/__init__.py", line 1, in <module>
from .database import ( # noqa
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/sqlalchemy_utils/functions/database.py", line 11, in <module>
from .orm import quote
File "/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/sqlalchemy_utils/functions/orm.py", line 14, in <module>
from sqlalchemy.orm.query import _ColumnEntity
ImportError: cannot import name '_ColumnEntity' from 'sqlalchemy.orm.query' (/root/.local/share/virtualenvs/airflow-uGvev7QO/lib/python3.7/site-packages/sqlalchemy/orm/query.py)
(airflow-uGvev7QO) root@testing2:/opt/airflow#
Advice is welcome.
Thank you
I was able to reproduce the error, and the reason seems to be the release of sqlalchemy 1.4, which introduces breaking changes. Airflow by default depends on the latest version of sqlalchemy but cannot work with version 1.4. A workaround is to downgrade sqlalchemy to a version < 1.4.0:
pipenv uninstall sqlalchemy
pipenv install 'sqlalchemy < 1.4.0'
When installing Airflow fresh:
pipenv install 'sqlalchemy < 1.4.0' apache-airflow
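To confirm the downgrade took effect before re-running airflow db init, a quick check (assuming the same pipenv environment):
pipenv run python -c "import sqlalchemy; print(sqlalchemy.__version__)"
It should print a 1.3.x version.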
Please use the command below:
pip install "apache-airflow[celery]==2.2.2" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.2.2/constraints-3.6.txt"
It will definitely work.
When I import pyautogui, it gives me:
Traceback (most recent call last):
File "app.py", line 2, in <module>
from register import reg
File "/home/nlp/Projects/face_recognition/register.py", line 12, in <module>
import pyautogui
File "/home/nlp/miniconda3/envs/face_recognition_env/lib/python3.6/site-packages/pyautogui/__init__.py", line 241, in <module>
import mouseinfo
File "/home/nlp/miniconda3/envs/face_recognition_env/lib/python3.6/site-packages/mouseinfo/__init__.py", line 223, in <module>
_display = Display(os.environ['DISPLAY'])
File "/home/nlp/miniconda3/envs/face_recognition_env/lib/python3.6/site-packages/Xlib/display.py", line 80, in __init__
self.display = _BaseDisplay(display)
File "/home/nlp/miniconda3/envs/face_recognition_env/lib/python3.6/site-packages/Xlib/display.py", line 62, in __init__
display.Display.__init__(*(self, ) + args, **keys)
File "/home/nlp/miniconda3/envs/face_recognition_env/lib/python3.6/site-packages/Xlib/protocol/display.py", line 58, in __init__
self.socket = connect.get_socket(name, host, displayno)
File "/home/nlp/miniconda3/envs/face_recognition_env/lib/python3.6/site-packages/Xlib/support/connect.py", line 76, in get_socket
return mod.get_socket(dname, host, dno)
File "/home/nlp/miniconda3/envs/face_recognition_env/lib/python3.6/site-packages/Xlib/support/unix_connect.py", line 78, in get_socket
raise error.DisplayConnectionError(dname, str(val))
Xlib.error.DisplayConnectionError: Can't connect to display ":0": [Errno 2] No such file or directory
I have tried this:
import os
os.environ['DISPLAY'] = ':0'
But I still get the same error. According to one comment on this GitHub issue (https://github.com/asweigart/pyautogui/issues/161), the display is not accessible remotely. I have tried the sudo xhost + command, but it gives me the error below:
xhost: unable to open display ""
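Note that mouseinfo reads DISPLAY at import time (that is the frame shown in the traceback above), so the variable has to be set before pyautogui is imported, and it has to point at an X server that is actually running; a minimal sketch of that ordering (assuming a real X display exists at :0):
import os
os.environ['DISPLAY'] = ':0'  # must be set before importing pyautogui/mouseinfo, and :0 must be a running X server
import pyautogui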
I just started the PyTorch tutorial Deep Learning with PyTorch: A 60 Minute Blitz, and I should add that I haven't programmed any Python before (though I have used other languages like Java).
Right now, my code looks like this:
import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
print("\n-------------------Backpropagation-------------------\n")
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
dataiter = iter(trainloader)
images, labels = dataiter.next()
def imshow(img):
    img = img / 2 + 0.5
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
imshow(torchvision.utils.make_grid(images))
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
which should be consistent with the tutorial.
If I execute this, I'll get the following error:
"C:\Program Files\Anaconda3\python.exe" C:/MA/pytorch/deepLearningWithPytorchTutorial/trainingClassifier.py
-------------------Backpropagation-------------------
Files already downloaded and verified
Files already downloaded and verified
-------------------Backpropagation-------------------
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Program Files\Anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Program Files\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Program Files\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\MA\pytorch\deepLearningWithPytorchTutorial\trainingClassifier.py", line 23, in <module>
dataiter = iter(trainloader)
File "C:\Program Files\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
return _DataLoaderIter(self)
File "C:\Program Files\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
w.start()
File "C:\Program Files\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Program Files\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Program Files\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Program Files\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Traceback (most recent call last):
File "C:/MA/pytorch/deepLearningWithPytorchTutorial/trainingClassifier.py", line 23, in <module>
dataiter = iter(trainloader)
File "C:\Program Files\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
return _DataLoaderIter(self)
File "C:\Program Files\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
w.start()
File "C:\Program Files\Anaconda3\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Program Files\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Program Files\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Program Files\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
Process finished with exit code 1
I already downloaded the *.py and *.ipynb.
Running the *.ipynb with Jupyter works fine (but I don't want to program in the Jupyter web interface, I prefer PyCharm), while running the *.py in the console (Anaconda Prompt and cmd) fails with the same error.
Does anyone know how to fix this?
(I'm using Python 3.6.5 (from Anaconda) and PyCharm, OS: Win10 64-bit)
Thanks!
Bene
Update:
If it is relevant: I just changed num_workers=2 to num_workers=0 (for both loaders) and then it works.
Check out the multiprocessing documentation, specifically the programming guidelines for Windows. You should wrap all operations in functions and then call them inside an if __name__ == '__main__' clause:
# required imports

def load_datasets(...):
    # Code to load the datasets with multiple workers

def train(...):
    # Code to train the model

if __name__ == '__main__':
    load_datasets()
    train()
In short, the idea here is to wrap the example code inside an if __name__ == '__main__' statement.
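Applied to the tutorial script from the question, a minimal sketch of that structure might look like this (keeping the question's loaders and num_workers=2; only the layout changes, so the worker processes spawned on Windows can safely re-import the module):
import torch
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np

def imshow(img):
    # unnormalize and display a batch of images
    img = img / 2 + 0.5
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

def main():
    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
    classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
    dataiter = iter(trainloader)
    images, labels = dataiter.next()
    imshow(torchvision.utils.make_grid(images))
    print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

if __name__ == '__main__':
    main()
Everything that directly or indirectly starts DataLoader workers now runs only in the parent process, not during the re-import that spawn performs in each child.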
Because of different implementation of multiprocessing in Windows, you need to wrap your main code with this block:
if __name__ == '__main__':
For more info, you can check the official PyTorch Windows notes.
I've already seen this answer:
Gremlin, How to add edge to existing vertex in gremlin-python
and it wasn't really helpful. As suggested in one of the comments, I did try updating to gremlinpython 3.3.0, but then I get a KeyError.
Stack:
JanusGraph 0.2.0, gremlinpython 3.2.3
This is my code:
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
graph = Graph()
g = graph.traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin','g'))
martha = g.V().has('name','martha').next()
jack = g.V().has('name','jack').next()
#e_id = g.addE(jack,'likes',martha).next()
e_id = g.V(martha).as_('to').V(jack).addE("Likes").to('to').toList()
print e_id
StackTrace with gremlinpython 3.3.0
Traceback (most recent call last):
File "gremlin-py.py", line 9, in <module>
martha = g.V().has('name','martha').next()
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/process/traversal.py", line 70,in next
return self.__next__()
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/process/traversal.py", line 43,in __next__
self.traversal_strategies.apply_strategies(self)
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/process/traversal.py", line 352, in apply_strategies
traversal_strategy.apply(traversal)
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/driver/remote_connection.py", line 143, in apply
remote_traversal = self.remote_connection.submit(traversal.bytecode)
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/driver/driver_remote_connection.py", line 54, in submit
results = result_set.all().result()
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/concurrent/futures/_base.py", line 429, in result
return self.__get_result()
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/concurrent/futures/_base.py", line 381, in __get_result
raise exception_type, self._exception, self._traceback
KeyError: None
In my case, 3.3.0 is throwing error for all queries including g.V().next(). Now going back to 3.2.3, addvertex and other queries are working absolutely fine, but I couldn't figure out how to add edges. The same code when run with 3.2.3 produces,
StackTrace with gremlinpython 3.2.3
Traceback (most recent call last):
File "gremlin-py.py", line 12, in <module>
e_id = g.V(martha).as_('to').V(jack).addE("Likes").to('to').toList()
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/process/traversal.py", line 52, in toList
return list(iter(self))
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/process/traversal.py", line 70, in next
return self.__next__()
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/process/traversal.py", line 43, in __next__
self.traversal_strategies.apply_strategies(self)
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/process/traversal.py", line 284, in apply_strategies
traversal_strategy.apply(traversal)
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/driver/remote_connection.py", line 95, in apply
remote_traversal = self.remote_connection.submit(traversal.bytecode)
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/driver/driver_remote_connection.py", line 53, in submit
traversers = self._loop.run_sync(lambda: self.submit_traversal_bytecode(request_id, bytecode))
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/tornado/ioloop.py", line 457, in run_sync
return future_cell[0].result()
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/tornado/concurrent.py", line 237, in result
raise_exc_info(self._exc_info)
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/tornado/gen.py", line 285, in wrapper
yielded = next(result)
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/driver/driver_remote_connection.py", line 69, in submit_traversal_bytecode
"gremlin": self._graphson_writer.writeObject(bytecode),
File "/Users/arvindn/.virtualenvs/gremlinenv/lib/python2.7/site-packages/gremlin_python/structure/io/graphson.py", line 72, in writeObject
return json.dumps(self.toDict(objectData), separators=(',', ':'))
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 250, in dumps
sort_keys=sort_keys, **kw).encode(obj)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: v[4184] is not JSON serializable
It says v[x] is not JSON serializable. I'm not sure what causes this error. It'll be awesome if someone can help. If any more info is needed, I shall update the question accordingly.
JanusGraph 0.2.0 uses Apache TinkerPop 3.2.6. You should use the 3.2.6 version of the gremlinpython driver.
pip uninstall gremlinpython
pip install gremlinpython==3.2.6
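After reinstalling, it is worth confirming which version actually got picked up, e.g.:
pip show gremlinpython
With the driver matching the TinkerPop version on the JanusGraph side, the version mismatch that this answer identifies as the cause of the serialization error is removed.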
I'm on OS X 10.10.4 with MacPorts and Anaconda (conda 3.15.1).
While building the robotic simulator ARGoS,
https://github.com/ilpincy/argos3
we ran into runtime Qt issues when running the ARGoS examples.
[FATAL] Can't load library "/usr/local/lib/argos3/libargos3plugin_simulator_epuck.dylib" even after trying to add extensions for shared library (dylib) and module library (so):
/usr/local/lib/argos3/libargos3plugin_simulator_epuck.dylib: dlopen(/usr/local/lib/argos3/libargos3plugin_simulator_epuck.dylib, 1): Library not loaded: @rpath/./libQtOpenGL.4.dylib
I learned that CMake had found the Qt inside Anaconda, which had runtime problems.
-- Found Qt4: /Users/davidlaxer/anaconda/bin/qmake (found version "4.8.6")
Next, I installed qt4-mac:
sudo port install qt4-mac
---> Computing dependencies for qt4-mac
---> Fetching archive for qt4-mac
---> Attempting to fetch qt4-mac-4.8.7_0.darwin_14.x86_64.tbz2 from http://packages.macports.org/qt4-mac
---> Attempting to fetch qt4-mac-4.8.7_0.darwin_14.x86_64.tbz2.rmd160 from http://packages.macports.org/qt4-mac
---> Installing qt4-mac @4.8.7_0
---> Activating qt4-mac @4.8.7_0
NOTE: Qt database plugins for mysql55, postgresql91, and sqlite2 are NOT installed by this port; they are installed by qt4-mac-*-plugin instead.
---> Cleaning qt4-mac
---> Updating database of binaries
---> Scanning binaries for linking errors
---> No broken files found.
When I installed Qt with MacPorts, ipython qtconsole broke.
David-Laxers-MacBook-Pro:build_simulator davidlaxer$ ipython qtconsole
Traceback (most recent call last):
File "/Users/davidlaxer/anaconda/bin/ipython", line 6, in <module>
sys.exit(start_ipython())
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/__init__.py", line 120, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 573, in launch_instance
app.initialize(argv)
File "<string>", line 2, in initialize
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/terminal/ipapp.py", line 321, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/core/application.py", line 369, in initialize
self.parse_command_line(argv)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/terminal/ipapp.py", line 316, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 471, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/config/application.py", line 402, in initialize_subcommand
subapp = import_item(subapp)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/utils/importstring.py", line 42, in import_item
module = __import__(package, fromlist=[obj])
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/qt/console/qtconsoleapp.py", line 50, in <module>
from IPython.external.qt import QtCore, QtGui
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/external/qt.py", line 23, in <module>
QtCore, QtGui, QtSvg, QT_API = load_qt(api_opts)
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/external/qt_loaders.py", line 277, in load_qt
result = loaders[api]()
File "/Users/davidlaxer/anaconda/lib/python2.7/site-packages/IPython/external/qt_loaders.py", line 184, in import_pyqt4
from PyQt4 import QtGui, QtCore, QtSvg
ImportError: dlopen(/Users/davidlaxer/anaconda/lib/python2.7/site-packages/PyQt4/QtGui.so, 2): Symbol not found: _iconv
Referenced from: /Users/davidlaxer/anaconda/lib//libxml2.2.dylib
Expected in: /opt/local/lib//libiconv.2.dylib
How is this supposed to work?
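A first diagnostic step (not a fix) is to look at which libraries the failing pieces actually link against, using the paths from the error message:
otool -L /Users/davidlaxer/anaconda/lib/python2.7/site-packages/PyQt4/QtGui.so
otool -L /Users/davidlaxer/anaconda/lib/libxml2.2.dylib
If Anaconda's libxml2 is being resolved against MacPorts' libiconv in /opt/local/lib, as the ImportError suggests, then libraries from the two package managers are being mixed at load time.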