pyinstaller ImportError: C extension: No module named np_datetime not built

I am running a virtual environment with Python 2.7 for my program.
There seems to be a problem after creating the executable file on Windows.
I ran
venv/Scripts/pyinstaller.exe -F main.py
and everything seemed fine. But when I click on the created executable main.exe, there is an error.
Tried and tested
I have re-installed pandas and pyinstaller.
Added hook-pandas.py to the hooks folder in the environment.
Ensured the environment is activated.
Checked that the program runs fine before building the executable.
Re-created the environment.
Yet after all that, I am prompted with this issue [see ImportError] when I run the executable file.
It is an extreme pain to debug this because the command prompt displaying the error closes almost immediately instead of pausing.
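One way to at least keep the traceback on screen (assuming the executable lands in PyInstaller's usual dist folder) is to launch it from an already-open command prompt instead of double-clicking it:
cd dist
main.exe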
Looking for Suggestions
I am hoping for suggestions on how to troubleshoot PyInstaller. Any resources to read up on would be nice.
Usually I have no trouble with Python, as PyCharm has several handy debugging tools that help me identify the problem.

I ran into the same problem and found this thread, but I managed to solve it borrowing from the reference you posted (about pandas._libs.tslibs.timedeltas), so thank you for that!
In that article, the module that resulted in the ImportError was, in fact, pandas._libs.tslibs.timedeltas, if you look at the poster's logs. But the error you and I ran into refers to np_datetime instead. So, from the traceback logs, I finally figured out that the code we have to write in hook-pandas.py should be the following:
hiddenimports = ['pandas._libs.tslibs.np_datetime']
Maybe that alone will solve your problem. However, in my case, once I solved the np_datetime issue, other very similar ImportError problems arose (also related to pandas hiddenimports), so, in case you run into the same issues, just define hiddenimports as follows:
hiddenimports = ['pandas._libs.tslibs.np_datetime','pandas._libs.tslibs.nattype','pandas._libs.skiplist']
TL;DR:
You can first try to write
hiddenimports = ['pandas._libs.tslibs.np_datetime']
into hook-pandas.py. However, if for some reason you run into the exact same issues I did afterwards, try
hiddenimports = ['pandas._libs.tslibs.np_datetime','pandas._libs.tslibs.nattype','pandas._libs.skiplist']
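If the hook isn't being picked up from the environment's hooks folder, you can also point PyInstaller at a hooks directory of your own with --additional-hooks-dir (here hooks is just an example directory containing your hook-pandas.py):
venv/Scripts/pyinstaller.exe -F --additional-hooks-dir=hooks main.py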
If you wish to dive deeper (or run into a different pandas ImportError than the ones I did), this is the code in pandas's __init__.py referenced in your traceback log (lines 23 to 35):
from pandas.compat.numpy import *
try:
    from pandas._libs import (hashtable as _hashtable,
                              lib as _lib,
                              tslib as _tslib)
except ImportError as e:  # pragma: no cover
    # hack but overkill to use re
    module = str(e).replace('cannot import name ', '')
    raise ImportError("C extension: {0} not built. If you want to import "
                      "pandas from the source directory, you may need to run "
                      "'python setup.py build_ext --inplace --force' to build "
                      "the C extensions first.".format(module))
From that I went into the
C:\Python27\Lib\site-packages\pandas\_libs
and
C:\Python27\Lib\site-packages\pandas\_libs\tslibs
folders and found the exact names of the modules that caused the errors.
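A quicker way to enumerate those candidate module names than browsing the folders by hand is a short script like this (a sketch; the path is the Python 2.7 default from above):
import os

# List the compiled C extensions (.pyd on Windows) under pandas/_libs;
# these are the modules PyInstaller may need listed as hiddenimports.
libs = r"C:\Python27\Lib\site-packages\pandas\_libs"
for root, _dirs, files in os.walk(libs):
    for name in files:
        if name.endswith(".pyd"):
            print(os.path.join(root, name))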
I hope that solves your problem as it did mine.
Cheers!

Related

mypyc, KeyError: '__file__'

I use mypyc successfully in my project and it had performed well until just a couple of days ago. I now get the following error:
File "mypyc/__main__.py", line 18, in <module>
KeyError: '__file__'
Line 18 above, i.e., the line that is failing, is just
base_path = os.path.join(os.path.dirname(__file__), '..')
which I wouldn't expect to fail. I am in my venv virtualenv when I execute mypyc, using the same command that has always worked before.
I thought perhaps a regression was introduced in mypyc, so I used git to check whether that line had changed in any recent version of mypy; it hadn't.
I also tried downgrading mypy to an older version that had worked before, but that version now failed with the same error. To be sure the problem wasn't being experienced by others, I checked the issues at the mypy repo on GitHub and searched for __file__ to see if that part of the error message showed up; it didn't. Perhaps it is some weird issue with my environment?
I experience the issue with venv virtualenvs created with Python 3.10 and 3.10.1, and also with 3.9.9. It worked fine on Python 3.10 before. Any ideas on what to investigate next?
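For what it's worth, a KeyError (rather than the usual NameError) suggests that something is reading __file__ straight out of the module's globals dict instead of going through normal name lookup, which compiled code may do. A minimal sketch of the difference:
# Under normal name lookup a missing global raises NameError:
ns = {}
try:
    exec("p = __file__", ns)
except NameError as e:
    print("name lookup:", e)

# A direct dict lookup on the same missing key raises KeyError,
# matching the traceback above:
try:
    ns["__file__"]
except KeyError as e:
    print("dict lookup:", e)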

nbsphinx causes build to fail when building Jupyter Notebooks

Details
I am getting a Read the Docs build failure that I don't understand. The assertion of 'Verbatim' at line 2151 of nbsphinx.py is causing the build to fail when I try to include the Jupyter Notebook tutorials I created. I compared the current versions of the tutorials to previous versions that had not caused the build to fail, and I can't find a difference that could account for the failure.
Read the Docs project URL: lofti_gaia
Build URL: https://github.com/logan-pearce/lofti_gaia
Read the Docs username: logan-pearce
Expected Result
A passing build including *.ipynb files
Actual Result
The build failed at line 2151 of nbsphinx.py due to the assertion of 'Verbatim' failing.
Terminal output:
Running Sphinx v4.1.2
loading translations [en]... done
making output directory... done
WARNING: html_static_path entry '_static' does not exist
loading pickled environment... done
building [mo]: targets for 0 po files that are out of date
building [latex]: all documents
updating environment: 0 added, 0 changed, 0 removed
looking for now-outdated files... none found
processing lofti_gaia.tex... index installation tutorials/QuickStart tutorials/Tutorial api lofti loftitools
resolving references...
done
writing... failed
Exception occurred:
File "/home/docs/checkouts/readthedocs.org/user_builds/lofti-gaia/conda/latest/lib/python3.7/site-packages/nbsphinx.py", line 2151, in depart_codearea_latex
assert 'Verbatim' in lines[0]
AssertionError
The full traceback has been saved in /tmp/sphinx-err-x1h83s3m.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
According to GitHub issue 584 for nbsphinx (https://github.com/spatialaudio/nbsphinx/issues/584), this is due to a compatibility issue with Sphinx 4.1.0. It can be worked around by pinning Sphinx to version 4.0.2.
So in my requirements.txt file I included sphinx==4.0.2. Now my requirements.txt file looks like:
numpy
matplotlib
astropy>=4.0.1.post1
astroquery>=0.4
sphinx==4.0.2
ipython==7.19.0
nbsphinx>=0.8.6
and the build passes.
I have encountered the same issue. I did not solve it with the .ipynb format, but converting the Jupyter notebook to .rst format works.
Hope it helps.
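If you go the conversion route, nbconvert can do it (a sketch; Tutorial.ipynb stands in for your notebook):
jupyter nbconvert --to rst Tutorial.ipynb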

Use Mypy with Ruamel.yaml

I am attempting to use mypy with modules that use ruamel.yaml, and mypy cannot find ruamel.yaml even though Python has no problem finding it. I am puzzled because I can't find a module called YAML.py or a class called YAML either, even though these statements work in Python:
from ruamel.yaml import YAML
yaml = YAML()
x = yaml.load()
What do I need to do to get mypy to recognize ruamel.yaml?
A workaround is to run without the incremental logic of mypy:
python -m mypy --no-incremental myfile.py
Background
There is a known issue in mypy; see here.
In summary:
Something is not working in the incremental logic of mypy when it encounters ruamel.
The first run goes fine. This is the command:
python -m mypy myfile.py
Run it again and you get an error:
error: Skipping analyzing 'ruamel': found module but no type hints or library stubs [import]
Run it a third time and it goes fine again, and so on, alternating.
You should not be looking for a file YAML.py. The YAML in
yaml = YAML()
is a class that is defined in ruamel/yaml/main.py and that gets imported into ruamel/yaml/__init__.py (both under site-packages). That is why you do:
from ruamel.yaml import YAML
(the alternative would be a file yaml.py under the ruamel directory, but the loader/dumper is a bit too much to put in one file).
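You can confirm this from the interpreter, reusing the import from the question; the class should report main as its defining module:
from ruamel.yaml import YAML
print(YAML.__module__)  # should print 'ruamel.yaml.main'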
What might work, if the above knowledge doesn't help you resolve things, is to explicitly set the global flag mypy_path or the environment variable MYPYPATH. This has to include the directory in which the ruamel directory is located.
(I could not find it mentioned in the documentation, but from the source (mypy/build.py:mypy_path()) you can see that this is supposed to be a string that gets split on os.pathsep, which is the colon (:) on my Linux-based system.)
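For example, in a mypy.ini next to your project (a sketch; the site-packages path is a placeholder for wherever the ruamel directory lives):
[mypy]
mypy_path = /path/to/venv/lib/python3.7/site-packages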
I have the same issue, even after setting MYPYPATH=./.venv/lib/python3.7/site-packages.
A temporary 'solution' is to ignore the missing imports:
mypy --ignore-missing-imports

Apache spark-shell error importing jars

I have a local Spark 1.5.2 (Hadoop 2.4) installation on Windows, as explained here.
I'm trying to import a jar file that I created in Java using Maven (the jar is jmatrw, which I uploaded to GitHub). Note that the jar does not include a Spark program and has no dependencies on Spark. I tried the following steps, but none of them works in my installation:
I copied the library to "E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar"
I edited spark-env.sh and added SPARK_CLASSPATH="E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar"
In a command window I ran > spark-shell --jars "E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar", but it says "Warning: skip remote jar"
In the spark shell I tried scala> sc.addJar("E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar"); it says "INFO: added jar ... with timestamp"
When I type scala> import it.prz.jmatrw.JMATData, spark-shell replies with error: not found: value it.
I spent a lot of time searching on Stack Overflow and on Google; indeed, a similar Stack Overflow question is here, but I'm still not able to import my custom jar.
Thanks
There are two settings in 1.5.2 to reference an external jar. You can add it for the driver or for the executor(s).
I'm doing this by adding settings to spark-defaults.conf, but you can set these when launching spark-shell or in SparkConf.
spark.driver.extraClassPath /path/to/jar/*
spark.executor.extraClassPath /path/to/jar/*
I don't see anything really wrong with the way you are doing it, but you could try the conf approach above, or set these using SparkConf:
val conf = new SparkConf()
conf.set("spark.driver.extraClassPath", "/path/to/jar/*")
val sc = new SparkContext(conf)
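The same setting can also be passed on the spark-shell command line with --conf, which may sidestep whatever path handling produced the "skip remote jar" warning (a sketch, reusing the jar path from the question):
spark-shell --conf spark.driver.extraClassPath=E:/installprogram/spark-1.5.2-bin-hadoop2.4/lib/jmatrw-v0.1-beta.jar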
In general, I haven't enjoyed working with Spark on Windows. Try to get onto Unix/Linux.

SBT configuration failed to load in Typesafe Activator

I'm currently trying to start a play-slick application through the Typesafe Activator, but it fails to load the SBT configuration and I get this error:
/play-slick/build.sbt:30: error: reference to fork is ambiguous;
it is imported twice in the same scope by
import _root_.play.Project._
and import Keys._
fork in run := true
^
Type error in expression
Failed to load project.
Does this mean I have SBT downloaded twice, and what can I do to resolve it? Thanks.
Just wanted to say that I came across this exact same issue when trying to use the Play-Slick example linked from the Play Tutorials page.
The solution to get it working seems indeed to have been to follow the suggestion in the GitHub link that Seth Tisue included in a comment above, where corruptmemory suggested removing the following line from build.sbt:
fork in run := true
In my case, this was enough to convince IntelliJ to open the project and let me tinker with it. (Just in case this is the first result for anyone else coming across this problem.)
Just remove
fork in run := true
from build.sbt and run activator clean run from the command line.
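If you actually need forking, fully qualifying the key may resolve the ambiguity instead of deleting the line (a hypothetical sketch, untested against that exact Play/sbt combination):
sbt.Keys.fork in sbt.Keys.run := true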
