I ran the test code in the common conda Python 3.8 environment with these settings:
os.environ['NUMBA_CPU_FEATURES']='+adx,+aes,+avx,+avx2,+avx512bw,+avx512cd,+avx512dq,+avx512f,+avx512vl,+avx512vnni,+bmi,+bmi2,+clflushopt,+clwb,+cmov,+cx16,+cx8,+f16c,+fma,+fsgsbase,+fxsr,+invpcid,+lzcnt,+mmx,+movbe,+pclmul,+pku,+popcnt,+prfchw,+rdrnd,+rdseed,+sahf,+sse,+sse2,+sse3,+sse4.1,+sse4.2,+ssse3,+xsave,+xsavec,+xsaveopt,+xsaves'
The issue also exists with this sample: https://github.com/IntelPython/numba-dpex/blob/main/numba_dpex/examples/sum.py
When I run it in Intel Python 3.8, it takes up to 2.5 minutes and I get the error below.
Error shown:
No device of requested type available. Please check https://software.intel.com/content/www/us/en/develop/articles/intel-oneapi-dpcpp-system-requirements... -1 (CL_DEVICE_NOT_FOUND)
/opt/conda/envs/idp/lib/python3.8/site-packages/numba_dppy/config.py:57: UserWarning: Please install dpctl 0.8.* or higher.
warnings.warn(msg, UserWarning)
/opt/conda/envs/idp/lib/python3.8/site-packages/numba/core/dispatcher.py:303: UserWarning: Numba extension module 'numba_dppy.numpy_usm_shared' failed to load due to 'ImportError(Importing numba_dppy failed)'.
How can I resolve this error?
I used conda to create an intelpython3_full Python 3.8 environment and test the numpy and numba code, on Ubuntu 16.04 with a Xeon Gold 5220R and no GPU.
Since you are unable to import the numba_dppy package, can you please try the command below?
conda install numba-dppy
If the issue still persists, we can try a Base Toolkit docker image. Please follow the steps below:
Download the image from Docker Hub:
docker pull intel/oneapi-basekit
Run the container from the image:
docker run -idt intel/oneapi-basekit
Look for the container ID:
docker ps
docker exec -it <container ID> bash
Update list of packages:
apt-get update
Update conda:
conda update conda
Create a conda env:
conda create -n idp3.8 intelpython3_full python=3.8
Activate the environment:
source activate idp3.8
Install the dpctl package:
python -m pip install --index-url https://pypi.org/simple dpctl --ignore-installed
Install the numba_dppy package:
conda install numba-dppy
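Once the packages are installed, a quick sanity check (a minimal sketch, not from the thread; the helper name check_import is made up for illustration) confirms that both imports resolve before running the sample:

```python
import importlib.util

def check_import(module_name):
    # True if the module can be located in the current environment
    return importlib.util.find_spec(module_name) is not None

for name in ["dpctl", "numba_dppy"]:
    print(name, "OK" if check_import(name) else "MISSING")
```

If either package prints MISSING, the install step above did not land in the active environment.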
I ran this sample (https://github.com/IntelPython/numba-dppy/blob/main/numba_dppy/examples/sum.py) inside the docker container.
Usually I run python -m cProfile myapp.py to do profiling.
I used pyinstaller my.spec --dist mydir --noconfirm
and that created an executable program at mydir/myapp.
Now I would like to run cProfile on myapp and look at the pstats.
How do I do that with this executable program?
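cProfile cannot attach to a frozen binary from the outside, so one common workaround (a hedged sketch, not from this thread; main and myapp.prof are placeholders) is to profile from inside the entry script before bundling, then read the dumped stats with pstats:

```python
import cProfile
import pstats

def main():
    # stand-in for the real application logic
    return sum(range(1000))

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()
    profiler.dump_stats("myapp.prof")  # stats file written next to the exe
    pstats.Stats("myapp.prof").sort_stats("cumulative").print_stats(5)
```

After rebuilding with PyInstaller, running the executable produces myapp.prof, which can be inspected later with the pstats module on any machine.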
I tried running from underactuated.exercises.pend.test_vibrating_pendulum import TestVibratingPendulum in vibrating_pendulum.ipynb and I got:
ModuleNotFoundError: No module named 'timeout_decorator'
My guess is that you're running the underactuated notebooks on your local machine and did not install the underactuated requirements?
pip3 install -r underactuated/requirements.txt
will install the timeout-decorator package, and any others you're missing.
I am using a saved XGBoost model in an executable file created with PyInstaller. I set up a virtual env, installed xgboost, and made sure the script ran, but after I create the exe and run it I get an error about xgboost.core:
ModuleNotFoundError: No module named 'xgboost.core'
Actually I can't see any import problem with xgboost. First, make sure you are using the latest version inside your env with pip install -U xgboost. Next, try adding xgboost.core as a hidden import and adding xgboost's DLLs as data files.
Suppose your virtualenv is named env and your project layout is:
├───myscript.py
├───env
Code:
import traceback

try:
    from xgboost import core
    input("xgboost.core imported successfully!")
except Exception:
    traceback.print_exc()
    input("Import Error!")
Command:
(env) > pyinstaller myscript.py -F --hidden-import=xgboost.core --add-data "./env/xgboost/*;xgboost/"
--add-data "./env/Lib/site-packages/xgboost/VERSION;xgboost/"
The answer by Masoud Rahimi did not work for me. What did work was running pyinstaller with the --collect-all option:
pyinstaller -D <app_name>.py --noconfirm --collect-all "xgboost"
See this issue and the arguments in the pyinstaller manual.
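For repeated builds, the same effect as --collect-all can be captured in a PyInstaller hook file (a sketch based on PyInstaller's documented hook API; the file name and hooks directory are conventions you choose). Save this as hook-xgboost.py and pass --additional-hooks-dir pointing at its directory:

```python
# hook-xgboost.py -- bundles xgboost's data files, binaries (DLLs/.so),
# and submodules, equivalent to: pyinstaller --collect-all "xgboost"
from PyInstaller.utils.hooks import collect_all

datas, binaries, hiddenimports = collect_all("xgboost")
```

This keeps the collection logic in the project instead of on the command line, so the .spec file stays reusable.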
I'm trying to use a Jupyter Notebook in a virtual environment.
I created a new virtualenv (virtualenv ker12), activated it, and installed a specific version of keras (or any other library).
As mentioned in "Using a virtualenv in an IPython notebook", I also did:
pip install ipykernel
and
python -m ipykernel install --user --name=my-virtualenv-name
When I run the notebook and execute
! which jupyter
the output is correct:
/Users/myname/virtualenv/ker12/bin/python
but when I try to import a library, for example import keras, there is an error:
ImportError: No module named keras
On the other hand, when I run pip freeze | grep Keras the output is:
Keras==1.2.0
UPDATE 1:
This problem is not related to Keras; it occurs with any other library as well (for example pandas).
If I print os.path in the notebook, the output is the following:
<module 'posixpath' from '/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/posixpath.pyc'>
From command-line Python, os.path looks correct:
<module 'posixpath' from '/Users/my_name/virtualenv/ker12/lib/python2.7/posixpath.pyc'>
UPDATE 2:
If I print sys.path from the terminal and from Jupyter, the output is also different:
From the terminal:
/Users/myname/virtualenv/ker12/lib/python27.zip
/Users/myname/virtualenv/ker12/lib/python2.7
/Users/myname/virtualenv/ker12/lib/python2.7/plat-darwin
/Users/myname/virtualenv/ker12/lib/python2.7/plat-mac
/Users/myname/virtualenv/ker12/lib/python2.7/plat-mac/lib-scriptpackages
/Users/myname/virtualenv/ker12/lib/python2.7/lib-tk
/Users/myname/virtualenv/ker12/lib/python2.7/lib-old
/Users/myname/virtualenv/ker12/lib/python2.7/lib-dynload
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/Users/myname/virtualenv/ker12/lib/python2.7/site-packages
From Jupyter:
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python27.zip
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload
/usr/local/lib/python2.7/site-packages
/usr/local/lib/python2.7/site-packages/IPython/extensions
/Users/myname/.ipython
The solution is to open the Jupyter notebook with the following command:
~/virtualenv/my_venv_name/bin/jupyter-notebook
You should not install ipykernel; instead, you should go for a full Jupyter installation (pip install jupyter) inside your virtual environment. Additionally, be sure that you don't create your virtual environment with the --system-site-packages option.
See also this answer.
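The mismatch in the question can also be checked directly from a notebook cell: ! which jupyter only reports what the shell would run, while the kernel's actual interpreter is sys.executable. A minimal diagnostic (a sketch; the env path shown is the asker's, substitute your own, and kernel_uses_env is a made-up helper name):

```python
import sys

def kernel_uses_env(env_root):
    # True if the running interpreter lives under the given virtualenv root
    return sys.executable.startswith(env_root)

print(sys.executable)  # the interpreter the kernel is really using
print(kernel_uses_env("/Users/myname/virtualenv/ker12"))
```

If this prints a system path such as /usr/local/Cellar/... and False, the kernel is not the virtualenv's Python, which explains the sys.path listing above.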