Unable to import numba package, getting error (Intel)

I ran the test code in the common conda Python 3.8 environment with these settings:
os.environ['NUMBA_CPU_FEATURES']='+adx,+aes,+avx,+avx2,+avx512bw,+avx512cd,+avx512dq,+avx512f,+avx512vl,+avx512vnni,+bmi,+bmi2,+clflushopt,+clwb,+cmov,+cx16,+cx8,+f16c,+fma,+fsgsbase,+fxsr,+invpcid,+lzcnt,+mmx,+movbe,+pclmul,+pku,+popcnt,+prfchw,+rdrnd,+rdseed,+sahf,+sse,+sse2,+sse3,+sse4.1,+sse4.2,+ssse3,+xsave,+xsavec,+xsaveopt,+xsaves'
The issue exists with this sample too: https://github.com/IntelPython/numba-dpex/blob/main/numba_dpex/examples/sum.py
When I run it with Intel Python 3.8, it takes up to 2.5 minutes and then I get the error below:
No device of requested type available. Please check https://software.intel.com/content/www/us/en/develop/articles/intel-oneapi-dpcpp-system-requirements... -1 (CL_DEVICE_NOT_FOUND)
/opt/conda/envs/idp/lib/python3.8/site-packages/numba_dppy/config.py:57: UserWarning: Please install dpctl 0.8.* or higher.
warnings.warn(msg, UserWarning)
/opt/conda/envs/idp/lib/python3.8/site-packages/numba/core/dispatcher.py:303: UserWarning: Numba extension module 'numba_dppy.numpy_usm_shared' failed to load due to 'ImportError(Importing numba_dppy failed)'.
How can I resolve this error?
I used conda to create the intelpython3_full Python 3.8 environment and tested the numpy and numba code on Ubuntu 16.04 with a Xeon Gold 5220R, without a GPU.

Since you are unable to import the numba_dppy package, can you please try the command below?
conda install numba-dppy
If the issue still persists, we can try with a oneAPI Base Toolkit image. Please follow the steps below:
Download the image from Docker Hub:
docker pull intel/oneapi-basekit
Run the container from the image:
docker run -idt intel/oneapi-basekit
Look for the container ID:
docker ps
docker exec -it <container ID> bash
Update the list of packages:
apt-get update
Update conda:
conda update conda
Create the conda env:
conda create -n idp3.8 intelpython3_full python=3.8
Activate the environment:
source activate idp3.8
Install the dpctl package:
python -m pip install --index-url https://pypi.org/simple dpctl --ignore-installed
Install the numba-dppy package:
conda install numba-dppy
I ran this sample (https://github.com/IntelPython/numba-dppy/blob/main/numba_dppy/examples/sum.py) inside the Docker container.
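As a quick sanity check for the CL_DEVICE_NOT_FOUND error above, you can ask dpctl which devices it sees; dpctl.lsplatform() and dpctl.get_devices() are part of dpctl's public API, though exactly what they print varies by version:
python -c "import dpctl; dpctl.lsplatform()"
python -c "import dpctl; print(dpctl.get_devices())"
If these report no devices, the sample has nothing to offload to, which matches the error above.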

Related

Docker run not working, unable to view R workspace image

I'm trying to set up the GeoMXAnalysis Workflow, documented here. The way it works is that I need to install a bunch of packages, download the GitHub repo, and then use Docker to view an R Markdown file that walks you through the workflow. My problem is that I have followed all the steps, on both a Galaxy server and my local Windows laptop, without success.
## Title: GeoMX workflow Intro
## INSTALL PACKAGES from Bioconductor:
if (!require("BiocManager", quietly = TRUE))
  install.packages("BiocManager")
BiocManager::install(c("standR", "edgeR", "limma", "msigdb", "GSEABase", "igraph", "vissE", "SpatialExperiment", "scater"), force = TRUE)
# install geomx workflow package (requires the 'remotes' package)
if (!require("remotes", quietly = TRUE))
  install.packages("remotes")
remotes::install_github("DavisLaboratory/GeoMXAnalysisWorkflow", build_vignettes = FALSE, force = TRUE)
# build and load docker image
system("cmd.exe", input = "docker run -e PASSWORD=pass -p 8787:8787 ghcr.io/ningbioinfo/geomxanalysisworkflow:latest")
This is the script I'm using to download and install the packages. My problem is with the docker command. At first I didn't have Docker installed, but now that I do, I'm receiving the following error:
> #build and load docker image
> system("cmd.exe", input="docker run -e PASSWORD=pass -p 8787:8787 ghcr.io/ningbioinfo/geomxanalysisworkflow:latest")
Microsoft Windows [Version 10.0.19044.2486]
(c) Microsoft Corporation. All rights reserved.
C:\Users\a\Documents>docker run -e PASSWORD=pass -p 8787:8787 ghcr.io/ningbioinfo/geomxanalysisworkflow:latest
Unable to find image 'ghcr.io/ningbioinfo/geomxanalysisworkflow:latest' locally
docker: Error response from daemon: Head "https://ghcr.io/v2/ningbioinfo/geomxanalysisworkflow/manifests/latest": denied.
See 'docker run --help'.
C:\Users\a\Documents>[1] 0
Do I need to switch the daemon somehow, or is this a problem with the GitHub repo not existing? Pasting the URL into my browser doesn't lead me anywhere concrete.
Any thoughts on how to resolve this much appreciated.
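One possibility worth ruling out (my suggestion, not from the thread): a "denied" response from ghcr.io often means the image is private or requires authentication, in which case logging in to the GitHub Container Registry with a personal access token before pulling may help; the username and token below are placeholders:
echo <YOUR_GITHUB_PAT> | docker login ghcr.io -u <YOUR_GITHUB_USERNAME> --password-stdin
docker pull ghcr.io/ningbioinfo/geomxanalysisworkflow:latest
If the pull still fails after logging in, the image name or tag itself is likely wrong, or the package was removed.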

No module named 'timeout_decorator' in drake vibrating_pendulum

I try to run from underactuated.exercises.pend.test_vibrating_pendulum import TestVibratingPendulum in vibrating_pendulum.ipynb and I get:
ModuleNotFoundError: No module named 'timeout_decorator'
My guess is that you're running the underactuated notebooks on your local machine and did not install the underactuated requirements?
pip3 install -r underactuated/requirements.txt
will install the timeout-decorator package, and any others you're missing.
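Note that the PyPI package name uses a hyphen while the import name uses an underscore, so installing just the one missing package and verifying it would look like this (my addition, not from the thread):
pip3 install timeout-decorator
python3 -c "import timeout_decorator"   # exits silently if the install worked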

Using R in a Snakemake workflow with Mambaforge

I'm building a pipeline with Snakemake. One rule involves an R script that reads a CSV file using readr. I get this error when I run the pipeline with --use-singularity and --use-conda:
Error: Unknown TZ UTC
In addition: Warning message:
In OlsonNames() : no Olson database found
Execution halted
Google suggests readr is crashing due to missing tzdata, but I can't figure out how to install the tzdata package and make readr see it. I am running the entire pipeline in a Mambaforge container to ensure reproducibility. Snakemake recommends using Mambaforge over a Miniconda container as it's faster, but I think my error involves Mambaforge, since switching to Miniconda resolves it.
Here's a workflow to reproduce the error:
# Snakefile
singularity: "docker://condaforge/mambaforge"

rule targets:
    input:
        "out.txt"

rule readr:
    input:
        "input.csv"
    output:
        "out.txt"
    conda:
        "env.yml"
    script:
        "test.R"
# env.yml
name: env
channels:
  - default
  - bioconda
  - conda-forge
dependencies:
  - r-readr
  - tzdata
# test.R
library(readr)
fp <- snakemake@input[[1]]
df <- read_csv(fp)
print(df)
write(df$x, "out.txt")
I run the workflow with snakemake --use-conda --use-singularity. How do I run R scripts when the Snakemake workflow is running from a Mambaforge singularity container?
Looking through the stack of R code leading to the error, I see that it checks a bunch of default locations for the zoneinfo folder that tzdata includes, but also checks for a TZDIR environment variable.
I believe a proper solution to this would be for the Conda tzdata package to set this variable to point to it. This will require a PR to the Conda Forge package (see repo issue). In the meantime, one could do either of the following as workarounds.
Workaround 1: Set TZDIR from R
Continuing to use the tzdata package from Conda, one could set the environment variable at the start of the R script.
#!/usr/bin/env Rscript
## the following assumes active Conda environment with `tzdata` installed
Sys.setenv("TZDIR"=paste0(Sys.getenv("CONDA_PREFIX"), "/share/zoneinfo"))
I would consider this a temporary workaround.
Workaround 2: Derive a New Docker
Otherwise, make a new Docker image that includes a system-level tzdata installation. This appears to be a common issue, so following other examples (and keeping things clean), it'd go something like:
Dockerfile
FROM --platform=linux/amd64 condaforge/mambaforge:latest

## include tzdata
RUN apt-get update > /dev/null \
    && DEBIAN_FRONTEND="noninteractive" apt-get install --no-install-recommends -y tzdata > /dev/null \
    && apt-get clean
Upload this to Docker Hub and use it instead of the Mambaforge image as the image for Snakemake. This is probably a more reliable long-term solution, but perhaps not everyone wants to create a Docker Hub account.
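If you do go that route, the only change to the workflow itself would be pointing the Snakefile's singularity directive at the derived image; the image name below is a hypothetical placeholder:
# Snakefile, assuming the derived image was pushed as youruser/mambaforge-tzdata
singularity: "docker://youruser/mambaforge-tzdata:latest"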

R X13binary missing in docker build

I have a Dockerfile where I'm trying to install the R seasonal library:
# Debian-based image
FROM continuumio/miniconda3:4.5.12
. . .
# Install packages not on conda
RUN conda activate r_env && \
    R -e "install.packages(c('RUnit', 'seasonal'), dependencies=TRUE, repos='https://cran.case.edu')"
Everything looks like it installs correctly; however, when I get into the container and run library(seasonal), I get this error:
> library(seasonal)
The binaries provided by 'x13binary' do not work on this
machine. To get more information, run:
x13binary::checkX13binary()
> x13binary::checkX13binary()
Error in x13binary::checkX13binary() : X-13 binary file not found
After some googling, it looks like I can manually set the path for the binary, and a find shows that the binary exists on the machine:
(r_env) root@89c7265d9316:/# find / -name "*x13*"
/opt/conda/envs/arimaApiR/lib/R/library/x13binary
/opt/conda/envs/arimaApiR/lib/R/library/x13binary/help/x13binary.rdx
/opt/conda/envs/arimaApiR/lib/R/library/x13binary/help/x13binary.rdb
/opt/conda/envs/arimaApiR/lib/R/library/x13binary/html/x13path.html
/opt/conda/envs/arimaApiR/lib/R/library/x13binary/html/x13binary-package.html
/opt/conda/envs/arimaApiR/lib/R/library/x13binary/bin/x13ashtml.exe
/opt/conda/envs/arimaApiR/lib/R/library/x13binary/R/x13binary.rdx
/opt/conda/envs/arimaApiR/lib/R/library/x13binary/R/x13binary.rdb
/opt/conda/envs/arimaApiR/lib/R/library/x13binary/R/x13binary
/opt/conda/envs/arimaApiR/conda-meta/r-x13binary-1.1.39_2-r36h6115d3f_0.json
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary/help/x13binary.rdx
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary/help/x13binary.rdb
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary/html/x13path.html
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary/html/x13binary-package.html
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary/bin/x13ashtml.exe
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary/R/x13binary.rdx
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary/R/x13binary.rdb
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0/lib/R/library/x13binary/R/x13binary
/opt/conda/pkgs/r-x13binary-1.1.39_2-r36h6115d3f_0.tar.bz2
However, no matter what I set the path to, the library still throws errors about where the actual path is:
(r_env) root@89c7265d9316:/# export X13_PATH=/opt/conda/envs/arimaApiR/lib/R/library/x13binary
(r_env) root@89c7265d9316:/# R -e "library(seasonal)"
The system variable 'X13_PATH' has been manually set to:
/opt/conda/envs/arimaApiR/lib/R/library/x13binary
Since version 1.2, 'seasonal' relies on the 'x13binary'
package and does not require 'X13_PATH' to be set anymore.
Only set 'X13_PATH' manually if you intend to use your own
binaries. See ?seasonal for details.
Binary executable file /opt/conda/envs/arimaApiR/lib/R/library/x13binary/x13as or /opt/conda/envs/arimaApiR/lib/R/library/x13binary/x13ashtml not found.
See ?seasonal for details.
I feel like I'm running in circles. Has anyone had luck running this inside a container?
I've prepared my own container, but I didn't use continuumio/miniconda since I don't know how it works inside.
This is the Dockerfile I've prepared:
FROM r-base:3.6.1

RUN apt-get update \
    && apt-get install -y libxml2-dev

RUN R -e "install.packages('RUnit', dependencies=TRUE, repos='https://cran.case.edu')"
RUN R -e "install.packages('x13binary', dependencies=TRUE, repos='https://cran.case.edu')"
RUN R -e "install.packages('seasonal', dependencies=TRUE, repos='https://cran.case.edu')"

CMD [ "bash" ]
If I run your test commands, I receive this:
> library(seasonal)
> x13binary::
x13binary::checkX13binary x13binary::supportedPlatform x13binary::x13path
> x13binary::checkX13binary
x13binary::checkX13binary
> x13binary::checkX13binary()
x13binary is working properly
>
NOTE: the Dockerfile can be improved, e.g. you can install the packages together with c('RUnit', 'x13binary', 'seasonal') and remove the apt cache after installing, but I just wanted to run a quick test to see if it would work (see the sketch below).
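For reference, a consolidated version of that Dockerfile might look like this (same base image and CRAN mirror as above; untested, so treat it as a starting point rather than a verified build):
FROM r-base:3.6.1

# system dependency, with the apt cache removed afterwards
RUN apt-get update \
    && apt-get install -y libxml2-dev \
    && rm -rf /var/lib/apt/lists/*

# one install.packages() call for all three packages
RUN R -e "install.packages(c('RUnit', 'x13binary', 'seasonal'), dependencies=TRUE, repos='https://cran.case.edu')"

CMD [ "bash" ]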

Jupyter Notebook in virtual environment doesn't see the virtual env packages

I'm trying to use a Jupyter Notebook in a virtual environment.
I have created a new virtualenv (virtualenv ker12), activated it, and installed a specific version of Keras (or any other library).
Also, as mentioned in Using a virtualenv in an IPython notebook, I did:
pip install ipykernel
and
python -m ipykernel install --user --name=my-virtualenv-name
When I run the notebook and execute ! which jupyter, the output is correct:
/Users/myname/virtualenv/ker12/bin/python
But when I try to import a library, for example import keras, there is an error:
ImportError: No module named keras
On the other hand, when I run pip freeze | grep Keras, the output is:
Keras==1.2.0
UPDATE 1:
This problem is not related to Keras; it occurs with any other library (for example pandas).
If I print os.path, the output is the following:
<module 'posixpath' from '/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/posixpath.pyc'>
From a command-line Python, os.path looks correct:
<module 'posixpath' from '/Users/my_name/virtualenv/ker12/lib/python2.7/posixpath.pyc'>
UPDATE 2:
If I print sys.path from the terminal and from Jupyter, the output is also different:
from terminal
/Users/myname/virtualenv/ker12/lib/python27.zip
/Users/myname/virtualenv/ker12/lib/python2.7
/Users/myname/virtualenv/ker12/lib/python2.7/plat-darwin
/Users/myname/virtualenv/ker12/lib/python2.7/plat-mac
/Users/myname/virtualenv/ker12/lib/python2.7/plat-mac/lib-scriptpackages
/Users/myname/virtualenv/ker12/lib/python2.7/lib-tk
/Users/myname/virtualenv/ker12/lib/python2.7/lib-old
/Users/myname/virtualenv/ker12/lib/python2.7/lib-dynload
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/Users/myname/virtualenv/ker12/lib/python2.7/site-packages
from JUPYTER
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python27.zip
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old
/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload
/usr/local/lib/python2.7/site-packages
/usr/local/lib/python2.7/site-packages/IPython/extensions
/Users/myname/.ipython
The solution is to open the Jupyter notebook with the following command:
~/virtualenv/my_venv_name/bin/jupyter-notebook
You should not install ipykernel; instead, go for a full Jupyter installation (pip install jupyter) inside your virtual environment. Additionally, be sure that you don't create your virtual environment with the --system-site-packages option.
See also this answer.
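Putting that advice together, the whole sequence would look roughly like this (env name and paths taken from the question; a sketch, not a verified recipe):
# create and activate a clean virtualenv (no --system-site-packages)
virtualenv ~/virtualenv/ker12
source ~/virtualenv/ker12/bin/activate

# install Jupyter itself inside the env, along with your libraries
pip install jupyter keras==1.2.0

# verify the interpreter, then launch the notebook via the env's own binary
python -c "import sys; print(sys.executable)"   # should point into ker12
~/virtualenv/ker12/bin/jupyter-notebook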
