dvc (data version control) error - ImportError: cannot import name 'fsspec_loop' from 'fsspec.asyn'

I use Python 3.7.13 and created a virtual environment (venv) for an MLOps project.
A dvc package (==2.10.2) that is compatible with Python 3.7.13 is installed in this venv.
(venv) (base) tony3@Tonys-MacBook-Pro mlops % dvc --version
2.10.2
But when running dvc initialization:
(venv) (base) tony3@Tonys-MacBook-Pro mlops % dvc init
An import error as follows occurs:
from fsspec.asyn import fsspec_loop
ImportError: cannot import name 'fsspec_loop' from 'fsspec.asyn'
I tried the following:
Went to /venv/lib/python3.7/site-packages/fsspec/asyn.py and inspected asyn.py; there is indeed no function named "fsspec_loop" in it.
Tried to upgrade dvc to a newer version with
pip install dvc --upgrade
but the dvc version remained the same (2.10.2).
Uninstalled dvc with
pip uninstall dvc
and tried to install the newest version:
pip install dvc==2.42.0
The response shows that the latest dvc version compatible with Python 3.7.13 is 2.10.2, so version 2.42.0 cannot be installed.
Tried to install dvc using brew, but dvc then ends up outside the venv (at /usr/local/bin, where a later version of Python is used).
(venv) (base) tony3@Tonys-MacBook-Pro mlops % brew install dvc
(venv) (base) tony3@Tonys-MacBook-Pro mlops % dvc --version
2.41.1
(venv) (base) tony3@Tonys-MacBook-Pro mlops % which dvc
/usr/local/bin/dvc
The entire traceback (most recent call last) is as follows,

Thanks to the comment by @ruslankuprieiev.
dvc 2.10.2 is successfully installed and initialized in the venv with Python 3.7.13 after downgrading fsspec to version 2022.11.0.
The following are the steps.
Install dvc version 2.10.2,
Check which dvc is used (the one in venv),
Check fsspec version number (== 2023.1.0),
Force reinstall to downgrade fsspec to 2022.11.0,
Check fsspec version number again (== 2022.11.0), and
Force initialize dvc since there is an existing .dvc folder in the project directory.
The code is as follows,
(venv) (base) tony3@Tonys-MacBook-Pro mlops % pip install dvc==2.10.2
(venv) (base) tony3@Tonys-MacBook-Pro mlops % which dvc
/PathtoFile/venv/bin/dvc
(venv) (base) tony3@Tonys-MacBook-Pro mlops % pip show fsspec
Name: fsspec
Version: 2023.1.0
...
(venv) (base) tony3@Tonys-MacBook-Pro mlops % pip install --force-reinstall -v "fsspec==2022.11.0"
(venv) (base) tony3@Tonys-MacBook-Pro mlops % pip show fsspec
Name: fsspec
Version: 2022.11.0
...
(venv) (base) tony3@Tonys-MacBook-Pro mlops % dvc init -f
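If you prefer to do it in one step, pinning both packages in the same pip call should leave the venv in the same state (a sketch, not part of the original answer; it assumes the same venv is active):
pip install "dvc==2.10.2" "fsspec==2022.11.0"   # pin fsspec so pip does not pull 2023.x back in
dvc init -f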

I experienced the same problem and solved it by installing an older version of fsspec (my Python is 3.8 and dvc is 2.8.3):
pip uninstall fsspec
pip install fsspec==2022.7.1

Related

I can't install the labelImg annotation tool on an M1 Mac

While installing LabelImg on an M1 Mac using the command below,
pip install pyqt5 lxml
this is the error I got:
ERROR: pyqt5 from https://files.pythonhosted.org/packages/7c/5b/e760ec4f868cb77cee45b4554bf15d3fe6972176e89c4e3faac941213694/PyQt5-5.14.0.tar.gz#sha256=0145a6b7de15756366decb736c349a0cb510d706c83fda5b8cd9e0557bc1da72 has a pyproject.toml file that does not comply with PEP 518: 'build-system.requires' contains an invalid requirement: 'sip >=5.0.1 <6'
How do I install the labelImg annotation tool on an M1 Mac?
I got it to work by using the following commands:
brew install pyqt@5
pip install labelimg
And that's it, it just works.
You just need to type labelimg in the Terminal and the app will start running.
I don't know why they don't tell you this in the installation guide.
Alrighty!
On macOS Monterey, none of the other solutions posted here solved this problem for me. However, I managed to solve the issue easily, without a virtual environment or too much fiddling about, like so:
First, you have to download the labelImg sources from this link:
https://github.com/tzutalin/labelImg#macos
(You can download it as a .zip file or clone it)
Unzip and then in your terminal cd into whatever directory you downloaded the above files to.
Then run the following commands in order:
pyrcc5 -o libs/resources.py resources.qrc
Then,
pip3 install lxml
Finally,
python3 labelImg.py
It should run without an issue now.
You can go one of two ways:
Using brew:
You can use Homebrew to install the dependencies, like qt and libxml2. This lets your package manager handle everything and should generally solve the problem. Then you can run
python3 labelImg.py
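A rough sketch of this route (the formula names are my assumption; brew's pyqt@5 ships the Qt 5 libraries plus the PyQt5 bindings for Homebrew's python3, and the last two commands are run from the cloned labelImg directory):
brew install pyqt@5 libxml2
pip3 install lxml
make qt5py3
python3 labelImg.py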
Using Virtual Environments:
This is the more recommended way to go about in such cases. You can use conda, pipenv or venv to create a virtual environment which is isolated from your system python installation. Then you can try to install it as explained in the README.rst in the root of the repository:
brew install python3
pip3 install pipenv
pipenv run pip install pyqt5==5.12.1 lxml
pipenv run make qt5py3
pipenv run python3 labelImg.py
[Optional] rm -rf build dist; python setup.py py2app -A;mv "dist/labelImg.app" /Applications
You can try the two methods and get back with the errors, if there are any.
This is my note.
I just succeeded on my Mac with the M1 chip.
CHECK THIS OUT!
Installation of labelimg on mac m1 chip
First, you must use a Terminal running under Rosetta.
Then, make sure you already have python3.
Then...
[Done]
# check where python3 is
$ where python3
# create the env
$ /usr/bin/python3 -m venv env
# check where the env is
$ where env
# activate the env
$ source env/bin/activate
# update pip to the newest version
$ pip install --upgrade pip
# install PyQt5
$ pip install PyQt5
# get ready to run labelImg.py
$ cd Documents/repos/labelImg
$ pip3 install pyqt5 lxml
$ make qt5py3
# [runs OK!!]
$ python3 labelImg.py
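A quick way to confirm that the Terminal session really is running under Rosetta before creating the env (an extra check, not part of the original note):
$ uname -m    # prints x86_64 under Rosetta, arm64 when running natively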
Using Conda
Create a virtual environment in conda and activate it
conda create -n venv
conda activate venv
Install pyqt using conda
conda install pyqt
Install lxml using pip
pip install lxml
Change directory to the downloaded/cloned labelImg folder
cd path/to/labelImg/folder/
Make qt5py3
make qt5py3
Run LabelImg
python labelImg.py

When installing airflow, no files are created in the airflow_home folder

I have successfully installed it on CentOS 7 in VMware before.
However, following the same steps, I ran into a problem when installing it manually on CentOS 7 in Docker
(the official CentOS image).
(venv) [jykim@0f0090962efa dev]$ cat /etc/*release*
CentOS Linux release 7.9.2009 (Core)
When airflow was installed with the command below, no files were created in the specified AIRFLOW_HOME directory.
pip3.8 install 'apache-airflow[postgres]'
Naturally, I registered AIRFLOW_HOME in .bashrc and confirmed that it is set correctly:
(venv) [jykim@0f0090962efa ~]$ pwd
/home/jykim
(venv) [jykim@0f0090962efa ~]$ cd $AIRFLOW_HOME/
(venv) [jykim@0f0090962efa airflow_home]$ pwd
/home/jykim/dev/airflow/airflow_home
Reinstalling Python gave the same result.
This has eaten up my whole day. I need your help!
(venv) [jykim@0f0090962efa airflow_home]$ python -V
Python 3.8.8
(venv) [jykim@0f0090962efa airflow_home]$ pip show apache-airflow
Name: apache-airflow
Version: 2.0.2
Installing the Airflow package will not create configuration files in the Airflow home directory. Run Airflow once for it to create the default configuration files, e.g. with:
airflow info
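A minimal sketch of that first run, assuming AIRFLOW_HOME is exported as described in the question:
airflow info           # any airflow command generates the default airflow.cfg on its first run
ls "$AIRFLOW_HOME"     # airflow.cfg should now be present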

Error while install airflow: By default one of Airflow's dependencies installs a GPL

Getting the following error after running the pip install airflow[postgres] command:
> raise RuntimeError("By default one of Airflow's dependencies installs
> a GPL "
>
> RuntimeError: By default one of Airflow's dependencies installs a GPL
> dependency (unidecode). To avoid this dependency set
> SLUGIFY_USES_TEXT_UNIDECODE=yes in your environment when you install
> or upgrade Airflow. To force installing the GPL version set
> AIRFLOW_GPL_UNIDECODE
I am trying to install in Debian 9
Try the following:
export AIRFLOW_GPL_UNIDECODE=yes
OR
export SLUGIFY_USES_TEXT_UNIDECODE=yes
Using export makes the environment variable available to all the subprocesses.
Also, make sure you are using pip install apache-airflow[postgres] and not pip install airflow[postgres]
Which should you use: if using AIRFLOW_GPL_UNIDECODE, airflow will install a dependency that is under GPL license, which means you won't be able to distribute your resulting application commercially. If that's a problem for you, go for SLUGIFY_USES_TEXT_UNIDECODE.
If you are installing using sudo run one of these commands:
sudo AIRFLOW_GPL_UNIDECODE=yes pip3 install apache-airflow
OR
sudo SLUGIFY_USES_TEXT_UNIDECODE=yes pip3 install apache-airflow
NOTE: If pip3 (python3) does not work for you, try pip command.
The pip command can be pointing to python2 or python3 installation depending on your system. Verify this by running pip --version.
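If you want to be sure which interpreter the install targets, running pip through the interpreter is a safe alternative (a sketch; keep the environment variable on the same line so it applies to the install):
pip --version                                   # shows which Python this pip belongs to
SLUGIFY_USES_TEXT_UNIDECODE=yes python3 -m pip install 'apache-airflow[postgres]'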
Windows users can use the command below before installing apache-airflow:
$ set AIRFLOW_GPL_UNIDECODE=yes
then
$ pip install apache-airflow
If you are installing Airflow on Windows through the Python terminal, then you need to run:
Set SLUGIFY_USES_TEXT_UNIDECODE=yes
pip install apache-airflow[postgres]
It worked for me after I struggled with many other options. Hope this works for you too.
The command below should install apache-airflow and let you pull changes into PyCharm for building DAGs and coding for Airflow. (Keep the environment variable on the same line so it applies to the pip call.)
SLUGIFY_USES_TEXT_UNIDECODE=yes pip install apache-airflow
Also, if you are installing using sudo you can use:
export AIRFLOW_GPL_UNIDECODE='yes'
sudo -E pip3 install apache-airflow
(or use SLUGIFY_USES_TEXT_UNIDECODE)
Run the following command in your python terminal: SLUGIFY_USES_TEXT_UNIDECODE=yes pip install apache-airflow==1.10.0
Use below command to install apache-airflow
sudo SLUGIFY_USES_TEXT_UNIDECODE=yes \
pip install apache-airflow[async,devel,celery,crypto,druid,gcp_api,jdbc,hdfs,hive,kerberos,ldap,password,postgres,qds,rabbitmq,s3,samba,slack]

Parallel Group setup & mpi4py/OpenMDAO 2.2.X

I am trying to use parallelization with MPI/OpenMDAO.
I have tried on various Ubuntu computers as well as Ubuntu bash on Windows (a Windows 10 feature).
The dependencies work fine independently (i.e. import petsc4py and import mpi4py work fine and I can run their tests, similar to the links: https://openmdao.readthedocs.io/en/1.7.3/getting-started/mpi_linux.html &
http://mpi4py.scipy.org/docs/usrman/install.html)
But the ParallelGroup code in the OpenMDAO 2.2 manual does not work.
For each attempt (on different computers) I seem to get a different error; most of them seemed like compatibility errors (e.g. installing petsc4py breaks numpy, or the mpi4py installation causes problems in the existing OpenMDAO core).
On some computers I had my own OpenMPI and PETSc installed, but the conda install command already installs those as far as I can see.
Eventually I tried these steps on a newly started Amazon instance,
but had similar problems.
sudo apt-get install build-essential
wget http://repo.continuum.io/archive/Anaconda3-5.2.0-Linux-x86_64.sh
bash Anacond*
sudo apt-get install libibnetdisc-dev
sudo apt-get install libblas-dev libatlas-dev liblapack-dev
conda install mpi4py
conda install -c conda-forge petsc4py
If I check conda list on one of the computers, the abbreviated output is:
mpi 1.0 mpich conda-forge
mpi4py 3.0.0 py36_mpich_1 conda-forge
mpich 3.2.1 1 conda-forge
mpich2 1.4.1p1 0 anaconda
mpmath 1.0.0 py36hfeacd6b_2
msgpack-python 0.5.1 py36h6bb024c_0
multipledispatch 0.4.9 py36h41da3fb_0
mumps 5.0.2 blas_openblas_208 [blas_openblas] conda-forge
numpy 1.14.3 py36_blas_openblas_200 [blas_openblas] conda-forge
numpydoc 0.7.0 py36h18f165f_0
openblas 0.2.20 8 conda-forge
openmdao 2.2.1 <pip>
openpyxl 2.4.10 py36_0
openssl 1.0.2o 0 conda-forge
petsc 3.9.1 blas_openblas_0 [blas_openblas] conda-forge
petsc4py 3.9.1 py36_0 conda-forge
pexpect 4.3.1 py36_0
pickleshare 0.7.4 py36h63277f8_0
pillow 5.0.0 py36h3deb7b8_0
pip 10.0.1 <pip>
On the same system, if I try to run
mpirun -n 2 python my_par_model.py
based on the manual code, this is what I get.
Does anyone have a suggestion about where it could be failing, or what steps I could follow for an Ubuntu implementation of Anaconda/OpenMDAO/PETSc/mpi4py and a successful run of parallel OpenMDAO?
You could take a look at the installation procedure for Linux that exists in our .travis.yml file: https://github.com/OpenMDAO/OpenMDAO/blob/master/.travis.yml
This works for installing and testing OpenMDAO from scratch on Trusty Tahr instances on Travis CI. One difference I see at first glance would be our use of pip to install mpi and PETSc into the conda-installed python.
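For reference, that pip-based route looks roughly like this inside an activated conda environment (a sketch, not the exact .travis.yml contents; it needs an MPI library and mpicc on the PATH, and building PETSc this way can take a while):
pip install mpi4py
pip install petsc petsc4py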
I think MPI compatibility was the main issue. I was not aware that it had to be OpenMPI; the conda install command installs MPICH instead, which was possibly causing the problem with OpenMDAO.
I will continue doing more tests, but for a working system starting from a brand new installation of ubuntu-16.04.4-desktop-amd64.iso I followed these steps.
(The steps that take time are the OpenMPI installation and the petsc4py pip installation.)
1 ) For some dependencies (taken from https://gist.github.com/mrosemeier/088115b2e34f319b913a)
sudo apt-get install libibnetdisc-dev
sudo apt-get install libblas-dev libatlas-dev liblapack-dev
2) Download/Install OpenMPI (mostly taken from http://lsi.ugr.es/jmantas/pdp/ayuda/datos/instalaciones/Install_OpenMPI_en.pdf)
wget https://download.open-mpi.org/release/open-mpi/v3.1/openmpi-3.1.0.tar.gz
tar -xzf openmpi-3.1.0.tar.gz
cd openmpi-*
./configure --prefix="/home/$USER/.openmpi"
make
sudo make install
echo export PATH="$PATH:/home/$USER/.openmpi/bin" >> /home/$USER/.bashrc
echo export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/$USER/.openmpi/lib/" >> /home/$USER/.bashrc
3) MINICONDA & Rest (mostly taken from https://github.com/OpenMDAO/OpenMDAO/blob/master/.travis.yml)
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Minicond* # agree to add to the path etc.
conda install --yes python=3.6
conda install --yes numpy==1.14 scipy=0.19.1 nose sphinx mock swig pip;
pip install --upgrade pip;
pip install mpi4py
pip install petsc4py==3.9.1
# petsc4py gives a "failed building wheel for petsc" error but then installs petsc itself; afterwards, petsc4py is also installed
sudo apt install git # in case git is not installed
# not sure why this part is needed, but I followed it anyway
pip install redbaron;
pip install git+https://github.com/OpenMDAO/testflo.git;
pip install coverage;
pip install git+https://github.com/swryan/coveralls-python@work;
# pyoptsparse and openmdao
git clone https://github.com/mdolab/pyoptsparse.git;
cd pyoptsparse;
python setup.py install;
cd ..;
conda install --yes matplotlib;
git clone http://github.com/OpenMDAO/OpenMDAO
cd OpenMDAO
pip install .
# optional
conda install spyder
4) Check the versions
mpirun --version : Open MPI 3.1.0
python --version : 3.6.5
pip --version :
pip 10.0.1 from /home/user/miniconda3/lib/python3.6/site-packages/pip (python 3.6)
conda list : (note that there is no mpich or similar in the conda list)
openmdao 2.2.1 <pip>
mpi4py 3.0.0 <pip>
petsc 3.9.2 <pip>
petsc4py 3.9.1 <pip>
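To double-check that mpi4py is really linked against the Open MPI build rather than a conda-installed MPICH, something like this should work:
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"
# the printed string should name Open MPI rather than MPICH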

getting "pygpu was configured but could not be imported" error while trying with OpenCL+Theano on AMD Radeon

I have followed the instructions from this:
https://gist.github.com/jarutis/ff28bca8cfb9ce0c8b1a
But then when I tried THEANO_FLAGS=device=opencl0:0 python test.py
on the test file, I got this error:
ERROR (theano.sandbox.gpuarray): pygpu was configured but could not be imported
Traceback (most recent call last):
File "/home/mesayantan/.local/lib/python2.7/site-packages/theano/sandbox/gpuarray/init.py", line 20, in
import pygpu
File "/usr/src/gtest/clBLAS/build/libgpuarray/pygpu/init.py", line 7, in
from . import gpuarray, elemwise, reduction
File "/usr/src/gtest/clBLAS/build/libgpuarray/pygpu/elemwise.py", line 3, in
from .dtypes import dtype_to_ctype, get_common_dtype
File "/usr/src/gtest/clBLAS/build/libgpuarray/pygpu/dtypes.py", line 6, in
from . import gpuarray
ImportError: cannot import name gpuarray
I do not have a good idea of what is wrong; I am using all of these for the first time. I am working on Ubuntu 14.04 LTS. How can I resolve this error?
I fixed this issue with the step-by-step installation given on the libgpuarray website!
Download
git clone https://github.com/Theano/libgpuarray.git
cd libgpuarray
Install libgpuarray
# extract or clone the source to <dir>
cd <dir>
mkdir Build
cd Build
# you can pass -DCMAKE_INSTALL_PREFIX=/path/to/somewhere to install to an alternate location
cmake .. -DCMAKE_BUILD_TYPE=Release # or Debug if you are investigating a crash
make
make install
cd ..
Install pygpu
# This must be done after libgpuarray is installed as per instructions above.
python setup.py build
python setup.py install
Source:
http://deeplearning.net/software/libgpuarray/installation.html
This worked for me!
Good Luck
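After installing, a quick check from outside the libgpuarray source tree (so Python picks up the installed pygpu rather than the unbuilt in-tree package, which is what the paths in the original traceback suggest happened):
cd ~
python -c "import pygpu; print(pygpu.__file__)"
THEANO_FLAGS=device=opencl0:0 python -c "import theano"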
Installing the BLAS library seems to be enough. I'm running tests for the same problem.
cd ~
git clone https://github.com/clMathLibraries/clBLAS.git
cd clBLAS/
mkdir build
cd build/
sudo apt-cache search openblas
sudo apt-get install libopenblas-base libopenblas-dev
sudo apt-get install liblapack3gf liblapack-doc liblapack-dev
cmake ../src
make
sudo make install
And after that
git clone https://github.com/Theano/libgpuarray.git
cd libgpuarray
mkdir Build
cd Build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
sudo make install
cd ..
sudo apt-get install cython
sudo apt-get install python-numpy python-scipy python-dev python-pip python-nose g++ libopenblas-dev git
Build and install for Python 3:
python3 setup.py build
sudo -H python3 setup.py install
I hope this helps you. Now only the dev version of Theano is missing for me.
