Issue with installing ansible-galaxy azure.azcollection - ansible-galaxy

[root@jenkins-dev playbooks]# ansible-galaxy collection install azure.azcollection
ERROR! Unexpected Exception, this is probably a bug: cannot import name 'CollectionRequirement' from 'ansible.galaxy.collection' (/usr/local/lib/python3.7/site-packages/ansible/galaxy/collection/__init__.py)
the full traceback was:
Traceback (most recent call last):
File "/usr/local/bin/ansible-galaxy", line 92, in <module>
mycli = getattr(__import__("ansible.cli.%s" % sub, fromlist=[myclass]), myclass)
File "/usr/local/lib/python3.7/site-packages/ansible/cli/galaxy.py", line 24, in <module>
from ansible.galaxy.collection import (
ImportError: cannot import name 'CollectionRequirement' from 'ansible.galaxy.collection' (/usr/local/lib/python3.7/site-packages/ansible/galaxy/collection/__init__.py)

This exception indicates that you have overlapping, conflicting installations of ansible-core (or ansible-base) and ansible<2.10.
You will need to clean up your installs to resolve the issue, for example:
$ sudo pip uninstall -y ansible ansible-base ansible-core
And then install only the one distribution you actually need, e.g.:
$ sudo pip install ansible-core
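For what it's worth, a quick way to see which of the conflicting distributions are present is a short Python check (a hedged sketch; pkg_resources ships with setuptools, and the three names are taken from the error message above):

import pkg_resources

# Print the installed version of each potentially conflicting distribution.
for name in ("ansible", "ansible-base", "ansible-core"):
    try:
        print(name, pkg_resources.get_distribution(name).version)
    except pkg_resources.DistributionNotFound:
        print(name, "not installed")

If more than one of these reports a version, the overlap described in the error is confirmed.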

Related

Broken DAG: ModuleNotFoundError: No module named 'airflow.providers.snowflake'

Hi, I have set up my environment for an Airflow run. I want to run a DAG which connects to Snowflake. I have installed the necessary packages below from Cloud Shell:
pip3 install snowflake-connector-python==2.4.5
pip3 install snowflake-sqlalchemy==1.2.4
pip3 install apache-airflow-providers-snowflake==2.3.0
pip3 install apache-airflow-providers-common-sql
I have established a Snowflake connection in Airflow.
Now, while executing the DAG, I have been getting this error for a long time:
Broken DAG: [/home/airflow/gcs/dags/snowflake_connect_mine.py] Traceback (most recent call last):
File "/home/airflow/gcs/dags/snowflake_connect_mine.py", line 6, in <module>
from airflow.contrib.hooks.snowflake_hook import SnowflakeHook
File "/opt/python3.8/lib/python3.8/site-packages/airflow/contrib/hooks/snowflake_hook.py", line 23, in <module>
from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook # noqa
ModuleNotFoundError: No module named 'airflow.providers.snowflake'
Please help me to resolve this issue.
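For reference, the airflow.contrib shim in the traceback simply re-exports the hook from the provider package, so once apache-airflow-providers-snowflake is installed into the same environment Airflow actually runs in, a minimal sketch of the direct import looks like this (the connection id is an assumption, not taken from the question):

# Import the hook from the provider package directly, bypassing the
# deprecated airflow.contrib path shown in the traceback.
from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook

hook = SnowflakeHook(snowflake_conn_id="snowflake_default")  # hypothetical conn id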

Raspberry Pi: No module named 'cx_Oracle'

I want to use the Raspberry Pi to send values to an Oracle 11g database, but when I run import cx_Oracle for that process, I get the following error:
Traceback (most recent call last):
File "/home/pi/20190222ex01.py", line 1, in <module>
import cx_Oracle
File "/usr/lib/python3/dist-packages/thonny/backend.py", line 317, in _custom_import
module = self._original_import(*args, **kw)
ImportError: No module named 'cx_Oracle'
How can I solve this problem?
Update: Oracle has released Oracle Instant Client ARM64: https://www-sites.oracle.com/database/technologies/instant-client/linux-arm-aarch64-downloads.html
This means that you have not installed the cx_Oracle module.
First, install the Oracle driver with pip:
python -m pip install cx_Oracle --upgrade
Hope this helps.
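Once the module is installed, a minimal connection sketch looks like the following (host, port, SID, and credentials are placeholders, not values from the question):

# Hedged sketch: confirm cx_Oracle imports and can reach an Oracle 11g instance.
import cx_Oracle

dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, sid="ORCL")  # placeholder DSN
conn = cx_Oracle.connect(user="scott", password="tiger", dsn=dsn)  # placeholder credentials
print(conn.version)  # prints the server version if the connection succeeds
conn.close()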

Error in install_keras() in R since Ubuntu update

I have been using the book "Deep Learning with R" for a month now, and it has enabled me to build my first neural networks.
I am using Ubuntu. Until two days ago, everything was OK and worked fine. But two days ago I updated my Ubuntu to Ubuntu 18.04. Since then, my R code no longer works.
I have redone what is recommended in the book (and what worked a month ago):
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install python-pip python-dev
$ sudo apt-get install build-essential cmake git unzip pkg-config libopenblas-dev liblapack-dev
I had no error.
Then, in R:
install.packages("keras")
library(keras)
install_keras()
This last command is supposed to install the core Keras library along with its dependencies in a Python virtual environment using TensorFlow.
But I obtained the following error that I really do not understand:
> install_keras()
Using existing virtualenv at ~/.virtualenvs/r-tensorflow
Upgrading pip ...
Traceback (most recent call last):
File "/home/baragatt/.virtualenvs/r-tensorflow/bin/pip", line 7, in <module>
from pip._internal import main
File "/home/baragatt/.virtualenvs/r-tensorflow/local/lib/python2.7/site-packages/pip/_internal/__init__.py", line 5, in <module>
import logging
File "/usr/lib/python2.7/logging/__init__.py", line 26, in <module>
import sys, os, time, cStringIO, traceback, warnings, weakref, collections
File "/usr/lib/python2.7/weakref.py", line 14, in <module>
from _weakref import (
ImportError: cannot import name _remove_dead_weakref
Error: Error 1 occurred installing TensorFlow
I have reinstalled R, Python, and TensorFlow, but I always get the same error. I do not understand it. Maybe it is a problem with the virtualenv?
Can someone help me, please? It is so frustrating, because two days ago my code was running, and now it is impossible to work...
I am working with Ubuntu 18.04, and the installed versions are Python 2.7.15~rc1-1, R 3.4.4, and TensorFlow 1.10.0.
Thanks a lot for this post. I do not really understand what the commands in it are supposed to fix, but I have done the following:
cd /home/baragatt/.virtualenvs/r-tensorflow/
Then, as proposed in the post:
virtualenv . --system-site-packages
I obtained the following messages:
Running virtualenv with interpreter /usr/bin/python2
New python executable in /home/baragatt/.virtualenvs/r-tensorflow/bin/python2
Not overwriting existing python script /home/baragatt/.virtualenvs/r-tensorflow/bin/python (you must use /home/baragatt/.virtualenvs/r-tensorflow/bin/python2)
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/virtualenv.py", line 2375, in <module>
main()
File "/usr/lib/python3/dist-packages/virtualenv.py", line 724, in main
symlink=options.symlink)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 946, in create_environment
site_packages=site_packages, clear=clear, symlink=symlink))
File "/usr/lib/python3/dist-packages/virtualenv.py", line 1417, in install_python
os.symlink(py_executable_base, full_pth)
OSError: [Errno 17] File exists
I also tried:
virtualenv -p /usr/bin/python2.7 .
And I obtained:
Running virtualenv with interpreter /usr/bin/python2.7
New python executable in /home/baragatt/.virtualenvs/r-tensorflow/bin/python2.7
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/virtualenv.py", line 2375, in <module>
main()
File "/usr/lib/python3/dist-packages/virtualenv.py", line 724, in main
symlink=options.symlink)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 946, in create_environment
site_packages=site_packages, clear=clear, symlink=symlink))
File "/usr/lib/python3/dist-packages/virtualenv.py", line 1278, in install_python
shutil.copyfile(executable, py_executable)
File "/usr/lib/python2.7/shutil.py", line 97, in copyfile
with open(dst, 'wb') as fdst:
IOError: [Errno 40] Too many levels of symbolic links: '/home/baragatt/.virtualenvs/r-tensorflow/bin/python2.7'
I finally found a solution by looking at different forums.
I suspected the problem came from the virtual environment created when running the following command in R:
install_keras()
Hence, I deleted the virtual environment(s) by removing the directory in which these environments are located (I imagine):
cd ~/.virtualenvs
rm -r r-tensorflow/
Then I tried the following commands in R:
install.packages("keras")
library(keras)
install_keras()
And it works! Honestly, I still do not understand what the problem was that occurred after my Ubuntu update.
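The usual explanation for the cannot import name _remove_dead_weakref error is a stale virtualenv: the env keeps a private copy of the Python binary, and the Ubuntu upgrade replaced the standard library underneath it, so the new weakref.py asks the old binary for a C-level symbol it does not have. A hedged way to confirm this from inside such an env, before deleting it:

# Run with the virtualenv's interpreter: ~/.virtualenvs/r-tensorflow/bin/python
import sys, _weakref

print(sys.executable)  # the env's private copy of the interpreter
print(sys.version)     # the version baked into that copy
# On a stale binary paired with a newer standard library this prints False:
print(hasattr(_weakref, "_remove_dead_weakref"))

Deleting and recreating the env, as above, replaces that stale interpreter copy, which is presumably why the fix works.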

Airflow worker breaking due to update in kombu - async error

My Airflow v1.9.0 Docker deployment based on puckel just broke for me with this error:
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 27, in <module>
args.func(args)
File "/usr/local/lib/python2.7/dist-packages/airflow/bin/cli.py", line 891, in worker
worker.run(**options)
File "/usr/local/lib/python2.7/dist-packages/celery/bin/worker.py", line 255, in run
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/worker/worker.py", line 99, in __init__
self.setup_instance(**self.prepare_args(**kwargs))
File "/usr/local/lib/python2.7/dist-packages/celery/worker/worker.py", line 122, in setup_instance
self.should_use_eventloop() if use_eventloop is None
File "/usr/local/lib/python2.7/dist-packages/celery/worker/worker.py", line 241, in should_use_eventloop
self._conninfo.transport.implements.async and
File "/usr/local/lib/python2.7/dist-packages/kombu/transport/base.py", line 125, in __getattr__
raise AttributeError(key)
AttributeError: async
Cobman's solution works; just FYI, you can also fix it by upgrading the Celery version (as they recommend in their repository):
&& pip install celery[redis]==4.1.1
It is related to the Celery AttributeError: async error just reported, where kombu was updated from 4.1.0 to 4.2.0. I fixed it by switching the install order as below:
&& pip install kombu==4.1.0 \
&& pip install celery[redis]==4.0.2 \
&& pip install apache-airflow[crypto,celery,postgres,hive,jdbc,mysql,s3]==$AIRFLOW_VERSION \
Seems like kombu needs to be pinned to this version in the source...
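For context, kombu 4.2.0 renamed its async attribute to asynchronous (async became a reserved word in Python 3.7), so Celery releases older than 4.1.1 that still read transport.implements.async fail against it, which is the AttributeError above. A quick, hedged compatibility check:

# Print the installed pairing; celery < 4.1.1 together with kombu >= 4.2.0
# is the combination that raises AttributeError: async.
import celery
import kombu

print("celery", celery.__version__)
print("kombu", kombu.__version__)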

ModuleNotFoundError: No module named 'six'

I am trying to set up a LAMP server on my Fedora 27. Following this site, I go through every step, but when I run the command firewall-cmd --permanent --add-service=http, I get the following error:
Traceback (most recent call last):
File "/usr/bin/firewall-cmd", line 31, in <module>
from firewall.client import FirewallClient, FirewallClientIPSetSettings, \
File "/usr/lib/python3.6/site-packages/firewall/client.py", line 29, in <module>
import slip.dbus
File "/usr/lib/python3.6/site-packages/slip/dbus/__init__.py", line 8, in <module>
from . import service
File "/usr/lib/python3.6/site-packages/slip/dbus/service.py", line 30, in <module>
from six import with_metaclass
ModuleNotFoundError: No module named 'six'
I reinstalled the six package, but I still get the same error message.
You probably don't have the six Python module installed. You can find it on PyPI.
To install it:
$ easy_install six
If you have pip installed, you can run:
$ pip install six
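Note that the traceback resolves under /usr/lib/python3.6/site-packages, so six has to be visible to that same interpreter. A hedged check, run with the python3 that firewall-cmd uses:

# Run in a python3 REPL or as a small script with python3.
import six

print(six.__version__)  # confirms the module imports at all
print(six.__file__)     # should resolve under the python3.6 site-packages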
