I installed Apache Airflow 1.9 from GitHub on Debian 9 with this command: pip install git+https://github.com/apache/incubator-airflow.git#v1-9-stable
However, airflow initdb fails with an error caused by Fernet. Do you know how to solve this issue?
INFO [alembic.runtime.migration] Running upgrade 947454bf1dff -> d2ae31099d61, Increase text size for MySQL (not relevant for other DBs' text types)
[2017-12-27 17:19:24,586] {models.py:643} ERROR - Failed to load fernet while encrypting value, using non-encrypted value.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 639, in set_extra
fernet = get_fernet()
File "/usr/local/lib/python2.7/dist-packages/airflow/models.py", line 103, in get_fernet
raise AirflowException('Failed to import Fernet, it may not be installed')
AirflowException: Failed to import Fernet, it may not be installed
[2017-12-27 17:19:24,601] {models.py:643} ERROR - Failed to load fernet
Also, how can I specify extra packages, as in pip install apache-airflow[gcp-api], with my previous GitHub-based pip install command?
And how do I install the latest 1.9.0 RC? I get an AssertionError.
The accepted answer has a broken link. If you landed here like me and it is still broken, these steps worked for me:
pip install cryptography
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"
Add the generated key to the config file airflow.cfg: fernet_key = YOUR_GENERATED_KEY
When installing from source you have to replace the fernet_key in airflow.cfg, as described in the docs here.
In the apache-airflow documentation, the script for generating the Fernet key is apparently wrong. It says to use the following script:
from cryptography.fernet import Fernet
fernet_key = Fernet.generate_key()
print(fernet_key)  # your fernet_key, keep it in a secure place!
But the key it generates raises an exception at the 'airflow initdb' command.
To solve this, use Fernet.generate_key().decode() instead of Fernet.generate_key(), as shown in skozz's answer.
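For reference, here is a corrected version of the script (a minimal sketch; decode() turns the bytes key into the string form that airflow.cfg expects):
from cryptography.fernet import Fernet

# generate_key() returns bytes; decode() yields the str to paste into airflow.cfg
fernet_key = Fernet.generate_key().decode()
print(fernet_key)  # your fernet_key, keep it in a secure place!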
Related
I am trying to deploy an Airflow DAG to MWAA.
My requirements.txt:
apache-airflow[amazon] == 3.2.0
I import EcsOperator like this:
from airflow.contrib.operators.ecs_operator import EcsOperator
However, I get this error:
Broken DAG: [/usr/local/airflow/dags/mydag.py] Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/local/airflow/dags/mydag.py", line 4, in <module>
from airflow.contrib.operators.ecs_operator import EcsOperator
ImportError: cannot import name 'EcsOperator' from 'airflow.contrib.operators.ecs_operator' (/usr/local/lib/python3.7/site-packages/airflow/contrib/operators/ecs_operator.py)
What am I doing wrong here?
You might be referencing a different version (1.10.12?) of the Airflow documentation.
airflow.contrib.operators.ecs_operator (1.10.12)
The documentation for 3.2.0 is here. You can import the EcsOperator like this:
from airflow.providers.amazon.aws.operators.ecs import EcsOperator
airflow.providers.amazon.aws.operators.ecs (3.2.0)
The correct requirements.txt:
(empty; MWAA already ships with the Amazon provider, so nothing needs to be added)
And the correct import:
from airflow.providers.amazon.aws.operators.ecs import ECSOperator
Note the casing! (Amazon provider releases before 3.0 named the operator ECSOperator; release 3.0 renamed it to EcsOperator.)
There are several issues here, so I'll compile a detailed answer, since previous answers didn't cover all of them.
First, the updated import path (provider release 3.2.0) is:
from airflow.providers.amazon.aws.operators.ecs import EcsOperator
The reason this doesn't work for you is that you installed the provider with extras:
apache-airflow[amazon]
As explained in the provider extras docs, installing a provider that way gives you the provider version that was released at the time of the Airflow version you are using, so you are not guaranteed to get the latest provider release. For example, if you are using Airflow 2.2.4 (the latest at the time of writing this answer), you get Amazon provider version 3.0.0, which is not the most recent one.
To get an up-to-date provider you should install it directly:
pip install apache-airflow-providers-amazon
If you'd like to pin a specific version:
pip install apache-airflow-providers-amazon==3.2.0
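To confirm which provider version actually ended up in your environment, a quick check (a sketch; importlib.metadata requires Python 3.8+):
import importlib.metadata

# prints the installed version of the Amazon provider distribution
print(importlib.metadata.version("apache-airflow-providers-amazon"))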
Please note that you should always install from constraint files provided by Airflow. Example:
pip install "apache-airflow-providers-amazon" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-main/constraints-3.7.txt"
Note that the example refers to constraints-main, which is constantly updated, rather than to constraints-2.2.4 or any other specific Airflow version.
You can read more about it in the doc about Installation and upgrading of Airflow providers separately.
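For completeness, a minimal sketch of a DAG using the updated import (the DAG id, cluster, and task definition names are placeholders, not values from the question):
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.ecs import EcsOperator

with DAG(dag_id="mydag", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    run_task = EcsOperator(
        task_id="run_ecs_task",
        cluster="my-cluster",            # placeholder ECS cluster name
        task_definition="my-task-def",   # placeholder task definition
        launch_type="FARGATE",
        overrides={},                    # no container overrides in this sketch
    )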
I was installing Plumi to create a production site. After installing plumi.app I faced the error below while running buildout with the command: python3 bootstrap.py -c production.cfg
Note: Zope is installed and the zope user is added.
The error is below:
root@Saif:/home/plumi.app# /bin/python3 bootstrap.py && ./bin/buildout -v
ez_setup.py is deprecated and when using it setuptools will be pinned to 33.1.1 since it's the last version that supports setuptools self upgrade/installation, check https://github.com/pypa/setuptools/issues/581 for more info; use pip to install setuptools
Downloading https://pypi.io/packages/source/s/setuptools/setuptools-33.1.1.zip
Extracting in /tmp/tmpvwkpnumd
Now working in /tmp/tmpvwkpnumd/setuptools-33.1.1
Building a Setuptools egg in /tmp/tmpgfkbh3om
warning: no files found matching '*' under directory 'setuptools/_vendor'
/tmp/tmpgfkbh3om/setuptools-33.1.1-py3.8.egg
Couldn't find index page for 'zc.buildout' (maybe misspelled?)
Couldn't find index page for 'zc.buildout' (maybe misspelled?)
No local packages or working download links found for zc.buildout
error: Could not find suitable distribution for Requirement.parse('zc.buildout')
Traceback (most recent call last):
File "bootstrap.py", line 171, in <module>
raise Exception(
Exception: Failed to execute command:
'/bin/python3', '-c', 'from setuptools.command.easy_install import main; main()', '-mZqNxd', '/tmp/tmpgfkbh3om', 'zc.buildout'
root@Saif:/home/plumi.app# ^C
root@Saif:/home/plumi.app# python3 bootstrap.py -c production.cfg
ez_setup.py is deprecated and when using it setuptools will be pinned to 33.1.1 since it's the last version that supports setuptools self upgrade/installation, check https://github.com/pypa/setuptools/issues/581 for more info; use pip to install setuptools
Downloading https://pypi.io/packages/source/s/setuptools/setuptools-33.1.1.zip
Extracting in /tmp/tmp6dqehae9
Now working in /tmp/tmp6dqehae9/setuptools-33.1.1
Building a Setuptools egg in /tmp/tmpdja0839z
warning: no files found matching '*' under directory 'setuptools/_vendor'
/tmp/tmpdja0839z/setuptools-33.1.1-py3.8.egg
Couldn't find index page for 'zc.buildout' (maybe misspelled?)
Couldn't find index page for 'zc.buildout' (maybe misspelled?)
No local packages or working download links found for zc.buildout
error: Could not find suitable distribution for Requirement.parse('zc.buildout')
Traceback (most recent call last):
File "bootstrap.py", line 171, in <module>
raise Exception(
Exception: Failed to execute command:
'/usr/bin/python3', '-c', 'from setuptools.command.easy_install import main; main()', '-mZqNxd', '/tmp/tmpdja0839z', 'zc.buildout'
warning: no files found matching '*' under directory 'setuptools/_vendor'
I found out that setuptools version 33.1.1 doesn't have the _vendor folder:
https://github.com/pypa/setuptools/tree/v33.1.1/setuptools
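A quick way to check whether the setuptools in your own environment ships the _vendor directory (a diagnostic sketch, not part of the original answer):
import os

import setuptools

# locate the installed setuptools package and test for its _vendor directory
vendor_dir = os.path.join(os.path.dirname(setuptools.__file__), "_vendor")
print(vendor_dir, "exists:", os.path.isdir(vendor_dir))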
I am using Red Hat 6.6, installed SCons, and am using SCons to build serf 1.3.8, but I see the error below. Has anyone faced this error? I am stuck, and I need serf for http/https access for svn 1.9. I followed the install procedure from http://www.linuxfromscratch.org. I see the error while executing the command /local//scons-2.4.1/script/scons check.
Does anyone know a solution, or an alternative way to install serf that bypasses this issue and installs without problems?
scons: Building targets ...
test/test_buckets.c: In function 'test_deflate_4GBplus_buckets':
test/test_buckets.c:1559: warning: integer overflow in expression
scons: *** [test/test_buckets.o] Error 1
scons: building terminated because of errors.
Please advise.
[EDIT]
What environment variables are required for SCons to work properly? I now see a different error; I messed up the environment. Any idea what I have missed? I have a custom install of Python 2.7 and have added its install path to PATH and LD_LIBRARY_PATH. The error is below.
Import failed. Unable to find SCons files in:
/local/appln/pkgs/scons-2.4.1/bin/../engine
/local/appln/pkgs/scons-2.4.1/bin/scons-local-2.4.1
/local/appln/pkgs/scons-2.4.1/bin/scons-local
/usr/lib/scons-2.4.1
/usr/local/lib/scons-2.4.1
/local/apps/pkgs/scons-2.4.1/lib/python2.6/site-packages/scons-2.4.1
/usr/lib/python2.6/site-packages/scons-2.4.1
/usr/local/lib/python2.6/site-packages/scons-2.4.1
/usr/lib64/scons-2.4.1
/local/apps/pkgs/scons-2.4.1/lib/scons
/usr/lib/scons
/usr/local/lib/scons
/local/appln/pkgs/scons-2.4.1/lib/python2.6/site-packages/scons
/usr/lib/python2.6/site-packages/scons
/usr/local/lib/python2.6/site-packages/scons
/usr/lib64/scons
Traceback (most recent call last):
File "/local/appln/pkgs/scons-2.4.1/bin/scons", line 190, in <module>
import SCons.Script
ImportError: No module named SCons.Script
[Closed]
I am a member of a research group, and my current research needs to use OpenStack Swift.
We have installed OpenStack Juno and it works perfectly; Packstack was used for the installation. The Swift service is also installed on the server and it works! We have tried to access it from the console: create a container, upload a file, etc. Everything works.
So we went further and tried to access Swift using its API. Here we faced a problem at the authentication phase.
Below you can see the simple Python code I am using to check whether I can connect to Swift.
import swiftclient
import keystoneclient
conn = swiftclient.Connection(
    authurl='http://*[server ip]*:5000/v2.0/',
    user='account_name:username',
    key='serverpassword',
    auth_version="2.0").get_auth()[0]
for container in conn.get_account()[1]:
    print container['name']
Before executing the code on the client computer, I installed the following required packages:
sudo aptitude install python-pip
sudo pip install python-swiftclient
sudo pip install python-keystoneclient
Here is the error that occurs during execution of the code:
Traceback (most recent call last):
File "new.py", line 15, in <module>
auth_version="2.0").get_auth()[0]
File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 1332, in get_auth
timeout=self.timeout)
File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 463, in get_auth
auth_version=auth_version)
File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 366, in get_auth_keystone
ksclient, exceptions = _import_keystone_client(auth_version)
File "/usr/local/lib/python2.7/dist-packages/swiftclient/client.py", line 351, in _import_keystone_client
variables to be set or overridden with -A, -U, or -K.''')
swiftclient.exceptions.ClientException:
Auth versions 2.0 and 3 require python-keystoneclient, install it or use Auth
version 1.0 which requires ST_AUTH, ST_USER, and ST_KEY environment
variables to be set or overridden with -A, -U, or -K.
I have tried to find a solution by searching the Internet, but I have not succeeded.
Karaf 2.2.3 was recently released and finally has a pre-bundled spring-jms feature. To make life easy, I added it to the featuresBoot config property with the other defaults:
featuresBoot=config,ssh,management,spring-jms
However, when I start Karaf the feature behaves unpredictably: sometimes it installs on boot and other times it doesn't. When it doesn't auto-install, I attempt to add it via the command line:
features:install spring-jms
And even that behaves wildly. See below:
karaf#root> features:install spring-jms
Error executing command: java.lang.IllegalArgumentException
karaf#root> features:install spring-jms
Error executing command: invalid entry size (expected 3293 but got 16823 bytes)
karaf#root> features:install spring-jms
Error executing command: Manifest not present in the first entry of the zip mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.aopalliance/1.0_5
karaf#root> features:install spring-jms
Refreshing bundles org.springframework.context.support (50)
Error executing command: Could not start bundle mvn:org.eclipse.jetty/jetty-client/7.4.5.v20110725 in feature(s) jetty-7.4.5.v20110725: Unresolved constraint in bundle org.eclipse.jetty.client [83]: Unable to resolve 83.0: missing requirement [83.0] package; (&(package=org.eclipse.jetty.http)(version>=7.4.0)(!(version>=8.0.0)))
karaf#root> features:install spring-jms
Refreshing bundles org.springframework.context.support (50)
Those are back-to-back executions of the install command. The last execution works.
Has anyone else seen this behavior, or does anyone know how to correct it?
Tony,
First, make sure that you are using the correct version of Java; I use JDK 1.6_24. With that version, on a fresh installation with no other bundles installed, spring-jms installs properly. If I were you I would:
1) install a fresh instance of Karaf,
2) copy your Maven repository to a new location,
3) run Karaf from the fresh installation, and
4) install spring-jms again.
If that doesn't work, reply to this and let me know your environment, along with all of the exceptions generated in your karaf log file.
By any chance are you using a customized org.ops4j.pax.url.mvn.cfg? I am, and it has caused a huge boot-time race condition problem that led to features sporadically failing to load.
Take a look at https://issues.apache.org/jira/browse/KARAF-910 "Race between FeatureService and ConfigAdmin for resolving mvn: URLs?"