I am new to Apache Airflow, and while installing it on my local machine I followed the quick start guide. However, when I reached the step where you initialize the database I ran into several problems. The main one was caused by my using Python 3.9, and I solved those issues by downgrading to Python 3.8.6.
# initialize the database
airflow initdb
Now, when running the same command with Python 3.8.6, I am getting the following error in the terminal, and I cannot for the life of me figure it out.
ImportError: cannot import name '_Union' from 'typing' (/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/typing.py)
If anyone can point me in the right direction on how to solve this issue it would be greatly appreciated. I am still new to this, so please go easy on me if this is something which is a trivial fix for you.
Run the following command with Python 3.8 and then try airflow initdb again
pip install apache-airflow==1.10.13 \
--constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.13/constraints-3.8.txt"
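If you want to start completely clean, here is a minimal sketch of the whole sequence, assuming python3.8 is on your PATH; the virtual environment name airflow-venv is just an example:
# create and activate a fresh Python 3.8 virtual environment
python3.8 -m venv airflow-venv
source airflow-venv/bin/activate
# install Airflow pinned against the matching constraints file, then initialize the metadata DB
pip install apache-airflow==1.10.13 \
    --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-1.10.13/constraints-3.8.txt"
airflow initdb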
I have a Python project that imports Bokeh; I used pip install bokeh to install Bokeh 3.0.3, and everything worked fine. However, after I compile the project with PyInstaller, the executable crashes at launch with the following error:
importlib.metadata.PackageNotFoundError: bokeh
The way I'm importing Bokeh functionality into my project is like this:
from bokeh.io import output_file, show
I have searched around, but haven't been able to find any useful clue on how to solve this issue. I would appreciate any help on this.
Python version: 3.8.13
PyInstaller version: 5.7.0
I have also tried using conda to install bokeh, but it does not make any difference.
You may find this issue useful for your case.
https://github.com/pyinstaller/pyinstaller/discussions/6033
In your case you need to include the "--copy-metadata bokeh" option when you run PyInstaller, or use the copy_metadata() function in your spec file.
For more details refer to the GitHub discussion, as it goes into a little more detail.
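For example, a minimal sketch of the command-line approach (your_script.py is just a placeholder for your project's entry point):
# bundle the project and copy bokeh's package metadata into the executable
pyinstaller --onefile --copy-metadata bokeh your_script.py
If you build from a .spec file instead, the equivalent is to add copy_metadata('bokeh') from PyInstaller.utils.hooks to the datas list, as described in the discussion linked above.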
I want to sync existing Firestore data from a collection to Algolia. I have followed the documentation provided, but I am running into issues with npx. If anyone could help, it would mean a lot.
Here's the output I get if I use a PowerShell terminal
/bin/bash: C:/Program Files/nodejs/npx: No such file or directory
I receive this error when using bash in WSL, even when providing the full path to npx and the script file.
Any help would mean a lot
I have tried and tested all the solutions I could find on the internet, from using Git Bash to execute the script to using WSL to execute the bash script, without any luck.
Use npx firestore-algolia-search@0.5.14. Something is broken in 0.5.15.
EDIT: I created a GitHub ticket for this issue.
This is my first posting on setting up a Yocto development environment on my Ubuntu system (Ubuntu 18.04.3 LTS/bionic), based on the information in the document at this web link (https://www.yoctoproject.org/docs/current/brief-yoctoprojectqs/brief-yoctoprojectqs.html).
All is well until... ~/poky/build$ bitbake core-image-sato
which results in this error:
File "/usr/local/lib/python3.5/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: No module named '_sqlite3'
Below is my effort to proceed past this error, which didn't resolve the error above. Please be generous and provide some guidance. I searched for relevant posting locations; any advice on a better place is appreciated. Thank you.
------------------------------------------------
A web search on this error turns up:
How to Use SQLite in Ubuntu | Chron.com
which suggests running:
~/poky/build$ sudo apt-get install sqlite3 libsqlite3-dev
which tells me this:
Reading package lists... Done
Building dependency tree
Reading state information... Done
libsqlite3-dev is already the newest version (3.22.0-1ubuntu0.1).
sqlite3 is already the newest version (3.22.0-1ubuntu0.1).
The following packages were automatically installed and are no longer
required:
linux-headers-5.0.0-23 linux-headers-5.0.0-23-generic linux-image-5.0.0-23-generic linux-modules-5.0.0-23-generic
linux-modules-extra-5.0.0-23-generic
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 12 not upgraded.
So, evidently sqlite3 exists on my system. Here are the SO references that I checked:
[ImportError: No module named '_sqlite3' in python3.3][1]
[importerror no module named '_sqlite3' python3.4][2]
[ImportError: No module named _sqlite3 (even after doing eveything)][3]
[ImportError: No module named _sqlite3][4]
[1]: https://stackoverflow.com/questions/20126475/importerror-no-module-named-sqlite3-in-python3-3
[2]: https://stackoverflow.com/questions/24052137/importerror-no-module-named-sqlite3-python3-4
[3]: https://stackoverflow.com/questions/35889383/importerror-no-module-named-sqlite3-even-after-doing-eveything
[4]: https://stackoverflow.com/questions/2665337/importerror-no-module-named-sqlite3
I have just kicked off a build following the Brief Quickstart steps verbatim on an otherwise fresh Ubuntu 18.04 install. SQLite is not even installed there, yet the build proceeds nicely. So the chances are pretty high that the Python installation on your development host is broken in some way or other. Still, there might be reasons for it:
you may have selected Python 3.5 explicitly because something else you did requires it
you may have selected Python 3.5 implicitly because you carried it over from an old installation, installed something that depended on it, or similar.
In any case, I'd guess that now tinkering with the link might break things somewhere else on your machine, which should be avoided IMHO.
So what are your options now? My advice would be to start building in a container; in its simplest form that requires no more than installing Docker and kicking off docker run -it ubuntu:bionic /bin/bash - at least to verify that things are generally working.
In the longer term you might want to make a specialized container for this, with one or two additions (sketched below):
1) have all the needed packages set up already
2) use a standard user instead of root.
This is the way I do things personally. An alternative would be to use the ready-made containers provided by CROPS, which are a known-good solution and significantly reduce problems originating from host system peculiarities.
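As an illustration only, a rough sketch of those two additions on top of the plain bionic container above; the package list is the one the Yocto quick start gives for Ubuntu, so double-check it against the release you are building:
# inside the ubuntu:bionic container, as root: install the build host packages
apt-get update
apt-get install -y gawk wget git-core diffstat unzip texinfo gcc-multilib \
    build-essential chrpath socat cpio python3 python3-pip python3-pexpect \
    xz-utils debianutils iputils-ping locales
locale-gen en_US.UTF-8
# bitbake refuses to run as root, so create and switch to a normal build user
useradd -m builder
su - builder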
I have a Python script that imports, among some other packages, the NLTK package.
The OS is Debian Stretch. Executing the script directly on Linux, everything works as it should. But running the mentioned script through Symfony Process returns the following error:
Traceback (most recent call last):
File \"/var/www/html/public/_import.py\", line 1, in <module>
import nltk
ModuleNotFoundError: No module named 'nltk'
If I simply comment out "import nltk", the whole script works properly, even with Symfony Process.
Since I have resolved this problem, I could not leave it without an answer for anyone who might be facing the same issue!
The problem was not caused by Symfony Process at all, but, unfortunately, by the installation behavior of the module named "NLTK".
In my case, the user used to install things on Debian Stretch was root.
Before making any changes, try the following command in your terminal and check the output: pip show nltk.
The output is:
Name: nltk
Version: 3.4.5
Summary: Natural Language Toolkit
Home-page: http://nltk.org/
Author: Steven Bird
Author-email: stevenbird1@gmail.com
License: Apache License, Version 2.0
Location: /usr/local/lib/python3.7/site-packages
Requires: six
Required-by:
Take a look at Location. Now it is in the right place. But by default, pip install nltk placed it, in my case with the user "root" installing things, under "/root/.local/lib/python3.7/site-packages/nltk"! So, when the user "www-data" tried to run it... ModuleNotFoundError: No module named 'nltk'
No module named 'nltk' - because it was not in the place from which it should be accessed!
So, as you might guess by now, the problem was caused by permission issues! Unfortunately, no error output or logs pointed me towards it being related to permissions!
The solution:
1) Not messing around with the system or making unnecessary changes was the first consideration;
2) I uninstalled the NLTK module with pip and looked for where the other modules were installed;
3) I installed NLTK again, but this time placing it where it should have been from the beginning: among the other modules, such as numpy, where it can be accessed easily and without issues!
The command used was:
pip install --target="/usr/local/lib/python3.7/site-packages" --upgrade nltk
Now NLTK resides at /usr/local/lib/python3.7/site-packages alongside all the other modules, and everything works like a charm!
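To double-check that the module is now visible to the web server's user as well (www-data in my setup), a quick test like this can help:
# run the import as the web server user; it should print the module path instead of failing
sudo -u www-data python3 -c "import nltk; print(nltk.__file__)"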
I hope this helps you!
I have run airflow initdb for the second time and received the following error:
alembic.util.exc.CommandError: Can't locate revision identified by '9635ae0956e7'
So I understand that I need to remove the already-registered revision, but I cannot seem to find it. When I open the MySQL CLI (using sudo mysql) I only see 4 databases: information_schema, mysql, performance_schema, sys.
How do I remove this revision and start fresh?
One solution that worked for me is to remove the file airflow.db under $AIRFLOW_HOME/ and run airflow initdb again. This recreates the file with a new revision.
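In practice that is just the following, assuming the default SQLite setup; keep in mind this wipes the existing metadata database:
# remove the old SQLite metadata database and recreate it from scratch
rm $AIRFLOW_HOME/airflow.db
airflow initdb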