I am facing an issue where my DAG isn't importing into Airflow due to a "ModuleNotFoundError: No module named 'package'" error.
The provider throwing the error shows up in the Airflow UI and works if I import it inside a PythonOperator. I am installing the provider via a requirements file in a Docker image, which also shows a correct installation and shows it's installed in site-packages. I am running via a Celery executor on Kubernetes. Could this have something to do with the issue?
This occurred because several Kubernetes pods had not been rebuilt, which left my scheduler without the required packages and caused the import errors. I manually restarted those nodes and rebuilt them from the latest Dockerfile, and everything ran smoothly.
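For anyone hitting something similar: a quick way to confirm whether the scheduler actually has the package is to check inside its pod directly. A minimal sketch, assuming a scheduler pod named airflow-scheduler-0 and a provider package named apache-airflow-providers-amazon (both placeholders for your actual names):
# check that the provider is installed in the scheduler pod (names are placeholders)
kubectl exec -it airflow-scheduler-0 -- pip show apache-airflow-providers-amazon
# check that it is importable by the same Python the scheduler uses
kubectl exec -it airflow-scheduler-0 -- python -c "import airflow.providers.amazon"
If pip can't find the package there even though it is in your image, the pod is likely still running a stale build.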
I upgraded the Docker image to use Airflow 1.10.14. Airflow is deployed with Helm, and I have an init container which executes a script to initialize Airflow. The init script contains these commands:
...
airflow upgradedb
alembic upgrade heads
...
The upgrade failed, so I need to roll back to the previously deployed release, which contains Airflow 1.10.10, but it is now hitting an Alembic error. Based on my searching, I tried deleting the row/record in the alembic_version table.
The error in the scheduler container is this:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DuplicateColumn) column "operator" of relation "task_instance" already exists
All the other pods are running fine (webserver and workers).
Any resolution/workaround to this issue?
Unless you are OK with scrapping your entire metadata DB (connections, variables, task runs, etc.), I would opt to just push forward to 1.10.15 and see if the bug you encountered is resolved there. To the best of my understanding, it is not possible to downgrade the DB after the upgrade has been done.
I'm suggesting the upgrade to 1.10.15 in case the issue you encountered is similar to this user's here. The CLI fix can be found here. If you hit a different issue with your 1.10.14 upgrade besides the CLI one I noted, it might be worth investigating a path to resolving that instead.
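If you do move forward, the path is a normal version bump plus the schema migration rather than a rollback. A minimal sketch of what would change, assuming the same image/init-script layout as in the question (pinning to 1.10.15 is the only real change):
# in the image: pin the newer patch release
pip install apache-airflow==1.10.15
# in the init script: run the migration forward again
airflow upgradedb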
I have installed Airflow from GitHub by downloading every single PyPI dependency and installing it in offline mode.
Now when I run the webserver, it gives a warning: "please make sure to build the frontend in static/ directory and then restart the server".
I can access the Airflow webpage, but it's distorted; the Airflow logo takes up almost 50% of the page.
Can somebody please help me set up Airflow properly?
Thanks in advance
I found the solution to my problem; posting it here in case someone else faces the same issue and is searching for a solution.
You need to compile the assets with the script compile_assets.sh.
The catch is that you need to execute this script from where Airflow is installed, not from the source location.
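Concretely, a minimal sketch of running it from the installed location (the site-packages path is resolved dynamically, so this assumes only that Airflow is importable and that your version still ships the script):
# cd into the www/ directory of the *installed* airflow package
cd "$(python -c 'import airflow, os; print(os.path.dirname(airflow.__file__))')/www"
./compile_assets.sh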
The message "please make sure to build the frontend in static/ directory and then restart the server" is telling you to compile the static assets.
The ./airflow/www/compile_assets.sh script used to serve this purpose, but it has been removed in newer versions of Airflow.
If you are using Breeze, run the command below to compile the assets:
breeze compile-www-assets
Help on Breeze is available via:
breeze --help
If you are building the package yourself, run the command below (this is documented at the end of the INSTALL file):
python setup.py compile_assets
COMPILING FRONT-END ASSETS (in case you see "Please make sure to build the frontend in static/ directory and then restart the server")
python setup.py compile_assets
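Put together, a build from source might look like the sketch below (this assumes a fresh checkout of the Airflow repo and a working Node.js toolchain, which the asset compilation needs):
git clone https://github.com/apache/airflow.git
cd airflow
# build the frontend assets, then install the package
python setup.py compile_assets
pip install .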
My OS is macOS. I followed the official Airflow installation guide to install it. But when I run the test airflow test tutorial print_date 2015-06-01 from the Airflow testing docs, it doesn't print any output. The result is here. I wonder whether I installed it successfully. I've run the other commands on the official Airflow testing page; they report no errors.
So far I see only WARNING output; that doesn't mean Airflow isn't running or was installed improperly. You'd have an easier time testing your install with airflow list_dags, and you probably need to run airflow initdb before most of the commands (and take a look at the airflow.cfg file).
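For reference, a minimal sanity check of a fresh install might look like this (these are the legacy 1.x CLI commands used in the question; they were renamed in Airflow 2):
# initialize the metadata database first
airflow initdb
# if this lists the example DAGs, the install is working
airflow list_dags
# then retry the tutorial task
airflow test tutorial print_date 2015-06-01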
I am trying to get my locally running Meteor app working on my remote (MediaTemple) server.
I have bundled it up and have a /myurl.com/bundle folder containing the following files:
main.js
npm-debug.log
programs
server
How do I get this to run?
You should take a look at the README inside the bundle folder. Normally everything you need to start your app is described there.
Make sure that Node.js and MongoDB are installed on your remote server. They are NOT included in your bundle.
If you are running a system like Debian or Ubuntu, you can normally do the installation with:
apt-get install nodejs mongodb
Make sure that Node.js is at release v0.10.36 or v0.10.38:
node --version
In the README you'll see the necessary environment variables, like MONGO_URL and PORT, that you need to set to start your Meteor app.
If you already have an Apache server running, port 80 is already taken, so try PORT=3000 to start your Meteor app.
Example:
MONGO_URL='mongodb://localhost:27017/yourapp' ROOT_URL="http://yourhost" PORT=3000 node main.js
If you set them inline as above, you do not need to export the environment variables before starting.
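If you'd rather export them, the equivalent sketch (same placeholder values as above) is:
export MONGO_URL='mongodb://localhost:27017/yourapp'
export ROOT_URL='http://yourhost'
export PORT=3000
node main.js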
Sometimes when starting, there are missing NPM modules and you get fibers errors.
In that case
cd programs/server
npm install
and then try starting again.
Good luck
Tom
(I'm writing this response assuming that you are not worried about scalability issues; respond in a comment if you want to scale your app.)
The best option for running a Node application, which a Meteor application is, is to use forever.
npm install -g forever
forever start simple-server.js
If you want to figure out how to see the log files and how to stop/restart your service, you can run forever --help to see all the commands.
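For example, a typical forever session for this bundle might look like the sketch below (the log file names are placeholders; -a appends so restarts don't fail on an existing log file):
# start main.js with separate forever/stdout/stderr logs
forever start -a -l forever.log -o out.log -e err.log main.js
# list running processes
forever list
# stop it by script name
forever stop main.js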