I couldn't run a gcloud command in GCP Composer.
Here is the error:
ERROR: (gcloud.composer.environments.run) The subcommand "variables" is not supported for Composer environments with Airflow version 2.1.2.
Here is the command:
gcloud composer environments run composer \
--location europe-west1 \
--project=platform-name \
--impersonate-service-account=SA-account.com variables -- \
--import /home/airflow/gcs/data/env_var.json
Could someone help me?
Support for the Composer release you mention only became available a few days before your post, so your gcloud installation is likely out of date relative to the product. Try updating the Google Cloud SDK (for example with gcloud components update). Alternatively, try the "gcloud beta" commands to engage pre-GA functionality.
The error you are encountering is likely caused by gcloud treating that Composer version as not yet GA and therefore requiring gcloud beta.
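As a sketch (reusing the environment, project, and file paths from your question - adjust them to your setup), the two fixes would look like this:

```shell
# Refresh the Cloud SDK so gcloud knows about the newer Composer release
gcloud components update

# Or fall back to the pre-GA surface with "gcloud beta"
gcloud beta composer environments run composer \
  --location europe-west1 \
  --project=platform-name \
  --impersonate-service-account=SA-account.com variables -- \
  --import /home/airflow/gcs/data/env_var.json
```

Both commands need valid GCP credentials, so run them from an authenticated shell.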
Related
My .gitlab-ci.yml needs to call not only the gcloud cli but also firebase cli.
The image: google/cloud-sdk:latest allows me to call the gcloud cli but not the firebase cli.
The image: devillex/docker-firebase allows me to call the firebase cli but not the gcloud cli.
I've tried installing the Firebase CLI by following Firebase's CI/CD instructions, but I got a permissions error that seemingly required sudo. However, GitLab doesn't even offer sudo, not surprisingly.
I've searched Stack Overflow and hub.docker.com, but I can't find an image that offers both CLIs. Do you know of one that offers both?
Is there somewhere else I can search or some way to search differently (e.g. are certain keywords helpful in searching for a docker image)?
If I wanted to try to combine image: google/cloud-sdk:latest and image: devillex/docker-firebase into one image, how would I do that? What's the first step? I've never made a Docker image let alone tried to merge two existing ones.
You can base your Docker image on google/cloud-sdk:alpine, then install npm and use it to install firebase-tools:
FROM google/cloud-sdk:alpine AS base
RUN apk add --update npm
RUN npm install -g firebase-tools
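Assuming you save the Dockerfile above in an empty directory, you could build and sanity-check the image like this (the image tag is arbitrary, and you need a local Docker daemon):

```shell
# Build the combined image from the Dockerfile above
docker build -t cloud-sdk-firebase .

# Verify both CLIs are present inside the image
docker run --rm cloud-sdk-firebase sh -c "gcloud --version && firebase --version"
```

Once pushed to a registry, you would reference the pushed tag in the image: key of your .gitlab-ci.yml instead of google/cloud-sdk:latest.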
Hi wonderful people of Stack Overflow!
Background
I have an Angular 9 application and CI set up with Codeship. This has been running fine until about two weeks ago when suddenly it stopped working after I upgraded from Angular 7.
Set up commands:
nvm ls
nvm install v10.15.1
nvm use v10.15.1
gem install rb-inotify -v 0.9.10
gem install sass
npm install -g firebase-tools@6.12.0
npm i firebase-functions@3.3.0
yes | npm install -g @angular/cli@9.1.12
npm i
cd functions
nvm use v10.15.1
npm i
cd ..
Which runs as expected. I have checked the versions in the CI environment with npm outdated, which shows me that the correct versions are being installed, the same as local.
Deploy script:
firebase use default
firebase functions:config:set test="test" --token "$FIREBASE_TOKEN"
firebase deploy --token "$FIREBASE_TOKEN"
Error:
firebase use default is successful, but firebase functions:config:set test="test" --token "$FIREBASE_TOKEN" now returns:
Error: HTTP Error: 404, Method not found.
Notes:
I've re-set up the $FIREBASE_TOKEN with the new CLI and can confirm this probably isn't the issue: when the token is incorrect (I removed the last character from it), it throws a different error saying so.
I can also confirm that the same script run locally works and deploys just fine - so while I can get around the problem this way, it isn't an ideal or long term solution.
Any ideas or help would be genuinely appreciated, as I'm somewhat lost as to what to do next.
This seems to be related to the firebase-tools version. When I installed the same version as you have (6.12.0), I got the same error.
I tried a newer version (8.7.0) and it works fine, with one more remark: when I ran exactly the same command as yours, I got this error:
Error: Invalid argument, each config value must have a 2-part key (e.g. foo.bar).
So the working command looks like this:
firebase functions:config:set test.test="test"
If you need an older version of firebase-tools, I tested a few other versions and it seems this has worked since version 7.1.0.
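Putting the two fixes together (the version number and the test.test key are just the ones discussed above; $FIREBASE_TOKEN is your CI token), the full sequence would look like:

```shell
# Upgrade firebase-tools past 7.1.0, where the 404 no longer occurs
npm install -g firebase-tools@8.7.0

# Use a 2-part key, as the newer CLI requires
firebase functions:config:set test.test="test" --token "$FIREBASE_TOKEN"
```

This needs a valid Firebase CI token, so it only runs meaningfully inside your CI environment or an authenticated shell.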
For anyone else having this issue - I never managed to solve this, sorry. However, I migrated my CI over to GitHub Actions easily, and it all works without any issue.
I receive this error soon after updating Cloud Composer with PyPI packages - it occurs consistently across the 4 configurations outlined below.
python packages added to Cloud Composer
forex_python>=1.5.0
datalab>=1.1.5
Airflow webserver error
502 Server Error
Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds.
GCP status over period status.cloud.google.com - no issues with any of:
Google Cloud Composer
Google Kubernetes Engine
Sample of StackDriver errors found
severity: "ERROR" textPayload: "worker: Warm shutdown (MainProcess)
severity: "ERROR" textPayload: "INFO:googleapiclient.discovery:URL being requested: POST https://pubsub.googleapis.com/v1/projects/FAKE_PROJECT/topics/europe-west2-FAKE_INSTANCE-composer-agent-to-backend-topic-FAKE_TOPIC:publish?alt=json
severity: "ERROR" textPayload: "Fetching cluster endpoint and auth data.
severity: "ERROR" textPayload: "kubeconfig entry generated for europe-west2-FAKE_INSTANCE-gke.
severity: "ERROR" textPayload: "/usr/local/lib/airflow/airflow/configuration.py:569: DeprecationWarning: Specifying both AIRFLOW_HOME environment variable and airflow_home in the config file is deprecated. Please use only the AIRFLOW_HOME environment variable and remove the config file entry.
Initial Issue
Env 1) created via Cloud Composer GUI
-created composer env X1 same specs as 2) below
-added 2 python packages listed above
-DAGS added and were working until 6-dec-2019
-around 6-dec-2019 Airflow webserver error -> result is environment unusable
Further Testing
CREATE STEP
Env 2)
gcloud beta composer environments create ${COMPOSER_NAME} \
--location=${COMPOSER_LOCATION} \
--image-version=composer-1.8.2-airflow-1.10.3 \
--disk-size=100GB \
--python-version=3 \
--node-count=3
Env 3)
gcloud composer environments create ${COMPOSER_NAME} \
--location=${COMPOSER_LOCATION} \
--image-version=composer-1.8.1-airflow-1.10.3 \
--disk-size=100GB \
--python-version=3 \
--node-count=3
Env 4) manually created composer env X2 same config as 2)
All Successful according to gcloud CLI and Cloud Composer GUI
PY PACKAGES STEP
Update 2) and 3) using...
gcloud composer environments update ${COMPOSER_NAME} \
--location ${COMPOSER_LOCATION} \
--update-pypi-packages-from-file=PyPi_req.txt
Update 4) using Cloud Composer GUI
All Successful according to gcloud CLI and Cloud Composer GUI
BUT All have the Airflow webserver error -> result is environment unusable
Has anyone observed and resolved this issue?
It's great to hear that the issue is solved; just to complement your earlier comment:
It is important to note that the Airflow web server is an add-on. Even when it is down, Airflow can still run normally, as long as nothing else is broken in the Composer environment. Based on this, if your Airflow web server is affected, you can still use the Airflow CLI (via gcloud).
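For example (environment name is a placeholder; the location is the one from your logs), you can run Airflow 1.10 CLI commands through gcloud even while the web server is down:

```shell
# List the DAGs in the environment without going through the web UI
gcloud composer environments run ENVIRONMENT_NAME \
  --location europe-west2 \
  list_dags
```

Any Airflow 1.10 CLI subcommand can be substituted for list_dags; gcloud proxies it into the environment's GKE cluster, so it requires authenticated GCP credentials.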
I also suggest you look at this documentation, where you will find useful information about how to manage this kind of issue and the causes that can provoke it.
Regarding the concern about the time it takes to update your Composer environment, please note that Composer needs to create a lot of resources. I suggest you look at the architecture of a Composer environment, where you will find all the components that need to be updated with each change.
I'm trying to use dotnet-warp as a global tool in my .NET Core Travis-CI build, because I like the idea of a single executable so much better than a folder full of 75ish files.
I can successfully add the tool and verify there's a tools/dotnet folder in the $PATH...
But the log indicates that because .NET Core has been added recently, I'll need to restart or logout before I can actually use the tool.
Is anyone aware of a way to make this work in the Travis-CI environment?
Ran into the same issue. Using the info from the Travis CI Installing Dependencies page and this comment on an issue about it, adding the following to my .travis.yml solved the problem:
before_script:
- export PATH=$PATH:/home/travis/.dotnet/tools
My build log:
$ export PATH=$PATH:/home/travis/.dotnet/tools
$ dotnet tool install -g dotnet-warp
You can invoke the tool using the following command: dotnet-warp
Tool 'dotnet-warp' (version '1.0.9') was successfully installed.
The command "dotnet tool install -g dotnet-warp" exited with 0.
$ cd ./src/[my project]/
The command "cd ./src/[my project]/" exited with 0.
$ dotnet-warp
Running Publish...
Running Pack...
Saved binary to "[my project]"
The command "dotnet-warp" exited with 0.
At this point I'm thinking about calling the bash command pip install fabric2 each time my operator executes, but this does not look like a good idea.
Create a requirements.txt file similar to the sample below and pass it when creating or updating the Cloud Composer environment.
Sample requirements.txt file:
scipy>=0.13.3
scikit-learn
nltk[machine_learning]
Pass the requirements.txt file to the gcloud composer environments update command to set your installation dependencies:
gcloud beta composer environments update ENVIRONMENT_NAME \
--update-pypi-packages-from-file requirements.txt \
--location LOCATION
Turns out you can use PythonVirtualenvOperator - it supports pip deps.
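A minimal sketch of that approach, assuming an Airflow 1.10-era installation (the DAG id, task id, and callable are illustrative names, not from the original post):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonVirtualenvOperator


def use_fabric():
    # Imports must live inside the callable: it runs in a fresh virtualenv,
    # not in the worker's main interpreter.
    import fabric2
    print(fabric2.__version__)


with DAG("fabric_example", start_date=datetime(2019, 12, 1),
         schedule_interval=None) as dag:
    run_with_fabric = PythonVirtualenvOperator(
        task_id="run_with_fabric",
        python_callable=use_fabric,
        requirements=["fabric2"],  # pip deps installed into a throwaway virtualenv per run
        system_site_packages=False,
    )
```

The trade-off is that the virtualenv is rebuilt on every task run, so for frequently scheduled DAGs installing the package environment-wide (as below) is usually faster.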
Another option available to Composer users is to install deps via Composer itself: https://cloud.google.com/composer/docs/how-to/using/installing-python-dependencies