I want to automatically delete instances in google compute engine.
For that I use gcloud compute instances delete instance-name --zone instance-zone
However gcloud asks me to confirm the action. Is it possible to omit this step and do it automatically?
gcloud -q compute instances delete instance-name --zone instance-zone
(--quiet, -q Disable all interactive prompts.)
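With the prompt disabled, deletion can be scripted, for example as a cleanup loop. A minimal sketch; the instance names and zone below are placeholders:

```shell
#!/usr/bin/env bash
# Delete several instances without confirmation prompts.
# "instance-1", "instance-2", and "us-central1-a" are placeholder values.
for name in instance-1 instance-2; do
  gcloud -q compute instances delete "$name" --zone=us-central1-a
done
```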
I'm deploying a firebase HTTP function using the command line and would like to change the security-level to 'secure-always' like you can do in the gcloud deploy command:
gcloud functions deploy FUNCTION_NAME --trigger-http --security-level=secure-always
I cannot use the UI to deploy this function (I do not store my source code on source repos), and am in a Firebase functions directory so I cannot use gcloud to deploy. Is there a gcloud command I can use to edit the function's security-level after deployment similar to:
gcloud functions remove-iam-policy-binding
or a way to specify the security-level pre-deployment? For reference I am deploying a NodeJS Cloud Function and using firebase deploy --only functions:FUNCTION_NAME. My current fallback option is manually examining the headers in the function, as it is the only option described in this docs page available in my situation: https://cloud.google.com/functions/docs/writing/http#security_levels
So far, there's no gcloud command to update a Cloud Function's security level from HTTP to HTTPS in place. For now, you can only update the security level by re-deploying the function or by editing it through the Google Cloud Functions console:
Click the preferred function
Click the Edit button
At the bottom of the Trigger section, click the edit button
Tick the Require HTTPS checkbox and click Save
Click the Next button at the bottom of the screen and deploy.
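The re-deploy route can be scripted with the same flag shown in the question. A sketch only; the function name, runtime, and region below are placeholder assumptions:

```shell
# Re-deploy the existing function with HTTPS enforced.
# FUNCTION_NAME, nodejs18, and us-central1 are placeholder values.
gcloud functions deploy FUNCTION_NAME \
  --runtime=nodejs18 \
  --trigger-http \
  --security-level=secure-always \
  --region=us-central1
```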
I have functions deployed to Cloud Functions and I want to configure CI/CD to deploy these functions from GitLab.
To do any operations from GitLab I need to get a Firebase auth token with the
firebase login:ci
command.
The problem is that I need to get this token using a gcloud service account, which cannot go through the browser login flow that opens when I run
firebase login:ci
I have this service account's data (project_id, private_key, private_key_id, etc.)
How should I authorize using this account?
If you set an environment variable called GOOGLE_APPLICATION_CREDENTIALS as a path pointing to your service account JSON file, the Firebase CLI will automatically pick that up and use it to authorize commands. You don't need to be logged in or provide a --token argument when this is the case.
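Since the question mentions GitLab, a minimal .gitlab-ci.yml sketch of this approach might look like the following. The variable name SERVICE_ACCOUNT_JSON, the image, and the file path are assumptions; the key content would be stored as a masked CI/CD variable:

```yaml
deploy:
  image: node:18
  script:
    # Write the service account key (stored as a CI/CD variable) to a file.
    - echo "$SERVICE_ACCOUNT_JSON" > /tmp/service-account.json
    # The Firebase CLI picks this variable up automatically.
    - export GOOGLE_APPLICATION_CREDENTIALS=/tmp/service-account.json
    - npx firebase-tools deploy --only functions
```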
If anyone finds this and is wondering how to do it on CircleCI, this worked for me.
Generate a JSON key file for your service account in the GCP Console
Set the JSON as a CircleCI environment variable at the org level or the project level
We use one at the org level called GSA_KEY
In your workflow config, before you run the firebase command, run this command:
echo $GSA_KEY > "$HOME"/gcloud.json
Then run your firebase deploy command, first setting GOOGLE_APPLICATION_CREDENTIALS to that path
The deployment run looks like:
steps:
  - checkout
  - run:
      name: Create SA key JSON
      command: echo $GSA_KEY > "$HOME"/gcloud.json
  - run:
      name: Deploy to Firebase
      command: GOOGLE_APPLICATION_CREDENTIALS="$HOME"/gcloud.json firebase deploy [project specific stuff]
Use the command:
gcloud auth activate-service-account xyz@project-id.iam.gserviceaccount.com --key-file=/path/to/file.json --project=project-id
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/file.json"
in Bash
Run the following commands 1 and 2 in order:
1. export GOOGLE_APPLICATION_CREDENTIALS="/path/to/file.json"
("/path/to/file.json" is the location where the service account JSON file is saved.)
2. npx firebase-tools deploy --json
Do not forget to use the right project when deploying, e.g.
firebase use dev or
firebase use qa
This is what worked for me:
Added the environment variable GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
firebase deploy --non-interactive # needed to bypass any prompts that might stall out a CI script.
You can also set the value of the Environment variable to be the JSON in the key file.
I try to run a bash command in this pattern ssh user@host "my bash command" using BashOperator in Airflow. This works locally because I have my public key on the target machine.
But I would like to run this command in Google Cloud Composer, which is Airflow + Google Kubernetes Engine. I understand that Airflow's core components run in 3 pods named according to this pattern: airflow-worker-xxxxxxxxx-yyyyy.
A naive solution was to create ssh keys for each pod and add their public keys to the target machine in Compute Engine. That worked until today; somehow my 3 pods have changed, so my ssh keys are gone. It was definitely not the best solution.
I have 2 questions:
Why has Google Cloud Composer changed my pods?
How can I resolve my issue ?
Pod restarts are not specific to Composer. I would say this is more related to Kubernetes itself:
Pods aren’t intended to be treated as durable entities.
So in general pods can be restarted for different reasons, and you shouldn't rely on any changes that you make to them.
How can I resolve my issue ?
You can solve this by taking into account that Cloud Composer creates a Cloud Storage bucket and links it to your environment. You can access the different folders of this bucket from any of your workers, so you could store your key (a single key pair is enough) in "gs://bucket-name/data", which you can access through the mapped directory "/home/airflow/gcs/data". Docs here
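Concretely, a BashOperator command could fetch the key from the mapped data folder before running ssh. A sketch, assuming the key is stored as id_rsa in the data folder; the filename, user, and host are placeholders:

```shell
# Copy the shared private key from the Composer bucket's mapped data folder.
# "id_rsa", "user", and "host" are placeholder values.
cp /home/airflow/gcs/data/id_rsa /tmp/id_rsa
chmod 600 /tmp/id_rsa
ssh -i /tmp/id_rsa -o StrictHostKeyChecking=no user@host "my bash command"
```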
I have a script in a VM that writes data to a bucket in another project.
I want to schedule this script with Airflow, but I get an IAM access problem when the script needs to write data:
AccessDeniedException: 403 148758369895-compute@developer.gserviceaccount.com does not have storage.objects.list access to ******
To launch the script I use the following command :
bash_command=' gcloud config set project project2 && gcloud compute --project "project1" ssh --zone "europe-west1-c" "VMname" --command="python script.py"',
If I want to launch the script from Google Cloud Shell, I need to use gcloud auth login, but how can I do this with Airflow/Composer?
I tried
bash_command='gcloud auth login && gcloud config set project project2 && gcloud compute --project "project1" ssh --zone "europe-west1-c" "VMname" --command="python script.py"',
without success
The recommended way to share resources across GCP projects is to grant the service account associated with your Cloud Composer environment the minimum required IAM permissions in the target project.
In your case, you would grant the service account 148758369895-compute@developer.gserviceaccount.com the Storage Object Viewer role (roles/storage.objectViewer) on either the target project or the target bucket.
Be careful, though! 148758369895-compute@developer.gserviceaccount.com appears to be the default Google Compute Engine service account, so granting it additional permissions may also give other VMs in your Composer project the same permissions. To avoid this, create a custom service account and associate it with the Composer environment when you create the environment.
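The bucket-level grant can be done with gsutil. A sketch; the bucket name below is a placeholder:

```shell
# Grant the Compute Engine default service account read/list access
# on the target bucket ("my-target-bucket" is a placeholder value).
gsutil iam ch \
  serviceAccount:148758369895-compute@developer.gserviceaccount.com:roles/storage.objectViewer \
  gs://my-target-bucket
```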
I'm using Codeship to deploy a firebase app.
In order to do so, I first need to login using the firebase login command. Problem is, I need to login in the browser and then return to the command line and perform the deployment.
Is there an automated way to supply credentials to Firebase?
Cheers
firebase login --no-localhost is what worked for me. You get the authorisation code from the browser, which you then paste into your terminal window.
The accepted answer is correct for the old version of firebase-tools, however this has been deprecated as of version 3. The new command to get the token is:
firebase login:ci
You should save this in some kind of environment variable, ideally, FIREBASE_TOKEN.
Then, with any command you intend to run via ci (i.e. deploy), you can run:
firebase [command] --token [FIREBASE_TOKEN]
See wvm2008's answer for a more up to date version
One option would be to mint a token for the build server and pass it into the CLI with:
firebase <command> --token <token>
You can also get a token from a system where you interactively logged in with:
firebase login:ci
See this page for more options.
Answer: Environment Variables.
Specifically, using a machine with a browser and firebase-tools installed, run firebase login:ci --no-localhost and paste the resulting key from the Firebase CLI into an environment variable named FIREBASE_TOKEN (not $FIREBASE_TOKEN).
In your deployment, say
npm install -g firebase-tools
firebase deploy
Done. If you care about Why? Read on.
The firebase/firebase-tools repo README indicates the following regarding Usage with CI Systems.
The Firebase CLI requires a browser to complete authentication, but is
fully compatible with CI and other headless environments.
On a machine with a browser, install the Firebase CLI. Run firebase
login:ci to log in and print out a new access token (the current CLI
session will not be affected).
NOTE: You actually want to type firebase login:ci --no-localhost
Store the output token in a secure but accessible way in your CI
system. There are two ways to use this token when running Firebase
commands:
1. Store the token as the environment variable FIREBASE_TOKEN and it will automatically be utilized.
2. Run all commands with the --token <token> flag in your CI system.
👉 NOTE: You MUST put your token in quotes if using the --token flag
🔥 👉 BIGGER NOTE: Do NOT prefix your environment variable with $ or you will get the nonsensical error messages below!!!
Your CLI authentication needs to be updated to take advantage of new features.
Please run firebase login --reauth
Error: Command requires authentication, please run firebase login
The order of precedence for token loading is flag, environment
variable, active project.
👌 The recommendation is to use the environment variable so the secret token is not stored/visible in the logs.
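Putting the notes above together, a CI deploy step might look like the following. The --only target is an assumption; the token value comes from your CI secret store:

```shell
# Token stored as a CI secret; note the quotes around the flag value.
firebase deploy --only functions --token "$FIREBASE_TOKEN"
```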