In my package.json I have this script:
"scripts": {
  "deploy:ci": "firebase deploy --force --only functions --token \"$SECRET\""
}
And my cloudbuild.yaml:
steps:
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "--tag", "gcr.io/$PROJECT_ID/functions", "."]
  - name: "gcr.io/$PROJECT_ID/functions"
    args: ["yarn", "deploy:ci"]
    secretEnv: ["FIREBASE_TOKEN"]
secrets:
  - kmsKeyName: projects/myproject/locations/global/keyRings/enviroment/cryptoKeys/firebase
    secretEnv:
      FIREBASE_TOKEN: VERY_LONG_UGLY_AND_BORING_BASE64_STRING
I want to know if it is possible to add some "special" permissions to the Cloud Build service account in order to allow the deployment without this FIREBASE_TOKEN.
(all files are in the same project)
Yes, it is possible, but there are a couple of things you need to do.
Assuming your cloudbuild.yaml looks like this:
steps:
  - name: "gcr.io/cloud-builders/npm"
    args: ["install"]
  - name: "gcr.io/cloud-builders/npm"
    args: ["run", "build"]
  - name: "gcr.io/cloud-builders/firebase"
    args: ["deploy"]
You need to build and upload your own firebase builder image from here. I assume you are already familiar with this; otherwise, I wrote a similar post about how to do this part here anyway...
After that, the IAM role you are asking for is Firebase Admin; you need to assign it to your ...@cloudbuild.gserviceaccount.com service account.
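For reference, a hedged sketch of how that grant might look with gcloud (the project ID and project number below are placeholders, not values from the question):
# Grant the Firebase Admin role to the Cloud Build service account.
# "myproject" and "123456789" (the project number) are placeholders.
gcloud projects add-iam-policy-binding myproject \
  --member="serviceAccount:123456789@cloudbuild.gserviceaccount.com" \
  --role="roles/firebase.admin"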
Voila! You can test it like this (using the SDK):
gcloud builds submit --config cloudbuild.yaml .
Of course, point it at your own file location.
Opinionated comment: I'm not a big fan of this approach; I tried it a few times and there were always some issues with it. But well, that's why it's called an opinionated comment :)
Good luck.
GitHub Actions YAML:
name: Deploy to Firebase Functions on merge
"on":
  push:
    branches:
      - main
env:
  CI: false
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: "Install yarn packages"
        run: yarn
        working-directory: "functions"
      - name: "Deploy to Firebase"
        uses: w9jds/firebase-action@master
        with:
          args: deploy --only functions
        env:
          FIREBASE_TOKEN: ${{ secrets.FIREBASE_TOKEN }}
When pushing to main and viewing this action, I got this weird error:
Error: Missing permissions required for functions deploy. You must have permission iam.serviceAccounts.ActAs on service account argus-750f6@appspot.gserviceaccount.com.
To address this error, ask a project Owner to assign your account the "Service Account User" role from this URL: https://console.cloud.google.com/iam-admin/iam?project=argus-750f6
I have enabled that role (I made a custom role for just Service Account User) on argus-750f6@appspot.gserviceaccount.com. I even made argus-750f6@appspot.gserviceaccount.com an Owner, but that didn't work. I'm at a loss. Any suggestions?
You can check this guide: Simple guide to start GitHub Actions to Firebase Functions.
According to the error, the service account doesn't have the correct permission role. The service account in the logs is the default App Engine service account; make sure its default role is Editor. It seems you changed the role of this default service account, and that's why your deployment fails.
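If the default role was removed, a sketch of how it could be restored with gcloud (using the project ID from the error message above; adapt as needed):
# Re-grant the Editor role to the App Engine default service account
gcloud projects add-iam-policy-binding argus-750f6 \
  --member="serviceAccount:argus-750f6@appspot.gserviceaccount.com" \
  --role="roles/editor"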
Then, if you are using a Firebase token, you can proceed to step 14 to generate the token you will use, and then to step 15 to store it.
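For example, a minimal sketch of generating that token locally with the Firebase CLI:
# Prints a CI token to store as the FIREBASE_TOKEN secret in your repo settings
firebase login:ci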
I'll try to be as clear as possible. I have also asked about related issues but didn't receive a convincing response.
I'm using React and Firebase for hosting.
Also, I'm storing my Firebase web API key in my .env file.
I set up Firebase Hosting using the Firebase CLI and chose to automatically deploy on merge or pull request.
After the setup finished, a .github folder with .yml files was created in my working directory:
.github
  - workflows
    - firebase-hosting-merge.yml
    - firebase-hosting-pull-request.yml
So now, when I deploy my project manually to Firebase (without pushing to GitHub) by running firebase deploy, everything works fine and my app is up and running.
However, when I make changes and push them to GitHub, the GitHub Actions workflow is triggered and the automatic deployment to Firebase starts. The build passes all the checks. However, when I visit the hosted URL, there is an error in the console saying Your API key is invalid, please check you have copied it correctly.
I tried a few workarounds, like storing my Firebase web API key in the GitHub secrets and accessing it in my .yml file.
# This file was auto-generated by the Firebase CLI
# https://github.com/firebase/firebase-tools
name: Deploy to Firebase Hosting on merge
'on':
  push:
    branches:
      - master
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build --prod
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: '${{ secrets.GITHUB_TOKEN }}'
          firebaseServiceAccount: '${{ secrets.FIREBASE_SERVICE_ACCOUNT_EVENTS_EASY }}'
          channelId: live
          projectId: my-project
        env:
          REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }}
          FIREBASE_CLI_PREVIEWS: hostingchannels
But I am still getting the error. I feel that the error is definitely due to the environment variables.
I have stored my Firebase web API key in my .env.production file located in the root directory.
Somehow GitHub Actions is not using the environment variables I defined.
Please let me know how I can manage my env variables so that they can be accessed by my workflow.
The answer is to put your custom env vars at the top level, before jobs:
name: Deploy to Firebase Hosting on merge
'on':
  push:
    branches:
      - master
env: # <--- here
  REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }} # <--- here
jobs:
  build_and_deploy:
    ...
And add these secrets in GitHub > Your project > Settings > Secrets.
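If you prefer the command line, the same secret can be created with the GitHub CLI (a sketch, assuming gh is installed and authenticated for the repository):
# Store the Firebase web API key as a repository secret
gh secret set REACT_APP_API_KEY --body "your-firebase-web-api-key"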
You can use the Create Envfile GitHub Action to create a .env file in your workflow.
To add a key to the envfile, add a key/value pair to the with: section. It must begin with envkey_.
steps:
  - uses: actions/checkout@v2
  - name: Use Node.js
    uses: actions/setup-node@v1
  - name: Make envfile
    uses: SpicyPizza/create-envfile@v1
    with:
      envkey_REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }}
      directory: './'
      file_name: '.env'
I was trying to automate the deployment of my Next.js application to App Engine using Cloud Build, but it keeps failing at the build phase with:
Error: > Build directory is not writeable. https://err.sh/vercel/next.js/build-dir-not-writeable
I can't seem to figure out how to fix this.
My current build file is below, and it keeps failing on step 2:
steps:
  # install dependencies
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # build the container image
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # deploy to app engine
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
    env:
      - 'PORT=8080'
      - 'NODE_ENV=production'
timeout: "1600s"
app.yaml:
runtime: nodejs12
handlers:
  - url: /.*
    secure: always
    script: auto
env_variables:
  PORT: 8080
  NODE_ENV: 'production'
Any help would be appreciated.
I can reproduce the same behavior after upgrading to Next.js version 9.3.3.
Cause
The issue is related to the npm builder, which is managed by Google: if you use gcr.io/cloud-builders/npm, it seems your build runs inside Google Cloud Build on an old Node version.
You can find the currently supported version here:
https://console.cloud.google.com/gcr/images/cloud-builders/GLOBAL/npm?gcrImageListsize=30
As you can see, Google's latest Node version there is 10.10, while the newest Next.js version requires at least Node 10.13.
Solution
Change gcr.io/cloud-builders/npm to
- name: node
  entrypoint: npm
in order to use the official Docker npm image, which runs on Node 12.
After those changes your build will be successful again.
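If you want to confirm the version gap yourself, a quick sanity check (assuming Docker is installed locally) might look like this:
# Compare the Node version inside the gcr builder with the official node image
docker run --rm --entrypoint node gcr.io/cloud-builders/npm --version
docker run --rm node:12 node --version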
Sidenote
Switching to the official npm image will increase the build duration (at least it did in my case); it takes around 2 minutes longer than the gcr npm builder.
Disclaimer: I have almost no knowledge of DevOps, containers, and CI/CD pipelines; it's something I'm learning on the fly.
I currently have a private Xamarin.Forms project hosted on Bitbucket, and I've created a SonarCloud account that I want to use to analyze the code within Bitbucket. From what I was able to gather from the SonarCloud onboarding process, I need to set up a build pipeline within Bitbucket.
This is a code snippet of what SonarCloud says that I need to use within the bitbucket-pipelines.yml file:
image: ************************** # Choose an image matching your project needs
clone:
  depth: full # SonarCloud scanner needs the full history to assign issues properly
definitions:
  caches:
    sonar: ~/.sonar/cache # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - ************************** # See https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html
          - sonar
        script:
          - ************************** # Build your project and run
          - pipe: sonarsource/sonarcloud-scan:1.0.1
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.3
pipelines: # More info here: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
  branches:
    master:
      - step: *build-test-sonarcloud
      - step: *check-quality-gate-sonarcloud
  pull-requests:
    '**':
      - step: *build-test-sonarcloud
      - step: *check-quality-gate-sonarcloud
I've read and re-read the https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html article, but I wasn't able to gather much information. It seems like a rabbit hole, with links to other articles that don't seem like they will get me where I need to go.
So, at the moment my major doubts are:
Can a Bitbucket pipeline build Xamarin.Forms projects? (Android and iOS)
If so, how can I set up a bitbucket-pipelines.yml in order to allow that?
If not, should I go to Azure Devops or some other platform that will allow me to do that and integrate with SonarCloud?
I'm already using Visual Studio App Center to build the Android and iOS apps (integrated with Bitbucket). But it seems that App Center can't be natively integrated with SonarCloud.
Can anyone help me with that?
Attaching a sample working bitbucket-pipelines.yml file for reference
image: node:10.15.0 # Choose an image matching your project needs
clone:
  depth: full # SonarCloud scanner needs the full history to assign issues properly
definitions:
  caches:
    sonar: ~/.sonar/cache # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - node # See https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html
          - sonar
        script:
          - npm install
          - pipe: sonarsource/sonarcloud-scan:1.2.0
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
pipelines: # More info here: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
  custom: # defines that this can only be triggered manually or by a schedule
    staging: # The name that is displayed in the list in the Bitbucket Cloud GUI
      - step:
          script:
            - echo "Scheduled builds in Pipelines are awesome!"
      - step:
          name: Build, test and analyze on SonarCloud
          caches:
            - node # See https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html
            - sonar
          script:
            - pipe: sonarsource/sonarcloud-scan:1.2.0
      - step:
          name: Check the Quality Gate on SonarCloud
          script:
            - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
  branches:
    master:
      - step: *build-test-sonarcloud
      - step: *check-quality-gate-sonarcloud
  pull-requests:
    '**':
      - step: *build-test-sonarcloud
      - step: *check-quality-gate-sonarcloud
The pipeline file is used to create a small container in Bitbucket's infrastructure and run the code analysis there.
This file provides information regarding:
When all the checks should run.
Which checks should be done.
Configuration of the image and container which will be created to run the pipeline.
Cache configuration for faster builds.
Reusable build-level code components, like steps.
Bitbucket docs for reference:
https://support.atlassian.com/bitbucket-cloud/docs/get-started-with-bitbucket-pipelines/
https://support.atlassian.com/bitbucket-cloud/docs/pipeline-triggers/
https://support.atlassian.com/bitbucket-cloud/docs/use-docker-images-as-build-environments/
https://support.atlassian.com/bitbucket-cloud/docs/yaml-anchors/
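Since the anchors and aliases above (&build-test-sonarcloud / *build-test-sonarcloud) are plain YAML, you can sanity-check the file locally before pushing. A sketch, assuming Python with PyYAML is available:
# Parse bitbucket-pipelines.yml to catch indentation/anchor mistakes early
python -c "import yaml; yaml.safe_load(open('bitbucket-pipelines.yml')); print('OK')"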
I'm trying to use Concourse to grab a Dockerfile definition from a git repository, do some work, build the Docker image, and push the new image to Artifactory. See below for the pipeline definition. At this time I have all stages up to the artifactory stage (the one that pushes to Artifactory) working. The artifactory stage exits with an error and the following output:
waiting for docker to come up...
sha256:c6039bfb6ac572503c8d97f42b6a419b94139f37876ad331d03cb7c3e8811ff2
The push refers to repository [artifactory.server.com:2077/base/golang/alpine]
a4ab5bf94afd: Preparing
unauthorized: The client does not have permission to push to the repository.
This would seem straightforward as an Artifactory permissions issue, except that I've tested locally with the Docker CLI and am able to push using the same user/pass as specified within destination_username and destination_password. I double-checked the credentials to make sure I'm using the same ones, and found that I am.
Question #1: is there any other known cause for this error? I've scoured the resource's GitHub page without finding anything. Any ideas why I may be getting the permissions error?
Without an answer to the above question, I'd really like to dig deeper into troubleshooting the problem. To do so, I use fly hijack to get a shell in the corresponding container. I notice that Docker is installed in the container, so my next step would be to do a docker import on the tarball for the image I'm trying to push and then perform a docker push to push it to the repo. When attempting to run the import I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is
the docker daemon running?
Question #2: Why can't I use docker commands from within the container? Perhaps this has something to do with the issue I'm seeing when pushing to the repo from the pipeline (I don't think so)? Is it because the container isn't running privileged? I thought the privileged argument would be supplied in the resource type definition, but if not, how can I run with privilege?
resources:
  - name: image-repo
    type: git
    source:
      branch: master
      private_key: ((private_key))
      uri: ssh://git@git-server/repo.git
  - name: artifactory
    type: docker-image
    source:
      repository: artifactory.server.com:2077/((repo))
      tag: latest
      username: ((destination_username))
      password: ((destination_password))
jobs:
  - name: update-image
    plan:
      - get: image-repo
      - task: do-stuff
        file: image-repo/scripts/do-stuff.yml
        vars:
          repository-directory: ((repo))
      - task: build-image
        privileged: true
        file: image-repo/scripts/build-image.yml
      - put: artifactory
        params:
          import_file: image/image.tar
Arghhhh. I found after much troubleshooting that destination_password wasn't being picked up properly, due to special characters and a lack of quotes. I fixed the issue by properly quoting the password in the vars YAML file included via the --load-vars-from (-l) flag.
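For anyone hitting the same thing, a hedged sketch of the fix (the target, pipeline, and file names below are placeholders):
# In vars.yml, quote any value containing special characters, e.g.:
#   destination_password: "p@ss!w0rd:with/specials"
# then load it when setting the pipeline:
fly -t my-target set-pipeline -p update-image -c pipeline.yml -l vars.yml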