We've all used Jenkins' predefined build environment variables like $WORKSPACE, $BUILD_NUMBER, etc. in a lot of our Jenkins jobs.
What I find hard to understand is how Jenkins arranges things so that printing $WORKSPACE in different jobs prints each job's own workspace. How does it map the variable $WORKSPACE to the corresponding Jenkins job?
Jenkins needs to know certain things about your build environment and jobs in order to do its job properly. For instance it needs to know the current build number, the location where your project should be checked out, who started the current build, etc. These things are typically exposed to you through the web interface.
Jenkins also exposes this information to your build scripts through environment variables that are injected into the build's environment when it is launched. These environment variables can then be picked up by your script to do whatever is necessary with them.
In the example you gave ($WORKSPACE), Jenkins needs to know the absolute path to this location on your build slave; if it didn't, it wouldn't be able to check out your source and build it. Since it already has this information, it exposes it to you as well to make writing your scripts easier.
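For example, a shell build step can read these variables like any other environment variable. This is a minimal sketch; the artifact name and dist/ path are made up for illustration:

#!/bin/bash
# Jenkins injects these variables into the environment before the step runs.
echo "Build #${BUILD_NUMBER} of ${JOB_NAME}"
echo "Checked out into: ${WORKSPACE}"
cd "${WORKSPACE}"
# Stamp the build number into an artifact name (illustrative):
tar czf "myapp-${BUILD_NUMBER}.tar.gz" dist/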
There's a complete list of the generally available environment variables provided by Jenkins here.
I have 3 stages (dev / staging / production). I've successfully set up publishing for each, so that the code will be deployed, using msbuild, to the correct location, with the correct web configs transformed - all within Jenkins.
The problem I'm having is that I don't know how to deploy to staging the code that was built on dev (and to production what was built on staging). I'm currently using SVN as the source control, so I think I would need to somehow save the latest revision number dev has built and somehow tell Jenkins to build/deploy staging based on that number.
Is there a way to do this, or a better alternative?
Any help would be appreciated.
Edit: I decided to use the save-the-revision-number method, which passes a file containing the revision number to the next job. To do this, I followed this answer:
How to promote a specific build number from another job in Jenkins?
It explains how to copy an artifact from one job to another using the promotion plugin. For the artifact itself, I added an "Execute Windows batch command" build step after the main build with:
echo DEV_ENVIRONMENT_CORE_REVISION:%SVN_REVISION%>env.properties
Then in the staging job, following the guide above, I copied that file and used the EnvInject plugin to read it and set an environment variable, which can then be used as a parameter in the SVN Repository URL.
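Putting the pieces together, the staging job ends up configured roughly like this (the repository URL is hypothetical):

# env.properties, produced by the dev job's batch step and copied over by the promotion step:
DEV_ENVIRONMENT_CORE_REVISION:12345

# EnvInject "Properties File Path": env.properties
# SVN Repository URL: https://svn.example.com/project/trunk@${DEV_ENVIRONMENT_CORE_REVISION}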
You should be able to identify the changeset number that was built in DEV and manually pass that changeset to the Jenkins build to pull the same changeset from SVN. Obviously that makes your deployment more manual. Maybe you can set up Jenkins to publish the changeset number to a file and then have the later environment's build read that file for the changeset number.
We used to use this model as well, and it was always complex. Eventually we moved to a build-once, deploy-many-times model using WebDeploy. This has made the process much simpler. Check it out - http://www.dotnetcatch.com/2016/04/16/msbuild-once-msdeploy-many-times/
I am considering using Firebase as an MBaaS; however, I couldn't find any reliable solution to the following problem:
I would like to set up two separate Firebase environments, one for development and one for production, but I don't want to do a manual copy of features (e.g. remote config setup, notification rules, etc.) between the development and production environments.
Is there any tool or method I can rely on? Setting up remote configuration or notification rules from scratch can be a daunting task and too risky.
Any suggestions? Is there a better approach than having two separate environments?
Before you post another answer that explains how to set up separate Firebase accounts: that is not the question; read it again. The question is: how do you TRANSFER changes between separate dev and prod accounts, or is there any better solution than manually copying between them?
If you are using firebase-tools, there is a command firebase use which lets you set which project you are using for firebase deploy.
firebase use --add will bring up a list of your projects; select one and it will ask you for an alias. From there you can run firebase use alias, and firebase deploy will push to that project.
In my personal use, I have my-app and my-app-dev as projects in the Firebase console.
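A typical session looks like this (the alias names are whatever you choose):

$ firebase use --add      # pick my-app-dev from the list and alias it "dev"
$ firebase use dev        # make my-app-dev the active project
$ firebase deploy         # deploys to my-app-dev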
As everyone has pointed out - you need more than one project/database.
But to answer your question about needing to copy settings/data etc. from development to production: I had the exact same need. After a few months of development and testing, I didn't want to copy the data manually.
My solution was to back up the data to a storage bucket and then restore it from there into the other database. It's a pretty crude way to do it, and I did a whole-database backup/restore, but you might look in that direction for a more controlled approach. I haven't used it (it's very new), but this might be a solution: the NPM module firestore-export-import.
Edit: Firestore backup/export/import info is here: Cloud Firestore Exporting and Importing Data
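For Firestore, the managed export/import commands look roughly like this (project and bucket names are placeholders, and the bucket must be accessible from both projects, per the permissions note below):

gcloud config set project my-app-dev
gcloud firestore export gs://my-app-backups/2020-06-01
gcloud config set project my-app-prod
gcloud firestore import gs://my-app-backups/2020-06-01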
If you're using Firebase RTDB, and not Firestore - this documentation might help:
Firebase Automated Backups
You will need to set the permissions correctly to allow your production database access to the same storage bucket as your development.
Good luck.
I'm not currently using Firebase, but considering it like yourself. It looks like the way to go is to create a completely separate project in the console. There was a blogpost recommending this on the old Firebase site; it looks to have been removed now, though. https://web.archive.org/web/20160310115701/https://www.firebase.com/blog/2015-10-29-managing-development-environments.html
Also, this discussion recommends the same:
https://groups.google.com/forum/#!msg/firebase-talk/L7ajIJoHPcA/7dsNUTDlyRYJ
The way I did it:
I had 2 projects on Firebase: one for DEV, the other for PROD.
Locally, my app also had 2 branches: one named DEV, the other named PROD.
The DEV branch always contains the google-services.json of the DEV Firebase project, and likewise for PROD.
This way I never have to swap the JSON files around manually.
You will need to manage different build types.
Follow these steps:
First, create a new project in the Firebase console and name it YOURAPPNAME-DEV.
Click the "Add Android app" button and create a new app. Name it com.yourapp.debug, for example. A new google-services.json file will be downloaded automatically.
Under your project's src directory, create a new directory named "debug" and copy the new google-services.json file there.
In your module-level build.gradle, add this:
buildTypes {
    debug {
        applicationIdSuffix ".debug"
    }
}
Now when you build a debug build, the google-services.json from the "debug" folder will be used, and when you build in release mode, the google-services.json in the module root directory will be used.
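With the snippet above in place, the layout looks like this (assuming the module is named app):

app/
  google-services.json                <- used for release builds
  src/debug/google-services.json      <- used for debug builds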
I'm updating this answer based on information I just found.
Step 1
In firebase.google.com, create your multiple environments (i.e. dev, staging, prod):
mysite-dev
mysite-staging
mysite-prod
Step 2
a. Move to the directory you want to be your default (i.e. dev)
b. Run firebase deploy
c. Once deployed, run firebase use --add
d. An option will come up to select from the different projects you currently have.
Scroll to the project you want to add: mysite-staging, and select it.
e. You'll then be asked for an alias for that project. Enter staging.
Run items a-e again for prod and dev, so that each environment will have an alias
Know which environment you're in
Run firebase use
default (mysite-dev)
* dev (mysite-dev)
staging (mysite-staging)
prod (mysite-prod)
(one of the environments will have an asterisk to the left of it. That's the one you're currently in. It will also be highlighted in blue)
Switch between environments
Run firebase use staging or firebase use prod to move between them.
Once you're in the environment you want, run firebase deploy and your project will deploy there.
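If you'd rather deploy without switching the active environment first, the CLI also accepts the alias directly (using the aliases set up above):

firebase deploy --project staging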
Here are a couple of helpful links...
CLI Reference
Deploying to multiple environments
Hope this helps.
We chose to fire up instances of the new Firebase emulator on a local dev server for Test and UAT, leaving GCP out of the picture altogether. It's designed exactly for this use-case.
https://firebase.google.com/docs/emulator-suite
This blogpost describes a very simple approach with a debug and release build type.
In a nutshell:
Create a new app on Firebase for each build type, using a different application ID suffix.
Configure your Android project with the latest JSON file.
Using applicationIdSuffix, change the Application Id to match the different Apps on Firebase depending on the build type.
=> see the blogpost for a detailed description.
If you want to use different build flavors, read this extensive blogpost from the official firebase blog. It contains a lot of valuable information.
Hope that helps!
To solve this for my situation I created three Firebase projects, each with the same Android project (i.e. same applicationId without using the applicationIdSuffix suggested by others). This resulted in three google-services.json files which I stored in my Continuous Integration (CI) server as custom environment variables. For each stage of the build (dev/staging/prod), I used the corresponding google-services.json file.
For the Firebase project associated with dev, in its Android project, I added the debug SHA certificate fingerprint. But for staging and prod I just have CI sign the APK.
Here is a stripped-down .gitlab-ci.yml that worked for this setup:
# This is a Gitlab Continuous Integration (CI) Pipeline definition
# Environment variables:
# - variables prefixed CI_ are Gitlab predefined environment variables (https://docs.gitlab.com/ee/ci/variables/predefined_variables.html)
# - variables prefixed GNDR_CI are Gitlab custom environment variables (https://docs.gitlab.com/ee/ci/variables/#creating-a-custom-environment-variable)
#
# We have three Firebase projects (dev, staging, prod) where the same package name is used across all of them but the
# debug signing certificate is only provided for the dev one (later if there are other developers, they can have their
# own Firebase project that's equivalent to the dev one). The staging and prod Firebase projects use real certificate
# signing so we don't need to enter a Debug signing certificate for them. We don't check the google-services.json into
# the repository. Instead it's provided at build time either on the developer's machine or by the Gitlab CI server
# which injects it via custom environment variables. That way the google-services.json can reside in the default
# location, the project's app directory. The .gitlab-ci.yml is configured to copy the dev, staging, and prod equivalents
# of the google-services.json file into that default location.
#
# References:
# https://firebase.googleblog.com/2016/08/organizing-your-firebase-enabled-android-app-builds.html
# https://stackoverflow.com/questions/57129588/how-to-setup-firebase-for-multi-stage-release
stages:
  - stg_build_dev
  - stg_build_staging
  - stg_build_prod

jb_build_dev:
  stage: stg_build_dev
  image: jangrewe/gitlab-ci-android
  cache:
    key: ${CI_PROJECT_ID}-android
    paths:
      - .gradle/
  script:
    - cp ${GNDR_CI_GOOGLE_SERVICES_JSON_DEV_FILE} app/google-services.json
    - ./gradlew :app:assembleDebug
  artifacts:
    paths:
      - app/build/outputs/apk/

jb_build_staging:
  stage: stg_build_staging
  image: jangrewe/gitlab-ci-android
  cache:
    key: ${CI_PROJECT_ID}-android
    paths:
      - .gradle/
  dependencies: []
  script:
    - cp ${GNDR_CI_GOOGLE_SERVICES_JSON_STAGING_FILE} app/google-services.json
    - ./gradlew :app:assembleDebug
  artifacts:
    paths:
      - app/build/outputs/apk/

jb_build_prod:
  stage: stg_build_prod
  image: jangrewe/gitlab-ci-android
  cache:
    key: ${CI_PROJECT_ID}-android
    paths:
      - .gradle/
  dependencies: []
  script:
    - cp ${GNDR_CI_GOOGLE_SERVICES_JSON_PROD_FILE} app/google-services.json
    # GNDR_CI_KEYSTORE_FILE_BASE64_ENCODED created on Mac via:
    #   base64 --input ~/Desktop/gendr.keystore --output ~/Desktop/keystore_base64_encoded.txt
    # Then the contents of keystore_base64_encoded.txt were copied and pasted as a Gitlab custom environment variable
    # For more info see http://android.jlelse.eu/android-gitlab-ci-cd-sign-deploy-3ad66a8f24bf
    - cat ${GNDR_CI_KEYSTORE_FILE_BASE64_ENCODED} | base64 --decode > gendr.keystore
    - ./gradlew :app:assembleRelease
      -Pandroid.injected.signing.store.file=$(pwd)/gendr.keystore
      -Pandroid.injected.signing.store.password=${GNDR_CI_KEYSTORE_PASSWORD}
      -Pandroid.injected.signing.key.alias=${GNDR_CI_KEY_ALIAS}
      -Pandroid.injected.signing.key.password=${GNDR_CI_KEY_PASSWORD}
  artifacts:
    paths:
      - app/build/outputs/apk/
I'm happy with this solution because it doesn't rely on build.gradle tricks which I believe are too opaque and thus hard to maintain. For example, when I tried the approaches using applicationIdSuffix and different buildTypes I found that I couldn't get instrumented tests to run or even compile when I tried to switch build types using testBuildType. Android seemed to give special properties to the debug buildType which I couldn't inspect to understand.
By contrast, CI scripts are quite transparent and easy to maintain, in my experience. Indeed, the approach I've described worked: when I ran each of the APKs generated by CI on an emulator, the Firebase console's "Run your app to verify installation" step went from
Checking if the app has communicated with our servers. You may need to uninstall and reinstall your app.
to:
Congratulations, you've successfully added Firebase to your app!
for all three apps as I started them one by one in an emulator.
Firebase has a page on this which goes through how to set it up for dev and prod
https://firebase.google.com/docs/functions/config-env
Set environment configuration for your project

To store environment data, you can use the firebase functions:config:set command in the Firebase CLI. Each key can be namespaced using periods to group related configuration together. Keep in mind that only lowercase characters are accepted in keys; uppercase characters are not allowed. For instance, to store the Client ID and API key for "Some Service", you might run:
firebase functions:config:set someservice.key="THE API KEY" someservice.id="THE CLIENT ID"
Retrieve current environment configuration

To inspect what's currently stored in environment config for your project, you can use firebase functions:config:get. It will output JSON something like this:
{
  "someservice": {
    "key": "THE API KEY",
    "id": "THE CLIENT ID"
  }
}
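Inside your Cloud Functions code, those values can then be read back at runtime. A minimal sketch, assuming the firebase-functions SDK and the keys set above:

import * as functions from 'firebase-functions';

// Values stored with `firebase functions:config:set` are exposed here at runtime.
const serviceKey = functions.config().someservice.key;
const serviceId = functions.config().someservice.id;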
Create two projects on Firebase, one for the dev environment and one for production.
Download the google-services.json file for each from the Firebase console,
and set up the SDK as per https://firebase.google.com/docs/android/setup or, for Crashlytics, https://firebase.google.com/docs/crashlytics/get-started?platform=android
First, place the respective google-services.json for each build type in the following locations:
app/src/debug/google-services.json
app/src/release/google-services.json
app/google-services.json
Note: the root app/google-services.json must exist; the tasks below overwrite it with the file for whichever build variant is being built.
Now, let's whip up some gradle tasks in your app's build.gradle to automate copying the appropriate google-services.json to app/google-services.json. Copy this into the app's build.gradle file:
task switchToDebug(type: Copy) {
    description = 'Switches to DEBUG google-services.json'
    from "src/debug"
    include "google-services.json"
    into "."
}

task switchToRelease(type: Copy) {
    description = 'Switches to RELEASE google-services.json'
    from "src/release"
    include "google-services.json"
    into "."
}
Great, but having to manually run these tasks before you build your app is cumbersome. We want the appropriate copy task above to run sometime before :assembleDebug or :assembleRelease runs. Let's see what happens when :assembleRelease is run (this is sample console output, not something to copy anywhere):
Zaks-MBP:my_awesome_application zak$ ./gradlew assembleRelease
Parallel execution is an incubating feature.
.... (other tasks)
:app:processReleaseGoogleServices
....
:app:assembleRelease
Notice the :app:processReleaseGoogleServices task. This task is responsible for processing the root google-services.json file. We want the correct google-services.json to be processed, so we must run our copy task immediately beforehand.
Add this to your app's build.gradle. Note the afterEvaluate enclosing.
afterEvaluate {
    processDebugGoogleServices.dependsOn switchToDebug
    processReleaseGoogleServices.dependsOn switchToRelease
}
Now, anytime :app:processReleaseGoogleServices is called, our newly defined :app:switchToRelease will be called beforehand. The same logic applies to the debug buildType. You can run :app:assembleRelease and the release version of google-services.json will automatically be copied to your app module's root folder.
The way we are doing it is by creating different JSON key files for different environments. We used the service account feature as recommended by Google, and have one key file for development and another for production.
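One common way to switch between such key files (not necessarily what this answer used) is the standard Google credentials environment variable; the path here is illustrative:

# Tell the Firebase Admin SDK / Google client libraries which service account to use:
export GOOGLE_APPLICATION_CREDENTIALS=/secrets/firebase-dev-service-account.json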
I'm a newbie with these techs (OpenStack / Docker / Vagrant) and I'm not sure I understand them correctly (most likely I don't). As I understand it, they give you a portable application that runs with the same development configuration, to ensure the whole development team has the same setup; but I don't understand what comes after development, or how a Dart app benefits from them.
My questions are:
1. Is my understanding correct?
2. Does the end user need these things installed on their system to run my application, the same as in the development stage?
3. How can I build/develop/distribute a Dart app through them? Maybe because these, as well as Dart, are new, I could not find enough info while googling.
Thanks
Docker is similar to a virtual machine like VMware or VirtualBox in that it creates an abstraction layer between the host operating system and the operating system running within a Docker container. The difference is that Docker doesn't emulate the entire hardware. The disadvantage is that Docker only runs on Linux, and only Linux can run inside Docker. If your host is an Intel system, you can't run an ARM Linux inside the container. (Theoretically you can run VirtualBox inside Docker and run Windows or other OSes in it.)
With Docker you can test your application locally in the same environment as the application will run when deployed.
When you, for example, create an application you want to run in Google Compute Engine, you install and test it locally inside a Docker container and then deploy the Docker container to Google Compute Engine as a whole unit. When there is a bug in the deployed application, you should be able to reproduce it locally as well, because it's just a 1:1 copy. No bug could have been introduced because the operating system or other dependencies were installed differently in the deployment environment than in the development/test environment.
The Dockerfile is a set of instructions for setting up a Docker container. If you want to create a new Docker container (for example, for a new developer), you just let Docker process the Dockerfile and a new Docker container is created from it. This makes it easy to create new containers.
If you want to update a dependency to a newer version, or add or remove components to/from the environment, you change the Dockerfile and create a new container from it. This way you avoid the situation where manual additions to and removals from existing containers let the containers of different developers/testers/deployments diverge from each other.
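As a minimal sketch, a Dockerfile for a Dart server application might look like this (the base image, paths, and entry point are illustrative assumptions):

# Build a container image for a Dart server app.
FROM dart:stable
WORKDIR /app
# Install dependencies first so they are cached between builds.
COPY pubspec.* ./
RUN dart pub get
# Copy the rest of the sources and define the startup command.
COPY . .
CMD ["dart", "run", "bin/server.dart"]

Recreating the container after a change is then just docker build -t my-dart-app . followed by docker run my-dart-app.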
I haven't used OpenStack myself but from the web page it seems to provide components and tools to build and manage your own cloud infrastructure.
I also haven't used Vagrant myself, but it seems to help automate a lot of tasks related to creating and managing virtual machines like VMware, VirtualBox, Docker, and probably others.
When you have, for example, a server application, it probably consists of a number of components that you don't want to all run in one container, but rather split up into several containers: one container for the database, one for the web server, one for the backend application (created in Dart, for example), and others. It can become cumbersome to manage all those containers; Vagrant helps automate the related tasks.
Application
I have an AngularJS application built by grunt, which injects environment variables (like API endpoints) into the AngularJS code at build time. However, this problem is more specific to grunt applications being deployed in docker containers.
Motivation
I've just recently started trying to integrate docker into the deployment process (something similar to this) but realized I no longer know how best to get the environment variables into the application. I'll describe the sequence of events:
Make changes to my code
Build a complete docker image that accepts API endpoints as environment variables
Push the docker image to my server
Run a container based off this new image
As you can see, grunt build occurs in step 2, but the environment variables are not available until step 4.
Possible Solution
I could wrap the call that starts the static server for my angular application in a small bash script which creates a JavaScript file containing the environment variables. I could then add a <script> tag to my index.html to import it, and then start the server as usual, with everything working. However, that feels like I'm sidestepping grunt inappropriately.
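A minimal sketch of that wrapper, assuming a hypothetical API_ENDPOINT variable and an illustrative file layout:

#!/bin/sh
# entrypoint.sh -- runs when the container starts (step 4), so the real
# environment variables are available by now.
cat > dist/scripts/env.js <<EOF
// Generated at container startup; loaded via a <script> tag in index.html.
window.__env = { apiEndpoint: '${API_ENDPOINT}' };
EOF
# Then start the static server as usual:
exec grunt connect:dist:keepalive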
Does anyone know of a straightforward way to inject environment variables into client side code around the same time the connect:dist:keepalive task runs? To clarify, I'm already using the ng-constant grunt task but that can only access environment variables at build time, not server startup time.