I'm creating a Quasar 2 / Vue 3 app that runs in OpenShift. We have secrets created in OpenShift for env variables like the path to the auth server. I have variables defined in quasar.config.js that work with static values; now I need to replace them with the values from the secrets.
build: {
  env: {
    VUE_APP_AUTHORITY_URL: "https://keycloak.dev.xxxxxx.com/auth"
  },
  . . .
How do I format this so my build process in OpenShift pulls the secrets?
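One way this is commonly wired up (a sketch, not a drop-in answer): quasar.config.js is plain Node code evaluated at build time, so it can read process.env and forward whatever the build container exposes:

// quasar.config.js -- a minimal sketch; falls back to the static value for local builds
build: {
  env: {
    VUE_APP_AUTHORITY_URL: process.env.VUE_APP_AUTHORITY_URL || "https://keycloak.dev.xxxxxx.com/auth"
  }
}

The other half is getting the secret into the build pod as an environment variable. Assuming an S2I build, the BuildConfig can map a secret key to an env var (the secret name "app-secrets" and key "authority-url" below are placeholders):

# BuildConfig excerpt -- secret name and key are hypothetical
strategy:
  sourceStrategy:
    env:
      - name: VUE_APP_AUTHORITY_URL
        valueFrom:
          secretKeyRef:
            name: app-secrets
            key: authority-url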
Related
I'll try to be as clear as possible. I have also asked about related issues but didn't receive a convincing response.
I'm using React, and Firebase for hosting.
I'm storing my Firebase Web API key in my .env file.
I set up Firebase Hosting using the Firebase CLI and chose to deploy automatically on merge or pull request.
After the setup finished, a .github folder with .yml files was created in my working directory:
.github
- workflows
  - firebase-hosting-merge.yml
  - firebase-hosting-pull-request.yml
So now, when I deploy my project manually to Firebase by running firebase deploy (without pushing to GitHub), everything works fine and my app is up and running.
However, when I push my changes to GitHub, the GitHub Actions workflow is triggered and the automatic deployment to Firebase starts. The build passes all the checks, but when I visit the hosted URL I get an error in the console saying "Your API key is invalid, please check you have copied it correctly."
I tried a few workarounds, like storing my Firebase Web API key in GitHub secrets and accessing it in my .yml file.
# This file was auto-generated by the Firebase CLI
# https://github.com/firebase/firebase-tools

name: Deploy to Firebase Hosting on merge
'on':
  push:
    branches:
      - master
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build --prod
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: '${{ secrets.GITHUB_TOKEN }}'
          firebaseServiceAccount: '${{ secrets.FIREBASE_SERVICE_ACCOUNT_EVENTS_EASY }}'
          channelId: live
          projectId: my-project
        env:
          REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }}
          FIREBASE_CLI_PREVIEWS: hostingchannels
But I am still getting the error. I feel the error is definitely due to the environment variables.
I have stored my Firebase Web API key in a .env.production file located in the root directory.
Somehow GitHub Actions is not using the environment variables I defined.
Please let me know how I can manage my env variables so that they can be accessed by my workflow.
The answer is to put custom env vars at the top level, before jobs:, so they are available to every step, including the build step (React inlines REACT_APP_* variables at build time, so they must be set when npm run build runs):
name: Deploy to Firebase Hosting on merge
'on':
  push:
    branches:
      - master
env: # <--- here
  REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }} # <--- here
jobs:
  build_and_deploy:
    ...
And add these secrets in GitHub > Your project > Settings > Secrets.
You can use the Create Envfile GitHub Action to create a .env file in your workflow.
To add a key to the envfile, add a key/value pair to the with: section. It must begin with envkey_.
steps:
  - uses: actions/checkout@v2
  - name: Use Node.js
    uses: actions/setup-node@v1
  - name: Make envfile
    uses: SpicyPizza/create-envfile@v1
    with:
      envkey_REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }}
      directory: './'
      file_name: '.env'
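Note that the generated .env only takes effect if it exists before the app is compiled, so the build step must come after the Make envfile step. A minimal sketch reusing the build command from the question:

steps:
  - uses: actions/checkout@v2
  - name: Make envfile
    uses: SpicyPizza/create-envfile@v1
    with:
      envkey_REACT_APP_API_KEY: ${{ secrets.REACT_APP_API_KEY }}
      directory: './'
      file_name: '.env'
  - run: npm ci && npm run build --prod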
So I have a .NET app with this file structure:
-MyApp
|__Client
|__Server
|__Shared
Inside the Shared directory I have defined a model for subscribers:
Subscriber.cs
public class Subscriber { }
Inside the Client directory, I used this shared Subscriber model:
public interface ISubscriberService
{
IList<Subscriber> Subscribers { get; set; }
}
Inside the Server directory:
public IEnumerable<Subscriber> Get()
{
List<Subscriber> subscribers = _unitOfWork.Subscribers.GetAll().ToList();
return subscribers;
}
Now here's where I ran into problems.
I needed to Dockerize the MyApp project, so I placed a Dockerfile inside Client. This creates a problem: the Subscriber model is in the Shared directory, and COPY ../Shared . is a forbidden path, since Docker can't copy from outside the build context. Nor could I find a way to place one Dockerfile in the main directory and specify the Client, Server, and Shared build configurations.
example:
-MyApp
|__Client
|__Server
|__Shared
|__Dockerfile
My solution to this problem was to use multiple Dockerfiles instead of one:
-MyApp
|__Client
|__Server
|__Shared
|__Dockerfile.Client
|__Dockerfile.Server
I feel like this will give me more control when accessing the shared library. Any suggestions?
SOLVED
docker-compose.yml
library:
  build:
    context: ./Shared
    dockerfile: dockerfile
client:
  build:
    context: ./Client
    dockerfile: dockerfile
  depends_on:
    - library
server:
  build:
    context: ./Server
    dockerfile: dockerfile
  depends_on:
    - library
So creating a docker-compose file and telling Docker that one directory depends_on another (the shared library) solves it.
EDIT:
Never mind, the first solution seems to work correctly. I wonder if there's another solution where I can place my Dockerfiles inside the project folders rather than outside and then cd into them.
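A common pattern that matches the edit's wish (a sketch, assuming stock docker-compose behavior rather than anything project-specific): keep each Dockerfile inside its own project folder, but point the build context at the solution root so the Dockerfile can still reach Shared:

# docker-compose.yml at the MyApp root -- the context is the root,
# so each Dockerfile can COPY both its own project and Shared/
services:
  client:
    build:
      context: .
      dockerfile: Client/Dockerfile
  server:
    build:
      context: .
      dockerfile: Server/Dockerfile

Inside Client/Dockerfile, paths like COPY Client/ ./Client/ and COPY Shared/ ./Shared/ then both resolve against the root context.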
I have a release pipeline in Azure DevOps; one of the steps runs tests written with NUnit.
YAML of the task:
steps:
- task: DotNetCoreCLI@2
  displayName: 'Run Tests'
  inputs:
    command: custom
    projects: '**/a/Tests.dll'
    custom: vstest
    arguments: '--logger:"trx;LogFileName=test.trx" '
In my test Setup I retrieve a few settings from environment variables using:
private static IConfiguration InitConfiguration()
{
    // Environment variables are added last, so they override values from appsettings.json
    var config = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json")
        .AddEnvironmentVariables()
        .Build();
    return config;
}
I am also storing the values of my environment variables in Azure DevOps variable groups.
Everything works fine when my variables are not locked: they are retrieved successfully and the tests run fine.
However, when I lock the secret ones, their values come back empty, and hence my tests fail to run.
What should I change in order for the locked variables to be correctly read by my application?
By default, variables are mapped into environment variables, but when you lock them you actually make them secrets. To have a secret mapped as an environment variable you must map it explicitly. Take a look at this:
variables:
  GLOBAL_MYSECRET: $(mySecret) # this will not work because the secret variable needs to be mapped as env
  GLOBAL_MY_MAPPED_ENV_VAR: $(nonSecretVariable) # this works because it's not a secret.

steps:
- powershell: |
    Write-Host "Using an input-macro works: $(mySecret)"
    Write-Host "Using the env var directly does not work: $env:MYSECRET"
    Write-Host "Using a global secret var mapped in the pipeline does not work either: $env:GLOBAL_MYSECRET"
    Write-Host "Using a global non-secret var mapped in the pipeline works: $env:GLOBAL_MY_MAPPED_ENV_VAR"
    Write-Host "Using the mapped env var for this task works and is recommended: $env:MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable
- bash: |
    echo "Using an input-macro works: $(mySecret)"
    echo "Using the env var directly does not work: $MYSECRET"
    echo "Using a global secret var mapped in the pipeline does not work either: $GLOBAL_MYSECRET"
    echo "Using a global non-secret var mapped in the pipeline works: $GLOBAL_MY_MAPPED_ENV_VAR"
    echo "Using the mapped env var for this task works and is recommended: $MY_MAPPED_ENV_VAR"
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # the recommended way to map to an env variable
So if you want to have a secret available in one task, you need to add something like this to that task:

env:
  MY_MAPPED_ENV_VAR: $(mySecret)

And you need to do this for each secret separately.
With classic pipelines it is the same: you need to map them in the task's environment variables settings.
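Applied to the Run Tests task from the question, it would look something like this (a sketch; mySecret stands for whichever locked variables the tests read):

steps:
- task: DotNetCoreCLI@2
  displayName: 'Run Tests'
  inputs:
    command: custom
    projects: '**/a/Tests.dll'
    custom: vstest
    arguments: '--logger:"trx;LogFileName=test.trx" '
  env:
    MY_MAPPED_ENV_VAR: $(mySecret) # map each locked variable explicitly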
I'm using Cloud Functions for Firebase with three different projects for development, testing and production purposes. Each project has a service-account.json. When I deploy the sources to an environment, the initialization looks like this:
var serviceAccount = require("./service-account-dev.json");

firebase.initializeApp({
  credential: firebase.credential.cert(serviceAccount),
  databaseURL: "https://nwDEV.firebaseio.com"
});
This is a bit difficult to handle, because I have to change the code every time I want to deploy to a different environment. Is there a way to have an overall configuration, e.g. in firebase.json or .firebaserc, which integrates the service account and decides on deployment which configuration to choose?
Otherwise, is there a possibility to detect which environment the code is running under, load the specific service-account.json, and set the databaseURL property accordingly?
You can use environment variables. https://firebase.google.com/docs/functions/config-env
Select the project (you can use the command firebase projects:list to see them):
firebase use my-project-development
Set an environment variable:
firebase functions:config:set app.environment="dev"
In your functions file, apply a conditional to choose the file:
const serviceAccount = functions.config().app.environment === 'dev' ? 'credentials-dev.json' : 'credentials-prod.json';
Then you can use the file depending on the project:
firebase.initializeApp({
  credential: firebase.credential.cert(serviceAccount),
  databaseURL: "https://nwDEV.firebaseio.com"
});
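If the database URL also differs per environment, the same conditional can drive both values; a minimal sketch (the prod URL below is a hypothetical placeholder):

const isDev = functions.config().app.environment === 'dev';

firebase.initializeApp({
  credential: firebase.credential.cert(isDev ? 'credentials-dev.json' : 'credentials-prod.json'),
  // "nwPROD" is a placeholder -- substitute the real production database URL
  databaseURL: isDev ? "https://nwDEV.firebaseio.com" : "https://nwPROD.firebaseio.com"
});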
From what I understand of your question, what you are looking for boils down to a way to deploy your cloud functions with the appropriate settings for production, development, and testing. I assume each of these is a unique project, and therefore database, in your Firebase environment.
If the above is true then the following should help.
Firebase Cloud Functions, and the CLI more generally, can deploy to a specific project in your Firebase environment. To do this, execute the following command in the terminal while in the cloud functions directory:
$ firebase use --add
This will allow you to pick your additional project (for instance, development) and assign it an alias (I recommend "development" in that case). Then, when deploying your functions, you can choose which project (and therefore database) to deploy to by using the alias.
$ firebase use default # sets environment to the default alias
$ firebase use development # sets environment to the development alias
For more information please see: https://firebase.googleblog.com/2016/07/deploy-to-multiple-environments-with.html
One thing you may have to do for this to work is use the default config settings for Cloud Functions:

admin.initializeApp(functions.config().firebase);
Short answer:
The GCLOUD_PROJECT environment variable will be unique to your project, hence you can utilise it like this (the sample code handles 2 different projects, but you can extend it using a switch or any other conditional statement):

const env = process.env.GCLOUD_PROJECT === 'my-app-prod' ? 'prod' : 'dev';

then use that env variable to load the intended configuration.
Full example (TypeScript):

Update the .firebaserc file:
{
  "projects": {
    "default": "my-app-dev",
    "prod": "my-app-prod"
  }
}
Create and modify your ./somewhere/config.ts file accordingly. Let's say you're using AWS services (please make sure to secure your configuration details):
export const config = {
  dev: {
    awsRegion: 'myDevRegion',
    awsAccessKey: 'myDevKey',
    awsSecretKey: 'myDevSecretKey'
  },
  prod: {
    awsRegion: 'myProdRegion',
    awsAccessKey: 'myProdKey',
    awsSecretKey: 'myProdSecretKey'
  }
};
Now the above items can be used in the index.ts file:
import { config } from './somewhere/config';
import * as aws from 'aws-sdk';

. . .

const env = process.env.GCLOUD_PROJECT === 'my-app-prod' ? 'prod' : 'dev';

const awsCredentials = {
  region: config[env].awsRegion,
  accessKeyId: config[env].awsAccessKey,
  secretAccessKey: config[env].awsSecretKey
};

aws.config.update(awsCredentials);

. . .

export const myFuncToUseAWS = functions....
Now the deployment:
Dev environment deployment: $ firebase deploy --only functions -P default
Prod environment deployment: $ firebase deploy --only functions -P prod
In my Meteor (1.2) app I've separated files for development and production, e.g.:

client/lib/appVars.config.PROD.js
client/lib/appVars.config.CONFIG.js

Ideally the "twin" files have the same variables, functions, etc. with small differences, but the (global) variables and functions common to debug and production have the same names.
Is there a way to call meteor run with a command-line parameter DEBUG_MODE = true | false so that I can load either one file or the other, depending on the current mode (debug, production)?
Set different environment variables and run via CLI with meteor run --settings settings.json.
Then you just need a development and a production (and staging?) settings.json.
Example of a settings file:
{
  "awsBucket": "my-example-staging",
  "awsAccessKeyId": "AABBCCddEEff12123131",
  "awsSecretKey": "AABBCCddEEff12123131+AABBCCddEEff12123131",
  "public": {
    "awsBucketUrl": "https://my-meteor-example.s3.amazonaws.com",
    "environment": "staging"
  },
  "googleApiKey": "AABBCCddEEff12123131"
}
EDIT:
To access your settings keys, just use:

Meteor.settings.awsBucket
Security update (thanks Dave Weldon):
See https://docs.meteor.com/#/full/structuringyourapp
Re production vs development: you should have two settings.json files, the standard one for production (.config/settings.json) and a development one (.config/development/settings.json), and when you boot outside of production you boot meteor --settings .config/development/settings.json
Re client side, note that if you make a key public, e.g.:

{
  "service_id": "...",
  "service_secret": "...",
  "public": {
    "service_name": "..."
  }
}

then only Meteor.settings.public.service_name will be accessible on the client.
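A quick sketch of what that means in practice, assuming the settings file above:

// server-side code: the full settings object is available
const secret = Meteor.settings.service_secret;

// client-side code: only the "public" block is shipped to the browser
const name = Meteor.settings.public.service_name;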