Problem with hasura cli metadata apply && hasura migrate apply - hasura

I've tried to execute this Hasura CLI command:
hasura metadata apply && hasura migrate apply --database-name default && hasura metadata reload
Doing so, I get this error:
FATA[0000] cannot read config: invalid config version.
My config.yml version is 3, and the server is v2.0.0-alpha.4.
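For reference, a config v3 config.yml usually looks roughly like this (the endpoint value is a placeholder); running hasura version shows which CLI build is parsing the file, since only CLI builds that understand config v3 can read it:

version: 3
endpoint: http://localhost:8080
metadata_directory: metadata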

Related

Github Actions - How can I make my env variable (stored in .env file) available in my workflow

I'll try to be as clear as possible. I have also asked about related issues but didn't receive a convincing response.
I'm using React and firebase for hosting.
Also, I'm storing my firebase web API key in my .env file.
I set up firebase hosting using Firebase CLI and chose to automatically deploy on merge or pull request.
After the setup finished, a .github folder with .yml files was created in my working directory:
.github
  - workflows
    - firebase-hosting-merge.yml
    - firebase-hosting-pull-request.yml
So now when I deploy my project manually to Firebase (without pushing to GitHub) by running firebase deploy, everything works fine and my app is up and running.
However, when I make changes and push them to GitHub, GitHub Actions are triggered and the automatic deployment to Firebase starts. The build passes all the checks. However, when I visit the hosted URL, I get an error in the console saying Your API key is invalid, please check you have copied it correctly.
I tried a few workarounds, like storing my Firebase web API key in GitHub secrets and accessing it in my .yml file.
# This file was auto-generated by the Firebase CLI
# https://github.com/firebase/firebase-tools
name: Deploy to Firebase Hosting on merge
'on':
  push:
    branches:
      - master
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build --prod
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: '${{ secrets.GITHUB_TOKEN }}'
          firebaseServiceAccount: '${{ secrets.FIREBASE_SERVICE_ACCOUNT_EVENTS_EASY }}'
          channelId: live
          projectId: my-project
        env:
          REACT_APP_API_KEY: ${{secrets.REACT_APP_API_KEY}}
          FIREBASE_CLI_PREVIEWS: hostingchannels
But I am still getting the error. I feel the error is definitely due to the environment variables.
I have stored my Firebase web API key in my .env.production file located in the root directory.
Somehow GitHub Actions is not using the environment variables I defined.
Please let me know how I can manage my env variables so that they can be accessed by my workflow.
The answer is to put custom env vars at the first level, before jobs:
name: Deploy to Firebase Hosting on merge
'on':
  push:
    branches:
      - master
env: # <--- here
  REACT_APP_API_KEY: ${{secrets.REACT_APP_API_KEY}} # <--- here
jobs:
  build_and_deploy:
    ...
And add these secrets in GitHub > Your project > Settings > Secrets.
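If you prefer the command line over the web UI, the same secret can also be added with the GitHub CLI (assuming gh is installed and authenticated; the value below is a placeholder, and the secret name must match the one referenced in the workflow):

# hypothetical example: store the API key as a repository secret
gh secret set REACT_APP_API_KEY --body "your-firebase-web-api-key"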
You can use the Create Envfile GitHub Action to create a .env file in your workflow.
To add a key to the envfile, add a key/value pair to the with: section. It must begin with envkey_.
steps:
  - uses: actions/checkout@v2
  - name: Use Node.js
    uses: actions/setup-node@v1
  - name: Make envfile
    uses: SpicyPizza/create-envfile@v1
    with:
      envkey_REACT_APP_API_KEY: ${{secrets.REACT_APP_API_KEY}}
      directory: './'
      file_name: '.env'

Artifactory: Error when using artifactory docker bitbucket pipeline

I am passing the following
- pipe: JfrogDev/artifactory-docker:0.2.12
  variables:
    ARTIFACTORY_URL: "${JFROG_URL}"
    ARTIFACTORY_USER: "${JFROG_USER}"
    ARTIFACTORY_PASSWORD: "${JFROG_PASSWORD}"
    DOCKER_IMAGE_TAG: "xxxx/web-interface:${BITBUCKET_BUILD_NUMBER}"
    DOCKER_TARGET_REPO: "local-containers"
And I am getting back the following
Status: Downloaded newer image for jfrog-int-docker-open-docker.bintray.io/artifactory-docker:0.2.12
INFO: Starting pipe execution...
jfrog rt config --url=$JFROG_URL --user=$JFROG_USER --password=$JFROG_PASSWORD --interactive=false
[Error] Wrong number of arguments. You can read the documentation at https://www.jfrog.com/confluence/display/CLI/JFrog+CLI
I can't find anything wrong based on the documentation; it looks like the pipe makes a CLI call and that call is failing.
It looks like there is an issue when trying to configure the server without the mandatory server-id argument:
https://bitbucket.org/JfrogDev/artifactory-docker/src/3b32b5d01d31c07acadb2a0d29a240110f88d59d/pipe/pipe.sh#lines-65
Instead, you can use the new jfrog-setup-cli pipe. This pipe downloads and configures the JFrog CLI.
For example, in your case:
- pipe: jfrog/jfrog-setup-cli:1.0.0
- source ./jfrog-setup-cli.sh
- jfrog rt docker-push "xxxx/web-interface:${BITBUCKET_BUILD_NUMBER}" local-containers
It requires populating one secured environment variable prefixed with JF_ARTIFACTORY_ (e.g. JF_ARTIFACTORY_MYSERVER) with a JFrog CLI server token. This token is created locally on your machine using the jfrog rt config export command.
Read more about this pipe here.
Read more about the jfrog rt config export command here.
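For completeness, a rough sketch of creating that token locally (the server ID myserver and the URL are placeholders, not from the original post); the exported token is then stored as a secured repository variable such as JF_ARTIFACTORY_MYSERVER:

# run locally, not in the pipeline: register the Artifactory server under a server ID
jfrog rt config myserver --url="https://mycompany.jfrog.io/artifactory" --user="$JFROG_USER" --password="$JFROG_PASSWORD" --interactive=false
# print a server token for that server ID and copy it into the secured variable
jfrog rt config export myserver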

Deploy on Meteor galaxy server with bitbucket and deployment token as variable

Hello, I want to use automatic deployment on Bitbucket to the Galaxy server with a deployment token.
For this reason I am creating a deployment token that is committed in the repository.
https://galaxy-guide.meteor.com/deploy-guide.html#deployment-token
To strengthen the security I would like to use repository variables in Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html
And to store the Meteor deployment token in the variables instead of in a file.
For the deployment we use the command:
METEOR_SESSION_FILE=deployment_token.json
And my question is: is there any way I can use some variable (string) where the token is used, like
METEOR_SESSION_DEPLOYMENT_TOKEN=$METEOR_TOKEN
instead of reading it from a file?
Some research, after having the same problem, brought me to this article, which solves the problem that you can't feed Meteor just the JSON in an env var, in the following simple way:
Add the JSON file content as an env var and then echo it out into a file on deploy.
echo $METEOR_TOKEN_FILE > deploy_token.json
METEOR_SESSION_FILE=deploy_token.json
Thanks to this article I figured it out.
Save the JSON settings as an env variable and then, in the deployment process:
echo $DEPLOY_SESSION_FILE > deployment_token.json
METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username
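Put together, a minimal bitbucket-pipelines.yml step could look roughly like this (branch name, app name, and settings path are placeholders, and it assumes Meteor is available in the build image; DEPLOY_SESSION_FILE is the secured repository variable holding the token JSON):

pipelines:
  branches:
    master:
      - step:
          name: Deploy to Galaxy
          script:
            # write the token JSON from the repository variable into the file Meteor expects
            - echo $DEPLOY_SESSION_FILE > deployment_token.json
            - METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy myApp-staging.meteorapp.com --settings config/staging/settings.json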

Google Cloud Composer DataflowJavaOperator: 403 Forbidden When Creating Job in Another Project

I am trying to use DataflowJavaOperator on our testing composer environment, but I am running into a 403 forbidden error. My intention is to kick off a Dataflow Java job on a different project using the test composer environment.
t2 = DataFlowJavaOperator(
    task_id="run-java-dataflow-job",
    jar="gs://path/to/dataflow-jar.jar",
    dataflow_default_options=config_params["dataflow_default_options"],
    gcp_conn_id=config_params["gcloud_config"]["conn_id"],
    dag=dag
)
My default options look like
'dataflow_default_options': {
    'project': 'other-project',
    'input': 'other-project:dataset.table',
    'output': 'other-project:dataset.table'
    ...
}
I have tried creating a temporary composer test environment in the same project as the Dataflow job, and this allows me to use DataflowJavaOperator as expected. Only when the composer environment resides in a different project from the Dataflow job does DataflowJavaOperator not work as expected.
My current workaround is to use BashOperator, use "env" to set GOOGLE_APPLICATION_CREDENTIALS to the key file path behind gcp_conn_id, store the jar file in our test composer bucket, and just run this bash command:
java -jar /path/to/dataflow-jar.jar \
[... all Dataflow job options]
Is it possible to use DataflowJavaOperator to kick off Dataflow jobs on another project?
You need a different GCP connection created for Composer to interact with your second GCP project, and you need to pass that connection id to gcp_conn_id in DataFlowJavaOperator.
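A rough sketch of creating such a connection through the Composer environment's Airflow CLI (assuming an Airflow 1.10-era environment; the connection id, environment name, location, project, and key path are placeholders):

gcloud composer environments run my-composer-env \
  --location us-central1 \
  connections -- --add \
  --conn_id=gcp_other_project \
  --conn_type=google_cloud_platform \
  --conn_extra='{"extra__google_cloud_platform__project": "other-project", "extra__google_cloud_platform__key_path": "/home/airflow/gcs/data/other-project-key.json"}'

The DAG then passes that id as gcp_conn_id (here via config_params["gcloud_config"]["conn_id"]), and the service account behind the key needs Dataflow permissions in the other project.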

Wercker Firebase Deployment issue

I am trying to automate deployment to Firebase Hosting via Wercker and I am continuously getting this error.
Following this tutorial:
https://medium.com/@pradeep1991singh/integrate-wercker-with-bitbucket-firebase-and-slack-7eb3bc38543d
Stack Trace
> export WERCKER_STEP_ROOT="/pipeline/script-5ea4a2c6-b11f-4972-991a-eec61b3d43af"
export WERCKER_STEP_ID="script-5ea4a2c6-b11f-4972-991a-eec61b3d43af"
export WERCKER_STEP_OWNER="wercker"
export WERCKER_STEP_NAME="script"
export WERCKER_REPORT_NUMBERS_FILE="/report/script-5ea4a2c6-b11f-4972-991a-eec61b3d43af/numbers.ini"
export WERCKER_REPORT_MESSAGE_FILE="/report/script-5ea4a2c6-b11f-4972-991a-eec61b3d43af/message.txt"
export WERCKER_REPORT_ARTIFACTS_DIR="/report/script-5ea4a2c6-b11f-4972-991a-eec61b3d43af/artifacts"
source "/pipeline/script-5ea4a2c6-b11f-4972-991a-eec61b3d43af/run.sh" < /dev/null
[2017-08-15T13:38:45.071Z] ----------------------------------------------------------------------
[2017-08-15T13:38:45.076Z] Command: /usr/local/bin/node /usr/local/bin/firebase deploy --project --token --debug
[2017-08-15T13:38:45.076Z] CLI Version: 3.9.2
[2017-08-15T13:38:45.076Z] Platform: linux
[2017-08-15T13:38:45.076Z] Node Version: v7.10.1
[2017-08-15T13:38:45.077Z] Time: Tue Aug 15 2017 13:38:45 GMT+0000 (UTC)
[2017-08-15T13:38:45.077Z] ----------------------------------------------------------------------
[2017-08-15T13:38:45.091Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase","https://www.googleapis.com/auth/cloud-platform"]
[2017-08-15T13:38:45.091Z] > no authorization credentials were supplied or found
⚠ Your CLI authentication needs to be updated to take advantage of new features.
⚠ Please run firebase login --reauth
[2017-08-15T13:38:45.093Z] > command requires scopes: ["email","openid","https://www.googleapis.com/auth/cloudplatformprojects.readonly","https://www.googleapis.com/auth/firebase"]
[2017-08-15T13:38:45.093Z] > no authorization credentials were supplied or found
The issue was related to the wercker.yml file. The step was not defined properly, and it seems the environment variables weren't getting read properly.
Steps to narrow down the issue:
Log out of Firebase locally and then try to perform a firebase list - you should get an error.
Try the same with --token, passing the token, and you will get a list of all the valid projects if the token is valid.
Take the valid project name and token, hardcode them in the yml, and try once to make sure the build executes properly.
Finally, when all are working, expose them as protected variables and everything works like a charm!
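As an illustration only, a wercker.yml deploy step along these lines passes the protected variables through to the CLI (FIREBASE_PROJECT and FIREBASE_TOKEN are placeholder variable names, not from the original post):

deploy:
  steps:
    - script:
        name: deploy to firebase hosting
        code: |
          # both values come from protected pipeline variables configured in Wercker
          firebase deploy --project "$FIREBASE_PROJECT" --token "$FIREBASE_TOKEN" --non-interactive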
