Running an sbt command from Ansible's shell module

I am trying to run an sbt command using Ansible's shell module, as follows, from main.yml in a role's tasks directory:
- shell: ./sbt clean reload compile
I have also tried the following:
- shell: /usr/sbin/sbt clean reload compile
Neither command works. The output of which sbt is
/usr/bin/sbt
The error message I get from ansible is:
fatal: [testserver]: FAILED! => {"changed": true, "cmd": "/usr/bin/sbt clean reload compile", "delta": "0:00:00.062588", "end": "2016-10-04 21:36:26.883947", "failed": true, "rc": 127, "start": "2016-10-04 21:36:26.821359", "stderr": "/bin/sh: 1: /usr/bin/sbt: not found", "stdout": "", "stdout_lines": [], "warnings": []}

I had to use local_action with the shell command (previously the command was running on the remote machine, not the local one):
- local_action: shell /usr/bin/sbt clean reload compile
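If the intent is instead to run sbt on the remote host, the rc 127 "not found" usually means the non-interactive shell's PATH or working directory differs from an interactive login. A sketch of such a task (the /opt/myapp directory and PATH entries are assumptions; adjust to your layout):

```yaml
# Sketch: run sbt on the remote host with an explicit working directory
# and PATH. /opt/myapp is a hypothetical project location.
- name: Run sbt build
  shell: ./sbt clean reload compile
  args:
    chdir: /opt/myapp
  environment:
    PATH: "/usr/bin:/usr/local/bin:{{ ansible_env.PATH }}"
```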

Related

firebase functions emulators are failing to load in github action

I'm trying to run a GitHub Actions workflow that starts the Firebase emulators. For some reason the Functions emulator fails to parse .runtimeconfig.json, even though I verified that the file exists and is valid.
Here is the error:
functions: Failed to load function definition from source: FirebaseError: Failed to load function definition from source: Failed to generate manifest from function source: TypeError: Cannot read properties of undefined (reading 'key')
Here is an example of a .runtimeconfig.json:
{
  "coockie": {
    "key": "SOME_STRING"
  },
  "field": {
    "list": [
      "SOME_STRING"
    ]
  },
}
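Incidentally, the snippet above is not strict JSON: there is a trailing comma after the "field" object, which strict parsers reject and which could plausibly trip up the manifest generation. A quick way to check the file (a sketch using Python's stdlib json module):

```shell
# Check whether .runtimeconfig.json is strict JSON (stdlib only).
# A trailing comma, as in the snippet above, makes this report "invalid".
if python3 -m json.tool .runtimeconfig.json > /dev/null 2>&1; then
  echo "valid"
else
  echo "invalid"
fi
```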
Here are some relevant steps from my workflow:
jobs:
  run-job:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout_Workflows
        uses: actions/checkout@v2
      - name: Set up Node.js 16
        uses: actions/setup-node@v1
        with:
          node-version: 16.15.0
      - name: npm install -g firebase-tools-10.9.2
        run: npm install -g firebase-tools@10.9.2
      - name: npm install -g typescript-4.7.4
        run: npm install -g typescript@4.7.4
      - name: npm install firebase-functions
        run: cd firebase/cloud-functions && npm install
      - name: deploy serve & run PlayWright
        run: npm run start:emulator
        shell: bash
        env:
          PROJECT_ID: MY_PROJECT_ID
          FIREBASE_TOKEN: MY_FIREBASE_TOKEN
The npm scripts do the following:
"start:emulator": "cd firebase/cloud-functions; npm run start:automation"
"start:automation": "CLOUD_RUNTIME_CONFIG=/home/runner/work/uvcamp/uvcamp/firebase/cloud-functions/.runtimeconfig.json firebase use dev && firebase functions:config:get > .runtimeconfig.json && firebase emulators:exec --import=./emulator-data --only=auth,firestore,functions,storage \"npm run start:playwright-test\"",
I ran the same project locally and it worked fine with no problems, including the script in npm run start:playwright-test.
I tried setting an environment variable CLOUD_RUNTIME_CONFIG with the absolute path, and downgrading:
firebase-tools to 10.9.2
typescript to 4.7.4
and still got the same error.
Any thoughts on what the issue could be?
Thanks!

Getting internal exception while executing pg_dump when trying to get the init migrations from hasura

I want to get the latest migrations from my hasura endpoint to my local filesystem.
Command I'm trying to run
hasura init config --endpoint someendpoint.cloudfront.net/ --admin-secret mysecret
Output
INFO Metadata exported
INFO Creating migrations for source: default
ERRO creating migrations failed:
1 error occurred:
* applying migrations on source: default: cannot fetch schema dump: pg_dump request: 500
{
"path": "$",
"error": "internal exception while executing pg_dump",
"code": "unexpected"
}
run `hasura migrate create --from-server` from your project directory to retry
INFO directory created. execute the following commands to continue:
cd /home/kanhaya/Documents/clineage/study-container/src/conifg
hasura console
Hasura is deployed using a custom Docker image:
FROM hasura/graphql-engine:v2.3.0 as base
FROM ubuntu:focal-20220302
ENV HASURA_GRAPHQL_ENABLE_CONSOLE=true
ENV HASURA_GRAPHQL_DEV_MODE=true
# ENV HASURA_GRAPHQL_PG_CONNECTIONS=15
ENV HASURA_GRAPHQL_DATABASE_URL=somedatabaseURL
ENV HASURA_GRAPHQL_ADMIN_SECRET=mysecret
ENV EVENT_TRIGGER=google.com
ENV STUDY_CONFIG_ID=1
COPY ./Caddyfile ./Caddyfile
COPY ./install-packages.sh ./install-packages.sh
USER root
RUN ./install-packages.sh
RUN apt-get -y update \
&& apt-get install -y libpq-dev
COPY --from=base /bin/graphql-engine /bin/graphql-engine
EXPOSE 8080
CMD caddy start && graphql-engine --database-url $HASURA_GRAPHQL_DATABASE_URL serve --admin-secret $HASURA_GRAPHQL_ADMIN_SECRET
Which version of Postgres are you currently running?
There's a conversation over here with a couple of workarounds for supporting Postgres 14 in the Hasura container (it looks like a version mismatch in one of the libraries):
https://github.com/hasura/graphql-engine/issues/7676
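One workaround along those lines is to make the pg_dump inside the container match the server's Postgres major version. A sketch for the Dockerfile above (assuming the server runs Postgres 14 and the ubuntu:focal base shown):

```dockerfile
# Sketch: install a postgresql-client matching the server's major version
# from the PGDG apt repository. "14" is an assumption; match your server.
RUN apt-get -y update \
 && apt-get install -y curl ca-certificates gnupg \
 && curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc \
      | gpg --dearmor -o /usr/share/keyrings/pgdg.gpg \
 && echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt focal-pgdg main" \
      > /etc/apt/sources.list.d/pgdg.list \
 && apt-get -y update \
 && apt-get install -y postgresql-client-14
```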

(GitHub Actions, Cypress run fails to record outputs): `Error: EACCES: permission denied, mkdir '/__w/**`

I'm trying to execute a cypress run job on GitHub Actions and I encounter the following issue:
Warning: We failed to record the video.
This error will not alter the exit code.
Error: EACCES: permission denied, mkdir '/__w/{{repo-name}}/{{repo-name}}/cypress/videos/'
Here is a snippet of my .yml file:
- name: Run Cypress tests
  run: |
    if [ ! -z ${{env.CYPRESS_RECORD_KEY}} ]; then
      npx cypress run -P ${{env.CYPRESS_PROJECT_PATH}} -C ${{env.CYPRESS_CONFIG_FILE}} -r ${{env.CYPRESS_REPORTER}} ${{env.ADDITIONAL_OPTIONS}} --record
    else
      npx cypress run -P ${{env.CYPRESS_PROJECT_PATH}} -C ${{env.CYPRESS_CONFIG_FILE}} -r ${{env.CYPRESS_REPORTER}} ${{env.ADDITIONAL_OPTIONS}}
    fi
  env:
    CYPRESS_RECORD_KEY: ${{secrets.CYPRESS_RECORD_KEY}}
    GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
- name: Archive Cypress outputs
  uses: actions/upload-artifact@v2
  with:
    # Artifact name. Optional, default is artifact
    name: cypress-outputs
    path: |
      cypress/videos/
      cypress/screenshots/
    if-no-files-found: error
    retention-days: 15
The error happens when the job reaches the npx cypress run section.
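An EACCES on mkdir under /__w usually means the workspace directories are owned by a different user than the one the job runs as (common when the job runs inside a container). One low-tech mitigation (a sketch; the step name is made up) is to pre-create the output directories before the Cypress step:

```yaml
# Sketch: ensure Cypress's output directories exist and are writable
# before the run step. Paths mirror the upload-artifact step above.
- name: Prepare Cypress output directories
  run: mkdir -p cypress/videos cypress/screenshots
```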

pip install on Jupyter kernel venv (virtual environment) is not updated

I'm creating the Jupyter kernel within the virtual environment, but the package is still missing in the Jupyter notebook after pip install.
First create your virtual env using this post and activate it!
Create the kernel using:
ipython kernel install --name=gen-py3
Now the kernel details would be in:
/usr/local/share/jupyter/kernels/gen-py3
Note: you can cd to this folder and open the file kernel.json to see the details of your env; for example, mine is:
{
  "argv": [
    "/usr/local/opt/python/bin/python3.7",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "gen-py3",
  "language": "python"
}
This means the kernel is taken from the global Python env, because of "/usr/local/opt/python/bin/python3.7".
In some cases, even when you're running inside your venv, the kernel is still taken from your global env and not your venv. Now try this from within your venv:
virtualenv -p python3 venv
source <env_name>/bin/activate (in our case: venv)
pip install ipython
pip install ipykernel
and make sure you run
ipython kernel install --name=venv-test
from your venv ipython.
For example, if your virtual env is venv and its location is
~/venv
run:
/Users/nathanielkohn/venv/bin/ipython kernel install --name=venv-test
And your kernel.json file now points at the venv:
{
  "argv": [
    "/Users/<your_user_name>/venv/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "venv-test",
  "language": "python",
  "metadata": {
    "debugger": true
  }
}
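To check programmatically which interpreter a kernel spec will actually launch, you can read argv[0] from its kernel.json. A minimal sketch (the spec text below mirrors the example above, with a placeholder path):

```python
import json

def kernel_python(kernel_json_text: str) -> str:
    """Return the interpreter path a Jupyter kernel spec launches (argv[0])."""
    return json.loads(kernel_json_text)["argv"][0]

# Hypothetical kernel.json contents, as produced by `ipython kernel install`.
spec = """{
  "argv": [
    "/Users/me/venv/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "venv-test",
  "language": "python"
}"""

print(kernel_python(spec))  # -> /Users/me/venv/bin/python
```

If the returned path is inside your venv, the kernel uses the venv; if it points at a global interpreter like /usr/local/opt/python/bin/python3.7, it doesn't.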

Running ansible playbook from jenkins execute shell

I am trying to run an Ansible playbook from the Execute Shell section of Jenkins, but I am getting the errors below. I have already installed the Ansible plugin and configured it in the Global Tool Configuration section.
+ ansible-playbook -i inventory installapache.yml -vvvv
/tmp/jenkins1596894985578146945.sh: line 4: ansible-playbook: command not found
Build step 'Execute shell' marked build as failure
Finished: FAILURE
I am using the commands below:
export ANSIBLE_FORCE_COLOR=true
cd /home/ec2-user
ansible-playbook -i hosts /home/ec2-user/installapache.yml -vvvv
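The rc 127 "command not found" almost always means ansible-playbook is not on the PATH of the non-interactive shell Jenkins spawns. A sketch of an Execute Shell step that pins the binary explicitly (the /usr/local/bin fallback is an assumption; verify on the agent with which ansible-playbook):

```shell
# Locate ansible-playbook explicitly; fall back to a hard-coded path
# (assumed /usr/local/bin -- verify on your agent with: which ansible-playbook).
ANSIBLE_BIN=$(command -v ansible-playbook || echo /usr/local/bin/ansible-playbook)
export ANSIBLE_FORCE_COLOR=true
cd /home/ec2-user
"$ANSIBLE_BIN" -i hosts /home/ec2-user/installapache.yml -vvvv
```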
