Deploy on Meteor Galaxy server with Bitbucket and deployment token as variable

Hello, I want to use automatic deployment from Bitbucket to the Galaxy server with a deployment token.
For this reason I am creating a deployment token that is committed into the repository.
https://galaxy-guide.meteor.com/deploy-guide.html#deployment-token
To strengthen security I would like to use repository variables in Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html
and to store Meteor's deployment token in a variable instead of in a file.
For the deployment, the command we use includes:
METEOR_SESSION_FILE=deployment_token.json
My question is: is there any way to use a variable (a string) where the token is needed, like
METEOR_SESSION_DEPLOYMENT_TOKEN=$METEOR_TOKEN
instead of reading it from a file?

After running into the same problem, some research brought me to an article that works around the fact that you can't feed Meteor the JSON from an environment variable directly, in a simple way:
add the JSON file content as an environment variable and then echo it out into a file on deploy.
echo $METEOR_TOKEN_FILE > deploy_token.json
METEOR_SESSION_FILE=deploy_token.json

Thanks to this article I figured it out.
Save the session JSON as an env variable and then, in the deployment process:
echo $DEPLOY_SESSION_FILE > deployment_token.json
METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username
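For completeness, here is a minimal sketch of how those two commands could be wired into bitbucket-pipelines.yml, with the token JSON stored in a secured repository variable named DEPLOY_SESSION_FILE. The image, branch name, app name and owner below are assumptions carried over from the command above, not a verified configuration:

image: node:14

pipelines:
  branches:
    staging:
      - step:
          name: Deploy to Galaxy
          script:
            # install Meteor inside the build container
            - curl https://install.meteor.com/ | sh
            # recreate the token file from the secured repository variable
            - echo $DEPLOY_SESSION_FILE > deployment_token.json
            # deploy exactly as in the command above
            - METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username

Because DEPLOY_SESSION_FILE is a secured variable, the token itself never has to be committed to the repository.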

Related

Firebase CLI suddenly ignoring environment variables on Functions deployment

I have a Firebase code project with Functions meant to be deployed to multiple Firebase projects over multiple regions.
I used to set the deployment region like this:
return functions
.region(process.env.REGION)
// ...
and used this command to deploy:
$ REGION=us-central1 firebase deploy --only functions
It worked like a charm until recently. Now it seems to completely ignore REGION=us-central1, even if I export it before running firebase deploy.
EDIT 2022-06-13 - Possible solution
I changed the code to dump the contents of process.env to a file during deployment. This is what I got:
{
"FIREBASE_CONFIG": "{\"projectId\":\"REDACTED\",\"storageBucket\":\"REDACTED.appspot.com\",\"locationId\":\"us-central\"}",
"GCLOUD_PROJECT": "REDACTED",
"CLOUD_RUNTIME_CONFIG": "{REDACTED}",
"__CF_USER_TEXT_ENCODING": "REDACTED"
}
so it is definitely not the same list of variables I have in my local environment.
I could use the locationId from FIREBASE_CONFIG to get the target location, or CLOUD_RUNTIME_CONFIG (it contains the dump of the functions .config() object, so I could set the target there).
I also believe that I could use the .env and .env.{project alias or ID} files and their contents would be available in process.env at deployment time.
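A minimal sketch of that approach, assuming the CLI really does populate process.env from those files when it loads the functions code during deploy (the function name and env file alias are assumptions):

// .env.my-project-alias (or plain .env) would contain: REGION=us-central1

// functions/index.js
const functions = require("firebase-functions");

exports.myFunction = functions
  .region(process.env.REGION || "us-central1") // fall back if the env file is missing
  .https.onRequest((req, res) => res.send("ok"));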
As per Osvaldo López's suggestion, here are some other details:
Running on MacOS
No errors are reported
No recent changes to the CLI
Any input would be very welcome! Thanks.

Symfony console output creates extra unwanted characters

I've got this weird issue where every console output from Symfony contains some extra characters. The following is the output after running a plain symfony command in the console.
$ symfony
←[32mSymfony CLI←[39m version ←[33mv4.22.0←[39m (c) 2017-2021 Symfony SAS
Symfony CLI helps developers manage projects, from local code to remote infrastructure
These are common commands used in various situations:
←[33mWork on a project locally←[39m
←[32mnew←[39m Create a new Symfony project
←[32mserve←[39m Run a local web server
←[32mserver:stop←[39m Stop the local web server
←[32msecurity:check←[39m Check security issues in project dependencies
←[32mcomposer←[39m Runs Composer without memory limit
←[32mconsole←[39m Runs the Symfony Console (bin/console) for current project
←[32mphp, pecl, pear, php-fpm, php-cgi, php-config, phpdbg, phpize←[39m Runs the named binary using the configured PHP version
←[33mManage a project on Cloud←[39m
←[32mlogin←[39m Log in with your SymfonyConnect account
←[32minit←[39m Initialize a new project using templates
←[32mlink←[39m Link current git repository to a SymfonyCloud project
←[32mprojects←[39m List active projects
←[32menvs←[39m List environments
←[32menv:create←[39m Create an environment
←[32mtunnel:open←[39m Open SSH tunnels to the app's services
←[32mssh←[39m Open an SSH connection to the app container
←[32mdeploy←[39m Deploy an environment
←[32mdomains←[39m List domains
←[32mvars←[39m List variables
←[32muser:add←[39m Add a user to the project
Show all commands with ←[32msymfony.exe help←[39m,
Get help for a specific command with ←[32msymfony.exe help COMMAND←[39m.
As you can see, I get this ←[33m here and there. Any help is gratefully accepted. Please let me know if you need any additional information from me.
$ bin/console --version
Symfony 5.2.1 (env: dev, debug: true)

Auth0 and Next.js deployed to Vercel: Location header error

I started with this sample repo: https://github.com/vercel/next.js/tree/canary/examples/auth0
My current repo: https://github.com/rebeccapeltz/next-auth-app-1
Login/Logout work fine locally. When I deploy to Vercel and logout I get this message in the browser:
Invalid character in header content ["Location"]
I've double-checked the Auth0 env variables and they seem correct. Login works fine on Vercel. I can't figure out how to troubleshoot the Location header value that is causing the problem.
Nothing much going on yet and easy to reproduce: https://next-auth-app-1.now.sh/
Solved this by removing all env variables added in the Vercel online application settings, then adding the secrets using the now CLI (now secrets add), setting up the other env variables in now.json, and deploying with now --prod. Working OK now. For further external env secrets and references, I'm wondering if it's better to add them via now.json or to use the online settings GUI. One thing that wasn't clear is that when you add variables with now secrets add, you need to prefix the value in now.json with #, kind of like accessing bash env variables with $. So after adding the secrets my now.json looks like this:
{
"build": {
"env": {
"AUTH0_DOMAIN": "<name of auth0 domain>",
"AUTH0_CLIENT_ID": "<what you get from auth0>",
"AUTH0_CLIENT_SECRET": "#auth0_client_secret",
"REDIRECT_URI": "<name of vercel app or domain name>/api/callback",
"POST_LOGOUT_REDIRECT_URI": "<name of vercel app>/",
"SESSION_COOKIE_SECRET": "#session_cookie_secret"
}
}
}
Should you add all env using secrets add and then just reference by name in the now.json? not sure.
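For reference, a sketch of the now CLI calls that would back the # references in the now.json above; the secret names here are assumptions that simply mirror that file:

# add the secrets once; they are then referenced with a leading # in now.json
now secrets add auth0_client_secret "<value from the Auth0 dashboard>"
now secrets add session_cookie_secret "<a long random string>"

# deploy to production with the build env from now.json
now --prod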

Google Cloud Composer DataflowJavaOperator: 403 Forbidden When Creating Job in Another Project

I am trying to use DataflowJavaOperator on our testing composer environment, but I am running into a 403 forbidden error. My intention is to kick off a Dataflow Java job on a different project using the test composer environment.
t2 = DataFlowJavaOperator(
task_id = "run-java-dataflow-job",
jar="gs://path/to/dataflow-jar.jar",
dataflow_default_options=config_params["dataflow_default_options"],
gcp_conn_id=config_params["gcloud_config"]["conn_id"],
dag=dag
)
My default options look like
'dataflow_default_options': {
'project': 'other-project',
'input': 'other-project:dataset.table',
'output': 'other-project:dataset.table'
...
}
I have tried creating a temporary Composer test environment in the same project as the Dataflow job, and this allows me to use DataflowJavaOperator as expected. Only when the Composer environment resides in a different project from the Dataflow job does DataflowJavaOperator not work as expected.
My current workaround is to use BashOperator, use "env" to set GOOGLE_APPLICATION_CREDENTIALS to the gcp_conn_id key file path, store the jar file in our test Composer bucket, and just run this bash command:
java -jar /path/to/dataflow-jar.jar \
[... all Dataflow job options]
Is it possible to use DataflowJavaOperator to kick off Dataflow jobs on another project?
You need a different GCP connection created for Composer to interact with your 2nd GCP project, and you need to pass that connection id to gcp_conn_id in DataFlowJavaOperator.
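A minimal sketch of what that looks like in the DAG, based on the operator call from the question; the connection id dataflow_other_project is an assumption and would typically be a Google Cloud Platform connection (Admin > Connections in the Airflow UI) backed by a service account key with Dataflow permissions in the second project:

from airflow.contrib.operators.dataflow_operator import DataFlowJavaOperator

t2 = DataFlowJavaOperator(
    task_id="run-java-dataflow-job",
    jar="gs://path/to/dataflow-jar.jar",
    dataflow_default_options={
        "project": "other-project",  # target project for the Dataflow job
        # ... remaining job options
    },
    gcp_conn_id="dataflow_other_project",  # connection created for the 2nd project
    dag=dag,
)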

SBT not passing credentials when publishing to Artifactory

I am coding a Java project and I'm automating the build and the publishing to JFrog Artifactory using SBT.
Whenever it's time to publish to Artifactory I want to do it using the Ivy directory layout and obviously publish the Ivy XML file along with the jar. I managed to achieve this by using the following lines in the build.sbt file:
crossPaths := false
publishTo := Some("Artifactory Realm" at "http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my")
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
publishMavenStyle := false
However, it only works when anonymous users are allowed to deploy into Artifactory. I realized that sbt is not really passing my credentials to Artifactory but is instead logging in as anonymous.
My $HOME/.ivy2/.credentials file looks like this:
realm=Artifactory Realm
host=http://<Artifactory IP>:<Artifactory Port>/artifactory/org.project.my
user=<my user name>
password=<my password>
However, if I change the Artifactory configuration in order to prevent anonymous users from deploying new Artifacts, when I run "sbt publish" I get the following output:
[error] Unable to find credentials for [Artifactory Realm # <Artifactory IP>].
java.io.IOException: Access to URL http://<Artifactory IP>:<Artifactory Port>/artifactory//org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar was refused by the server: Unauthorized
The Artifactory request.log file then contains:
20160219011657|319|REQUEST|10.0.2.2|anonymous|PUT|/org.project.my/org/project/my/project-my/1.0.0/project-my-1.0.0.jar|HTTP/1.1|401|24978
I have also tried passing the credentials manually instead of using a file:
credentials += Credentials("Artifactory Realm", "localhost", "<USERNAME>", "<PASS>")
But I am getting the same result.
Any idea what I might be missing?
try:
host=<Artifactory IP>
old answer (doesn't work):
host=<Artifactory IP>:<Artifactory port>
I had a different problem: I had the wrong realm set in my .credentials file.
Looking at the error output from sbt, I was able to figure out that I should use:
realm=Artifactory Realm
The error shows the expected values for realm and host:
[error] Unable to find credentials for [Artifactory Realm # myhost].
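Putting the two answers together, the working $HOME/.ivy2/.credentials would end up roughly like this, with the host stripped down to the bare IP or hostname (no port, no path); the values are placeholders:

realm=Artifactory Realm
host=<Artifactory IP>
user=<my user name>
password=<my password>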
