Connect local ReactionCommerce app with remote database - meteor

How can I connect my local running reaction app with a remote mongodb?
I tried creating this settings.json file
{
  "env": {
    "MONGO_URL": "<remote db url>"
  }
}
And ran the app with this command
meteor run --inspect-brk --settings settings.json
But it still connects to the local Meteor Mongo. Could someone tell me the correct syntax to configure Mongo?

Use a .env file at the root of your reaction directory, defining MONGO_URL (and any other environment variable you may need) this way:
MONGO_URL=<MongoDB URL>
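Note that the "env" block in settings.json is not applied by meteor run; --settings only populates Meteor.settings, which is why the app kept connecting to the local database. If you would rather not create a .env file, you can also set the variable in the shell when starting the app, since Meteor reads MONGO_URL from the process environment (the URL below is a placeholder):
MONGO_URL="mongodb://user:password@remote-host:27017/reaction" meteor run --inspect-brk --settings settings.json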

Related

How do I access a Sqlite3 database from an Electron AppImage .mount point?

OS: Linux 5.9.16-1-MANJARO
Electron version: 10.1.5
BetterSqlite version: 7.1.2
I am currently writing an application using Electron and BetterSqlite.
I build the AppImage like this:
npm run build && electron-builder build
This is how I access the database from my code:
db = new Database(
  path.join(__dirname, `/${dbName}`).replace("/app.asar", "")
);
I have added the database file to the build using:
"extraResources": [
  "public/build/Database.db"
],
But when I open the AppImage I get the following error message:
SqliteError: attempt to write a readonly database
The database seems to be inaccessible because the /tmp/.mountxxx mount point is read-only.
This behavior does not occur when I run the application from the development folder, since that is not a read-only directory.
Is there a way to use the database from the /tmp/.mountxxx directory?
How would I go about accessing the database another way?
Thank you in advance.
I have searched for a way to use the AppImage mount point for reading and writing, but I have not found anything. I will be using the user's home directory to store the database.
As the error suggests, when an AppImage is executed the AppDir is mounted as a read-only filesystem.
To work around this you need to copy the database file into the user's home directory with a startup script. For example, you can copy it to "$HOME/.cache/com.myapp/appdata.db" and then use this new copy.
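A minimal sketch of that startup copy, assuming better-sqlite3 and electron-builder's extraResources as in the question; the exact bundled path and the file name are placeholders you may need to adjust for your build:
const fs = require("fs");
const path = require("path");
const { app } = require("electron");
const Database = require("better-sqlite3");

function openDatabase(dbName) {
  // Where electron-builder placed the bundled (read-only) copy; depends on your extraResources config.
  const bundled = path.join(process.resourcesPath, "public/build", dbName);
  // A per-user writable location, e.g. ~/.config/<YourApp> on Linux.
  const writable = path.join(app.getPath("userData"), dbName);

  // Copy the seed database on first run only, then always open the writable copy.
  if (!fs.existsSync(writable)) {
    fs.copyFileSync(bundled, writable);
  }
  return new Database(writable);
}

const db = openDatabase("Database.db");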

Hosting Symfony 4 app with EasyDeployBundle on server without /usr/local/bin/composer

For a Symfony 4 app I have chosen a Web Cloud plan from the hosting provider OVH.
For the deployment I have decided to use the EasyDeployBundle, which looks very promising. This is my config file:
<?php

use EasyCorp\Bundle\EasyDeployBundle\Deployer\DefaultDeployer;

return new class extends DefaultDeployer
{
    public function configure()
    {
        return $this->getConfigBuilder()
            ->server('ovh')
            ->deployDir('directory/path/at/server')
            ->repositoryUrl('git@github.com:foo/bar.git')
            ->repositoryBranch('master')
        ;
    }
};
I have .ssh/config file with the following entry:
Host ovh
  Hostname sshcloud.foobar.hosting.ovh.net
  Port 12345
  User foobar
Note: all values are dummies, just for illustrative purposes.
When I run:
php bin/console deploy --dry-run -v
everything goes fine, but when I actually try to deploy I get the following error:
The command "ssh ovh 'which /usr/local/bin/composer'" failed.
The problem is that I have no write access to the directory /usr/local/bin/ on the server. The composer.phar is in my home directory and I can't move it to the expected destination.
Is there any possibility to tell EasyDeployBundle to look for composer in another directory?
I should really read the manuals, especially when I'm linking them in my own question.
There is a method remoteComposerBinaryPath that accepts a custom path to Composer. I have amended the configure method like this:
public function configure()
{
    return $this->getConfigBuilder()
        ->server('ovh')
        ->deployDir('directory/path/at/server')
        ->repositoryUrl('git@github.com:foo/bar.git')
        ->repositoryBranch('master')
        ->remoteComposerBinaryPath('composer.phar')
    ;
}
On the server I created a .bashrc file in my home folder and added the line:
export PATH=$PATH:/home/foobar
and now the deployment is passing this hurdle.
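Alternatively, since the failing command shown above is simply which run against the configured path, pointing remoteComposerBinaryPath at an absolute path should also work, provided the phar is executable (untested here; the path below is illustrative):
->remoteComposerBinaryPath('/home/foobar/composer.phar')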
I have now another problem, but at least this one is solved and maybe the answer can help other people too.

Deploy on Meteor galaxy server with bitbucket and deployment token as variable

Hello, I want to use automatic deployment from Bitbucket to the Galaxy server with a deployment token.
For this reason I am creating a deployment token that is committed in the repository.
https://galaxy-guide.meteor.com/deploy-guide.html#deployment-token
To strengthen security I would like to use repository variables in Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/environment-variables-794502608.html
and store the Meteor deployment token in those variables instead of in a file.
For the deployment we use the following in the command:
METEOR_SESSION_FILE=deployment_token.json
My question is: is there any way to use some variable (string) where the token is needed, like
METEOR_SESSION_DEPLOYMENT_TOKEN=$METEOR_TOKEN
instead of reading it from a file?
After running into the same problem, some research brought me to this article, which solves the issue that you can't feed Meteor the JSON directly from an environment variable, in the following simple way:
Add the JSON file content as an environment variable, then echo it out into a file on deploy:
echo $METEOR_TOKEN_FILE > deploy_token.json
METEOR_SESSION_FILE=deploy_token.json
Thanks to this article I figured it out.
Save the JSON settings as an environment variable and then, in the deployment process:
echo $DEPLOY_SESSION_FILE > deployment_token.json
METEOR_SESSION_FILE=deployment_token.json DEPLOY_HOSTNAME=galaxy.meteor.com meteor deploy --allow-superuser myApp-staging.meteorapp.com --settings config/staging/settings.json --owner username

Meteor: how to load different files based on CLI parameter?

In my Meteor (1.2) app I've separated files for development and production
e.g.
client/lib/appVars.config.PROD.js
client/lib/appVars.config.CONFIG.js
Ideally the "twin" files have the same variables, functions etc. with little differences but (global) variables and functions which are common to debug and production have the same name.
Is there a way to call meteor run with a command line parameter DEBUG_MODE = true | false so that I cad load either one or the other file, depending on the current mode (debug, production)?
Set different environment variables and run via the CLI with meteor run --settings settings.json
Then you just need a development and production (and staging?) settings.json
Example of a settings file:
{
  "awsBucket": "my-example-staging",
  "awsAccessKeyId": "AABBCCddEEff12123131",
  "awsSecretKey": "AABBCCddEEff12123131+AABBCCddEEff12123131",
  "public": {
    "awsBucketUrl": "https://my-meteor-example.s3.amazonaws.com",
    "environment": "staging"
  },
  "googleApiKey": "AABBCCddEEff12123131"
}
EDIT ADD:
To access your settings keys, just use
Meteor.settings.awsBucket
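Coming back to the original question, the twin appVars files could then be collapsed into a single file that branches on a settings value; a minimal sketch, assuming the "environment" key from the example settings above (the file name and values are hypothetical):
// client/lib/appVars.config.js -- hypothetical single config file replacing the PROD/DEBUG twins
AppVars = Meteor.settings.public.environment === "production"
  ? { debugMode: false, logLevel: "error" }  // production values
  : { debugMode: true, logLevel: "debug" };  // development/debug values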
Security Update (thanks Dave Weldon)
See https://docs.meteor.com/#/full/structuringyourapp
Regarding production vs development, you should have two settings.json files: the standard one for production (.config/settings.json) and a development one (.config/development/settings.json). When you boot outside of production, start the app with meteor --settings .config/development/settings.json
Regarding the client side, note that if you make a key public, e.g.
{
  "service_id": "...",
  "service_secret": "...",
  "public": {
    "service_name": "..."
  }
}
Then only Meteor.settings.public.service_name will be accessible on the client.
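A quick illustration of that split (key names taken from the snippet above):
// server-side code: the full settings object is available
var secret = Meteor.settings.service_secret;      // defined on the server
// client-side code: only the "public" subtree is shipped to the browser
var name = Meteor.settings.public.service_name;   // defined on the client
var leaked = Meteor.settings.service_secret;      // undefined on the client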

berks-api will not run on ubuntu in azure - get Permission denied # rb_sysopen - /etc/chef/client.pem

As part of our Chef infrastructure I'm trying to set up and configure a berks-api server. I have created an Ubuntu server in Azure, bootstrapped it, and it appears as a node in my Chef server.
I have followed the instructions at github - berkshelf-api installation to install the berks-api via a cookbook. I have run
sudo chef-client
on my node and the cookbook appears to have been run successfully.
The problem is that the berks-api doesn't appear to run. My Linux terminology isn't great, so sorry if I'm making mistakes in what I say, but it appears as if the berks-api service isn't able to start. If I navigate to /etc/service/berks-api and run this command
sudo berks-api
I get this error
I, [2015-07-23T11:56:37.490075 #16643] INFO -- : Cache manager starting...
I, [2015-07-23T11:56:37.491006 #16643] INFO -- : Cache Builder starting...
E, [2015-07-23T11:56:37.493137 #16643] ERROR -- : Actor crashed!
Errno::EACCES: Permission denied # rb_sysopen - /etc/chef/client.pem
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `read'
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `initialize'
If anyone could help me figure out what is going on, I'd really appreciate it. If you need to explain the setup any more let me know.
It turns out I misunderstood the configuration of the berks-api. I needed to get a new private key for my client (berkshelf) from manage.chef.io for our organization. I then needed to upload the new key (berkshelf.pem) to /etc/berkshelf/api-server and reconfigure the berks-api to use it. So my config for the berks-api now looks like this:
{
  "home_path": "/etc/berkshelf/api-server",
  "endpoints": [
    {
      "type": "chef_server",
      "options": {
        "url": "https://api.opscode.com/organizations/my-organization",
        "client_key": "/etc/berkshelf/api-server/berkshelf.pem",
        "client_name": "berkshelf"
      }
    }
  ],
  "build_interval": 5.0
}
I couldn't upload berkshelf.pem directly to the target location; I had to upload it to my home directory and then copy it from within Linux.
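For reference, that upload-then-copy step might look roughly like this (host, user, and key locations are illustrative; adjust ownership to whatever user the berks-api service runs as):
scp berkshelf.pem myuser@my-berks-api-server:~/
ssh myuser@my-berks-api-server
sudo cp ~/berkshelf.pem /etc/berkshelf/api-server/berkshelf.pem
sudo chmod 600 /etc/berkshelf/api-server/berkshelf.pem
# then restart the service so it picks up the new key (with runit, for example: sudo sv restart berks-api)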
Having done this, the service starts and works perfectly.
