Some context:
I have a single repo (nuxt application) that is used to deploy to multiple apps/domains
all apps are on the same server
each app is in a separate folder with a clone of the repo, and each folder is served on its own domain (nginx)
each app has a different env file, the most important difference is a domain id (eg: DOMAIN_ID=1 etc..)
before build, I have a node script that does some setup work based on this DOMAIN_ID
I would like to use PM2 to:
use a single dir with the repo for all my domains
upon running pm2 deploy production I would like to deploy all the domains; each domain should run its setup script before doing the build
each domain should build into its own subfolder so I can configure nginx to serve the app for a specific domain from its folder
I tried to create an ecosystem file like so:
module.exports = {
  apps: [
    {
      name: 'Demo1',
      exec_mode: 'cluster',
      instances: 'max',
      script: './node_modules/nuxt/bin/nuxt.js',
      args: 'start',
      env: {
        DOMAIN_ID: 1,
      },
    },
    {
      name: 'Demo2',
      exec_mode: 'cluster',
      instances: 'max',
      script: './node_modules/nuxt/bin/nuxt.js',
      args: 'start',
      env: {
        DOMAIN_ID: 2,
      },
    },
  ],
  deploy: {
    production: {
      host: 'localhost',
      user: 'root',
      ref: 'origin/master',
      repo: 'my_repo',
      path: 'path_to_repo',
      'post-setup': 'npm install && node setup.js',
      'post-deploy': 'npm run build:setup && pm2 startOrRestart ecosystem.config.js --env production',
    },
  },
}
but it doesn't seem to work.
With the above ecosystem file the processes are created, but when I access the domain for Demo1, requests are served randomly by either the Demo1 or the Demo2 process.
There should be 2 dist folders somewhere, one for each of the apps.
I'm wondering if the above config is good and I'm just having an nginx issue, or if pm2 simply can't handle my use case.
To achieve what you're after, you'll need the following for each app:
A directory to serve your production build files from.
A node server instance, running on a unique port (eg. 3000, 3001).
A suitable nginx virtual host configuration for each app.
First, the build directories. The nuxt build script looks for a nuxt.config.js and a .env file in the srcDir (the project root by default), produces the production build files for your app based on those files, and stores the output in /.nuxt (again, by default). Passing the --config-file and --dotenv arguments to the nuxt build command lets you point to different config and env files, thus enabling you to produce separate builds from a single repo.
Eg:
-- appRoot (the default srcDir; we'll change this in nuxt.config.js)
   |__ node_modules
   |__ package.json
   |__ ...etc
   |__ src
       |__ commons
       |   |__ (shared configs, plugins, components, etc)
       |__ app1
       |   |__ nuxt.config.js
       |   |__ ecosystem.config.js
       |   |__ .env
       |__ app2
           |__ nuxt.config.js
           |__ ecosystem.config.js
           |__ .env
For convenience, you could create the following scripts in package.json. To produce a build for app1, you run npm run app1:build.
"scripts": {
  ...
  "app1:build": "nuxt build --config-file src/app1/nuxt.config.js --dotenv src/app1/.env",
  "app2:build": "nuxt build --config-file src/app2/nuxt.config.js --dotenv src/app2/.env",
  ...
}
Now that we're pointing our build scripts at each app's own nuxt.config.js file, we need to update those files to specify a srcDir and a buildDir. The buildDir is where the output from each build command will be stored.
nuxt.config.js
...
srcDir: __dirname,
buildDir: '.nuxt/app1', // this path is relative to the project root
...
That's it for building. For serving...
Each app needs a unique production server instance, running on its own unique port. By default, nuxt start will launch a server listening on port 3000, based on the nuxt.config.js file at the root of the project. As with the build command, we can pass arguments to change the default behaviour.
You could add the commands to package.json:
"scripts": {
  ...
  "app1:build": "nuxt build --config-file src/app1/nuxt.config.js --dotenv src/app1/.env",
  "app1:start": "nuxt start --config-file src/app1/nuxt.config.js --dotenv src/app1/.env -p=3000",
  "app2:build": "nuxt build --config-file src/app2/nuxt.config.js --dotenv src/app2/.env",
  "app2:start": "nuxt start --config-file src/app2/nuxt.config.js --dotenv src/app2/.env -p=3001",
  ...
}
By telling nuxt start to use app-specific nuxt.config.js files, and having those point to unique buildDirs, we're telling each server which directory to serve from for its app.
Important
Make sure when starting a production server you specify unique port numbers. Either add it to the start command (as above) or inside the nuxt.config.js file of each app.
Now that you have unique builds and unique server instances, you need to configure nginx to serve the correct app for each domain (I'm assuming you either know how to configure nginx to support virtual hosts, or someone on your team is handling it for you). Here's a stripped-down config example:
server {
    ...
    server_name app1.com;
    root /path/to/appRoot;
    ...
    location / {
        ...
        proxy_pass http://localhost:3000;
        ...
    }
    ...
}
Each app's nginx config can point to the same root; it's the proxy_pass that routes the request to the correct node server, which in turn knows which app to serve, since we passed the appropriate arguments in with our nuxt start command.
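A second app's server block would then differ only in the server_name and the proxied port (a sketch; app2.com and 3001 are just the example values used above):

```nginx
server {
    ...
    server_name app2.com;
    root /path/to/appRoot;
    ...
    location / {
        ...
        proxy_pass http://localhost:3001;
        ...
    }
    ...
}
```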
PM2
I use PM2 to manage the node server instances, so a deployment script for the given example might look like:
Handle your version control/env files, and then...
cd /appRoot
npm install
npm run app1:build
pm2 reload src/app1/ecosystem.config.js
With app1's ecosystem.config.js file set up like so:
module.exports = {
  apps: [
    {
      name: "app1",
      exec_mode: "cluster",
      instances: "max",
      script: "./node_modules/nuxt/bin/nuxt.js",
      args: "start --config-file src/app1/nuxt.config.js --dotenv src/app1/.env -p=3000"
    }
  ]
}
Might need some tweaking to suit your needs, but I hope this helps!
Related
In my cypress.json I have baseUrl configured as
{
  "baseUrl": "http://localhost:3000"
}
The package.json contains
"scripts": {
  "cy:version": "cypress version",
  "cy:verify": "cypress verify",
  "cy:run": "CYPRESS_baseUrl=http://localhost:3000 cypress run --record --browser chrome",
  "start": "serve --listen ${PORT:-3000}"
}
And in semaphore.yml I have these lines
jobs:
  - name: Execute E2E
    commands:
      - npm start & wait-on http://localhost:3000
      - npm run cy:run
But for some reason the application doesn't get served on localhost:3000 and instead I see this
How can I fix this and serve the application on localhost:3000? Thanks.
You can do a couple of things to debug the problem:
Are you using a proxy somewhere, maybe as an env variable?
Check the Cypress proxy settings and see if there is something there.
Try changing the port to something else. Use a port that you know for sure is not used by anything else; you can check that with the netstat command.
Is your localhost running on http, or is it https instead?
If I think of something else, I'll update the answer.
You need to build the app before serving it; as it stands, you don't have an index.html file to serve.
I'm running a DDEV nginx server on a Bedrock WordPress site and am trying to load the snippet for Browsersync.
gulpfile.js browserSync task:
browserSync.init({
  proxy: {
    target: "https://web.ddev.site"
  },
  https: {
    key: "/Users/user/Library/Application Support/mkcert/rootCA-key.pem",
    cert: "/Users/user/Library/Application Support/mkcert/rootCA.pem"
  },
  open: false
});
The browser doesn't load the snippet and prints the following error:
(index):505 GET https://web.ddev.site:3000/browser-sync/browser-sync-client.js?v=2.26.7 net::ERR_SSL_KEY_USAGE_INCOMPATIBLE
How can I get these two things to work together? Before DDEV I was using MAMP, but DDEV has much better performance and I want to switch to this app. Thanks for the help.
The problem was a bad SSL certificate file. It was necessary to use the Docker container's certificate; the proxy option is no longer required.
After setting up the ddev container, you need to copy the Docker certificate to some location:
docker cp ddev-router:/etc/nginx/certs ~/tmp
After that, just update the paths to the correct certificate files. My gulpfile task now looks like this:
browserSync.init({
  https: {
    key: "/Users/username/tmp/master.key",
    cert: "/Users/username/tmp/master.crt"
  },
  open: false
});
Thanks @rfay for the solution!
I'm trying to host a Vue development server (including the web socket) in a subdirectory of my domain using nginx, but with my current setup it looks like the vue server is responding to requests instead of the webpack development server.
To be clear, I want my app to be hosted on https://xxx.yyy/zzz/, I want assets, etc hosted in https://xxx.yyy/zzz/path/to/asset, and I want the webpack dev server hosted in https://xxx.yyy/zzz/sockjs-node/info?t=.... I'm pretty sure this should be possible without special casing the nginx setup because it works without the subdirectory.
Here's my setup so far:
nginx
server {
    # server name, ssl, etc
    location /test/ {
        proxy_pass http://localhost:8080;
    }
}
Create the project
$ vue create -d hello-world
vue.config.js
module.exports = {
  publicPath: '/test/',
  devServer: {
    public: "0.0.0.0/test",
    disableHostCheck: true,
  }
}
Then running
$ npm run serve
The client makes requests to all the right places, but
$ curl https://xxx.yyy/test/sockjs-node/info
gives back index.html, whereas
$ curl localhost:8080/sockjs-node/info
gives back the expected websocket info. I have also tried changing the nginx setup to proxy_pass http://localhost:8080/; (with a trailing slash), but then index.html doesn't render when I go to https://xxx.yyy/test/, because the dev server expects a path and isn't being forwarded one. And when I also change publicPath to /, I can't get the client to look in the right subdirectory for assets.
Is there a correct way to do this?
It is possible to set the socket path using:
module.exports = {
  //...
  devServer: {
    sockPath: 'path/to/socket',
  }
};
In this case:
sockPath: '/test/sockjs-node',
I have Ubuntu 14.04, which I develop on, and I want to have a test server on the same computer. It is run in VirtualBox.
So I followed all the steps on GitHub for the Mupx setup and watched the video that the Meteor.js guide told me to watch. When I get to the command:
mupx setup
it shows me the screen with the error:
nejc#nejc-vb:~/Meteor Projects/CSGO/CSGO-deploy$ mupx setup
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
Configuration file : mup.json
Settings file : settings.json
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Started TaskList: Setup (linux)
[my_public_IP] - Installing Docker
events.js:72
throw er; // Unhandled 'error' event
^
Error: Timed out while waiting for handshake
at null._onTimeout (/usr/local/lib/node_modules/mupx/node_modules/nodemiral/node_modules/ssh2/lib/client.js:138:17)
at Timer.listOnTimeout [as ontimeout] (timers.js:121:15)
My mup.json file looks like this:
{
  // Server authentication info
  "servers": [
    {
      "host": "my_public_IP",
      "username": "nejc",
      "password": "123456",
      // or pem file (ssh based authentication)
      // WARNING: Keys protected by a passphrase are not supported
      //"pem": "~/.ssh/id_rsa"
      // Also, for non-standard ssh port use this
      //"sshOptions": { "port" : 49154 },
      // server specific environment variables
      "env": {}
    }
  ],
  // Install MongoDB on the server. Does not destroy the local MongoDB on future setups
  "setupMongo": true,
  // Application name (no spaces).
  "appName": "CSGO",
  // Location of app (local directory). This can reference '~' as the users home directory.
  // i.e., "app": "~/Meteor Projects/CSGO",
  // This is the same as the line below.
  "app": "/home/nejc/Meteor Projects/CSGO",
  // Configure environment
  // ROOT_URL must be set to your correct domain (https or http)
  "env": {
    "PORT": 80,
    "ROOT_URL": "http://my_public_IP"
  },
  // Meteor Up checks if the app comes online just after the deployment.
  // Before mup checks that, it will wait for the number of seconds configured below.
  "deployCheckWaitTime": 30,
  // show a progress bar while uploading.
  // Make it false when you deploy using a CI box.
  "enableUploadProgressBar": true
}
I'm trying to use Chef to provision a CentOS webserver with nginx. I want to use the http_auth_request_module and the headers_more_module. My role looks like this:
{
  "name" : "cms-aws",
  "description" : "a role to deploy cms to aws",
  "default_attributes" : {
    "nginx" : {
      "source" : {
        "modules" : ["nginx::http_auth_request_module", "nginx::headers_more_module"]
      }
    }
  },
  "run_list" : [
    "runit",
    "python",
    "build-essential",
    "gunicorn",
    "nginx::source",
    "openssl",
    "yum",
    "git",
    "yum-epel",
    "my-custom-cookbook",
    "supervisor"
  ]
}
However, when I run nginx -V on the server, those modules aren't listed, and nginx complains when I use the auth_request directive in my conf file.
I've also tried with the following attributes, but chef couldn't find those cookbooks when I ran it:
"default_attributes" : {
  "nginx" : {
    "source" : {
      "modules" : ["http_auth_request_module", "headers_more_module"]
    }
  }
},
Edit:
So I've determined that the AMI I was running this on already had nginx installed. So when systemctl starts nginx it's hitting the preexisting one, and not the one chef installs. I tried modifying my attributes as such:
"default_attributes" : {
  "nginx" : {
    "source" : {
      "modules" : ["nginx::http_auth_request_module", "nginx::headers_more_module"]
    },
    "binary" : "/usr/sbin/nginx"
  }
},
but chef still installs nginx at /opt/nginx-1.6.2/sbin/nginx, any idea how to correct this?
Edit edit: Turns out nginx is not installed out of the box on this AMI, so the cookbook installs it at /usr/sbin/nginx, yet when I run nginx -V the desired modules aren't listed. When I run /opt/nginx-1.6.2/sbin/nginx -V it lists the requested modules.
Even if nginx was installed from a package, the nginx cookbook will create a runit service starting nginx from /opt/nginx-1.6.2/sbin/. You can manage it with runit's sv command:
sudo sv status nginx
sudo sv up nginx
When compiling nginx from source in chef, it makes sense to forcefully remove any existing packages in the same recipe:
package("nginx") { action :remove }
Since your nginx appears to be compiled correctly (/opt/nginx-1.6.2/sbin/nginx -V produces the correct result), the above should be enough to fix the issue.