I'm setting up multiple virtual servers using Docker and managing the routing with an nginx reverse proxy (https://hub.docker.com/r/jwilder/nginx-proxy/).
I already have a couple of other containers (for MySQL and WordPress).
I would like to use mupx to deploy, thanks to its ease of use (https://github.com/arunoda/meteor-up/tree/mupx#), though it requires a port and defaults to 80 (which conflicts with nginx).
Here are the relevant elements from the mup.json:
"servers": [
  {
    "host": "111.111.111.111",
    "username": "root",
    "pem": "path/to/key",
    "env": {
      "VIRTUAL_HOST": "subdomain.domain.com"
    }
  }
],
...
"env": {
  "ROOT_URL": "http://subdomain.domain.com"
}
Anyone have any experience with this?
I think you can change the port of the app to avoid the conflict:
"env": {
  "PORT": 8080
}
(any free port other than 80 should do, since nginx is already bound to 80)
I am writing unit tests for my Firebase Functions and I want to automatically connect the functions, auth, storage, etc. emulators from my script without having to specify whether I am testing in the local environment or the development environment.
Is there any way I can write a script to see if the Firebase Emulator is running on my local machine from an external node script?
For example, is there a way I can see processes running on specific local ports from a node script?
I tried using
import { exec } from "child_process";
const checkEmulator = exec("lsof -i:5000");
(I am using macOS)
Then using the output to determine if the Firebase Functions Emulator is running on port 5000, but the output of the exec function does not make any sense to me.
Is there a more efficient way to check if the emulator is running on your local machine?
Thanks for any help!
You can use an HTTP GET request with something like curl:
curl localhost:4400/emulators
This will print out a JSON object listing all running emulators. Example:
{
  "hub": {
    "name": "hub",
    "host": "localhost",
    "port": 4400
  },
  "functions": {
    "name": "functions",
    "host": "localhost",
    "port": 5001
  },
  "firestore": {
    "name": "firestore",
    "host": "localhost",
    "port": 8080
  }
}
Taken from https://firebase.google.com/docs/emulator-suite/install_and_configure
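Since you asked about doing this from a Node script, here is a minimal sketch of the same check using only Node's built-in http module (it assumes the hub is on its default port 4400):
import http from "http";

// Ask the Emulator Hub which emulators are currently running.
http.get("http://localhost:4400/emulators", (res) => {
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  res.on("end", () => {
    const emulators = JSON.parse(body);
    // If a key like "functions" is present, that emulator is up.
    console.log("functions emulator running:", "functions" in emulators);
  });
}).on("error", () => {
  // Connection refused means the hub (and thus the suite) is not running.
  console.log("emulator suite is not running");
});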
Alternatively, the psaux npm package can list active processes. Each emulator is a separate process and is run from .../.cache/firebase/emulators.
https://www.npmjs.com/package/psaux
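If you'd rather not add a dependency, a rough equivalent using Node's child_process (assuming a Unix-like system such as macOS, and matching on the emulator cache path mentioned above):
import { exec } from "child_process";

// List all processes and look for the firebase emulator binaries.
exec("ps aux", (error, stdout) => {
  if (error) {
    console.error(error);
    return;
  }
  const running = stdout
    .split("\n")
    .some((line) => line.includes(".cache/firebase/emulators"));
  console.log("emulator process found:", running);
});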
I'm running the DDEV nginx server on a Bedrock WordPress site and trying to load the snippet for Browsersync.
gulpfile.js browserSync task:
browserSync.init({
  proxy: {
    target: "https://web.ddev.site"
  },
  https: {
    key: "/Users/user/Library/Application Support/mkcert/rootCA-key.pem",
    cert: "/Users/user/Library/Application Support/mkcert/rootCA.pem"
  },
  open: false
});
The browser doesn't load the snippet and prints the following error:
(index):505 GET https://web.ddev.site:3000/browser-sync/browser-sync-client.js?v=2.26.7 net::ERR_SSL_KEY_USAGE_INCOMPATIBLE
How can I get these two things to work together? Before DDEV I was using MAMP, but DDEV has much better performance and I want to switch to this app. Thanks for help.
The problem was a bad SSL certificate file. It was necessary to use the Docker container's certificate. The proxy option is no longer required.
After setting up the DDEV container, you need to copy the Docker certificate to some location:
docker cp ddev-router:/etc/nginx/certs ~/tmp
After that, just update the paths to the correct certificate files. My gulpfile task now looks like this:
browserSync.init({
  https: {
    key: "/Users/username/tmp/master.key",
    cert: "/Users/username/tmp/master.crt"
  },
  open: false
});
Thanks @rfay for the solution!
Cross-post from ServerFault
Since I have seen similar issues posted here, I will cross-post my question from ServerFault (https://serverfault.com/questions/855120/multi-container-docker-on-aws-nginx-use-host-machine-etc-hosts-resolver). I hope this is permitted.
I have a multi-container docker environment on Amazon Elastic Beanstalk with the following Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "memoryReservation": 256,
      "image": "my/nginx/repo/image",
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "api"
      ],
      "essential": true
    },
    {
      "name": "api",
      "memoryReservation": 256,
      "image": "my-api/repo",
      "essential": true,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 80
        }
      ]
    }
  ]
}
Ultimately I want the node app served by nginx to resolve requests to named addresses from linked containers, so in my web image (node app) I'd like to make a request to http://api/some/resource and let nginx resolve that to the api container.
Now, since Docker adds a host entry for the api container due to the specified link, I want the nginx server to resolve addresses from the host's /etc/hosts file. However, as I found out, nginx uses its own resolver. After researching the issue a bit, I found out that in non-Elastic-Beanstalk multi-container setups with user-defined networks, the resolver would be provided by Docker on 127.0.0.11; but since it is currently not possible to define user-defined networks in the Dockerrun.aws.json, I keep looking for a different solution. The links can be resolved inside the container (pinging api does work), but nginx does its own thing there.
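For reference, on a user-defined network the usual workaround looks roughly like this (nginx's resolver directive pointed at Docker's embedded DNS, plus a variable in proxy_pass to force runtime resolution; the values are illustrative):
location /api/ {
    resolver 127.0.0.11 valid=10s;
    set $upstream http://api;
    proxy_pass $upstream;
}
This is exactly what I can't use here, since the 127.0.0.11 resolver only exists on user-defined networks.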
I have read about dnsmasq as well; however, I wanted to get this running without installing that package. Do I even have a choice here?
This is a service discovery topic, and I don't think this solution is going in the right direction.
It is valid that links can be resolved to the app's IP; however, this depends on the start order: the app first, then nginx. If the app is recreated or scaled after nginx has started, /etc/hosts in the nginx container won't be updated. It is possible to recreate the nginx container to refresh /etc/hosts, but then all existing connections to nginx will be dropped.
Since you are using Amazon Elastic Beanstalk, I think a better solution is to use Consul (service discovery) + Registrator (service registration) + nginx (with SRV support, via Nginx Plus or a third-party module).
Personally, I have done a similar thing without Elastic Beanstalk, using Docker Swarm (service discovery + service registration) + nginx (with a modified HAproxy-SRV).
I hope this answer can help your decision.
When you start up lite-server, you can specify a port, for example:
lite-server --port 8000
Which gives you the following result:
[BS] Access URLs:
------------------------------------
Local: http://localhost:8000
External: http://192.168.0.5:8000
------------------------------------
UI: http://localhost:3001
UI External: http://192.168.0.5:3001
How can I change the UI port, which is 3001 by default, to something like 8001 (either on the command line and/or in the bs-config.json file)?
Since lite-server uses BrowserSync, it can be changed via the BrowserSync options.
Not sure about the command line parameter, but bs-config.json works like this:
{
  "port": 8000,
  "files": ["./dist/**/*.{html,htm,css,js}"],
  "server": { "baseDir": "./" },
  "ui": {
    "port": 8001
  }
}
BrowserSync command line options (that also work with lite-server)
Just to add, for slow thinkers like me: to run lite-server on a different port, create the file bs-config.json in the root of your project (or wherever you are running lite-server from) and add this to it:
{
  "port": 8080
}
This will run lite-server on port 8080.
Alternatively, you can just pass the path of the bs-config.json when running lite-server:
lite-server -c configs/my-bs-config.json
source: https://github.com/johnpapa/lite-server#custom-configuration
I don't quite get the difference between the two. From the descriptions, it seems like both are for opening a web server.
If I use the grunt-serve plugin with the following configuration in my gruntfile.js:
serve: {
  options: {
    port: 9000
  }
}
I can open a web server at the specified port, though I have to open it manually in the browser (I'm not sure how to make it open automatically in my default browser). The web server works fine and can load JSON files without any problem.
However, when I tried to do it with the grunt-connect plugin, with the following configuration:
connect: {
  server: {
    options: {
      port: 9000,
      livereload: 35729,
      hostname: 'localhost',
      keepalive: true,
      open: true
    }
  }
},
open: {
  dev: {
    url: 'http://localhost:<%= connect.server.options.port %>/index.html'
  }
}

grunt.registerTask('serve', function (target) {
  grunt.task.run([
    'connect',
    'open:dev'
  ]);
});
I could automatically open a web server at the specified port in my default browser, but the catch is that it couldn't load the JSON data the way grunt serve did.
I'd like to make the web server work like Yeoman's: running grunt serve connects to the web server, automatically opens it in my default browser, and loads all my PHP/JSON files. It seems like the grunt-serve plugin is the right plugin for this, but I'm sure grunt-connect can do the same thing too.
According to https://github.com/gruntjs/grunt-contrib-connect, the connect task makes the server available for a limited amount of time in order to run other tasks such as unit testing; once those tasks are complete, the server stops. As you have shown, there is a keepalive option to prevent the server from stopping. Connect is also useful for connecting to resources on another domain, such as a REST API. Typically this would be denied by the browser due to the same-origin policy; see https://github.com/drewzboto/grunt-connect-proxy.
So for development I would use the standard pattern "grunt serve", and use connect for testing and for proxying to resources on another domain :-)
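If the missing JSON is just a document-root issue, a minimal grunt-contrib-connect config that explicitly serves the project root might look like this (the base value is an assumption about where your JSON files live relative to the Gruntfile):
connect: {
  server: {
    options: {
      port: 9000,
      base: '.',        // serve files (including .json) from the project root
      keepalive: true,
      open: true
    }
  }
}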