Cross-post from ServerFault
Since I have seen similar issues posted here, I will cross-post my question from ServerFault (https://serverfault.com/questions/855120/multi-container-docker-on-aws-nginx-use-host-machine-etc-hosts-resolver). I hope this is permitted.
I have a multi-container docker environment on Amazon Elastic Beanstalk with the following Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "memoryReservation": 256,
      "image": "my/nginx/repo/image",
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "api"
      ],
      "essential": true
    },
    {
      "name": "api",
      "memoryReservation": 256,
      "image": "my-api/repo",
      "essential": true,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 80
        }
      ]
    }
  ]
}
Ultimately I want the node app served by nginx to resolve requests to the named addresses of linked containers: from my web image (node app) I'd like to make a request to http://api/some/resource and have nginx resolve that to the api container.
Now, since Docker adds a host entry for the api container due to the specified link, I want the nginx server to resolve addresses from the host's /etc/hosts file. However, as I found out, nginx uses its own resolver. After researching the issue a bit, I found that in non-Elastic-Beanstalk multi-container setups with user-defined networks, the resolver would be provided by Docker on 127.0.0.11. Since it is currently not possible to define user-defined networks in the Dockerrun.aws.json, I keep looking for a different solution. The links can be resolved inside the container (pinging api does work), but nginx does its own thing there.
I have read about dnsmasq as well; however, I wanted to get this running without installing that package. Do I even have a choice here?
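To illustrate the distinction I'm running into, here is a minimal sketch of the two ways nginx can resolve the api name (the locations and port are illustrative, not my actual config):

server {
    listen 80;

    location /some/ {
        # Static hostname: resolved once at startup via getaddrinfo(),
        # which does consult /etc/hosts, so the link entry is found.
        proxy_pass http://api:80;
    }

    location /other/ {
        # Variable target: resolved at request time through the "resolver"
        # directive, which bypasses /etc/hosts entirely. 127.0.0.11 only
        # exists on user-defined networks, which Dockerrun.aws.json v2
        # cannot create.
        resolver 127.0.0.11;
        set $upstream http://api;
        proxy_pass $upstream;
    }
}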
This is a service discovery topic, and I don't think this approach is headed in the right direction.
It is true that links can be resolved to the app's IP; however, this depends on the start order: the app first, then Nginx. If the app is recreated or scaled after Nginx has started, /etc/hosts inside the Nginx container won't be updated. You could recreate the Nginx container to refresh /etc/hosts, but then all open connections to Nginx would be dropped.
Since you are using Amazon Elastic Beanstalk, I think a better solution is Consul (service discovery) + Registrator (service registration) + Nginx (with SRV support: Plus or a third-party module).
Personally, I have done a similar thing without Elastic Beanstalk, using Docker Swarm (service discovery + service registration) + Nginx (with a modified HAproxy-SRV).
I hope this answer helps with your decision.
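As a rough illustration of the nginx piece (a sketch only, assuming NGINX Plus for SRV support and a local Consul agent exposing DNS on its default port 8600; the service name is hypothetical):

# Consul agent's DNS interface; re-check records every 10s
resolver 127.0.0.1:8600 valid=10s;

upstream api {
    zone api_zone 64k;    # shared memory zone, required for "resolve"
    # NGINX Plus re-resolves the SRV record as Registrator adds and
    # removes instances in Consul, so no reload is needed.
    server api.service.consul service=http resolve;
}

server {
    listen 80;
    location / {
        proxy_pass http://api;
    }
}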
Related
I'm running the DDEV nginx server on a Bedrock WordPress site and trying to load the snippet for Browsersync.
gulpfile.js browserSync task:
browserSync.init({
  proxy: {
    target: "https://web.ddev.site"
  },
  https: {
    key: "/Users/user/Library/Application Support/mkcert/rootCA-key.pem",
    cert: "/Users/user/Library/Application Support/mkcert/rootCA.pem"
  },
  open: false
});
The browser doesn't load the snippet and prints the following error:
(index):505 GET https://web.ddev.site:3000/browser-sync/browser-sync-client.js?v=2.26.7 net::ERR_SSL_KEY_USAGE_INCOMPATIBLE
How can I get these two things to work together? Before DDEV I was using MAMP, but DDEV has much better performance and I want to switch to it. Thanks for any help.
The problem was a bad SSL certificate file; it was necessary to use the Docker container's certificate. The proxy option is no longer required.
After setting up the ddev container, you need to copy the Docker certificate to some location:
docker cp ddev-router:/etc/nginx/certs ~/tmp
After that, just update the paths to the correct certificate files. My gulpfile task now looks like this:
browserSync.init({
  https: {
    key: "/Users/username/tmp/master.key",
    cert: "/Users/username/tmp/master.crt"
  },
  open: false
});
Thanks @rfay for the solution!
I have built a small test cloud on 3 pieces of hardware. It works fine in EDGE mode, but when I try to configure it for VPCMIDO, new instances begin to launch but then time out and move to the terminated state. I can also see the instances' initial volume and config data appear in the NC and CC data directories. Below are my system layout and network.json.
HOST 1 : CLC/UFS/WALRUS/MIDO CLUSTER/MIDO GATEWAY/MIDOLMAN AGENT:
em1 (All Services including Mido Cluster): 10.0.0.21
em3 (Target VPCMIDO Adapter): 10.0.0.22
HOST 2 : CC/SC
em1 : 10.0.0.23
HOST 3 : NC/MIDOLMAN AGENT
em1 : 10.0.0.24
{
  "Mido": {
    "Gateways": [
      {
        "Ip": "10.0.0.22",
        "ExternalDevice": "em3",
        "ExternalCidr": "192.168.0.0/16",
        "ExternalIp": "192.168.0.2",
        "ExternalRouterIp": "192.168.0.1"
      }
    ]
  },
  "Mode": "VPCMIDO",
  "PublicIps": [
    "10.0.100.1-10.0.100.254"
  ]
}
I may be misunderstanding the intent of reserving an interface just for the MidoNet gateway. All of my eucalyptus/zookeeper/cassandra/midonet configs use the 10.0.0.21 interface and seem to communicate fine. The MidoNet tunnel zone reports my CLC host and NC host successfully in the tunnel zone. The only part of my config that references the interface I intend to use for the MidoNet gateway is network.json. No errors were returned at any point during configuration, so I think I may be missing something conceptual.
You may need to start eucanetd as described here:
https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/starting_euca_clc.html
The eucanetd component in VPCMIDO mode runs on the Cloud Controller (CLC) and is responsible for controlling MidoNet.
When eucanetd is not running, instances will fail to start, as the required network resources will not be created.
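On a CentOS 7 host that comes down to enabling and starting the service on the CLC (HOST 1); a sketch, assuming the unit name shipped with the 4.4 packages:

systemctl enable eucanetd.service
systemctl start eucanetd.service
systemctl status eucanetd.service   # verify it is active before retrying the launch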
I configured a bridge on the NC, and instances were able to launch; I no longer get an error in my nc.log. The docs and the comments in eucalyptus.conf tell me I shouldn't need to do this in VPCMIDO networking mode: https://docs.eucalyptus.cloud/eucalyptus/4.4.5/index.html#install-guide/configuring_bridge.html
Despite all that, adding the bridge fixed the issue.
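For anyone who hits the same thing: the bridge I added was a standard CentOS bridge on the NC, enslaving its NIC. A rough sketch (device names and addresses follow my layout above):

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.0.0.24
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-em1 -- enslave the NIC to the bridge
DEVICE=em1
TYPE=Ethernet
BRIDGE=br0
ONBOOT=yes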
I'm setting up multiple virtual servers using docker and managing the routing with an nginx reverse proxy (https://hub.docker.com/r/jwilder/nginx-proxy/).
I already have a couple of other containers running (for MySQL and WordPress).
I would like to use mupx to deploy because of its ease of use (https://github.com/arunoda/meteor-up/tree/mupx#), though it requires you to provide a port, and it defaults to 80 (which conflicts with nginx).
Here are the relevant elements from the mup.json:
"servers": [
{
"host": "111.111.111.111",
"username": "root",
"pem": "path/to/key",
"env":{
"VIRTUAL_HOST":"subdomain.domain.com"
}
}
],
...
"env": {
"ROOT_URL": "http://subdomain.domain.com"
}
Anyone have any experience with this?
I think you can change the port of the app to avoid the conflict:

"env": {
  "PORT": 80  // change this to any free port other than 80
}
I don't quite get the difference between the two. From the descriptions, it seems like both are for opening a web server.
If I use the grunt-serve plugin with the following configuration in my gruntfile.js:
serve: {
  options: {
    port: 9000
  }
}
I can open a web server at the specified port, though I have to open it manually in the browser (I'm not sure how to make it open automatically in my default browser). The web server works fine and can load JSON files without any problem.
However, when I try to do it with the grunt-connect plugin, with the following configuration:
connect: {
  server: {
    options: {
      port: 9000,
      livereload: 35729,
      hostname: 'localhost',
      keepalive: true,
      open: true
    }
  }
},
open: {
  dev: {
    url: 'http://localhost:<%= connect.server.options.port %>/index.html'
  }
}

grunt.registerTask('serve', function (target) {
  grunt.task.run([
    'connect',
    'open:dev'
  ]);
});
the web server opens automatically at the specified port in my default browser, but the catch is that it can't load the JSON data the way grunt serve does.
I'd like to make the web server work like Yeoman's: running grunt serve would start the server, automatically open it in my default browser, and load all my PHP/JSON files. grunt-serve seems like the right plugin for this, but I'm sure grunt-connect can do the same thing.
According to https://github.com/gruntjs/grunt-contrib-connect, the connect task makes the server available for a limited amount of time in order to run other tasks such as unit testing; once those tasks are complete, the server stops. As you have shown, there is a keepalive option to prevent the server from stopping. Connect is also useful for connecting to resources on another domain, such as a REST API. Typically this would be denied by the browser due to the same-origin policy; see https://github.com/drewzboto/grunt-connect-proxy.
So for development I would use the standard pattern "grunt serve", and connect for testing and for proxying to resources on another domain :-)
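If your JSON lives on another host or port, the proxy setup from that README looks roughly like this (a sketch; the /api context and target port are hypothetical):

connect: {
  server: {
    options: {
      port: 9000,
      hostname: 'localhost',
      keepalive: true,
      open: true,
      // inject the proxy middleware in front of the default static server
      middleware: function (connect, options, middlewares) {
        var proxy = require('grunt-connect-proxy/lib/utils').proxyRequest;
        middlewares.unshift(proxy);
        return middlewares;
      }
    },
    // any request starting with /api is forwarded to the backend
    proxies: [
      {
        context: '/api',
        host: 'localhost',
        port: 8080
      }
    ]
  }
}

// configureProxies must run before connect so the proxy table is built
grunt.registerTask('serve', ['configureProxies:server', 'connect:server']);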
I'm unsure of the best way to set ports in PM2; I don't see this documented anywhere. I'm using a front-facing nginx server that proxies to specific ports on the backend, each representing one of the Node servers. What is the best way to set this type of configuration up?
One method is with env in process.json:
{
  "name": "MyApp",
  "script": "./MyApp/app.js",
  "instances": "1",
  "exec_mode": "cluster_mode",
  "env": { "PORT": 3030 }
}
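The app itself then needs to read the port from the environment (e.g. app.listen(process.env.PORT)). On the nginx side, a minimal sketch of a matching front-facing server block (the server_name is hypothetical):

upstream myapp {
    server 127.0.0.1:3030;   # matches PORT in the process.json above
}

server {
    listen 80;
    server_name example.com;   # hypothetical domain

    location / {
        proxy_pass http://myapp;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;    # keep websockets working
        proxy_set_header Connection "upgrade";
    }
}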