Docker Compose + Hostname - .net-core

Hi, I am creating three Web APIs and a gateway, and I'm using Docker in Visual Studio 2017 (.NET Core).
The projects compile fine and I see the images were created.
But when I try to go to the URLs http://LocalHost:9002 or http://LocalHost:9000, they don't work.
I have this docker-compose file:
Do I need to do something else?

Instead of http://LocalHost:9002, use http://localhost:57978.
Instead of http://LocalHost:9000, use http://localhost:46429.
Explanation
0.0.0.0:57978->8041/tcp means that host port 57978 is mapped to container port 8041
0.0.0.0:46429->8043/tcp means that host port 46429 is mapped to container port 8043
You can use this command to inspect your container's port mappings:
docker inspect container_name
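If it helps, the same information is also visible with these commands (the container name is a placeholder):
docker ps --format "table {{.Names}}\t{{.Ports}}"   # port mappings for all running containers
docker port my_container                            # mappings for one container, e.g. "8041/tcp -> 0.0.0.0:57978"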

Maybe you can try adding a "ports" section in your docker-compose file for each service.
Example:
ports:
  - "9002:80"

Related

Add a new port in a running docker-compose setup

I am trying to add an SSL certificate to a WordPress container, but the default compose configuration only forwards port 80.
How can I add a new port to the running container? I tried modifying the docker-compose.yml file and restarting the container, but this doesn't solve the problem.
Thank you.
You should re-create the container when it needs to listen on a new port, like this:
docker-compose up -d --force-recreate {CONTAINER}
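For example, assuming the service in the compose file is called wordpress (the name is a guess from the question):
# 1. add the new port mapping to docker-compose.yml, then:
docker-compose up -d --force-recreate wordpress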
Expose ports.
Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral host port is chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a base-60 value. For this reason, we recommend always explicitly specifying your port mappings as strings.
ports:
  - "3000"
  - "3000-3005"
  - "8000:8000"
  - "9090-9091:8080-8081"
  - "49100:22"
  - "127.0.0.1:8001:8001"
  - "127.0.0.1:5000-5010:5000-5010"
  - "6060:6060/udp"
https://docs.docker.com/compose/compose-file/#ports
After you add the new port to the docker-compose file, what worked for me is:
Stop the container
docker-compose stop <service name>
Run the docker-compose up command (NOTE: docker-compose start did not work)
docker-compose up -d
According to the documentation, the docker-compose up command:
Builds, (re)creates, starts, and attaches to containers for a service
... Unless they are already running
That started up the stopped service, WITH the exposed ports I had configured.
Have you tried it like in this example?
https://docs.docker.com/compose/compose-file/#ports
Should work like this:
my-services:
  ports:
    - "80:80"
    - "443:443"
You just add the new port in the ports section of the docker-compose.yml, and then you must run
docker-compose up -d
because it will read the .yml file again and recreate the container. If you just do a restart, it will not read the new config from the .yml; it will simply restart the same container.
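To make the difference concrete, a minimal sketch:
docker-compose up -d     # re-reads docker-compose.yml and recreates containers whose config changed
docker-compose restart   # restarts the same containers; new port mappings in the .yml are NOT applied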

How to expose a Docker network to the host machine?

Consider the following docker-compose.yml
version: '2'
services:
  serv1:
    build: .
    ports:
      - "8080:8080"
    links:
      - serv2
  serv2:
    image: redis
    ports:
      - "6379:6379"
I am forwarding the ports to the host in order to manage my services, but the services can access each other simply by using the default Docker network. For example, a program running on serv1 could access redis:6379 and some DNS magic will make that work. I would like to add my host to this network so that I can access containers' ports by their hostname:port.
You can accomplish this by running a DNS proxy (like dnsmasq) in a container that is on the same network as the application. Then point your host's DNS at the container IP, and you'll be able to resolve hostnames as if you were in a container on the network.
https://github.com/hiroshi/docker-dns-proxy is one example of this.
If you need a quick workaround to access a container:
Get the container IP:
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
172.19.0.9
If you need to use the container name, add it to your /etc/hosts.
# /etc/hosts
172.19.0.9 container_name
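If you want to script that, a hedged one-liner combining the two steps above (container_name is a placeholder, and writing to /etc/hosts needs sudo):
echo "$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name) container_name" | sudo tee -a /etc/hosts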
I am not sure if I understand you correctly. You want, e.g., your Redis server to be accessible not only from containers in the same network, but also from outside the container using your host's IP address?
To accomplish that you have to use the expose option as described here: https://docs.docker.com/compose/compose-file/#/expose
expose:
  - "6379"
So
ports:
  - "6379:6379"
expose:
  - "6379"
should do the trick.
The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number.
from https://docs.docker.com/engine/reference/builder/#expose
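To make the distinction concrete, a minimal sketch based on the serv2 service from the question; only the ports entry actually makes Redis reachable from the host:
services:
  serv2:
    image: redis
    ports:
      - "6379:6379"   # published: reachable from the host at localhost:6379
    expose:
      - "6379"        # documentation only: reachable from other containers, not from the host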
Just modify the hosts file on your host machine to add the container entries.
Example:
127.0.0.1 container1
127.0.0.1 container2
127.0.0.1 container3
This assumes the port bindings have already been done.

Docker Container to Host Routing

I need a better, up-to-date solution to the following problem:
Problem: I have to manually create an iptables rule to allow a route from a dynamically created Docker bridge to the host. Otherwise container A cannot connect to container B, because by default there is no route from a Docker network to the Docker host itself.
I have the following setup:
container-nginx (docker)
|
|-container-jira (docker) (https://jira.example.com)
|-container-confluence (docker) (https://confluence.example.com)
In order to have properly functioning Atlassian application links between Jira and Confluence:
Jira accesses Confluence over https://confluence.example.com
Confluence accesses Jira over https://jira.example.com
I use docker-compose for the whole setup and all containers are inside the same network. By default this will not work; I get "no route to host" in both containers for the hosts confluence.example.com and jira.example.com, because containers inside the Docker network have no route to the Docker host itself.
Currently, each time the setup is initialized, I manually create an iptables rule from the dynamically created Docker bridge with id "br-wejfiweji" to the host.
This is cumbersome. Is there "a new way" or "better way" to do this in Docker 1.11.x?
docker-compose version 2 does create a network that allows all containers to see each other. See "Networking in Compose" (since Docker 1.10).
If your containers are created with the right hostname, that is jira.example.com and confluence.example.com (see docker-compose.yml hostname directive), nginx can proxy-pass directly to jira.example.com and confluence.example.com.
Those two hostname will resolve to the right IP address within the network created by docker-compose for those 3 (nginx, jira and confluence) containers.
I suggested in the comments using aliases so that Jira sees Confluence as nginx (nginx being aliased to confluence.example.com), so that Jira always goes through nginx when accessing Confluence.
version: '2'
services:
  # HTTPS reverse proxy
  nginx:
    image: blacklabelops/nginx
    container_name: nginx
    networks:
      default:
        aliases:
          - 'crucible.example.com'
          - 'confluence.example.com'
          - 'crowd.example.com'
          - 'bitbucket.example.com'
          - 'jira.example.com'
    ports:
      - '443:443'
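For completeness, a hedged sketch of how one of the proxied services could sit alongside nginx under the same services: section; the image name is hypothetical:
  # Jira resolves confluence.example.com to the nginx container via the
  # aliases above, so application links always go through the proxy
  jira:
    image: blacklabelops/jira   # hypothetical image name
    container_name: jira
    networks:
      - default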

How to run 'ionic serve' on the public web?

I'm new to the Ionic Framework, so I need your help. When I run ionic serve on localhost everything is great. But now I'm trying to work with Cloud9, and it prints:
The port 8100 was taken on the host 172.17.12.3 - using port 8101 instead
The port 35729 was taken on the host 172.17.12.3 - using port 35730 instead
Running live reload server: http://172.17.12.3:35730
Watching : [ 'www/**/*', '!www/lib/**/*' ]
Running dev server: http://172.17.12.3:8101
But these addresses don't work at all, and I get an error from Cloud9:
Error: you may be using the wrong PORT & IP for your server app. Try passing $PORT and $IP to properly launch your application.
So how can I set $PORT and $IP in Ionic?
Since Cloud9 forwards port 8080 (which is the value of $PORT), you need to tell Ionic to use that instead. With the recent change allowing multiple ports, ports 8081 and 8082 are also allowed, so you need to tell Ionic to use 8081 (or 8082) as the livereload port. The command that should work is:
ionic serve -p 8080 -l 8081
I also think that adding -a would help, since with that option Ionic appears to bind to IP 0.0.0.0, which you should be binding to in the first place. For more information about Ionic CLI options, please check out the Ionic CLI GitHub page.
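Combining the flags above, and assuming Cloud9 exposes the port via the $PORT environment variable mentioned in the error message, something like this should work:
ionic serve -a -p $PORT -l 8081   # -a binds to 0.0.0.0; $PORT is typically 8080 on Cloud9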
A simple solution is to close the terminal in which you are running the serve request, open a new terminal, and run ionic serve again; it will take port 8100 (which you gave in your code).

Restarting Containers When Using Docker and Nginx proxy_pass

I have an nginx Docker container and a webapp container successfully running and talking to each other.
The nginx container listens on port 80, and uses proxy_pass to direct traffic to the IP of the webapp container.
upstream app_humansio {
server humansio:8080 max_fails=3 fail_timeout=30s;
}
"humansio" is set in the /etc/hosts file by docker because I've started nginx with --link humansio:humansio. The webapp container (humansio) is always exposing 8080.
The problem is, when I reload the webapp container, the link to the nginx container breaks and I need to restart that as well. Is there any way I can do this differently so I don't need to restart the nginx container when the webapp container reloads?
--
I've tried to connect them manually using a common port (8001 on both), but since they actually reserve that port, the second container cannot use it as well.
Thanks!
I prefer to run the proxy (nginx or haproxy) directly on the host for this reason.
But an option is to "Link via an Ambassador Container" https://docs.docker.com/articles/ambassador_pattern_linking/
https://www.digitalocean.com/community/tutorials/how-to-use-the-ambassador-pattern-to-dynamically-configure-services-on-coreos
If you don't want to restart your proxy container whenever you have to restart one of the proxied ones (e.g. fig), you could take a look at the autoupdated proxy configuration approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
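The gist of that approach, per the linked article, is a proxy container that watches the Docker socket and regenerates its nginx config whenever containers start or stop; the backend image name and hostname below are placeholders:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run -d -e VIRTUAL_HOST=humans.io mywebapp   # proxied automatically, no nginx restart needed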
If you use a modern version of Docker, the links in the nginx container to your web service probably do get updated (you can check with docker exec -ti nginx bash, then cat /etc/hosts). The problem is that nginx doesn't consult /etc/hosts on every request; it caches the IP, and when the IP changes, nginx gets lost. Running docker kill -s HUP nginx, which makes nginx reload its config without a restart, helps too.
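A sketch of those two checks, assuming the proxy container is named nginx:
docker exec -ti nginx cat /etc/hosts   # see which IP the link currently points at
docker kill -s HUP nginx               # reload nginx config without restarting the container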
I have the same problem. I used to start my services with systemd unit files, and when you make one service (nginx) dependent on another (the webapp) and then restart the webapp, systemd is smart enough to restart nginx as well. Now I'm trying my luck with docker-compose, and restarting the webapp container confuses nginx.
