Is there a way to read Docker logs from a container if I don't have SSH access to the host machine? Could I, for example, map the docker logs command to an HTTP port,
so that I could read the logs simply by doing a GET request to
http://[dockerhost]:5234/logs
A Docker container's log is located under /var/lib/docker/containers (when the default json-file logging driver is used).
E.g.
If your container's id is ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774, then the log of the container is /var/lib/docker/containers/ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774/ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774-json.log.
So you can expose the /var/lib/docker/containers directory through Apache (or any other web server), and users can then view the logs from a browser.
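For example, here is a minimal sketch that serves that directory over HTTP with Python's built-in web server instead of Apache (the port is the one from your question; the directory is normally readable by root only, so this needs to run as root):

$ sudo sh -c 'cd /var/lib/docker/containers && python3 -m http.server 5234'

A GET request to http://[dockerhost]:5234/<container-id>/<container-id>-json.log then returns the container's log. Be aware that this exposes every container's log without authentication.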
I'm trying to gain remote access to a .NET Core app running in a Docker container hosted on my VPS, but even locally I'm not able to access it.
As you can see, the app is running in a container and listening on port 5000 (I used the default Kestrel config). Why can't I access it?
What your output above is showing is that port 5000 is open, but you have not mapped anything on your local system to that port. This means that when you hit localhost on port 5000, nothing forwards to the container.
Try running the container again with docker run -p 5000:5000 <image>. The output of docker ps should then show something like 0.0.0.0:5000->5000/tcp.
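A minimal sketch of the full round trip (the image name my-netcore-app is an assumption):

$ docker run -d -p 5000:5000 my-netcore-app
$ docker ps    # should show 0.0.0.0:5000->5000/tcp
$ curl http://localhost:5000

If it still doesn't respond, check that Kestrel binds to 0.0.0.0 inside the container rather than to localhost only, since a -p mapping cannot reach a server that listens only on the container's loopback interface.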
Docker noob here. I have set up a dev server with Docker containers, and I am able to run basic containers.
For example
docker run --name node-test -it -v "$(pwd)":/src -p 3000:3000 node bash
Works as expected. Since I have many small projects, I would like to bind to an actual HTTP localhost path instead of a port. Something like this:
docker run --name node-test -it -v "$(pwd)":/src -p 3000:80/node-test node bash
Is it possible? Thanks.
EDIT: Basically I want to type localhost/node-test instead of localhost:3000 in my browser window.
It sounds like what you want is for your Docker container to respond to a URL like http://localhost/some/random/path by somehow specifying that path in the Docker --port option.
The short answer to that is no, that is not possible. The reason is that a port is not related to a path in any way - an HTTP server listens on a port, and serves resources that are found at a path. Note that there are many different types of servers and all of them listen on some port, but many (most?) of them have no concept of a path at all. For example, consider an SMTP (mail transfer) server - it often listens on port 25, but what does a path mean to it? All it does is transfer mail from one server to another.
There are two ways to accomplish what you're trying to do:
write your application to respond to particular paths. For example, if you're using the Express framework in your node application, create a route for the path you want.
use a proxy server to accept requests on one path and relay them to a server that's listening on another port (a sketch follows below).
Note that this has nothing to do with Docker - you'd be faced with the same two options if you were running your application on any server.
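For the proxy option, a minimal sketch with nginx on the host might look like this (the path /node-test, port 3000, and the config file location are illustrative assumptions):

$ cat > /etc/nginx/conf.d/node-test.conf <<'EOF'
server {
    listen 80;
    # Relay http://localhost/node-test/... to the container published on 3000.
    location /node-test/ {
        proxy_pass http://127.0.0.1:3000/;
    }
}
EOF
$ nginx -s reload

The trailing slash on proxy_pass makes nginx strip the /node-test/ prefix before the request reaches your app, so the app itself doesn't need to know about the path.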
I am pretty much new to the Docker concept and know the basics of it.
I just wanted to know how we can build a multi-tenant application using Docker,
where the containers will use the locally hosted database with different schemas. With nginx we can do a reverse proxy, but how can we achieve it,
given that every container will be accessed via localhost:8080? And how can we add the upstream and server parts?
It would be very helpful if someone could explain it to me.
If I understand correctly you want processes in containers to connect to resources on the host.
From your container's perspective, in bridge mode (the default) the host's IP is the gateway. Unfortunately the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it:
From the host, using docker inspect: docker inspect <container name or ID>. The gateway will be available under NetworkSettings.Networks.<network>.Gateway.
From the container, you can execute route | awk '/^default/ { print $2 }'
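As a concrete sketch (the container name my-container is an assumption):

$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' my-container

And from inside the container, on images that ship ip instead of route:

$ ip route | awk '/^default/ { print $3 }'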
One other possibility is to use --net=host when running your container.
This will run your processes on the same network as the processes on your host, which makes your database accessible from the container on localhost.
Note that using --net=host will not work on Docker for Mac/Windows.
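A minimal sketch (the image name my-app is an assumption): with host networking the container shares the host's network stack, so no -p mapping is needed and, for example, a MySQL server on the host is reachable from the container at localhost:3306.

$ docker run --net=host my-app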
I am using docker-compose to run an application on the bluemix container service. I am using nginx as a proxy webserver and load balancer.
I have found an image that uses docker events to automatically detect new web servers and adds those to the nginx configuration dynamically:
https://github.com/jwilder/nginx-proxy
But for this to work, I think the container needs access to the Docker socket. I am not very familiar with Docker and I don't know exactly what this does, but essentially it is necessary so that the image can listen to Docker events.
The run command from the image documentation is the following:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I have not been able to run this in the container service, as it does not find the /var/run/docker.sock file on the host.
The bluemix documentation has a tutorial explaining how to do load balancing with nginx. But it requires a "hard coded" list of web servers in the nginx configuration.
I was wondering how I could run the nginx-proxy image so that web instances are detected automatically?
The containers service on Bluemix doesn't expose that Docker socket (not surprising, since it would be a security risk to the compute host). A couple of alternate ways to accomplish what you want:
something like amalgam8 or consul, which is basically doing just that
similar, but self-written: have a shared volume, and then each container on startup adds a file to that shared volume saying what it is, plus its private IP. The nginx container has a watch on the shared volume and reloads when those change. (More work than amalgam8 or consul, but perhaps more control; a sketch follows below.)
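A minimal sketch of that self-written approach (the volume path, service name, and the inotify-based watch are all illustrative assumptions):

# In each app container's entrypoint: register this instance's private IP.
hostname -i > /shared/backends/my-service

# In the nginx container: watch the shared volume and reload on changes
# (inotifywait comes from the inotify-tools package).
while inotifywait -e create -e delete -e modify /shared/backends; do
    # regenerate the nginx upstream config from /shared/backends here, then:
    nginx -s reload
done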
I'm trying to "dockerize" an LAMP application and I have a problem to send email.
I have 2 containers, one for apache/php and another for mysql.
Everything works fine but I can't send any email.
I've installed sendmail on the Apache container, but it needs to connect to an SMTP server.
I've googled a bit, and most answers are "set up your own MTA container". However, I'm running Docker on Ubuntu, and there is already an MTA set up (I can send email and use sendmail out of the box). So the idea is to use the host's SMTP server.
It should be possible to set up a "tunnel" or a "route" (I'm not sure of the term) to forward connections to port 25 from inside the container to port 25 of the host (basically the reverse of what Docker does with -p). I've read the Docker advanced networking docs and the ip command manual, but I can't figure out how to do it.
At the moment my solution is to create all the containers with --net=host. This way sendmail can see the SMTP server of the host. The problem with this method is that you can't use --link and --net=host at the same time, which means all the containers have to use --net=host.
You want to reach the host from within the container. You can already do this. For example, if the host that's running Docker is docker.mb14.com then you can hit that address from within the container.
But that would give you an external-facing interface, and you probably don't want to listen on that. Instead, you can use an internal-facing interface and give it a friendly name inside the container with --add-host <alias>:<ip>. This will add an /etc/hosts entry, just like --link does.
The documentation for this includes an example of adding an entry for your host system:
Note: Sometimes you need to connect to the Docker host, which means getting the IP address of the host. You can use the following shell commands to simplify this process:
$ alias hostip="ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print \$2 }'"
$ docker run --add-host=docker:$(hostip) --rm -it debian
(And there's an open issue that might help if you need an IPv6 address.)
Edit: After that, if you want to port forward so that you're talking to localhost on the container, you need to handle that part yourself. There are lots of ways to do this (firewall rule, netcat, proxy) and they're independent of Docker. There is no built-in equivalent of Docker's -p flag that goes in the other direction.
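For example, a minimal sketch using socat inside the container (socat, and the docker alias from the example above, are assumptions on my part, not Docker built-ins):

$ apt-get install -y socat
$ socat TCP-LISTEN:25,fork,reuseaddr TCP:docker:25 &

sendmail can then connect to localhost:25 inside the container, and socat relays the connection to the host's MTA.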
Use Docker links. Docker links expose environment variables and add entries to /etc/hosts.
https://docs.docker.com/userguide/dockerlinks/
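A minimal sketch (the image and alias names are illustrative):

$ docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql
$ docker run -d --link db:db my-web-app

Inside my-web-app, /etc/hosts now resolves db to the database container, and variables such as DB_PORT_3306_TCP_ADDR are set.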