can't access netcore api in docker container - .net-core

I'm trying to access a netcore app in a docker container hosted on my VPS remotely, but I can't even access it locally.
As you can see, the app is running in a container and listening on port 5000 (using the default Kestrel config). Why can't I access it?

What your output above is showing is that port 5000 is open, but nothing on your local system is mapped to that port. This means that when you hit localhost on port 5000, the request is not forwarded to the container.
Try running the container again with docker run -p 5000:5000 <image>. The output of docker ps should then show something like 0.0.0.0:5000->5000/tcp.
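As a minimal sketch (the image name my-netcore-api is a placeholder; note that Kestrel must also listen on 0.0.0.0 inside the container rather than only on localhost):
docker run -d --name api -e ASPNETCORE_URLS=http://0.0.0.0:5000 -p 5000:5000 my-netcore-api
docker ps      # the PORTS column should now show 0.0.0.0:5000->5000/tcp
curl http://localhost:5000/   # the request is forwarded into the container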

Related

Plumber in a container on Azure: how to map ports

I built a container with a plumber API inside. It runs on port 80 in the container, and I exposed port 80. I also mapped the ports:
az container create ... --ports 80:80
since Azure only supports symmetrical port mapping.
But I still cannot reach my API at the container's FQDN, and I don't know how to troubleshoot this. I have already confirmed with curl that the API is running fine within the container.
Any suggestions?
Did you run your plumber server with host 0.0.0.0?
Take a look at the official plumber Docker image:
https://github.com/rstudio/plumber/blob/master/Dockerfile
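For reference, a minimal sketch of launching plumber bound to all interfaces (api.R is a placeholder for your plumber file):
Rscript -e 'plumber::plumb("api.R")$run(host = "0.0.0.0", port = 80)'
If the server binds to 127.0.0.1 (the default), it is unreachable from outside the container even when the port mapping is correct.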

Easy way to read docker logs without ssh access

Is there a way to read docker logs from a container if I don't have ssh access to the host machine? Could I, for example, map the docker logs command to an HTTP port,
so I could read the docker logs simply by doing a GET request to
http://[dockerhost]:5234/logs
A Docker container's log is located under /var/lib/docker/containers.
E.g.
If your container's id is ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774, then the log of the container is /var/lib/docker/containers/ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774/ef80f1a75417a7933912c14fd8b86ecd828cf844e9793aae81ccebbc3120c774-json.log.
So you can simply expose the /var/lib/docker/containers folder through Apache, and users can then view the logs from a browser.
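A quick sketch for locating a container's log file by name (my-container is a placeholder):
CID=$(docker inspect --format '{{.Id}}' my-container)
sudo tail -f /var/lib/docker/containers/$CID/$CID-json.log
Note that reading these files requires root on the host, so whatever serves them over HTTP needs the corresponding permissions.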

Bind docker container port to path

Docker noob here. I have set up a dev server with Docker containers and am able to run basic containers.
For example
docker run --name node-test -it -v "$(pwd)":/src -p 3000:3000 node bash
This works as expected. Since I have many small projects, I would like to bind/listen on an actual HTTP localhost path instead of a port. Something like this:
docker run --name node-test -it -v "$(pwd)":/src -p 3000:80/node-test node bash
Is it possible? Thanks.
EDIT. Basically I want to type localhost/node-test instead of localhost:3000 in my browser window
It sounds like what you want is for your Docker container to respond to a URL like http://localhost/some/random/path by somehow specifying that path in the Docker --port option.
The short answer to that is no, that is not possible. The reason is that a port is not related to a path in any way - an HTTP server listens on a port, and serves resources that are found at a path. Note that there are many different types of servers and all of them listen on some port, but many (most?) of them have no concept of a path at all. For example, consider an SMTP (mail transfer) server - it often listens on port 25, but what does a path mean to it? All it does is transfer mail from one server to another.
There are two ways to accomplish what you're trying to do:
write your application to respond to particular paths. For example, if you're using the Express framework in your node application, create a route for the path you want.
use a proxy server to accept requests on one path and relay them to a server that's listening on another port.
Note that this has nothing to do with Docker - you'd be faced with the same two options if you were running your application on any server.
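As an illustration of the proxy option, here is a minimal nginx sketch written as a shell snippet (the config path, location, and port are illustrative, not a definitive setup):
cat > /etc/nginx/conf.d/node-test.conf <<'EOF'
server {
    listen 80;
    location /node-test/ {
        proxy_pass http://127.0.0.1:3000/;
    }
}
EOF
nginx -s reload
The trailing slash on proxy_pass makes nginx strip the /node-test/ prefix before forwarding, so the node app still sees requests at its own root path.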

How to build a multi-tenant application using docker

I am pretty new to Docker and know the basics of it.
I just want to know how we can build a multi-tenant application using Docker,
where the containers use the locally hosted database with a different schema per tenant. With nginx we can do a reverse proxy, but how can we achieve this,
given that every container will be accessed via localhost:8080? How do we add the upstream and server parts?
It would be very helpful if someone could explain it to me.
If I understand correctly you want processes in containers to connect to resources on the host.
From your container's perspective in bridge mode (the default), the host's IP is the gateway. Unfortunately, the gateway IP address may vary and can only be determined at runtime.
Here are a few ways to get it:
From the host using docker inspect: docker inspect <container name or ID>. The gateway will be available under NetworkSettings.Networks.Gateway.
From the container you can execute route | awk '/^default/ { print $2 }'
One other possibility is to use --net=host when running your container.
This will run your processes on the same network as the processes on your host. Doing so makes your database accessible from the container on localhost.
Note that --net=host does not work on Docker for Mac/Windows.
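A small sketch of the inspect approach from the host (my-container is a placeholder name):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' my-container
From inside the container, the route command above prints the same address, which the application can then use as the database host.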

How can I use nginx as a dynamic load balancing proxy server on Bluemix?

I am using docker-compose to run an application on the bluemix container service. I am using nginx as a proxy webserver and load balancer.
I have found an image that uses docker events to automatically detect new web servers and add them to the nginx configuration dynamically:
https://github.com/jwilder/nginx-proxy
But for this to work, I think the container needs access to the Docker socket. I am not very familiar with Docker and I don't know exactly what this does, but essentially it is necessary so that the image can listen to Docker events.
The run command from the image documentation is the following:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I have not been able to run this in the container service, as it does not find the /var/run/docker.sock file on the host.
The bluemix documentation has a tutorial explaining how to do load balancing with nginx. But it requires a "hard coded" list of web servers in the nginx configuration.
I was wondering how I could run the nginx-proxy image so that web instances are detected automatically?
The containers service on Bluemix doesn't expose that docker socket (not surprising, it would be a security risk to the compute host). A couple of alternate ways to accomplish what you want:
something like amalgam8 or consul, which is basically doing just that
similar, but self-written: have a shared volume, and then each container on startup adds a file to that shared volume saying what it is, plus its private IP. The nginx container has a watch on the shared volume and reloads when those files change. (More work than amalgam8 or consul, but perhaps more control; a sketch follows below.)
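A rough sketch of that self-written approach in shell (all paths, names, and the regenerate-upstreams.sh helper are hypothetical):
# in each web container's entrypoint: register this instance
echo "$(hostname) $(hostname -i)" > /shared/registry/$(hostname)

# in the nginx container: watch the registry and reload on changes
while inotifywait -e create,delete,modify /shared/registry; do
    ./regenerate-upstreams.sh   # hypothetical script that rewrites the upstream block
    nginx -s reload
done
inotifywait comes from the inotify-tools package and blocks until a filesystem event occurs, so the loop only wakes when a container registers or disappears.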
