Meteor DDP call between containers on same host

This Meteor app server code tries to use a method of another Meteor worker. Both the app and the worker are in separate Docker containers on the same EC2 server. The worker is running on port 9000.
When the app fires the method appCallingWorker, I expected to see the worker container log the string 'worker called from App', but all docker logs containerID gives me is many lines looking like this:
stream error Network error: ws://localhost:9000/websocket: connect ECONNREFUSED 127.0.0.1:9000
How can I call the worker's methods from the app? Thanks.
// App/server/main.js
let workerConn = DDP.connect('http://localhost:9000');

Meteor.methods({
  'appCallingWorker': () => {
    workerConn.call('workerMethod');
  }
});
// Worker/server/main.js
Meteor.methods({
  'workerMethod': function () {
    console.log('worker called from App');
  }
});
Edit:
The EC2 instance is a Container Instance in AWS ECS, and containerDefinitions.portMappings.containerPort and hostPort are both set to 9000.
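For reference, the relevant fragment of the ECS task definition would look roughly like this (the container name is a placeholder; only the fields named above are shown):

"containerDefinitions": [
  {
    "name": "worker",
    "portMappings": [
      { "containerPort": 9000, "hostPort": 9000 }
    ]
  }
]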
Edit 2:
iptables -L -n on the Docker host shows the IP of the container listening on 9000. I replaced localhost in the code with that IP and now it works. But that IP can change if the host reboots or the container restarts... another problem to find a solution for.

I had this same problem trying to communicate between Docker containers. You're going to have to use the external ip:port address of the server your containers are on.
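A minimal sketch of that approach, assuming the worker's address is injected through a hypothetical WORKER_URL environment variable rather than hardcoded, so a changing container IP stops being a problem:

// App/server/main.js
// WORKER_URL is a hypothetical env var, e.g. http://<ec2-external-ip>:9000
const workerUrl = process.env.WORKER_URL || 'http://localhost:9000';
let workerConn = DDP.connect(workerUrl);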

Related

Can't access .NET Core API in Docker image

I'm trying to remotely access a .NET Core app in a Docker container hosted on my VPS, but I'm not able to access it even locally.
As you can see, the app is running in a container and listening on port 5000 (using the default Kestrel config). Why can't I access it?
What your output above is showing is that port 5000 is open, but you have not mapped anything on your local system to that port. This means that when you connect to localhost on port 5000, nothing forwards to the container.
Try running the container again with docker run -p 5000:5000. The output of docker ps should then show something like 0.0.0.0:5000->5000/tcp.
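For example (the image name my-netcore-api is a placeholder):

# Publish container port 5000 on host port 5000.
docker run -d -p 5000:5000 my-netcore-api
# Verify the mapping; it should list 0.0.0.0:5000->5000/tcp.
docker ps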

How to send requests to a web server running inside a docker container inside an AWS EC2 machine from the outside world?

I have a Python Flask web server running inside a Docker container in an AWS EC2 Ubuntu machine. The container is running on the default network setting (docker0). Within the host EC2, I can send requests (GET, POST) to this web server using the docker-machine IP (172.x.x.x) and the forwarded ports (3000:3000) of the host.
url: http://172.x.x.x:3000/<api address>
How can I send requests (GET, POST) to this web server from the outside world? For example from another web server running in another EC2 machine. Or even from the web using my web browser?
Do I need to get a public IP Address for my docker host?
Is there another way to interact with such a web server from within another web server running on another EC2 instance?
If you have a solution please explain with as many details as you can for me to understand it.
The only way that I can think of is to write a web server on the main EC2 that listens for the requests and forwards them to the appropriate Docker container web servers?! But that would be a lot of redundant code, and I would rather just send requests to the web server running in the container directly!
The IP address of the Docker container is not public. Your EC2 instance usually has a public IP address though. You need an agent listening on a port on your EC2 instance that passes requests to your Docker/Flask server. Then you would be able to call it from outside using ec2-instance-ip:agent-port.
It's still not a long-term solution, as EC2 public IPs change when instances are stopped. You'd better use a load balancer or an Elastic IP if you want the IP/port to be reliable.
That's right, it makes a lot of redundant code and an extra failure point. That's why it's better to use Amazon's managed Docker service (https://aws.amazon.com/ecs/). This way you just launch an EC2 instance which acts as a Docker host and has a public IP address. It still allows you to SSH into your EC2 instance and change stuff.
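In its simplest form, that "agent" is just Docker's own port publishing. A sketch with placeholder names (port 3000 must also be open in the instance's security group):

# On the EC2 host: bind the container's port 3000 to all host interfaces.
docker run -d -p 3000:3000 my-flask-image
# From the outside world:
curl http://<ec2-public-ip>:3000/<api address>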

How to communicate with Kafka server running inside a docker

I am using the Apache KafkaConsumer in my Scala app to talk to a Kafka server, where the Kafka and Zookeeper services are running in a Docker container on my VM (the Scala app is also running on this VM). I have set the KafkaConsumer's "bootstrap.servers" property to 127.0.0.1:9092.
The KafkaConsumer does log "Sending coordinator request for group queuemanager_testGroup to broker 127.0.0.1:9092". The problem appears to be that the Kafka client code sets the coordinator values based on the response it receives, which contains responseBody={error_code=0,coordinator={node_id=0,host=e7059f0f6580,port=9092}}; that is how it sets the host for future connections. Subsequently it complains that it is unable to resolve the address e7059f0f6580.
The address e7059f0f6580 is the container ID of that docker container.
I have tested using telnet that my VM does not recognize this as a hostname.
What setting do I need to change so that the Kafka broker running in my Docker container returns localhost/127.0.0.1 as the host in its response? Or is there something else that I am missing / doing incorrectly?
Update
advertised.host.name is deprecated, and --override should be avoided.
Add/edit advertised.listeners to be in the format
[PROTOCOL]://[EXTERNAL.HOST.NAME]:[PORT]
Also make sure that PORT is listed in the listeners property as well.
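A minimal server.properties sketch for the setup in this question (clients on the VM reaching a broker in a local container), assuming plaintext on port 9092:

# Where the broker binds inside the container:
listeners=PLAINTEXT://0.0.0.0:9092
# The host/port the broker tells clients to connect back to:
advertised.listeners=PLAINTEXT://127.0.0.1:9092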
After investigating this problem for hours on end, I found that there is a way to set the hostname while starting up the Kafka server, as follows:
kafka-server-start.sh --override advertised.host.name=xxx (in my case: localhost)

docker registry on localhost with nginx proxy_pass

I'm trying to set up a private Docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000 and I've set up nginx in front of it with a proxy_pass directive to pass requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I replace localhost with my server's IP address in the nginx configuration file, I can push all right. Why would my local docker push command complain about localhost when localhost is only what nginx proxies to?
Server is on EC2 if it helps.
I'm not sure of the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect the data flows for Docker. The Docker registry is actually split into two parts, the index and the registry. The client contacts the index to handle metadata, and is then forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered down index server. As a consequence, you might want to figure out what registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set up the registry_endpoints config setting in order to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override the X-Docker-Endpoints header set by the registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
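In context, a sketch of the relevant nginx server block (the upstream port is from the question; the rest is illustrative):

server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        # Hide the endpoints header the registry advertises...
        proxy_hide_header X-Docker-Endpoints;
        # ...and re-advertise the host the client actually used.
        add_header X-Docker-Endpoints $http_host;
    }
}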
I think the problem you face is that the docker-registry is advertising so-called endpoints through an X-Docker-Endpoints header early during the dialog between itself and the Docker client, and that the Docker client will then use those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with Nginx on the (public) port 80, then switches to the advertised endpoints, which is probably localhost:5000 (that is, your local machine).
You should see if an option exists in the Docker registry you run so that it advertises endpoints as your remote host, even if it listens on localhost:5000.

Can't access port 8080 on EC2 (AWS)

I just started a new AWS EC2 instance. In the instance's security group I added a new rule to open port 8080. I also stopped the iptables service on the instance, per another post. So in theory this port should be wide open.
I started my RESTful service on 8080 and was able to access it locally via curl.
When I come in with curl remotely I get an error saying it couldn't connect to the host.
What else should I check to see if 8080 is truly open?
I started my RESTful service on 8080 and was able to access it locally via curl.
What kind of technology is your RESTful service based upon?
Many frameworks nowadays listen on localhost (127.0.0.1) only, be it by default or by means of their examples; see, e.g., the canonical Node.js one (I realize that port 8080 hints towards Java/Tomcat, anyway):
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
The log message generated by starting this is Server running at http://127.0.0.1:1337/. The 127.0.0.1 part is the key here: the server has been configured to listen on the IP address 127.0.0.1 only, whereas you are trying to connect to it via your public Amazon EC2 IP address.
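A sketch of the corresponding fix for the question's scenario: bind to all interfaces (0.0.0.0) and the question's port 8080 so remote clients can connect:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080, '0.0.0.0'); // all interfaces, not just loopback
console.log('Server running at http://0.0.0.0:8080/');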
