I am running a Docker container with a Node server that needs to connect to a PostgreSQL database on a local network at a private IP address. I'll show my current configuration:
The container exposes port 3000 for connections to the Node server.
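For reference, the container is started along these lines (a sketch; the image name is a placeholder):
docker run -p 3000:3000 my-node-app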
But when I run it, I get a connection refused error:
Unhandled rejection SequelizeBaseError: connect ECONNREFUSED 10.9.0.0:5432
I know that the DB is accepting connections from other IPs on the network, since this is not the only app using this DB server.
What is the Docker way to achieve this?
I am running:
Docker version 1.13.1, build 092cba3
OS: Windows 10, but this also needs to work on macOS
Thank you!
Hello, I am facing a kubeadm join problem on a remote server.
I want to create a multi-server, multi-node Kubernetes Cluster.
I created a vagrantfile to create a master node and N workers.
It works on a single server.
The master VM uses a bridged adapter, to make it accessible to the other VMs on the network.
I chose Calico as the network provider.
For the master node, this is what I've done (a sketch of the equivalent commands follows the list below):
Using Ansible:
Initialize kubeadm.
Install a network provider.
Create the join command.
For the worker node:
I execute the join command to join the running master.
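Roughly, those steps amount to the following commands (a sketch of what the Ansible tasks run; the exact flags and the Calico manifest URL are assumptions):
# on the master
kubeadm init --apiserver-advertise-address=192.168.0.27
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubeadm token create --print-join-command
# on each worker, run the join command printed by the last step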
I successfully created the cluster on one single hardware server.
Now I am trying to create regular worker nodes on another server on the same LAN; I can ping the master successfully.
To join the master node, I use the generated command:
kubeadm join 192.168.0.27:6443 --token ecqb8f.jffj0hzau45b4ro2 \
    --ignore-preflight-errors all \
    --discovery-token-ca-cert-hash sha256:94a0144fe419cfb0cb70b868cd43pbd7a7bf45432b3e586713b995b111bf134b
But it showed this error:
error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.0.27:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 192.168.0.27:6443: connect: connection refused
Is there any specific network configuration needed to join the remote master node?
Another issue I am facing: I cannot assign a public IP to the VM using the bridged adapter, so I removed the static IP to let the DHCP server choose one for it.
Thank you.
Environment
ASP.NET MVC app running on Docker
Docker image: microsoft/aspnet:4.7.2-windowsservercore-1803 running on Docker-for-Windows on Win10Ent host
SQL Server running on AWS EC2 in a private subnet
VPN Connection to subnet
Background
The application is able to connect to the database when the VPN is active, and everything works fine. However, when the app runs in Docker, the underlying connection to the database is refused. Since the database is in a private subnet, the VPN is needed to connect.
I am able to ping the database server as well as the general internet from a command prompt launched inside the container, so the underlying networking is working fine.
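For reference, the checks were along these lines (the container name and the external address pinged are placeholders):
docker exec -it mywebapp cmd
ping 192.168.1.100
ping 8.8.8.8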
Configuration
Dockerfile
FROM microsoft/aspnet:4.7.2-windowsservercore-1803
ARG source
WORKDIR /inetpub/wwwroot
# copy the published site; falls back to obj/Docker/publish when no build arg is passed
COPY ${source:-obj/Docker/publish} .
Docker Compose
version: '3.4'
services:
  myWebApp:
    image: ${DOCKER_REGISTRY}myWebApp
    build:
      context: .
      dockerfile: Dockerfile
The network entry is removed because the NAT network is mapped to Ethernet while I am running on WiFi, so I keep it disabled.
SQL connection string (default instance on the default port)
"Data Source=192.168.1.100;Initial Catalog=Admin;Persist Security Info=True;User ID=admin;Password=WVU8PLDR" providerName="System.Data.SqlClient"
Local network configuration: (screenshot)
Ping status: (screenshot)
Let me know what needs to be fixed. Any environment- or configuration-specific information can be provided.
After multiple iterations of different ways to address this issue, we finally figured out a solution, which we incorporated into our production environment.
The SQL Server primary instance was in a private subnet, so it could not be accessed by any application outside the subnet. SQL Enterprise Manager and other apps living on local machines can access it via VPN, as the OS tunnels that traffic to the private network. However, since Docker cannot join the VPN network easily (it would be too complicated, though maybe not actually impossible), we needed to find another way.
For this, we set up a reverse proxy in the private subnet which lives on a public IP, and is hence accessible via the public internet. This server is granted permission in the underlying security group settings to talk to the SQL Server (port 1433 is opened to the private IP).
So the application running on Docker calls the IP of the reverse proxy, which in turn routes the traffic to the SQL Server. There is a cost of one additional hop involved here, but that's something we have to live with.
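The proxy software isn't the important part; as a minimal sketch, a plain TCP reverse proxy for this could be nginx's stream module (the SQL Server's private IP below is a placeholder):
stream {
    server {
        listen 1433;                  # accept SQL traffic on the proxy's public interface
        proxy_pass 10.0.1.50:1433;    # forward to the SQL Server's private IP (placeholder)
    }
}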
Let me know if anyone can figure out a better design. Thanks
Basically my question is: How do I connect to a docker host on the network?
Background:
We have a Windows Server 2012 machine that I would like to run a docker engine from.
I've managed to get it running with docker-machine and the hyperv driver. I've also successfully gotten a docker host to work on my computer locally using VirtualBox, and have been using it.
To give other people on the network easy access to Docker on a permanent set-up, I'd like to use the docker host instance on the server with Hyper-V.
In my search for answers, I've not been able to find any mention of provisioning hosts on the network, only locally and in the cloud.
I'd like to know: what commands do I have to use to connect my local docker-machine to the server's docker host and use it as the active docker host?
There's a blog post explaining how to add an existing docker engine by IP with the generic driver, as well as some extra steps you need to go through:
ADDING AN EXISTING DOCKER HOST TO DOCKER MACHINE : A FEW TIPS
SSH Keys
The bottom section on certs explains how to get things working on the remote docker engine after connecting with the create command.
Old answer
To create/connect successfully, the local machine must be able to SSH into the remote docker engine, and not just the server hosting the docker engine. This means a key pair was generated on the local machine (using puttygen or ssh-keygen) and the OpenSSH RSA public key was added to the list of authorized keys in ~/.ssh/authorized_keys on the remote docker engine.
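A minimal sketch of that setup, assuming an OpenSSH client on the local machine (docker is boot2docker's default user):
ssh-keygen -t rsa -f %USERPROFILE%\.ssh\id_rsa
type %USERPROFILE%\.ssh\id_rsa.pub | ssh docker@192.168.1.165 "cat >> ~/.ssh/authorized_keys"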
An example of an OpenSSH RSA public key (because I get confused by these formats):
ssh-rsa AAAAB3NzaC1kc3MAAACBAJ3hB5SAF6mBXPlZlRoJEZi0KSIN+NU2iGiaXZXi9CDrgVxTp6/sc56UcYCp4qjfrZ2G3+6PWbxYso4P4YyUC+61RU5KPy4EcTJske3O+aNvec/20cW7PT3TvH1+sxwGrymD50kTiXDgo5nXdqFvibgM61WW2DGTKlEUsZys0njRAAAAFQDs7ukaTGJlZdeznwFUAttTH9LrwwAAAIAMm4sLCdvvBx9WPkvWDX0OIXSteCYckiQxesOfPvz26FfYxuTG/2dljDlalC+kYG05C1NEcmZWSNESGBGfccSYSfI3Y5ahSVUhOC2LMO3JNjVyYUnOM/iyhzrnRfQoWO9GFMaugq0jBMlhZA4UO26yJqJ+BtXIyItaEEJdc/ghIwAAAIBFeCZynstlbBjP648+mDKIvzNSS+JYr5klGxS3q8A56NPcYhDMxGn7h1DKbb2AV4pO6y+6hDrWo3UT4dLVuzK01trwpPYp6JXTSZZ12ZaXNPz7sX9/z6pzMqhX4UEfjVsLcuF+ZS6aQCPO0ZZEa1z+EEIZSD/ykLQsDwPxGjPBqw= rsa-key-20160224
Not having this key on the remote docker engine gave me an exit status 255 when I attempted to docker-machine ssh into it. At that point, only regular ssh docker@192.168.1.165 worked. Be prepared to repeat the above process.
The article also mentions sudo, but the boot2docker image used by the Hyper-V driver already allows password-less sudo so that part is already done.
Ports
Make sure TCP port 2376 is allowed through to the remote docker engine: check the server's firewall rules, physical firewall, etc.
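For example, on the Windows Server host that rule could look like this (a sketch, assuming Windows Firewall is what's blocking):
netsh advfirewall firewall add rule name="Docker TLS" dir=in action=allow protocol=TCP localport=2376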
The Command to Run
Then this command connects the remote engine to docker-machine:
> docker-machine create --driver generic --generic-ip-address 192.168.1.165 --generic-ssh-user %USERNAME% vm
> docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.10.1
vm        -        generic      Running   tcp://192.168.1.165:2376            Unknown
vm is the newly added docker engine from the network, and 192.168.1.165 is the IP of the docker engine on the server.
Certs
If this works, just copying over the certs (ca.pem, ca-key.pem, cert.pem, key.pem) from the remote server's directory %USERPROFILE%\.docker\machine\machines\<server's local docker engine name> to the same location on the local machine should keep it connected. Do not use docker-machine regenerate-certs, since this invalidates any connections that other computers might have to that docker engine, including the server itself.
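A sketch of that copy, assuming the server's machine directory is reachable over an administrative share (both paths are placeholders):
robocopy \\server\c$\Users\admin\.docker\machine\machines\vm %USERPROFILE%\.docker\machine\machines\vm ca.pem ca-key.pem cert.pem key.pem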
Active
Then finally making the engine active completes the connection.
> FOR /F "tokens=*" %G IN ('docker-machine env vm') DO %G
Note: This issue points out that the command docker-machine create --driver none --url=tcp://192.168.1.165:2376 <name> should add a remote machine's docker engine as well, once the "none" driver is working in a future version.
Being new to Docker and VMs, I have run into a blocker. I have a Node app that needs to send a POST request from a Docker container to a virtual machine or to my local machine.
I have read through the Docker documentation, but still don't understand what I need to do in order to accomplish this.
So how can I send an HTTP request from my Node app running in a Docker container to my Vagrant box?
By default, Docker creates a virtual interface (docker0) on your host machine with IP 172.17.42.1. Each container launched will have an IP in the network 172.17.42.1/16, and they will be able to reach the host machine by connecting to IP 172.17.42.1.
If you want to connect a docker container to another service running in a virtual machine from another provider (e.g. VirtualBox, VMware), the easiest way is to forward the ports needed by the service to your host machine and then, from your docker container, connect to IP 172.17.42.1. You should check your virtual machine provider's documentation to see details about this. And if you are using libvirt/KVM (or any other provider), you can use iptables to enable port forwarding.
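As a sketch with a VirtualBox/Vagrant guest whose service listens on guest port 80 (the VM name and both ports are placeholders):
VBoxManage controlvm "vagrant_default" natpf1 "web,tcp,,8080,,80"
curl http://172.17.42.1:8080/
The first command adds a NAT forwarding rule from host port 8080 to guest port 80 on the running VM; the second is what the container would then call. With Vagrant you can get the same effect with a forwarded_port entry in the Vagrantfile.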
Host: Windows 7 running latest VBox + Extension Pack
Vm1: lubuntu 3.10
Vm2: Ubuntu server 12.04.3
Problem: Can't get VMs talk/ping each other AND ping the internet at the same time
NAT: VMs have the same IP, so using ping/ssh is like checking connectivity to/connecting to yourself; they can ping the internet, but can't ping each other.
Bridged: VMs get unique IPs; they can ping each other, but not the internet.
Host-only: VMs get unique IPs; they can ping each other, but not the internet.
Internal network: the intnet needs to be defined/added on Windows 7; however, Windows 7 is not accepting the VBoxManage add command and gives errors. The VMs wait for network configuration, then another 60 seconds, and start without a network.
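For reference, an internal network normally needs a DHCP server defined on the host, along these lines (a sketch; the addresses are placeholders):
VBoxManage dhcpserver add --netname intnet --ip 192.168.10.1 --netmask 255.255.255.0 --lowerip 192.168.10.2 --upperip 192.168.10.254 --enable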
What else can I do?
Change VM to use NAT Network, generic driver... ???
edit /etc/network/interfaces?
change route?
use squid?
The following networking message pops up in the Lubuntu GUI:
network service discovery disabled
Your current network has a .local domain, which is not recommended and incompatible with the Avahi network service discovery; the service has been disabled.
Can anyone help?
Refresh your MAC address using the VirtualBox machine settings and remove the kernel's networking interface rules file so that it can be regenerated:
sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
sudo reboot
This will work for your cloned VM.
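The MAC refresh can also be done from the command line while the VM is powered off (the VM name and adapter number are placeholders):
VBoxManage modifyvm "Vm2" --macaddress1 auto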