We have been using Mesibo on Azure Container Instances for two years.
Last week we restarted the container and it pulled the latest image from Docker Hub. Now we have two issues:
We can open only 5 ports, but Mesibo On-Premise requires 10 ports. Are all of them necessary?
In the log console I see that the public IP is the egress IP but the ingress IP is different; if I try to connect to a port on the public IP/DNS, I cannot establish any connection. Your documentation says: "If you are running a VM instance having only a private address, pass the -p parameter before the token." I never added this parameter on Azure Container Instances before. Is it necessary, and how can I add it?
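(As a sketch of where such a parameter could go on Azure Container Instances: the container's startup command can be overridden with the --command-line option of az container create. The image name, command path, IP, token, and port list below are placeholders, and the real Mesibo invocation may differ.)

# Override the startup command so -p <public IP> comes before the token.
# Image name, command path, IP, token, and ports are placeholders.
az container create \
    --resource-group myResourceGroup \
    --name mesibo-onprem \
    --image mesibo/mesibo-onpremise \
    --ip-address Public \
    --ports 443 5222 80 \
    --command-line "/usr/local/bin/mesibo -p 20.30.40.50 MY_APP_TOKEN"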
Thanks
I have a VirtualBox VM hosted on my desktop, using bridged mode.
On that VM I have installed a one-node Service Fabric cluster (secured with a self-signed X.509 cert).
I have set up my router to forward ports 19000-19100 to that guest machine's IP address.
I am on AT&T Fiber, so I am forwarding those ports to my router, and the router then forwards them on to the guest OS at a specific IP address.
From my host machine I am able to reach the Service Fabric Explorer, and I can deploy services to the cluster from Visual Studio.
I am not able to deploy to it from Azure DevOps, and my friend is not able to see the Explorer either.
In DevOps I have configured a service connection, put the certificate in it, etc. In my pipeline I am writing to the hosts file (my public IP and the host name I need, sit.mysite.com as an example). One thing to note is that I was previously able to deploy to SF when the cluster was running on my main machine (as opposed to in a VM, as it currently is).
A friend (living in another state) is not able to view my Service Fabric Explorer. I provided the cert to him and he has imported it. He has an entry in his hosts file as well. When he goes to https://sit.mysite.com:19080 (the SF Explorer address), he gets a 403 Not Authorized, even though the cert is being picked up correctly. He can also ping my IP address, so we have connectivity.
Whatever is stopping him from hitting my SF is likely also what is preventing me from deploying from Azure DevOps, but I have no idea what it could be...
Any ideas?
Figured it out. It turns out my cluster config file was referencing localhost for the node instead of the IP (or a DNS name), and that made the fabric not respond to requests from outside:
"nodes": [
{
"nodeName": "vm0",
"iPAddress": "IP_ADDRESS_HERE",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r0",
"upgradeDomain": "UD0"
}
],
Environment
ASP.NET MVC app running on Docker
Docker image: microsoft/aspnet:4.7.2-windowsservercore-1803, running on Docker for Windows on a Windows 10 Enterprise host
SQL Server running on AWS EC2 in a private subnet
VPN Connection to subnet
Background
The application is able to connect to the database when the VPN is active, and everything works fine when the app runs directly on the host. However, when the app runs in Docker, the underlying connection to the database is refused. Since the database is in a private subnet, the VPN is needed to connect.
I am able to ping the database server as well as the general internet successfully from a command prompt launched inside the container, so the underlying networking is working fine.
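(A side note: a successful ping only proves ICMP reachability, not that SQL Server's TCP port is open. Assuming PowerShell is available in the Windows container, a quick TCP check against the address used in the connection string below would look like this:)

# From inside the container: verify TCP connectivity to SQL Server's port
Test-NetConnection 192.168.1.100 -Port 1433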
Configuration
Dockerfile
FROM microsoft/aspnet:4.7.2-windowsservercore-1803
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
Docker Compose
version: '3.4'
services:
  myWebApp:
    image: ${DOCKER_REGISTRY}myWebApp
    build:
      context: .
      dockerfile: Dockerfile
The network entry has been removed because NAT is mapped to the Ethernet adapter while I am running on WiFi, so I keep it disabled.
SQL connection string (default instance on the default port)
"Data Source=192.168.1.100;Initial Catalog=Admin;Persist Security Info=True;User ID=admin;Password=WVU8PLDR" providerName="System.Data.SqlClient"
(Screenshots of the local network configuration and ping status were attached here.)
Let me know what needs to be fixed; I can provide any environment- or configuration-specific information.
After multiple iterations of different approaches to this issue, we finally figured out a solution, which we have incorporated into our production environment.
The SQL Server primary instance is in a private subnet, so it cannot be accessed by any application outside the subnet. The SQL Enterprise Manager and other apps living on local machines can access it via VPN, since the OS tunnels that traffic to the private network. However, since a Docker container cannot easily join the VPN network (it would be too complicated, though perhaps not actually impossible), we needed another way in.
For this, we set up a reverse proxy in front of the private subnet; it lives on a public IP and is therefore accessible via the public Internet. This server is granted permission in the underlying security group settings to talk to the SQL Server (port 1433 is opened to the private IP).
So the application running on Docker calls the IP of the reverse proxy, which in turn routes the traffic to the SQL Server. There is the cost of one additional hop involved, but that is something we have to live with. A sketch of the proxy configuration is below.
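For illustration only (the choice of nginx and the addresses are assumptions, not our exact setup), a plain TCP pass-through for SQL Server can be built with the nginx stream module:

# /etc/nginx/nginx.conf on the reverse proxy host
stream {
    server {
        # Port the Docker app connects to on the proxy's public IP
        listen 1433;
        # Forward to the SQL Server's private IP (placeholder address)
        proxy_pass 10.0.2.100:1433;
    }
}

The app's connection string then uses the proxy's public IP or DNS name as its Data Source instead of the database's private address.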
Let me know if anyone can figure out a better design. Thanks
I am trying to set up a Kaa cluster with 3 kaa-node servers. I would like to know whether each node (bootstrap service & operations service) must have its own public IP address; otherwise, will endpoints be unable to access them?
I have only one public IP address and one domain name, and each node has its own local IP address. How can I set up this Kaa cluster?
On each node:
open the kaa-node.properties file in the /etc/kaa-node/conf directory;
change the thrift_host and transport_public_interface properties to the node's local IP address, as sketched below.
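For example, on a node whose local address is 10.0.0.11 (a placeholder), the relevant lines in kaa-node.properties would look roughly like this:

# /etc/kaa-node/conf/kaa-node.properties
thrift_host=10.0.0.11
transport_public_interface=10.0.0.11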
Then you need to integrate the kaa-node with the following services: Zookeeper, and the SQL and NoSQL databases.
For more information, refer to the corresponding documentation page.
As an alternative, you can set up a Kaa cluster using a Docker environment; again, see the documentation page.
Please take into account that the Docker extension is supported from Kaa version 0.10.
I have an Auto Scaling group and AWS CodeDeploy set up for a VPC with one public subnet. The instance in the VPC is capable of accessing all AWS services through an IAM role.
The base AMI is Ubuntu with the CodeDeploy agent installed on it.
Whenever a scaling event triggers, the Auto Scaling group launches an instance and the instance goes into "Waiting for Lifecycle Event".
AWS CodeDeploy triggers a deployment that enters the "In Progress" state; it remains in that state for more than an hour and then fails.
If, within that hour, I manually assign an Elastic IP, the deployment succeeds immediately.
Is having a public/Elastic IP a requirement for CodeDeploy to succeed on VPC instances?
How can I get CodeDeploy to succeed without the need for a public IP?
Have you set up a NAT instance so that the instances can access the internet without a public-facing IP address? The EIP doesn't matter as long as the instance otherwise has access to the internet. Your code is deployed by the CodeDeploy agent polling the endpoint, so if it can't reach the endpoint, it will never work.
The endpoint that the CodeDeploy agent talks to is not the public domain name like codedeploy.amazonaws.com. The agent talks to the command control endpoint, which is "https://codedeploy-commands.#{cfg.region}.amazonaws.com", according to https://github.com/aws/aws-codedeploy-agent/blob/29d4ff4797c544565ccae30fd490aeebc9662a78/vendor/gems/codedeploy-commands-1.0.0/lib/aws/plugins/deploy_control_endpoint.rb#L9. So you'll need to make sure the private instance can access this command control endpoint.
To connect your VPC to CodeDeploy, you define an interface VPC endpoint for CodeDeploy. An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported AWS service. The endpoint provides reliable, scalable connectivity to CodeDeploy without requiring an internet gateway, network address translation (NAT) instance, or VPN connection.
https://docs.aws.amazon.com/codedeploy/latest/userguide/vpc-endpoints.html
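As a sketch (the IDs and region below are placeholders), such an interface endpoint can be created with the AWS CLI; note the codedeploy-commands service name, which is the endpoint the agent actually polls:

# Create an interface VPC endpoint for the CodeDeploy commands service
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc123 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.codedeploy-commands \
    --subnet-ids subnet-0abc123 \
    --security-group-ids sg-0abc123 \
    --private-dns-enabled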
Being new to Docker and VMs, I have run into a blocker. I have a Node app that needs to send a POST request from a Docker container to a virtual machine or to my local machine.
I have read through the Docker documentation, but I still don't understand what I need to do in order to accomplish this.
So how can I send an HTTP request from my Node app running in a Docker container to my Vagrant box?
By default, Docker creates a virtual interface (docker0) on your host machine with IP 172.17.42.1. Each container launched gets an IP from the 172.17.42.1/16 network, and containers can reach the host machine by connecting to IP 172.17.42.1.
If you want to connect a Docker container to a service running in a virtual machine from another provider (e.g. VirtualBox, VMware), the easiest way is to forward the ports needed by the service to your host machine and then, from your Docker container, connect to IP 172.17.42.1. You should check your virtual machine provider's documentation for the details of this; a sketch for Vagrant/VirtualBox follows. If you are using libvirt/KVM (or any other provider), you can use iptables to enable the port forwarding.
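For example, with a Vagrant/VirtualBox guest (port 3000 is an assumption about where your service listens), a forwarded port in the Vagrantfile exposes the service on the host:

# Vagrantfile: forward guest port 3000 to port 3000 on the host
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 3000, host: 3000
end

The Node app inside the container would then POST to http://172.17.42.1:3000, i.e. the docker0 address of the host, which now relays to the Vagrant box.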