Deploying Azure VM Bicep overwrites VNet and deletes subnets I have added

I started deploying a VM from a bicep template.
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-bicep?tabs=CLI
It creates an RG, VM, etc. as expected.
However, after I add a new subnet to the VNet, the subnet gets deleted when I rerun the VM deployment.
I want to add other VMs to the resource group without deleting the other subnets in the VNET.
Please advise.
I tried using the --mode Incremental option, but it still deletes the other subnets.
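A likely explanation: even in incremental mode, any property you do declare on a resource is applied in full, and the quickstart template declares the VNet's entire subnets array, so subnets absent from the template are removed on redeploy. One way to confirm what a rerun would delete before actually deploying is the what-if preview (resource group and file names below are placeholders):

```shell
# Preview which resources and properties a redeploy would change or delete
az deployment group what-if \
    --resource-group myRG \
    --template-file main.bicep
```

The what-if output flags subnet removals as "Delete" before anything is touched.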

Migrating cloud VMs while maintaining internal IPs

I'm working on a migration plan in GCP where we have some VMs in a project that has its own VPC. We are setting up a Shared VPC and want to move the VMs to the new VPC. However, the system owners want to maintain the existing IPs (i.e. the VPCs each have the same subnet IP ranges). There are about 30 machines that need to be migrated so shutting everything off and migrating them would be challenging. The owners want us to migrate some of the VMs each day.
Of course, the current project has a VPN configured to connect to on-prem. When we stand up the VPN in the Shared VPC, I believe that alone will cause problems, because the exchanged routes will give on-prem two routes to the same subnet IP range.
Are there ways to configure the routes to tightly restrict this? For example, define routes for each IP as we move it from one VPC to another?
Scenario: The VMs are located in a Shared VPC.
Shared VPCs cannot have overlapping subnets. Therefore, you cannot migrate VMs between subnets and maintain the same private IP address.
Scenario: The VMs are located in independent VPCs.
You can allocate a specific private IP address when creating a new VM instance. Shut down the existing VM and create an image of it. Then create a new VM, reserve a static private IP address (under Primary internal IP), and specify the image as the source for the boot disk.
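The shutdown/image/recreate flow described above might look like this with gcloud (all names, zones, and addresses are placeholders):

```shell
# Stop the VM and capture its boot disk as an image
gcloud compute instances stop old-vm --zone us-central1-a
gcloud compute images create old-vm-image \
    --source-disk old-vm --source-disk-zone us-central1-a

# Recreate it in the new VPC, pinning the desired internal IP
gcloud compute instances create new-vm \
    --zone us-central1-a \
    --subnet projects/host-project/regions/us-central1/subnetworks/shared-subnet \
    --private-network-ip 10.128.0.15 \
    --image old-vm-image
```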
However, you cannot specify overlapping or duplicate addresses for your VPN. This means that the migrated VMs will not be accessible to the VPN until you reconfigure the VPN.
My recommendation is to not even try to maintain the same private IP addresses. Migrate the VMs to the new VPC and reconfigure name resolution to use the new IP addresses.

Kubernetes: unable to get connected to a remote master node "connection refused"

Hello, I am facing a kubeadm join problem on a remote server.
I want to create a multi-server, multi-node Kubernetes cluster.
I created a Vagrantfile that creates a master node and N workers.
It works on a single server.
The master VM uses a bridged adapter so that it is reachable from the other VMs on the network.
I chose Calico as the network provider.
For the master node, here is what I do using Ansible:
1. Initialize kubeadm.
2. Install the network provider.
3. Create the join command.
For the worker node:
I execute the join command to join the running master.
I successfully created the cluster on a single hardware server.
Now I am trying to create additional worker nodes on another server on the same LAN; I can ping the master successfully.
I try to join the master node using the generated command:
kubeadm join 192.168.0.27:6443 --token ecqb8f.jffj0hzau45b4ro2 \
    --ignore-preflight-errors all \
    --discovery-token-ca-cert-hash sha256:94a0144fe419cfb0cb70b868cd43pbd7a7bf45432b3e586713b995b111bf134b
But it showed this error:
error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.0.27:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 192.168.0.27:6443: connect: connection refused
I am asking if there is any specific network configuration to join the remote master node.
Another issue I am facing: I cannot assign a public IP to the VM using the bridged adapter, so I removed the static IP to let the DHCP server choose one for it.
Thank you.
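"connection refused" on port 6443 usually means the API server is not listening on (or reachable at) the address in the join command, which is common with Vagrant when kube-apiserver binds to the NAT or host-only interface instead of the bridged one. Some quick checks, assuming standard tools and the addresses from the question:

```shell
# From the worker: is the API server port reachable at all?
nc -vz 192.168.0.27 6443

# On the master: is kube-apiserver bound to the bridged address,
# or only to a Vagrant NAT/host-only IP?
ss -tlnp | grep 6443

# If the master advertises the wrong interface, re-initialize with an
# explicit address, e.g.:
# kubeadm init --apiserver-advertise-address=192.168.0.27 ...
```

Also check that no firewall on the master (firewalld/iptables) is blocking 6443 from the second server.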

Connect webapp running in docker (windows) container to SQL Server running on AWS private subnet

Environment
ASPNET MVC App running on docker
Docker image: microsoft/aspnet:4.7.2-windowsservercore-1803 running on Docker-for-Windows on Win10Ent host
SQL Server running on AWS EC2 in a private subnet
VPN Connection to subnet
Background
The application can connect to the database when the VPN is active, and everything works fine. However, when the app runs in Docker, the connection to the database is refused. Since the database is in a private subnet, the VPN is needed to connect.
From a command prompt launched inside the container I can ping the database server as well as the general Internet, so the underlying networking is working fine.
Configuration
Dockerfile
FROM microsoft/aspnet:4.7.2-windowsservercore-1803
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
Docker Compose
version: '3.4'
services:
  myWebApp:
    image: ${DOCKER_REGISTRY}myWebApp
    build:
      context: .
      dockerfile: Dockerfile
The networks entry is removed because the NAT network is mapped to Ethernet while I am running on WiFi, so I have it disabled.
SQL Connection string (default instance on def port)
"Data Source=192.168.1.100;Initial Catalog=Admin;Persist Security Info=True;User ID=admin;Password=WVU8PLDR" providerName="System.Data.SqlClient"
(Screenshots of the local network configuration and ping status were attached here.)
Let me know what needs to be fixed. Any environment or configuration-specific information can be provided
After multiple iterations of different approaches, we finally figured out the solution that we incorporated into our production environment.
The SQL Server primary instance is in a private subnet, so it cannot be accessed by any application outside the subnet. SQL Enterprise Manager and other apps living on local machines can reach it via VPN because the OS tunnels that traffic to the private network. However, since Docker cannot easily join the VPN network (it would be too complicated, though perhaps not impossible), we needed another way in.
For this, we set up a reverse proxy in the private subnet, exposed on a public IP and therefore reachable from the public Internet. This server's security group grants it access to the SQL Server (port 1433 opened to the private IP).
So the application running in Docker calls the IP of the reverse proxy, which in turn routes the traffic to the SQL Server. There is the cost of one additional hop involved here, but that's something we have to live with.
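The answer does not name the proxy software used. As an illustration only, even a single socat relay on the public-facing host provides the same kind of TCP forwarding (the private IP is a placeholder):

```shell
# Relay incoming connections on 1433 to the SQL Server's private IP
socat TCP-LISTEN:1433,fork,reuseaddr TCP:10.0.2.100:1433
```

The app's connection string would then point Data Source at the relay host's public IP instead of the SQL Server itself.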
Let me know if anyone can figure out a better design. Thanks

Single Instance OpenStack IP Network Configuration

I'm curious about how OpenStack handles IP configuration. I have a fully working OpenStack dashboard with a static IP of 192.168.1.73/24 and want to change it to something else. It runs as a VM using RHEL/Scientific Linux/CentOS 7.5 as the guest OS.
I'm running openstack-queens (repo) -- /etc/yum.repos.d
What I've tried, without success:
1. Changing the static IP in /etc/sysconfig/network-scripts/ifcfg-eth0
2. Making sure /etc/resolv.conf reflects my new configuration
3. Replacing the IP configuration in the packstack answer file for the compute node and the rest of the services I've configured
What I have noted:
1. systemctl status -l redis.service fails when I change the IP configuration; it is active (running) with the initial configuration.
2. The virtualization daemon also fails during boot (running as KVM).
How "deep" does networking go in OpenStack, and how do I set a different IP and still have my dashboard up and running?
This was easy. What I had missed was simply re-running my packstack answer file.
First, change the IP address on the machine in /etc/sysconfig/network-scripts/ifcfg-br-ex; that is, if you have already set up networking for your OpenStack environment. If you made a backup of your ifcfg-eth0, revert to it and change to the new IP configuration.
Second, replace the new IP configuration in the packstack answer file for the compute node and the rest of the configured services.
Last but not least, re-run your packstack answer file with the new IP configuration. This requires a steady Internet connection!
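The sequence above, as a sketch (interface file, addresses, and answer-file path are placeholders):

```shell
# 1. Change the address on the external bridge
#    (or ifcfg-eth0 if br-ex isn't set up yet)
sudo sed -i 's/^IPADDR=.*/IPADDR=192.168.1.80/' \
    /etc/sysconfig/network-scripts/ifcfg-br-ex
sudo systemctl restart network

# 2. Replace the old IP everywhere in the answer file
sudo sed -i 's/192\.168\.1\.73/192.168.1.80/g' /root/answers.txt

# 3. Re-run packstack with the updated answer file
sudo packstack --answer-file /root/answers.txt
```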

How to Access Openstack on my local machine from External network?

I deployed a private cloud in OpenStack with the help of packstack. Everything is working fine: I can create new instances, launch them, use them to install software from the Internet, and delete them. The whole setup runs on my local machine as a virtual machine in VMware. I created a router, a public network, and a private network. I can access the Internet from my instances as well as from my main server. Basically everything works as expected, but I can only access my cloud from the network in which I am using it.
I want to access my Horizon dashboard and my instances from an external network. How can I do this? Currently I can only access my cloud at http://10.0.5.2/dashboard, but I want to assign a public IP to my cloud.
The dashboard link http://10.0.5.2/dashboard means you are using a NAT network (or some other internal network) IP for the OpenStack setup, so it cannot be reached from outside the VMware VM.
If you need to access Horizon from outside VMware:
1. Create two interfaces in the VM, one with NAT and the other with host-only networking.
2. Use the NAT IP for Internet access and the host-only IP as the HOST_IP for the OpenStack setup.
3. Install OpenStack; the Horizon link will then be http://<host-only IP>/dashboard.
You will then be able to access OpenStack from outside the VMware VM.
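Since the question's deployment uses packstack, the "HOST_IP" in the steps above maps to the host addresses stored in the answer file rather than a single variable. A sketch of wiring the host-only IP in (the IP, answer-file path, and sed pattern are placeholders; several CONFIG_*_HOST keys may need the same change):

```shell
# Find the host-only adapter's address inside the VM
ip -4 addr show

# Point the packstack answer file at that address, then re-run it
sudo sed -i 's/=10\.0\.5\.2/=192.168.56.10/g' /root/answers.txt
sudo packstack --answer-file /root/answers.txt
```

Horizon should then answer at http://192.168.56.10/dashboard from the host machine.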
