We want to set up an association between an existing EC2 instance profile and an EC2 instance using SaltStack.
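One option we're considering is shelling out to the AWS CLI from Salt; a minimal sketch run from the Salt master, assuming the AWS CLI and credentials are available on the minion (the minion ID, instance ID, and profile name are placeholders):

# 'web01', the instance ID, and the profile name below are placeholders.
# associate-iam-instance-profile attaches an existing instance profile to a running instance.
salt 'web01' cmd.run 'aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=my-existing-profile'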
I started deploying a VM from a Bicep template.
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-bicep?tabs=CLI
It creates a resource group, VM, etc., as expected.
After adding a new subnet to the VNet, the subnet gets deleted when I run the VM deployment again.
I want to add other VMs to the resource group without deleting the other subnets in the VNET.
Please advise.
I tried using the --mode Incremental option, but it still deletes the other subnets.
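For reference, this is roughly how I redeploy and compare the subnets before and after (the resource group, VNet, and template file names here are placeholders):

# List the current subnets so they can be compared after a redeploy
# (resource group and VNet names are placeholders).
az network vnet subnet list --resource-group myRG --vnet-name myVNet --output table

# Redeploy the quickstart template. Incremental is already the default mode for
# resource-group deployments, but subnets declared inline in the VNet resource's
# subnets array still get replaced by exactly what the template declares.
az deployment group create --resource-group myRG --template-file main.bicep --mode Incremental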
Hello, I am facing a kubeadm join problem on a remote server.
I want to create a multi-server, multi-node Kubernetes Cluster.
I created a Vagrantfile to create a master node and N workers.
It works on a single server.
The master VM uses a bridged adapter, to make it accessible to the other VMs on the network.
I chose Calico as the network provider.
For the master node, this is what I've done:
Using Ansible:
Initialize kubeadm.
Install the network provider.
Create the join command (sketched below).
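The join command comes from the master; my Ansible task roughly captures the output of something like:

# Run on the master: prints a fresh "kubeadm join ..." command
# with a new token and the CA cert hash.
kubeadm token create --print-join-command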
For Worker node:
I execute the join command to join the running master.
I successfully created the cluster on a single hardware server.
Now I am trying to create regular worker nodes on another server on the same LAN; I can ping the master successfully.
To join the master node, I use the generated command:
kubeadm join 192.168.0.27:6443 --token ecqb8f.jffj0hzau45b4ro2 \
    --ignore-preflight-errors all \
    --discovery-token-ca-cert-hash \
    sha256:94a0144fe419cfb0cb70b868cd43pbd7a7bf45432b3e586713b995b111bf134b
But it showed this error:
error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.0.27:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 192.168.0.27:6443: connect: connection refused
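From the remote worker I can at least check whether anything is listening on the API server port (assuming nc and curl are installed; the IP and port are the ones from the join command above):

# Basic reachability checks from the worker towards the master's API server.
nc -vz 192.168.0.27 6443
curl -k https://192.168.0.27:6443/version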
I am asking whether there is any specific network configuration needed to join the remote master node.
Another issue I am facing: I cannot assign a public IP to the VM using the bridged adapter, so I removed the static IP and let the DHCP server choose one for it.
Thank you.
I've set up a bare-metal cluster and want to provide different types of shared storage to my applications. One of them is an S3 bucket that I mount via goofys into a pod, which exports it via NFS. I then use the NFS client provisioner to mount the share and automatically provide volumes to pods.
Leaving aside the performance concerns, the issue is that the NFS client provisioner mounts the NFS share via the node's OS, so when I set the server name to the NFS pod's service, this is passed on to the node, and the node cannot mount the share because it has no route to the service/pod.
The only solution so far has been to configure the service as a NodePort, block external connections via ufw on the node, and configure the client provisioner to connect to 127.0.0.1:<nodeport>.
I'm wondering if there is a way for the node to reach a cluster service using the service's DNS name?
I've managed to get around my issue by configuring the NFS client provisioner to use the service's cluster IP instead of the DNS name: the node is unable to resolve the name to the IP, but it does have a route to the IP. Since the IP will remain allocated unless I delete the service, this is workable, but of course it can't be automated easily, as a redeployment of the NFS server Helm chart will change the service's IP.
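For reference, this is roughly how I look up the service's cluster IP and feed it to the provisioner (the service name, namespace, chart path, and values keys are placeholders and depend on the charts in use):

# Grab the NFS service's cluster IP; service name and namespace are placeholders.
NFS_IP=$(kubectl get svc nfs-server -n storage -o jsonpath='{.spec.clusterIP}')

# Point the provisioner at the IP instead of the DNS name. The chart path and the
# nfs.server / nfs.path values keys are assumptions; adjust for the chart in use.
helm upgrade --install nfs-client ./nfs-client-provisioner --set nfs.server="$NFS_IP" --set nfs.path=/export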
I'd suggest you configure a domain name for the NFS service IP on your external DNS server, then point your node at that domain name to access the NFS service. As for the cluster IP of the NFS service, you can pin the IP in your Helm chart with a customized values file.
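A minimal sketch of pinning the IP, assuming your NFS server chart exposes the service spec in its values (the key path, chart path, and IP below are placeholders):

# Pin the service's cluster IP in a values file so redeploys keep the same IP.
# Pick an unused IP from the cluster's service CIDR; the service.clusterIP key
# path and the chart path are assumptions that depend on the chart.
cat > values-nfs.yaml <<'EOF'
service:
  clusterIP: 10.96.100.50
EOF
helm upgrade --install nfs-server ./nfs-server-chart -f values-nfs.yaml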
I deployed a private cloud in OpenStack with the help of Packstack. Everything is working fine: I can create new instances, launch them, use them to install software from the internet, and delete them. The whole setup runs on my local machine as a virtual machine in VMware. I created a router, a public network, and a private network. I can access the internet from my instances as well as from my main server. Basically everything is working as expected, but I can only access my cloud from the network in which I am running it.
I want to access my Horizon dashboard and my instances from an external network. How can I do this? Currently I can only access my cloud by IP as http://10.0.5.2/dashboard, but I want to assign a public IP to my cloud.
From the dashboard/Horizon link "http://10.0.5.2/dashboard", it looks like you are using a NAT network (or some other internal network) IP for the OpenStack setup, so you can't access it from outside the VMware VM.
If you need to access Horizon from outside VMware:
Create two interfaces in the VM, one with NAT and the other with host-only networking.
Use the NAT IP for internet access and the host-only networking IP as HOST_IP for the OpenStack setup (sketched after this list).
Install OpenStack, and then you will have the Horizon link as http://Host-only_network_IP/dashboard.
Then you will be able to access OpenStack from outside the VMware VM.
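A rough sketch of that step with Packstack, assuming a host-only IP of 192.168.56.10 (answer-file keys differ between Packstack versions):

# Generate an answer file, point the controller at the host-only IP, then install.
# The CONFIG_CONTROLLER_HOST key and the IP below are assumptions; keys vary by Packstack version.
packstack --gen-answer-file=answers.txt
sed -i 's/^CONFIG_CONTROLLER_HOST=.*/CONFIG_CONTROLLER_HOST=192.168.56.10/' answers.txt
packstack --answer-file=answers.txt
# Horizon should then be reachable from the host at http://192.168.56.10/dashboard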
I have an Auto Scaling group and an AWS CodeDeploy setup for a VPC with one public subnet. The VPC instances are capable of accessing all AWS services through an IAM role.
The base AMI is Ubuntu with the CodeDeploy agent installed on it.
Whenever a scaling event triggers, the Auto Scaling group launches an instance and the instance goes into "Waiting for Lifecycle Event".
AWS CodeDeploy triggers a deployment which goes into the "In Progress" state; it remains in that state for more than an hour and then fails.
If, within that hour, I manually assign an Elastic IP, the deployment succeeds immediately.
Is having a public/Elastic IP a requirement for CodeDeploy to succeed on VPC instances?
How can I get CodeDeploy to succeed without the need for a public IP?
Have you set up a NAT instance so that the instances can access the internet without a public-facing IP address? The EIP doesn't matter if the instance otherwise has access to the internet. Your code is deployed by the CodeDeploy agent polling the endpoint, so if it can't hit the endpoint, it will never work.
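For example, with a NAT gateway (the managed equivalent of a NAT instance) in the public subnet and a default route in the private route table; all IDs below are placeholders:

# Create a NAT gateway in the public subnet (needs an Elastic IP allocation),
# then send the private route table's outbound traffic through it.
aws ec2 create-nat-gateway --subnet-id subnet-0public --allocation-id eipalloc-0abc
aws ec2 create-route --route-table-id rtb-0private --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc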
The endpoint that the CodeDeploy agent talks to is not the public domain name like codedeploy.amazonaws.com. The agent talks to the command control endpoint, which is "https://codedeploy-commands.#{cfg.region}.amazonaws.com", according to https://github.com/aws/aws-codedeploy-agent/blob/29d4ff4797c544565ccae30fd490aeebc9662a78/vendor/gems/codedeploy-commands-1.0.0/lib/aws/plugins/deploy_control_endpoint.rb#L9. So you'll need to make sure the private instance can access this command control endpoint.
To connect your VPC to CodeDeploy, you define an interface VPC endpoint for CodeDeploy. An interface endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to a supported AWS service. The endpoint provides reliable, scalable connectivity to CodeDeploy without requiring an internet gateway, network address translation (NAT) instance, or VPN connection.
https://docs.aws.amazon.com/codedeploy/latest/userguide/vpc-endpoints.html
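A sketch of creating the endpoints from the CLI (region, VPC, subnet, and security group IDs are placeholders; EC2-based deployments need both the CodeDeploy endpoint and the command-control endpoint):

# Interface endpoint for the agent's command-control traffic.
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.codedeploy-commands-secure \
    --subnet-ids subnet-0priv --security-group-ids sg-0abc --private-dns-enabled
# Interface endpoint for the CodeDeploy API itself.
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.codedeploy \
    --subnet-ids subnet-0priv --security-group-ids sg-0abc --private-dns-enabled
# Note: using the -commands-secure endpoint also requires enabling the agent's
# secure-endpoint option in its configuration (see the linked docs).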