How do I specify a new VNet with 2 subnets when creating a VM using the Azure Management Fluent API - azure-management-api

Using the Azure Management Fluent API, how do I create a new VNet with two subnets and create two VMs, with each VM in a separate subnet?
var windowsVM = azure.VirtualMachines.Define("myWindowsVM")
    .WithRegion(Region.USWest)
    .WithNewResourceGroup(rgName)
    .WithNewPrimaryNetwork("10.0.0.0/28")
    .WithPrimaryPrivateIPAddressDynamic()
    .WithNewPrimaryPublicIPAddress("mywindowsvmdns")
    .WithPopularWindowsImage(KnownWindowsVirtualMachineImage.WindowsServer2012R2Datacenter)
    .WithAdminUsername("tirekicker")
    .WithAdminPassword(password)
    .WithSize(VirtualMachineSizeTypes.StandardD3V2)
    .Create();
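The snippet above creates the VM with an implicitly defined network containing a single subnet. To get one VNet with two subnets and a VM in each, define the network first and then attach each VM to an existing subnet. A minimal sketch, assuming the Microsoft.Azure.Management.Fluent packages; the VNet/subnet names, address ranges, and VM names below are illustrative:

// Sketch: create one VNet with two subnets, then place one VM in each.
var network = azure.Networks.Define("myVNet")
    .WithRegion(Region.USWest)
    .WithNewResourceGroup(rgName)
    .WithAddressSpace("10.0.0.0/16")
    .WithSubnet("subnet1", "10.0.1.0/24")
    .WithSubnet("subnet2", "10.0.2.0/24")
    .Create();

var vm1 = azure.VirtualMachines.Define("myWindowsVM1")
    .WithRegion(Region.USWest)
    .WithExistingResourceGroup(rgName)
    .WithExistingPrimaryNetwork(network)
    .WithSubnet("subnet1")                     // first VM goes into subnet1
    .WithPrimaryPrivateIPAddressDynamic()
    .WithoutPrimaryPublicIPAddress()
    .WithPopularWindowsImage(KnownWindowsVirtualMachineImage.WindowsServer2012R2Datacenter)
    .WithAdminUsername("tirekicker")
    .WithAdminPassword(password)
    .WithSize(VirtualMachineSizeTypes.StandardD3V2)
    .Create();

var vm2 = azure.VirtualMachines.Define("myWindowsVM2")
    .WithRegion(Region.USWest)
    .WithExistingResourceGroup(rgName)
    .WithExistingPrimaryNetwork(network)
    .WithSubnet("subnet2")                     // second VM goes into subnet2
    .WithPrimaryPrivateIPAddressDynamic()
    .WithoutPrimaryPublicIPAddress()
    .WithPopularWindowsImage(KnownWindowsVirtualMachineImage.WindowsServer2012R2Datacenter)
    .WithAdminUsername("tirekicker")
    .WithAdminPassword(password)
    .WithSize(VirtualMachineSizeTypes.StandardD3V2)
    .Create();

Defining the network explicitly up front keeps the address space under your control and lets both VMs share one resource group.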

Related

Will HttpClient API gateway calls work on a Kubernetes cluster?

Hello, I have worked on an API gateway for an identity service. I use HttpClient and currently call the identity service via localhost. My worry is about deployment: we plan to run the identity service in a cluster in Azure, so we will need to use a DNS name instead. Will the calls still work as they do now, simply by switching from localhost to the cluster's DNS name in production? Or are there other configurations that need to be done?
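In general the HTTP call itself works the same; what changes is the address, so the usual approach is to read the identity service's base address from configuration rather than hardcoding localhost, making the switch a pure config change. A minimal sketch, assuming ASP.NET Core minimal hosting; the configuration key "IdentityService:BaseUrl" and the addresses are hypothetical examples:

// Read the identity service base address from configuration so only the
// config file changes between environments, not the code.
// appsettings.Development.json -> "http://localhost:5000"
// appsettings.Production.json  -> the in-cluster service DNS name, e.g.
//                                 "http://identity-service.default.svc.cluster.local"
var builder = WebApplication.CreateBuilder(args);

var identityBaseUrl = builder.Configuration["IdentityService:BaseUrl"];

builder.Services.AddHttpClient("identity", client =>
{
    client.BaseAddress = new Uri(identityBaseUrl!);
});

var app = builder.Build();
app.Run();

Inside a Kubernetes cluster, services resolve each other through the cluster DNS (service-name.namespace.svc.cluster.local), so no code change is needed beyond pointing the configured base address at that name.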

Unable to access newly created Airflow UI (MWAA)

I am trying to create MWAA as the root user, and all my AWS services (S3 and EMR) are in North California. MWAA doesn't exist in North California, so I created it in Oregon.
I am creating it in a private network, which also required a new S3 bucket in that region for my DAGs folder.
It also needed a new VPC and private subnets, as we don't have anything in that region; I created these by clicking "Create VPC".
Now when I click on the Airflow UI, it says
"This site can't be reached". Do I need to add my IP to the security group here to access the Airflow UI?
Someone, please guide.
Thanks,
Xi
From the AWS MWAA documentation:
3. Enable network access. You'll need to create a mechanism in your Amazon VPC to connect to the VPC endpoint (AWS PrivateLink) for your Apache Airflow Web server. For example, by creating a VPN tunnel from your computer using an AWS Client VPN.
Apache Airflow access modes (AWS)
The AWS documentation suggests three different approaches for accomplishing this (tutorials are linked in the documentation):
Using an AWS Client VPN
Using a Linux Bastion Host
Using a Load Balancer (advanced)
See also: Accessing the VPC endpoint for your Apache Airflow Web server (private network access)

Azure Key Vault values inside AKS

I've been building Azure Bicep files with the goal of having our infrastructure codified.
All is going well except for AKS. From reading and experimenting, I think I have two options.
AKS runs pods with Node.js or .NET services which need environment variables like database connection strings. These can be passed in at the deploy stage of each Node.js/.NET service, or they can be 'included' in each AKS instance.
Am I on the right track, and does one option have advantages over the other?
The IaC code for your AKS should not be mixed with the deployment code of your workloads (the Node.js or .NET pods).
I would also not recommend using environment variables for secrets and connection strings. Kubernetes upstream decided that CSI (Container Storage Interface) is the way to go.
With that said, you can write a Bicep deployment that deploys AKS and Azure Key Vault. Enable the azureKeyvaultSecretsProvider add-on for AKS to sync secrets from Azure Key Vault into Kubernetes secrets, or directly as files into pods.
After this, you write your workload deployment for the Node.js and .NET pods and reference the Azure Key Vault provider for the Secrets Store CSI Driver, as sketched below. This also makes you more independent if you create more clusters.
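On the pod side, a small illustration of what this buys you: with the Secrets Store CSI driver, each selected Key Vault secret is mounted as a file under the volume's mount path, so the service reads it at startup instead of taking it from an environment variable. A sketch in C#; the mount path "/mnt/secrets-store" and the object name "DbConnectionString" are hypothetical and must match your SecretProviderClass and volume mount:

using System.IO;

// The CSI driver mounts each Key Vault secret as a file in the pod.
var connectionString = File.ReadAllText("/mnt/secrets-store/DbConnectionString").Trim();
// Hand the value to your data layer here rather than reading an env var.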

Amazon DAX client throws "No endpoints available" exception

I am trying to connect to DAX from localhost using the following code:
ClientConfig daxConfig = new ClientConfig()
    .withEndpoints("dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com:8111");
AmazonDaxClient client = new ClusterDaxClient(daxConfig);
The cluster is up and running, I've created it in a public subnet and opened port 8111 in the security group, but despite this I receive the following exception:
Caused by: java.io.IOException: No endpoints available
at com.amazon.dax.client.cluster.Cluster.leaderClient(Cluster.java:560)
at com.amazon.dax.client.dynamodbv2.ClusterDaxClient$3.getClient(ClusterDaxClient.java:154)
at com.amazon.dax.client.dynamodbv2.ClusterDaxClient$RetryHandler.makeRequestWithRetries(ClusterDaxClient.java:632)
... 10 more
Suppressed: java.io.IOException: No endpoints available
... 13 more
Suppressed: java.io.IOException: No endpoints available
... 13 more
Other answers on Stack Overflow suggest that this may be caused by an incorrectly configured security group. To test this, I launched an instance in the same VPC/subnet with the same security group, and I was able to SSH to that host (both ports 22 and 8111 are open in the security group). So there should be some other, DAX-related reason.
The firewall on my machine is turned off.
And if I SSH to a machine in EC2, I can connect to the DAX cluster:
[ec2-user@ip-10-0-0-44 ~]$ nc -z dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com 8111
Connection to dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com 8111 port [tcp/*] succeeded!
You can only connect to DAX from an EC2 machine in the same VPC as the DAX cluster. Unless your localhost is an EC2 instance in that VPC, it won't be able to connect to the DAX cluster.
If you are making the call from a Lambda function, make sure the Lambda runs in the same VPC, has been granted an IAM role with access to DAX, and that the security group opens the DAX port.
There is a way to access it from outside the VPC: create an NLB which fronts the DAX replicas, then use a VPC endpoint service to provide a link to it. You can then use the endpoints it provides to make calls.
VPC endpoint -> NLB -> DAX replica 1
                    -> DAX replica 2
You can then use the code sample below to connect to DAX:
import com.amazon.dax.client.dynamodbv2.DaxClient;

AmazonDynamoDB amazonDynamoDb = new DaxClient(
    "vpce-XXX-YYY.vpce-svc-ZZZ.us-west-2.vpce.amazonaws.com",
    8111, region, credentials);

Allow requests to Symfony endpoints only from several EC2 instances

I have a public API built with Symfony 3, running on an EC2 instance behind an AWS ELB. However, I have several background workers which have to consume this API, but only on dedicated endpoints. I have to ensure that only the workers can consume these endpoints.
I was wondering how I can implement such a structure with AWS. I'm looking at API Gateway and VPCs, but I'm kind of lost.
Do you have an idea?
If both the API server and the API consumers are running on EC2 instances, then you can easily configure the security group assigned to your API server to restrict access to only those API consumer instances. Just create a rule in the security group that opens the inbound port for your API, and use the security group(s) assigned to your API consumer instances as the source.
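If you want that rule in code rather than in the console, here is a sketch using the AWS SDK for .NET (the AWSSDK.EC2 package); the group IDs and the port are hypothetical placeholders:

using System.Collections.Generic;
using Amazon.EC2;
using Amazon.EC2.Model;

// Open the API port on the server's security group only to traffic that
// originates from the consumers' security group (not from a CIDR range).
var ec2 = new AmazonEC2Client();
await ec2.AuthorizeSecurityGroupIngressAsync(new AuthorizeSecurityGroupIngressRequest
{
    GroupId = "sg-0apiserver",                 // security group on the API server
    IpPermissions = new List<IpPermission>
    {
        new IpPermission
        {
            IpProtocol = "tcp",
            FromPort = 443,
            ToPort = 443,
            UserIdGroupPairs = new List<UserIdGroupPair>
            {
                new UserIdGroupPair { GroupId = "sg-0consumers" }  // workers' group
            }
        }
    }
});

Using a security group as the source means membership is evaluated dynamically: any instance attached to the consumers' group can reach the port, and nothing else can, with no IP lists to maintain.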
