Unable to connect an AWS MSK Connect cluster to an MSK cluster

I am unable to use my MSK cluster when creating the connect app with AWS MSK Connect. I get the error
You must choose an authentication method from your selected cluster.
I have enabled IAM-based authentication on the cluster along with SASL.
But when creating the MSK Connect plugin I am not able to select the auth method; the field is disabled, as shown below. Has anyone had success using an MSK Connect cluster?

The issue was a version mismatch: MSK Connect needs Kafka 2.7.1 as the minimum version, and the MSK cluster instance we were running was on 2.7.0. After upgrading, we were able to connect the MSK Connect cluster to the MSK cluster.
The other issue we had is that our MSK cluster's security group didn't have any outbound rules. MSK Connect uses the same security group as the MSK cluster, so we needed to add an outbound rule on port 9098.
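For reference, here is a rough CLI sketch of both checks; the cluster ARN, security group ID, and CIDR below are placeholders, not values from this setup.

# Check the broker software version of the MSK cluster (MSK Connect needs >= 2.7.1)
aws kafka describe-cluster \
  --cluster-arn arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abcd1234 \
  --query 'ClusterInfo.CurrentBrokerSoftwareInfo.KafkaVersion'

# Allow outbound traffic on port 9098 (the IAM auth port) from the shared security group
aws ec2 authorize-security-group-egress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=9098,ToPort=9098,IpRanges=[{CidrIp=10.0.0.0/16}]'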

Related

How can I add EFS to an Airflow deployment on Amazon EKS?

Kubernetes and EKS newbie here.
I've set up an Elastic Kubernetes Service (EKS) cluster and added an Airflow deployment on top of it using the official Helm chart for Apache Airflow. I configured git-sync and can successfully run my DAGs. For some of the DAGs, I need to save data to an Amazon EFS. I installed the Amazon EFS CSI driver on EKS following the instructions in the Amazon documentation.
Now I can create a new pod with access to the NFS, but the Airflow deployment broke and stays in a Back-off restarting failed container state. I also got the events with kubectl -n airflow get events --sort-by='{.lastTimestamp}' and I get the following messages:
TYPE REASON OBJECT MESSAGE
Warning BackOff pod/airflow-scheduler-599fc856dc-c4pgz Back-off restarting failed container
Normal FailedBinding persistentvolumeclaim/redis-db-airflow-redis-0 no persistent volumes available for this claim and no storage class is set
Warning ProvisioningFailed persistentvolumeclaim/ebs-claim storageclass.storage.k8s.io "ebs-sc" not found
Normal FailedBinding persistentvolumeclaim/data-airflow-postgresql-0 no persistent volumes available for this claim and no storage class is set
I have tried this on EKS version 1.22.
I understand from this that Airflow is expecting to get an EBS volume for its pods, but the NFS driver changed the configuration of the PVs.
The PVs before I installed the driver were these:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-###### 100Gi RWO Delete Bound airflow/logs-airflow-worker-0 gp2 1d
pvc-###### 8Gi RWO Delete Bound airflow/data-airflow-postgresql-0 gp2 1d
pvc-###### 1Gi RWO Delete Bound airflow/redis-db-airflow-redis-0 gp2 1d
After I installed the EFS CSI driver, I see the PVs have changed.
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
efs-pvc 5Gi RWX Retain Bound efs-storage-claim efs-sc 2d
I have tried deploying Airflow both before and after installing the EFS driver, and in both cases I get the same error.
How can I get access to the NFS from within Airflow without breaking the Airflow deployment on EKS? Any help would be appreciated.
As stated in the errors above, no persistent volumes available for this claim and no storage class is set and storageclass.storage.k8s.io "ebs-sc" not found, you have to deploy a storage class called efs-sc using the EFS CSI driver as a provisioner; see the sketch below.
Further documentation can be found here.
An example of creating your missing storage class and persistent volume can be found here.
These steps are also described in the AWS EKS user guide.
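For reference, here is a minimal static-provisioning sketch along the lines of the EFS CSI driver examples. The file system ID, capacity, and claim name are assumptions you would adapt to your own setup.

kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder: your EFS file system ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-storage-claim
  namespace: airflow
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
EOF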

Unable to access newly created Airflow UI (MWAA)

I am trying to create an MWAA environment as the root user, and I have all my AWS services (S3 and EMR) in North California. MWAA doesn't exist in North California, hence I created it in Oregon.
I am creating this in a private network; it also required a new S3 bucket in that region for my DAGs folder.
I see that it also needed a new VPC and private subnets, as we don't have anything in that region, which I created by clicking "Create VPC".
Now when I click on the Airflow UI, it says
"This site can't be reached". Do I need to add my IP to the security group here to access the Airflow UI?
Someone, please guide.
Thanks,
Xi
From AWS MWAA documentation:
3. Enable network access. You'll need to create a mechanism in your Amazon VPC to connect to the VPC endpoint (AWS PrivateLink) for your Apache Airflow Web server. For example, by creating a VPN tunnel from your computer using an AWS Client VPN.
Apache Airflow access modes (AWS)
The AWS documentation suggests 3 different approaches for accomplishing this (tutorials are linked in the documentation).
Using an AWS Client VPN
Using a Linux Bastion Host
Using a Load Balancer (advanced)
Accessing the VPC endpoint for your Apache Airflow Web server (private network access)
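As a quick sanity check (a sketch; the environment name and region are placeholders), you can confirm the environment's webserver access mode and grab the webserver URL it exposes. With private network access, that URL will only resolve and route once one of the mechanisms above (Client VPN, bastion host, or load balancer) is in place.

aws mwaa get-environment --name my-mwaa-env --region us-west-2 \
  --query 'Environment.[WebserverAccessMode,WebserverUrl]'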

Corda Nodes Monitoring

I have a Corda network (3 nodes + 1 notary node) running locally on my Windows system.
I am reading through this document: https://docs.corda.net/node-administration.html
Node statistics are exposed through JMX beans to a Jolokia agent that is started with each node. I see a Jolokia agent starting for each node on a different port, e.g. Jolokia: Agent started with URL http://127.0.0.1:xxxx/jolokia/
I am using the Hawtio dashboard to view the Corda node JVM statistics exposed through the Jolokia agents. While Hawtio is smart enough to discover the Jolokia agents started on a different port for each Corda node, I am not able to see the required statistics on the dashboard.
I have tried setting jmxMonitoringHttpPort in each node's node.conf to that node's Jolokia port, but the node does not start, with a message that the Jolokia agent is not running at the target port.
I have downloaded the Jolokia agent binaries and ran the agent on an unused port on the system, and configured each node.conf to point to this port, but I am still not seeing statistics for any of the nodes.
I think you can try the obligation CorDapp project (https://github.com/corda/obligation-cordapp#instructions-for-setting-up) with the runnodes script, because those nodes are started with the Jolokia agent enabled.
Alternatively, you can run a single node with java -Dcapsule.jvm.args="-javaagent:drivers/jolokia-jvm-1.3.7-agent.jar=port=7033" -jar corda.jar and see if that works.
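As a quick check (a sketch, reusing port 7033 from the command above), you can hit the agent's version endpoint directly to confirm Jolokia is actually up before pointing Hawtio at it:

# Should return a JSON document with agent and protocol details if the agent is running
curl http://127.0.0.1:7033/jolokia/version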

Connectivity error while connecting from "Aginity workbench for Redshift" tool to AWS Redshift cluster

I am trying to connect to a Redshift cluster using the Aginity tool but see the error below.
Error-message
I am able to connect to another cluster within the same AWS account. The cluster I am able to connect to is in the "us-east-1" region; the cluster I am not able to connect to is in "us-west-2". That is the only difference; all other parameters/configurations are the same.
I verified the inbound rules in the security group, the ssl-mode in the Redshift cluster parameter group, and that the Redshift role was attached to the cluster. They are all fine.
I tried googling the error message but it didn't help. I have been stuck on this for a day. Any help is highly appreciated. Thanks in advance.
Typically, 08S01 is a network communications error. You've confirmed that the AWS side is properly configured, but do you have an on-prem firewall that could be causing an issue? One way to test network connectivity is to telnet to the cluster endpoint and port to confirm that it's reachable.
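For example (a sketch; the hostname is a placeholder and 5439 is just the default Redshift port):

telnet my-cluster.xxxxxxxx.us-west-2.redshift.amazonaws.com 5439
# or, if telnet isn't available:
nc -zv my-cluster.xxxxxxxx.us-west-2.redshift.amazonaws.com 5439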
Have you tried using Aginity Pro, which uses the Redshift JDBC driver? One thing that's nice is that you can copy the JDBC connection from the connection screen to the CLI and isolate whether the issue is with the application.

Amazon DAX client throws "No endpoints available" exception

I am trying to connect to DAX from a localhost using the following code:
ClientConfig daxConfig = new ClientConfig()
.withEndpoints("dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com:8111");
AmazonDaxClient client = new ClusterDaxClient(daxConfig);
The cluster is up and running, I've created it in a public subnet and opened port 8111 in the security group, but despite this I receive the following exception:
Caused by: java.io.IOException: No endpoints available
at com.amazon.dax.client.cluster.Cluster.leaderClient(Cluster.java:560)
at com.amazon.dax.client.dynamodbv2.ClusterDaxClient$3.getClient(ClusterDaxClient.java:154)
at com.amazon.dax.client.dynamodbv2.ClusterDaxClient$RetryHandler.makeRequestWithRetries(ClusterDaxClient.java:632)
... 10 more
Suppressed: java.io.IOException: No endpoints available
... 13 more
Suppressed: java.io.IOException: No endpoints available
... 13 more
Other answers on StackOverflow suggest that this may be caused by an incorrectly configured security group. To test this, I launched an instance in the same VPC/subnet with the same security group, and I was able to ssh to that host (both port 22 and port 8111 are open in the security group). So there should be some other DAX-related reason.
The firewall on my machine is turned off.
But if I ssh to a machine in EC2, then I can connect to the DAX cluster:
[ec2-user@ip-10-0-0-44 ~]$ nc -z dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com 8111
Connection to dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com 8111 port [tcp/*] succeeded!
You can only connect to DAX from an EC2 machine in the same VPC as the DAX cluster. Unless your localhost is an EC2 instance in the same VPC, it won't be able to connect to the DAX cluster.
If you are making the call from a Lambda, make sure the Lambda runs within the same VPC, that it has an IAM role granting access to DAX, and that the DAX port is open in the security group.
There is a way to access it from outside the VPC: you will have to create an NLB that fronts the DAX replicas, then use a VPC endpoint service to provide a link that can reach it. You can then use the endpoints provided to make calls.
VPCEndpoint -> NLB -> DAX replica 1
                   -> DAX replica 2
You can then use the code sample below to connect to DAX:
import com.amazon.dax.client.dynamodbv2.DaxClient;

AmazonDynamoDB amazonDynamoDb = new DaxClient(
        "vpce-XXX-YYY.vpce-svc-ZZZ.us-west-2.vpce.amazonaws.com",
        8111, region, credentials);
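For completeness, here is a rough CLI sketch of the NLB-plus-endpoint-service plumbing that the snippet above assumes is already in place. Every ID, name, IP, and service name below is a placeholder, and the DAX node IPs are registered as NLB targets on port 8111.

# Target group with the DAX node IPs as targets
aws elbv2 create-target-group --name dax-tg --protocol TCP --port 8111 \
  --vpc-id vpc-0123456789abcdef0 --target-type ip
aws elbv2 register-targets --target-group-arn <dax-tg-arn> \
  --targets Id=10.0.1.10 Id=10.0.2.10

# Internal NLB forwarding 8111 to the target group
aws elbv2 create-load-balancer --name dax-nlb --type network --scheme internal \
  --subnets subnet-aaaa1111 subnet-bbbb2222
aws elbv2 create-listener --load-balancer-arn <dax-nlb-arn> --protocol TCP --port 8111 \
  --default-actions Type=forward,TargetGroupArn=<dax-tg-arn>

# Expose the NLB as a VPC endpoint service, then create an interface endpoint to it
aws ec2 create-vpc-endpoint-service-configuration --network-load-balancer-arns <dax-nlb-arn>
aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id vpc-consumer \
  --service-name com.amazonaws.vpce.us-west-2.vpce-svc-ZZZ \
  --subnet-ids subnet-cccc3333 --security-group-ids sg-0123456789abcdef0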
