I am trying to connect to DAX from localhost using the following code:
import com.amazon.dax.client.dynamodbv2.AmazonDaxClient;
import com.amazon.dax.client.dynamodbv2.ClientConfig;
import com.amazon.dax.client.dynamodbv2.ClusterDaxClient;

ClientConfig daxConfig = new ClientConfig()
        .withEndpoints("dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com:8111");
AmazonDaxClient client = new ClusterDaxClient(daxConfig);
The cluster is up and running; I've created it in a public subnet and opened port 8111 in the security group, but despite this I receive the following exception:
Caused by: java.io.IOException: No endpoints available
at com.amazon.dax.client.cluster.Cluster.leaderClient(Cluster.java:560)
at com.amazon.dax.client.dynamodbv2.ClusterDaxClient$3.getClient(ClusterDaxClient.java:154)
at com.amazon.dax.client.dynamodbv2.ClusterDaxClient$RetryHandler.makeRequestWithRetries(ClusterDaxClient.java:632)
... 10 more
Suppressed: java.io.IOException: No endpoints available
... 13 more
Suppressed: java.io.IOException: No endpoints available
... 13 more
Other answers on StackOverflow suggest that this may be caused by an incorrectly configured security group. To test this, I launched an instance in the same VPC/subnet with the same security group, and I was able to ssh to that host (both ports 22 and 8111 are open in the security group). So there must be some other, DAX-related reason.
The firewall on my machine is turned off.
But if I ssh to a machine in EC2, then I can connect to the DAX cluster:
[ec2-user@ip-10-0-0-44 ~]$ nc -z dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com 8111
Connection to dax-cluster.yhdqu5.clustercfg.dax.use1.cache.amazonaws.com 8111 port [tcp/*] succeeded!
You can only connect to DAX from an EC2 machine in the same VPC as the DAX cluster. Unless your localhost is an EC2 instance in the same VPC, it won't be able to connect to the DAX cluster.
If you are making the call from a Lambda function, make sure the Lambda is running within the same VPC, that its IAM role grants access to DAX, and that its security group opens the DAX port.
There is a way to access it from outside the VPC: create an NLB that fronts the DAX replicas, then use a VPC endpoint service to expose it. You can then use the endpoint it provides to make calls.
VPC Endpoint -> NLB -> DAX replica 1
                    -> DAX replica 2
You can then use the code sample below to connect to DAX:
import com.amazon.dax.client.dynamodbv2.DaxClient;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;

AmazonDynamoDB amazonDynamoDb = new DaxClient(
        "vpce-XXX-YYY.vpce-svc-ZZZ.us-west-2.vpce.amazonaws.com",
        8111, region, credentials);
I have a Python gRPC server running on AWS Fargate (configured very similarly to this AWS guide here), and another AWS Fargate task (call it the "client") that attempts to make a connection to my gRPC server (also using Python gRPC). However, the client is unable to make a call to my server, failing with the following error:
<_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"@1619057124.216955000","description":"Failed to pick subchannel",
"file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":5397,
"referenced_errors":[{"created":"@1619057124.216950000","description":"failed to connect to all addresses",
"file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc",
"file_line":398,"grpc_status":14}]}"
Based on my reading online, it seems like there are myriad situations in which this error is thrown, and I'm having trouble figuring out which one pertains to my case. Here is some additional information:
When running client and server locally, I am able to successfully connect by having the client connect to localhost:[PORT]
I have configured an application load balancer target group, following the guide from AWS here, that makes health check requests to the / route of my gRPC server using the gRPC protocol and expects gRPC response code 12 (UNIMPLEMENTED). These health check requests are coming back as expected, which I believe implies the load balancer is able to successfully communicate with the server (although I could be misunderstanding)
I configured a service discovery system (following this guide here) that should allow me to reach my gRPC server within my VPC via the name service-name.dev.co.local. I can confirm that the corresponding DNS record exists in Route 53, and when I SSH into an instance in my VPC, I am indeed able to ping service-name.dev.co.local successfully.
Anyone have any ideas? Would appreciate any and all advice, and I'm happy to answer any further questions.
Thank you for your help!
On your gRPC server, bind to 0.0.0.0:[port] rather than localhost, and expose this port with TCP on your container.
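Since the question uses Python gRPC, here is a minimal sketch of that binding; the servicer registration and port 50051 are placeholders for your own code:
from concurrent import futures
import grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
# Register your generated servicer here, e.g.:
# my_service_pb2_grpc.add_MyServiceServicer_to_server(MyServicer(), server)
server.add_insecure_port("0.0.0.0:50051")  # bind all interfaces, not "localhost:50051"
server.start()
server.wait_for_termination()
Binding to localhost inside a container leaves the server unreachable from outside the container's network namespace, which produces exactly this "failed to connect to all addresses" symptom.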
I have set up a cluster within Kubernetes using JGroups, and the cluster appears to form correctly. Each node has a local IP and a public IP. When I connect to one of the nodes using the public IP, all is fine, but the list of available nodes returned to the client (a WildFly instance) contains the local IPs of the nodes rather than their public ones, even though I have defined the connector with the public IP:
<connectors>
   <connector name="netty-connector">tcp://{public ip}:61616</connector>
</connectors>
I then configured the broadcast group as
<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <broadcast-period>5000</broadcast-period>
      <jgroups-file>jgroups-file_ping.xml</jgroups-file>
      <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>
and the discovery group as
<discovery-groups>
   <discovery-group name="my-discovery-group">
      <jgroups-file>jgroups-file_ping.xml</jgroups-file>
      <jgroups-channel>activemq_broadcast_channel</jgroups-channel>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>
and finally the cluster connection as
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <retry-interval>500</retry-interval>
      <use-duplicate-detection>true</use-duplicate-detection>
      <message-load-balancing>STRICT</message-load-balancing>
      <max-hops>1</max-hops>
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>
Whenever I force a node to shut down, the client reconnects but fails, reporting the local IP of the node. I was under the impression that the connector defined in the broker is what gets broadcast to the other members of the cluster, but it uses the local IP. Is that correct?
WildFly runs and sends and receives messages, but every few minutes I get the following log:
14:27:31,463 WARN [org.apache.activemq.artemis.service.extensions.xa.recovery] (Periodic Recovery) AMQ122015: Can not connect to XARecoveryConfig [transportConfiguration=[TransportConfiguration(name=, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?trustStorePassword=****&port=61616&sslEnabled=true&host=x-x-x-x&trustStorePath=client-ts], discoveryConfiguration=null, username=username, password=****, JNDI_NAME=java:/RemoteJmsXA] on auto-generated resource recovery: ActiveMQNotConnectedException[errorType=NOT_CONNECTED message=AMQ119007: Cannot connect to server(s). Tried with all available servers.]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:797)
at org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.connect(ActiveMQXAResourceWrapper.java:311)
at org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.getDelegate(ActiveMQXAResourceWrapper.java:239)
at org.apache.activemq.artemis.service.extensions.xa.recovery.ActiveMQXAResourceWrapper.recover(ActiveMQXAResourceWrapper.java:69)
at org.apache.activemq.artemis.service.extensions.xa.ActiveMQXAResourceWrapperImpl.recover(ActiveMQXAResourceWrapperImpl.java:106)
at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.xaRecoveryFirstPass(XARecoveryModule.java:634)
at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:226)
at com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule.periodicWorkFirstPass(XARecoveryModule.java:171)
at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.doWorkInternal(PeriodicRecovery.java:770)
at com.arjuna.ats.internal.arjuna.recovery.PeriodicRecovery.run(PeriodicRecovery.java:382)
This is the expected behavior as you are connecting through a load balancer. You can work around that by setting useTopologyForLoadBalancing=false and specifying servers explicitly in your connection URL.
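For example, a connection URL along these lines (the hostnames are placeholders) pins the client to the listed servers instead of the topology the cluster broadcasts:
(tcp://broker-0.example.com:61616,tcp://broker-1.example.com:61616)?useTopologyForLoadBalancing=false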
When using WildFly, the connection factory or pooled connection factory must be configured with the attribute use-topology-for-load-balancing set to false. This is how to set this from the CLI (replace remote-artemis with your actual name):
/subsystem=messaging-activemq/pooled-connection-factory=remote-artemis:write-attribute(name=use-topology-for-load-balancing, value=false)
Got it working eventually by creating a Service per pod and putting the public IP in the connector definition for each node.
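For anyone trying the same approach, a hedged sketch of what one such per-pod Service might look like, assuming the brokers run as a StatefulSet (the names and label value are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: artemis-node-0
spec:
  type: LoadBalancer                               # gives this pod its own externally reachable IP
  selector:
    statefulset.kubernetes.io/pod-name: artemis-0  # automatic per-pod label on StatefulSet pods
  ports:
    - name: netty
      port: 61616
      targetPort: 61616
The external IP each Service receives is then what goes into that node's <connector> definition.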
I'm trying to run WordPress on my EC2 instance with an RDS MySQL database.
Here's the tutorial I've followed https://cristianocastro.net/instalando-wordpress-em-um-servidor-amazon-aws-pt-2-3/
But when I access, in the browser, the EC2 instance that connects to RDS, it shows this message
Error establishing a database connection
In the security group I allowed all traffic, inbound and outbound, from anywhere.
Here's a screenshot of my security group rules.
Appreciate the help
PS: I saw this question, with no answer: Can't Connect to RDS mysql DB from Wordpress on Amazon linux EC2 Instance
Edit: rules from the security group of the EC2 instance that will access RDS.
Based on the info you provided, it seems your RDS instance is publicly accessible via the internet and will accept connections from any host (you might want to restrict this later).
Assuming that your software setup (OS, PHP, MySQL libraries, WordPress) is okay and you're using the right parameters in the WP configuration (host, port, user, password, db), one of the remaining issues might be that the Security Group attached to your EC2 instance doesn't allow connectivity on port 3306 (MySQL) towards the IP address of your RDS instance.
It appears that you have:
An Amazon EC2 instance
An Amazon RDS instance
The typical security configuration would be:
The Amazon RDS database configured as Publicly Accessible = No
A Security Group (EC2-SG) on the Amazon EC2 instance that permits inbound traffic from your desired locations (eg port 22 for SSH), and default Outbound rules permitting all traffic outbound
A Security Group (RDS-SG) on the Amazon RDS database instance that permits inbound traffic on port 3306 from EC2-SG
That is, RDS-SG should specifically reference EC2-SG to permit the inbound connection.
The app running on the EC2 instance should reference the RDS database via the DNS Name shown in the RDS console.
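As a concrete illustration, the RDS-SG inbound rule can be created with the AWS CLI like this (the sg- IDs are placeholders for RDS-SG and EC2-SG):
aws ec2 authorize-security-group-ingress \
    --group-id sg-11111111 \
    --protocol tcp --port 3306 \
    --source-group sg-22222222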
My Shiny application connects to my Redshift instance locally, but I get the following error when I try to run the application once I publish it to shinyapps.io
Warning: Error in connection_create: could not connect to server:
Connection timed out
Is the server running on host "redshift.bi.tmmp.io" (23.23.70.97)
and accepting TCP/IP connections on port 5439?
How do I allow the connection through once the app is published online?
A time-out normally indicates that the application could not connect to the server (as opposed to the server rejecting the connection).
You should start by checking the Security Group associated with the Redshift instance to confirm that it will accept incoming connections from the application.
The best configuration is:
A Security Group on your application instance (App-SG), which is associated with the application instance(s)
A Security Group on the Redshift instance (DB-SG) that permits inbound Redshift (port 5439) connections from App-SG, which is associated with the Redshift instance
That is, the Redshift security group refers to the application security group.
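Illustratively, the DB-SG inbound rule has the same shape as the RDS example above, just on the Redshift port (the sg- IDs are placeholders for DB-SG and App-SG):
aws ec2 authorize-security-group-ingress \
    --group-id sg-33333333 \
    --protocol tcp --port 5439 \
    --source-group sg-44444444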
Set Up
Laptop with:
Kafka in a VirtualBox VM (Vagrant): port 9092 forwarded from the laptop's localhost
Kubernetes cluster in a VirtualBox VM: minikube
Desired Outcome
Microservices on my minikube cluster can fire messages at the Kafka VM.
Note that this works in Google Container Engine (GKE).
Actual Outcome
From the laptop I can use a console producer to send messages to the Kafka VM, and it happily obliges, adding these to the topic. But when a microservice from the Kubernetes cluster sends a message, the message is received but not added to the topic.
Instead I get the error on the microservice ...
Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for generated-test-script-0
If I tail kafka-request.log I see ...
[2017-02-08 21:57:05,891] TRACE Completed request:{api_key=3,api_version=1,correlation_id=0,client_id=producer-5} -- {topics=[generated-test-script]} from connection 10.0.2.15:9092-10.0.2.2:50124;totalTime:0,requestQueueTime:0,localTime:0,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (kafka.request.logger)
While in the "success" case, when I simply use a console producer on the laptop, I see 2 lines: the first the same as above, then I guess another ACK ...
[2017-02-08 22:08:12,764] TRACE Completed request:{api_key=3,api_version=2,correlation_id=0,client_id=console-producer} -- {topics=[test]} from connection 10.0.2.15:9092-10.0.2.2:50748;totalTime:6,requestQueueTime:0,localTime:6,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (kafka.request.logger)
[2017-02-08 22:08:13,799] TRACE Completed request:{api_key=0,api_version=2,correlation_id=1,client_id=console-producer} -- {acks=1,timeout=1500,topic_data=[{topic=test,data=[{partition=0,record_set=java.nio.HeapByteBuffer[pos=0 lim=39 cap=39]}]}]} from connection 10.0.2.15:9092-10.0.2.2:53696;totalTime:22,requestQueueTime:1,localTime:21,remoteTime:0,responseQueueTime:0,sendTime:0,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS (kafka.request.logger)
Conclusion And Thoughts
So there is no ERROR as such on the Kafka server side, just on the client. My guess is that this is a network setup issue (NAT?) whereby the microservice in the virtual Kubernetes cluster can talk to my Kafka VM but the reply route is dropped?
Kafka has to return the metadata on the first message sent, so setting the batch size to 0 or "acks" = 0 doesn't really help as a hack, because the broker is still required to send this metadata back first.
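For what it's worth, the address the broker hands back in that metadata comes from its listener configuration, so my understanding is that the fix lives in the Kafka VM's server.properties, along these lines (the advertised host is a placeholder for an address the minikube VM can actually reach):
listeners=PLAINTEXT://0.0.0.0:9092
# the address clients are told to use for subsequent connections; must be reachable from minikube
advertised.listeners=PLAINTEXT://192.168.99.1:9092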
Any thoughts or pointers would be great as I really want to run this cluster and Kafka VM locally for dev work.