I'm on an EC2 instance in the same VPC as my DAX cluster. The cluster's security group is the default one (allow all traffic) and I still can't connect.
Here's an abbreviated code sample:
from amazondax import AmazonDaxClient
dax = AmazonDaxClient(
    endpoint_url="mycluster.i5cagb.clustercfg.dax.use1.cache.amazonaws.com:8111"
)
After waiting a bit, I get this error:
Failed to configure cluster endpoints from
[('mycluster.i5cagb.clustercfg.dax.use1.cache.amazonaws.com', 8111)]
I tried diagnosing with nc too:
$ nc -zv mycluster.i5cagb.clustercfg.dax.use1.cache.amazonaws.com 8111
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection to 172.31.43.69 failed: Connection timed out.
Ncat: Trying next address...
Ncat: Connection to 172.31.58.224 failed: Connection timed out.
Ncat: Trying next address...
Ncat: Connection timed out.
As @AbdelrahmanElhaddad pointed out in his comment, this ended up being a security group issue.
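For anyone who hits the same timeout even though connectivity "should" work: the fix is to make the security group attached to the DAX cluster accept TCP 8111 from the client. Below is a minimal boto3 sketch of that rule; the two security group IDs are hypothetical placeholders.

import boto3  # assumes AWS credentials are already configured

ec2 = boto3.client("ec2")

# Hypothetical IDs: the first group is attached to the DAX cluster,
# the second to the EC2 instance running the DAX client.
ec2.authorize_security_group_ingress(
    GroupId="sg-0dax0000000000000",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8111,  # DAX listens on port 8111
        "ToPort": 8111,
        "UserIdGroupPairs": [{"GroupId": "sg-0client0000000000"}],
    }],
)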
I am trying to integrate an Apache Atlas instance I have running with Apache Airflow. After setting up the connection in airflow.cfg, I tried running a DAG from the Airflow scheduler and got the following error in the log:
[2021-02-02 20:50:47,958] {connectionpool.py:752} WARNING - Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f464b856950>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/atlas/v2/types/typedefs
[2021-02-02 20:50:47,960] {taskinstance.py:1150} ERROR - HTTPConnectionPool(host='localhost', port=21000): Max retries exceeded with url: /api/atlas/v2/types/typedefs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f464b8650d0>: Failed to establish a new connection: [Errno 111] Connection refused'))
My airflow.cfg is configured as follows:
[lineage]
backend = airflow.lineage.backend.atlas.AtlasBackend
[atlas]
username = <username>
password = <password>
host = localhost
port = 21000
I have tried changing the host to http://localhost as well. I am not sure where to investigate in Atlas to identify why the connection is being refused.
Connection refused means that either no service is listening on the configured port or the hostname is wrong.
Try replacing localhost with the FQDN.
A good way to get this right is to open the Atlas UI and copy the hostname from its URL into the config.
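A quick way to verify the hostname before putting it into airflow.cfg is to call the same endpoint Airflow is failing on. Here is a minimal sketch with requests, assuming basic auth and the default port 21000; atlas-host is a placeholder for whatever the Atlas UI URL shows:

import requests

host = "atlas-host"  # placeholder: use the hostname from the Atlas UI URL, not localhost
resp = requests.get(
    f"http://{host}:21000/api/atlas/v2/types/typedefs",
    auth=("<username>", "<password>"),
    timeout=10,
)
print(resp.status_code)  # 200 means Airflow's lineage backend should be able to reach it too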
I was able to solve the problem by adding the --hostname flag when starting the Docker container for Atlas. I then used the hostname I provided as the host in airflow.cfg.
I am trying to connect to an SFTP server from a test app using curl 7.74 and libssh2 1.8.2. This works on an Ubuntu 16.04 desktop but not in my embedded environment.
The following error is thrown:
root@phyboard-segin-imx6ul-2:~# ./sftptest
st.st_size:[0]
Trying 115.249.3.130:2222...
Connected to sftp-xxxxxxxx.mydomain.net (xxx.249.3.130) port 2222 (#0)
Failure establishing ssh session: -43, Failed getting banner
Closing connection 0
curl told us 2
Could somebody please throw some light on this? :)
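"Failed getting banner" means the TCP connection was accepted but libssh2 never received the server's SSH identification string. A minimal sketch, run from the embedded board, to check whether the banner arrives at all (host and port taken from the log above):

import socket

# An SSH/SFTP server sends its identification string (e.g. "SSH-2.0-...")
# as the very first bytes after accepting the TCP connection.
with socket.create_connection(("sftp-xxxxxxxx.mydomain.net", 2222), timeout=15) as s:
    print(s.recv(256).decode(errors="replace"))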
I am still using Corda 1.0. When I try to redeploy nodes with existing data, I get the error below during start-up, but I am still able to access the nodes. If I clear the data and redeploy the nodes, I don't see this error.
Logs can be found in : C:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\kotlin-source\build\nodes\xxxxxxxx\logs
Database connection url is : jdbc:h2:tcp://xxxxxxxxx/node
E 18:38:46+0530 [main] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
Incoming connection address : xxxxxxxxxxxx
Listening on port : 10014
RPC service listening on port : 10015
Loaded CorDapps : corda-finance-1.0.0, kotlin-source-0.1, corda-core-1.0.0
Node for "xxxxxxxxxxx" started up and registered in 213.08 sec
Welcome to the Corda interactive shell.
Useful commands include 'help' to see what is available, and 'bye' to shut down the node.
Wed May 23 18:39:20 IST 2018>>> E 18:39:24+0530 [Thread-6 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$3@4a532271)] core.client.createConnection - AMQ214016: Failed to create netty connection
javax.net.ssl.SSLException: handshake timed out
at io.netty.handler.ssl.SslHandler.handshake(...)(Unknown Source) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
This looks like Artemis failed to connect to the node, which usually means the node did not start cleanly.
Look at the log and check whether a previously started Corda node is still running and occupying the node's ports.
If any legacy Corda nodes have not been killed, try ps -ef | grep java to see whether another Java process is still alive. In particular, look at the port numbers and check whether they overlap.
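To check whether a leftover process is still holding the ports from the start-up banner, here is a minimal sketch that simply tries to bind them (ports taken from the log above):

import socket

# If bind() fails with "Address already in use", another process
# (probably a leftover node JVM) is still holding that port.
for port in (10014, 10015):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("0.0.0.0", port))
        print(f"port {port} is free")
    except OSError as exc:
        print(f"port {port} is in use: {exc}")
    finally:
        s.close()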
I am having trouble creating an SSL connection using RPostgreSQL to an AWS hosted PostgreSQL database.
Here is what I've tried so far:
Created the PostgreSQL database on AWS.
Set the database parameter "rds.force_ssl" to 1.
Downloaded the AWS public key from https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
Tested the connection from a Windows command prompt with psql (it works).
Executed the following in R:
library(RPostgreSQL)
cert <- paste0("C:/Users/johnr/Downloads/", "rds-combined-ca-bundle.pem")
dbname <- paste0("dbname=", "flargnog", " ", "sslrootcert=", cert, " ", "sslmode=verify-full")
host <- "xxxxxx.xxxxx.us-region-2.rds.amazonaws.com"
con <- dbConnect(dbDriver("PostgreSQL"), user="username", host=host, port=5432, dbname=dbname, password="abcd1234!")
I receive an error message after executing the last statement:
Error in postgresqlNewConnection(drv, ...) :
RS-DBI driver: (could not connect username@xxxxxx.xxxxx.us-region-2.rds.amazonaws.com on dbname "flargnog"
If I change the rds.force_ssl setting to 0 (and remove the ssl stuff from dbname) the connection works just fine.
I have looked at other posts on Stackoverflow related to this issue. This and this seem to indicate an SSL connection is not possible due to issues with RPostgreSQL. However, this post indicates that you can.
Any guidance would be appreciated!
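To separate an RPostgreSQL limitation from a network or certificate problem, the same verify-full connection can be attempted outside of R. A minimal sketch with psycopg2 (not part of the original setup; the connection parameters are copied from the question):

import psycopg2  # assumption: psycopg2 is installed; it passes the libpq SSL options straight through

conn = psycopg2.connect(
    host="xxxxxx.xxxxx.us-region-2.rds.amazonaws.com",
    port=5432,
    dbname="flargnog",
    user="username",
    password="abcd1234!",
    sslmode="verify-full",
    sslrootcert="C:/Users/johnr/Downloads/rds-combined-ca-bundle.pem",
)
print("SSL connection established")  # reaching here means the certificate and SSL setup are fine
conn.close()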
You can try to SSH to a host that can reach the RDS instance (you cannot SSH into RDS itself; use e.g. an EC2 box in the same VPC) with e.g. PuTTY and port-forward your local port 5432 to the remote port 5432. Once the SSH tunnel is open, in R just connect to localhost:5432...
Here is how to port-forward using putty:
http://www.akadia.com/services/ssh_putty.html
Here is how this works via command-line:
https://gist.github.com/magnetikonline/3d239b82265398568f31
P.S.: Make sure the host you SSH to is in a security group that accepts SSH connections on port 22.
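The same port-forward can also be scripted instead of using PuTTY. A minimal sketch with the sshtunnel package; the bastion host, user, and key path here are assumptions. While the tunnel is up, the R code connects to localhost:5432 exactly as psql would.

from sshtunnel import SSHTunnelForwarder  # assumption: pip install sshtunnel

tunnel = SSHTunnelForwarder(
    ("bastion.example.com", 22),        # hypothetical EC2 host that can reach RDS
    ssh_username="ec2-user",            # assumed user
    ssh_pkey="/home/user/.ssh/id_rsa",  # assumed key path
    remote_bind_address=("xxxxxx.xxxxx.us-region-2.rds.amazonaws.com", 5432),
    local_bind_address=("127.0.0.1", 5432),
)
tunnel.start()
# ... connect from R (or psql) to localhost:5432 while the tunnel is open ...
tunnel.stop()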
Got an EndpointWriter error:
14/10/30 23:12:29 ERROR EndpointWriter: AssociationError [akka.tcp://sparkWorker@node001:35249] -> [akka.tcp://sparkExecutor@node001:7088]: Error [Association failed with [akka.tcp://sparkExecutor@node001:7088]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://sparkExecutor@node001:7088]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: node001/10.69.144.56:7088
node001 and 10.69.144.56 are both the node itself. My understanding is that Akka was trying to connect to a local port and got rejected. The executor port was fixed to '7087'.
Thanks for your help!
The usual reason for connection refused is that there is nothing listening on the port. If the executor is listening on 7087 and Akka is trying to connect to 7088, there is probably nothing listening there. Check your code or configuration to see whether you ended up with 7088 instead of 7087.
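A quick connectivity sketch to confirm which of the two ports actually has a listener on the worker node (hostname taken from the error above):

import socket

# "Connection refused" here means nothing is listening on that port,
# which matches the Akka association error above.
for port in (7087, 7088):
    try:
        socket.create_connection(("node001", port), timeout=5).close()
        print(f"{port}: something is listening")
    except OSError as exc:
        print(f"{port}: {exc}")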