Aurora failover leaves connections open in a read-only state - mariadb

We are using MySQL in an Aurora cluster with two instances: a master and a slave.
We are working with Spring transactions on top of a c3p0 connection pool, using the MariaDB JDBC driver (version 2.2.3).
Our URL looks like this:
jdbc:mysql:aurora:myclaster-cluster.cluster-xxxxxx.us-east-1.rds.amazonaws.com:3306/db?rewriteBatchedStatements=true
When testing failover, every few failovers we end up holding a read-only connection:
Caused by: org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException for SQL [INSERT INTO a (a1, a2, a3, a4) VALUES (?, ?, ?, ?) on duplicate key update ]; SQL state [HY000]; error code [1290]; (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement; nested exception is java.sql.SQLException: (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:645)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:866)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:927)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:937)
at com.persistence.impl.MyDao.insert(MyDao.java:52)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:75)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
... 1 common frames omitted
Caused by: java.sql.SQLException: (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:198)
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:110)
at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:228)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.executeInternal(MariaDbPreparedStatementClient.java:216)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.execute(MariaDbPreparedStatementClient.java:150)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.executeUpdate(MariaDbPreparedStatementClient.java:183)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:410)
at org.springframework.jdbc.core.JdbcTemplate$2.doInPreparedStatement(JdbcTemplate.java:873)
at org.springframework.jdbc.core.JdbcTemplate$2.doInPreparedStatement(JdbcTemplate.java:866)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:629)
... 9 common frames omitted
How can we force the driver to return connections only to the master instance? Is there a way to force Aurora to close all open connections upon failover?
Thanks

We solved the problem by implementing an ExceptionInterceptor that closes the connection, forcing the pool to create a new one.
This workaround is relevant for mysql-connector-java 5.1.47:
@Override
public SQLException interceptException(SQLException sqlEx, Connection conn) {
    if (sqlEx.getErrorCode() == READ_ONLY_ERROR_CODE) {
        log.warn("Got a read-only exception, closing the connection: {}", sqlEx.getMessage());
        closeConnection(conn);
    }
    return sqlEx;
}
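For context, the interceptor keys on MySQL error code 1290, which is what the server raises for a write against a --read-only instance (visible as "error code [1290]" in the stack trace above). A minimal, self-contained sketch of just the detection logic; the class name and constant below are our own illustration, only the error code and SQL state come from the trace:

```java
import java.sql.SQLException;

public class ReadOnlyErrorDetector {
    // MySQL raises error code 1290 (SQL state HY000) when a write statement
    // hits a server running with --read-only, as seen in the stack trace above.
    static final int READ_ONLY_ERROR_CODE = 1290;

    static boolean isReadOnlyError(SQLException e) {
        return e.getErrorCode() == READ_ONLY_ERROR_CODE;
    }

    public static void main(String[] args) {
        SQLException readOnly = new SQLException(
                "The MySQL server is running with the --read-only option "
                        + "so it cannot execute this statement",
                "HY000", READ_ONLY_ERROR_CODE);
        SQLException other = new SQLException("Communications link failure", "08S01", 0);
        System.out.println(isReadOnlyError(readOnly)); // prints "true"
        System.out.println(isReadOnlyError(other));    // prints "false"
    }
}
```

With mysql-connector-java 5.1.x, an interceptor implementing com.mysql.jdbc.ExceptionInterceptor can be registered via the exceptionInterceptors connection property on the JDBC URL, so the close-on-1290 logic runs for every statement without touching the DAO code.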

Related

Connect AWS Dax Client to localstack DynamoDB?

I am attempting to use the AWS DaxClient to connect to a localstack setup running DynamoDB.
I start localstack from the docker-compose file in their github repo here.
Then I try creating a DAX client in Java code and pointing it at that:
public class PlainJavaClass {
    static AmazonDynamoDB daxClient;

    public static void main(String[] args) throws Exception {
        AmazonDaxClientBuilder daxClientBuilder = AmazonDaxClientBuilder.standard();
        daxClientBuilder.withRegion("us-east-1").withEndpointConfiguration("localhost:4566");
        daxClient = daxClientBuilder.build();
    }
}
However I'm getting the following exception:
12:36:53.713 [main] WARN com.amazon.dax.client.dynamodbv2.ClusterDaxClient - exception starting up cluster client: java.io.IOException: failed to configure cluster endpoints from hosts: [localhost:4566]
java.io.IOException: failed to configure cluster endpoints from hosts: [localhost:4566]
at com.amazon.dax.client.cluster.Source$AutoconfSource.pull(Source.java:127)
at com.amazon.dax.client.cluster.Source$AutoconfSource.update(Source.java:59)
at com.amazon.dax.client.cluster.Source$AutoconfSource.refresh(Source.java:50)
at com.amazon.dax.client.cluster.Cluster.refresh(Cluster.java:426)
at com.amazon.dax.client.cluster.Cluster.refresh(Cluster.java:409)
at com.amazon.dax.client.cluster.Cluster.startup(Cluster.java:330)
at com.amazon.dax.client.cluster.Cluster.startup(Cluster.java:263)
at com.amazon.dax.client.dynamodbv2.ClusterDaxClient.<init>(ClusterDaxClient.java:148)
at com.amazon.dax.client.dynamodbv2.ClusterDaxClient.<init>(ClusterDaxClient.java:119)
at com.amazon.dax.client.dynamodbv2.AmazonDaxClientBuilder.build(AmazonDaxClientBuilder.java:34)
at com.example.crudspringbootdynamodb.PlainJavaClass.createDaxClient(PlainJavaClass.java:30)
at com.example.crudspringbootdynamodb.PlainJavaClass.main(PlainJavaClass.java:50)
Caused by: com.amazon.cbor.EndOfStreamException: null
at com.amazon.cbor.CborInputStream.readObject(CborInputStream.java:1340)
at com.amazon.dax.client.exceptions.DaxServiceException.pickException(DaxServiceException.java:44)
at com.amazon.dax.client.generated.DaxClientStubs.handleResponse(DaxClientStubs.java:963)
at com.amazon.dax.client.generated.DaxClientStubs.endpoints_455855874_1(DaxClientStubs.java:479)
at com.amazon.dax.client.dynamodbv2.DaxClient.endpoints(DaxClient.java:2375)
at com.amazon.dax.client.cluster.Source$AutoconfSource.pullFrom(Source.java:137)
at com.amazon.dax.client.cluster.Source$AutoconfSource.pull(Source.java:105)
... 11 common frames omitted
Suppressed: com.amazon.cbor.EndOfStreamException: null
... 18 common frames omitted
I know I'm able to connect to localstack using a normal DynamoDB client created like this:
AmazonDynamoDBClientBuilder
    .standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:4566/", "us-east-1"))
    .build();
So I'm not really sure where to go next. Is it possible to use the DAX client to connect to DynamoDB on localstack?
No, localstack does not support it yet. There is an open issue about it on GitHub (https://github.com/localstack/localstack/issues/3707).

R: postgres connection keeps timing out or breaking

I am running code in R which connects to a PostgreSQL database. The connection is defined outside the loop, but it times out and keeps breaking. If I create the connection inside the loop and kill it each time after use, we reach the limit on the number of connections.
Additionally, when we run the R code in a loop and store the outputs in the DB, it works for the first 15 minutes, but then the connection breaks, saying it cannot connect.
I get the following errors:
RS-DBI driver: (could not connect ------ on dbname "abc": could not connect to server: Connection timed out (0x0000274C/10060)
    Is the server running on host "123.456.567.890" and accepting
    TCP/IP connections on port 5432?
)Error in diagnosticTestsPg(project_path, modelbank, modelproduct, modelwaterfall,  :
  object 'conn' not found
In addition: There were 50 or more warnings (use warnings() to see the first 50)
Here, conn is the connection to the database
Is there a way to fix this or a workaround to have the connection in place until the loop runs?
id <- tryCatch(
  withCallingHandlers(
    id <- f(),
    error = function(e) {
      write.to.log(sys.calls())
    },
    warning = function(w) {
      write.to.log(sys.calls())
      invokeRestart("muffleWarning")
    }
  ),
  error = function(e) { print("recovered from error") }
)
where f() contains the db connection details.

Error when using ORDER BY with TCL/sqlite3 on server

On my virtual server (Win Server 2012 R2) I run some TCL programs with an sqlite3 database.
All parts (TCL, sqlite, the database with ~25,000 items, and the code) are located in one shared folder C:\data, which I can also access from my local PC (Win10) as drive S:. Some more programs work in the same manner; they are scheduled to run at night.
The code, now completed:
console show
set auto_path [file join [file dirname [info script]] "../lib"]
package require sqlite3
set datasource "./compdata.db3"
wm withdraw .
sqlite3 db $datasource
db eval {SELECT * FROM master_data;} {
    puts "$x1, $x2, $x3, $x4" ; update
}
db close
puts stderr ">>>Ok<<<"
exit
This runs perfectly, both locally on the server and when started from the PC.
Start command on server:
call c:\data\tcl858\bin\wish85.exe databasetest.t85
Start command on local PC:
call s:\tcl858\bin\wish85.exe databasetest.t85
If I add ORDER BY, it only runs when started from the PC:
db eval {SELECT * FROM master_data ORDER BY x1;} {
    puts "$x1, $x2, $x3, $x4" ; update
}
Started on the server, it fails with this error message:
Unable to open database file while executing "db eval {SELECT * FROM
master_data ORDER BY x1;} { puts "$x1, $x2, $x3, $x4" ; update }".

openedge db startup error

proenv>proserve dbname -S 2098 -H hostname -B 10000
OpenEdge Release 11.6 as of Fri Oct 16 19:02:26 EDT 2015
11:00:35 BROKER This broker will terminate when session ends. (5405)
11:00:35 BROKER The startup of this database requires 46Mb of shared memory. Maximum segment size is 1024Mb.
11:00:35☻ BROKER 0: dbname is a void multi-volume database. (613)
11:00:35 BROKER : Removed shared memory with segment_id: 39714816 (16869)
11:00:35 BROKER ** This process terminated with exit code 1. (8619)
I am getting the above error when I try to start the Progress database.
This is the problem:
11:00:35☻ BROKER 0: dbname is a void multi-volume database. (613)
My guess is you have just created the DB using prostrct create. You need to procopy an empty db into your db so that it has the schema tables:
procopy empty yourdbname
See: http://knowledgebase.progress.com/articles/Article/P7713
The database is void, meaning it does not have a metaschema.
First create your database using a .st file (use prostrct create), then copy the metaschema tables using emptyn, e.g.:
procopy emptyn urdbname
Then try to start your database.

ORA-12505, TNS:listener does not currently know of SID given in connect descriptor DB Error

I installed Oracle WebLogic Server and couldn't configure it; when I tried to set up the connections, some errors occurred.
I could trace it down to an issue with the database connection.
I have installed it on a single lenovo-pc, running Windows Professional x64.
In WebLogic Server:
I have given the JDBC name as "cmdemo" and the JNDI name as "jdbc/cmdemo".
I have selected Oracle's driver "(Thin) Instance Connections: Version 9.0.1 and later".
I have selected the Supports Global Transactions option with one-phase commit in the transaction options.
In the connection properties I gave the database name as "cmdemo", host name "lenovo-pc", port "1521", DB user name "exp" and DB password "exp".
When I try to "test configuration", the following error message is prompted:
Error Message:
Listener refused the connection with the following error: ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:480)
oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:413)
oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:508)
oracle.jdbc.driver.T4CConnection.(T4CConnection.java:203)
oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:510)
com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:705)
com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:458)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
...
A solution for this would be a big favour.
Regards,
Syed Hidayat
Look at the listener status; if it is down, please ask the DBA to bring the listener up:
lsnrctl status (shows the status)
If it is down, use the command lsnrctl start (to start it).
If the issue still persists, check the listener file for the instance entry and add it if it is not present, e.g.:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = F:\oracle\product\11.2.0\dbhome_1)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:F:\oracle\product\11.2.0\dbhome_1\bin\oraclr11.dll")
    )
    (SID_DESC =
      (ORACLE_HOME = F:\oracle\product\11.2.0\dbhome_1)
      (SID_NAME = example)
    )
  )
Edit the instance name from "example" to your instance name and restart the listener. You should see the instance in a ready state in the status output.
Test by connecting with userid/pwd@instance_name to check that the listener is up and connections are being routed via the service name. This should hopefully solve your problem.