Sqoop to Teradata: column length issue

I am trying to Sqoop a table's data from Hive to Teradata and got this error:
Error: com.teradata.connector.common.exception.ConnectorException: java.sql.SQLException: [Teradata JDBC Driver] [TeraJDBC 15.00.00.20] [Error 1186] [SQLState HY000] Parameter 8 length is 67618 bytes, which is greater than the maximum 64000 bytes that can be set.
Can anyone please suggest what change I need to make here? Column 8 is a very long string in the Hive table, which is why I defined its data type in Teradata as VARCHAR(50000), but it still fails.
Error: com.teradata.connector.common.exception.ConnectorException: java.sql.SQLException: [Teradata JDBC Driver] [TeraJDBC 15.00.00.20] [Error 1186] [SQLState HY000] Parameter 8 length is 67618 bytes, which is greater than the maximum 64000 bytes that can be set.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDriverJDBCException(ErrorFactory.java:94)
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDriverJDBCException(ErrorFactory.java:74)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.internalSetString(TDPreparedStatement.java:1121)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.setString(TDPreparedStatement.java:1095)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.setObject(TDPreparedStatement.java:1631)
at com.teradata.connector.teradata.TeradataObjectArrayWritable.write(TeradataObjectArrayWritable.java:232)
at com.teradata.connector.teradata.TeradataBatchInsertOutputFormat$TeradataRecordWriter.write(TeradataBatchInsertOutputFormat.java:142)
at com.teradata.connector.teradata.TeradataBatchInsertOutputFormat$TeradataRecordWriter.write(TeradataBatchInsertOutputFormat.java:114)
at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.write(ConnectorOutputFormat.java:107)
at com.teradata.connector.common.ConnectorOutputFormat$ConnectorFileRecordWriter.write(ConnectorOutputFormat.java:65)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at com.teradata.connector.common.ConnectorMMapper.map(ConnectorMMapper.java:129)
at com.teradata.connector.common.ConnectorMMapper.run(ConnectorMMapper.java:117)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

The string column in Hive is 67618 bytes long, but you are mapping it to VARCHAR(50000) in Teradata, so the error is expected.
You should use CLOB(70000) for this column instead.
The Sqoop export should then work.
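As a sketch of the fix, assuming a hypothetical target table (the database, table, and column names below are placeholders, not taken from the original post): Teradata caps VARCHAR at 64000 bytes per value, while a CLOB column can hold up to roughly 2 GB, so CLOB(70000) comfortably fits the 67618-byte value.

```sql
-- Hypothetical table; substitute your own database, table, and column names.
CREATE TABLE my_db.my_table (
    id        INTEGER,
    -- was: long_text VARCHAR(50000)  -> fails for a 67618-byte value,
    -- and VARCHAR cannot exceed 64000 bytes anyway
    long_text CLOB(70000)             -- LOB columns are not subject to the 64000-byte cap
);
```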

Related

AcquireJobsRunnableImpl throws PSQLException: SSL error: readHandshakeRecord

With a successfully migrated Alfresco repository (5.2 to 7.0), I get PSQLException: SSL error: readHandshakeRecord (see stack trace below) every morning at 4:00, which then causes the repository to stop responding.
Could someone please help me decipher this stack trace? Why is this job running around 4 am? I can't find a matching Quartz job. Does anyone know how to manually trigger this call so I can investigate the problem? At first I thought it might be related to the contentStoreCleaner running at 4:00 am, but disabling that job doesn't change anything.
The only workaround I have found so far is to disable the Activiti workflow engine.
2021-08-08 04:35:39,396 ERROR [org.activiti.engine.impl.jobexecutor.AcquireJobsRunnableImpl] [Thread-46] exception during job acquisition: Could not open JDBC Connection for transaction; nested exception is org.postgresql.util.PSQLException: SSL error: readHandshakeRecord
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is org.postgresql.util.PSQLException: SSL error: readHandshakeRecord
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:309)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.startTransaction(AbstractPlatformTransactionManager.java:400)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:373)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:137)
at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:45)
at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:31)
at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:40)
at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:35)
at org.activiti.engine.impl.jobexecutor.AcquireJobsRunnableImpl.run(AcquireJobsRunnableImpl.java:54)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.postgresql.util.PSQLException: SSL error: readHandshakeRecord
at org.postgresql.ssl.MakeSSL.convert(MakeSSL.java:43)
at org.postgresql.core.v3.ConnectionFactoryImpl.enableSSL(ConnectionFactoryImpl.java:534)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:149)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:223)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1188)
at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:265)
... 9 more
Caused by: javax.net.ssl.SSLException: readHandshakeRecord
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1335)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:411)
at org.postgresql.ssl.MakeSSL.convert(MakeSSL.java:41)
... 22 more
Suppressed: java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/sun.security.ssl.SSLSocketOutputRecord.encodeAlert(SSLSocketOutputRecord.java:81)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:380)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:292)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:450)
... 24 more
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.base/java.net.SocketOutputStream.socketWrite0(Native Method)
at java.base/java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:110)
at java.base/java.net.SocketOutputStream.write(SocketOutputStream.java:150)
at java.base/sun.security.ssl.SSLSocketOutputRecord.flush(SSLSocketOutputRecord.java:251)
at java.base/sun.security.ssl.HandshakeOutStream.flush(HandshakeOutStream.java:89)
at java.base/sun.security.ssl.Finished$T13FinishedProducer.onProduceFinished(Finished.java:679)
at java.base/sun.security.ssl.Finished$T13FinishedProducer.produce(Finished.java:658)
at java.base/sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:436)
at java.base/sun.security.ssl.Finished$T13FinishedConsumer.onConsumeFinished(Finished.java:1011)
at java.base/sun.security.ssl.Finished$T13FinishedConsumer.consume(Finished.java:874)
at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:443)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:421)
at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:182)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:171)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1418)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1324)
... 25 more
The stack trace was misleading: the JDBC connection problem was caused by memory filling up due to the Alfresco trashcan cleaner module, i.e. OutOfMemoryError: Java heap space because getChildAssocs was called with no limit.
We have ~4 million nodes in the trashcan, and on every batch run the module retrieves all of those nodes again and again until the memory fills up.

How can I solve this problem loading Grakn schema and data at localhost:48555?

I'm using Grakn Core 1.8.4 on Windows 10. The Grakn server and Grakn storage start up normally, but when I try to load a schema, Grakn returns the following error message:
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: ID block allocation on partition(30)-namespace(0) timed out in 2.000 min. Please check server logs for the stack trace.
I already checked that no other process is listening on the same port. I also disabled the firewall, which did not solve the problem. Does anyone have any suggestions on how I should proceed?
Below is part of the log:
2021-01-05 14:19:24,387 [JanusGraphID(30)(0)[0]] WARN g.c.g.d.i.ConsistentKeyIDAuthority - Temporary storage exception while acquiring id block - retrying in PT0.32S: {}
grakn.core.graph.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 10001) in PT0.016S => too slow, threshold is: PT0.01S
at grakn.core.graph.diskstorage.idmanagement.ConsistentKeyIDAuthority.getIDBlock(ConsistentKeyIDAuthority.java:320)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool$IDBlockGetter.call(StandardIDPool.java:262)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool$IDBlockGetter.call(StandardIDPool.java:232)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
2021-01-05 14:19:24,688 [grpc-request-handler-1] ERROR grakn.core.server.rpc.SessionService - An error has occurred
grakn.core.graph.core.JanusGraphException: ID block allocation on partition(30)-namespace(0) timed out in 2.000 min
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool.waitForIDBlockGetter(StandardIDPool.java:146)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool.nextBlock(StandardIDPool.java:165)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool.nextID(StandardIDPool.java:185)
at grakn.core.graph.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:334)
at grakn.core.graph.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:184)
at grakn.core.graph.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:154)
at grakn.core.graph.graphdb.database.StandardJanusGraph.assignID(StandardJanusGraph.java:416)
at grakn.core.graph.graphdb.transaction.StandardJanusGraphTx.addVertex(StandardJanusGraphTx.java:636)
at grakn.core.graph.graphdb.transaction.StandardJanusGraphTx.addVertex(StandardJanusGraphTx.java:653)
at grakn.core.graph.graphdb.transaction.StandardJanusGraphTx.addVertex(StandardJanusGraphTx.java:649)
at grakn.core.concept.structure.ElementFactory.addVertexElement(ElementFactory.java:104)
at grakn.core.concept.manager.ConceptManagerImpl.addTypeVertex(ConceptManagerImpl.java:188)
at grakn.core.server.session.TransactionImpl.createMetaConcepts(TransactionImpl.java:1297)
at grakn.core.server.session.SessionImpl.initialiseMetaConcepts(SessionImpl.java:123)
at grakn.core.server.session.SessionImpl.<init>(SessionImpl.java:85)
at grakn.core.server.session.SessionFactory.session(SessionFactory.java:115)
at grakn.core.server.rpc.ServerOpenRequest.open(ServerOpenRequest.java:40)
at grakn.core.server.rpc.SessionService.open(SessionService.java:122)
at grakn.protocol.session.SessionServiceGrpc$MethodHandlers.invoke(SessionServiceGrpc.java:339)
at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:172)
at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:331)
at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:814)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.util.concurrent.TimeoutException: null
at java.util.concurrent.FutureTask.get(Unknown Source)
at grakn.core.graph.graphdb.database.idassigner.StandardIDPool.waitForIDBlockGetter(StandardIDPool.java:126)
... 26 common frames omitted
If you get this error, you can try to increase the wait time with the following two parameters in the grakn.properties file:
# The number of milliseconds that the JanusGraph id pool manager will wait before
# giving up on allocating a new block of ids
root.ids.renew-timeout=10
# The number of milliseconds the system waits for an ID block reservation to be acknowledged by the storage backend
ids.authority.wait-time=10
I solved this by increasing the parameters below from 10 to 100 in the .\grakn\server\conf\grakn.properties configuration file.
I am not sure whether this affects performance.
# The number of milliseconds that the JanusGraph id pool manager will wait before
# giving up on allocating a new block of ids
root.ids.renew-timeout=100
# The number of milliseconds the system waits for an ID block reservation to be acknowledged by the storage backend
ids.authority.wait-time=100

Unable to connect cassandra through R

I am trying to follow an example given at http://www.datastax.com/dev/blog/big-analytics-with-r-cassandra-and-hive to connect R with Cassandra. Following is my code:
library(RJDBC)
#Load in the Cassandra-JDBC diver
cassdrv <- JDBC("org.apache.cassandra.cql.jdbc.CassandraDriver", list.files("D:/cassandra/lib",pattern="jar$",full.names=T))
#Connect to Cassandra node and Keyspace
casscon <- dbConnect(cassdrv, "jdbc:cassandra://127.0.0.1:9042/demodb")
When I run the above code in R, I get the following error:
Error in .jcall(drv#jdrv, "Ljava/sql/Connection;", "connect", as.character(url)[1], :
java.sql.SQLNonTransientConnectionException: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2113929216)!
The Cassandra server window shows the following error for the above code:
ERROR 14:41:26,671 Unexpected exception during request
java.lang.ArrayIndexOutOfBoundsException: 34
at org.apache.cassandra.transport.Message$Type.fromOpcode(Message.java:106)
at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:168)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
I tried changing the port from 9042 to 9160, but then the request does not reach the server at all.
I also tried increasing thrift_framed_transport_size_in_mb from 15 to 500, but the error is the same.
Cassandra is otherwise running fine, and the database can be connected to and updated easily through DevCenter.
R version: R-3.1.0,
Cassandra version: 2.0.8,
Operating System: Windows XP,
Firewall: off
Finally, I was able to connect to Cassandra through R. I followed these steps:
I updated Java 7 and R to the latest versions.
Then I reinstalled RJDBC, rJava, and DBI.
Then I used the following code and successfully connected:
library(RJDBC)
# Load the Cassandra JDBC driver with every jar in the Cassandra lib directory
drv <- JDBC("org.apache.cassandra.cql.jdbc.CassandraDriver", list.files("D:/cassandra/lib/", pattern="jar$", full.names=TRUE))
# Add the client utility jar to the classpath
.jaddClassPath("D:/mysql-connector-java-3.1.14/cassandra-clientutil-1.0.2.jar")
# Connect over the Thrift port (9160) rather than the native protocol port (9042)
conn <- dbConnect(drv, "jdbc:cassandra://127.0.0.1:9160/demodb")
res <- dbGetQuery(conn, "select * from emp")
# print values
res

RHive: Only simple `select` works?

I am running RHive (https://github.com/nexr/RHive) with Hadoop 2.2.0.2.0.6.0-101 on CentOS (Linux 2.6.32-431.5.1.el6.x86_64)
RHive can run a basic select query:
rhive.query("select * from simple")
But RHive fails to execute queries with conditions or ordering. For example:
rhive.query("select * from simple order by rating")
Error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
rhive.query("select * from simple where name == 'Bond'")
Error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
Is there any way to make it support HiveQL in full?
Thanks!
When connecting with RHive, provide the Hive user's username and password, or the credentials of any HDFS user, as below:
rhive.connect(host="IP of Hive server", port=10000, hiveServer2=TRUE, user="hive",
password="")

Query is being executed infinitely in IntelliJ IDEA's Database console

Does anyone know why an SQL query might execute indefinitely in IntelliJ IDEA's Database console unless I hit the "Terminate" button?
I have IntelliJ IDEA 11 and HSQLDB. When I run a simple query, it keeps printing something like this in the output:
[2011-12-15 00:19:32] 9 row(s) affected in 2724 ms
[2011-12-15 00:19:32] 9 row(s) affected in 2724 ms
[2011-12-15 00:19:32] 9 row(s) affected in 2724 ms
[2011-12-15 00:19:32] 9 row(s) affected in 2724 ms
and after I hit "Terminate" it prints:
java.lang.RuntimeException: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
java.net.ConnectException: Connection refused: connect
Caused by: java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
java.net.ConnectException: Connection refused: connect
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(RemoteObjectInvocationHandler.java:178)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(RemoteObjectInvocationHandler.java:132)
at $Proxy149.getWarnings(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
... 11 more
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:375)
at java.net.Socket.<init>(Socket.java:189)
at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
... 22 more
It's not a big deal, but it is a little annoying.
Any ideas why this happens and how to fix it?
