I have a Coherence 3.7 cluster and I am trying to connect to it using a simple Java application client. If I try this with the Java serialization implementation and POF disabled, it works fine for me. But when I enable POF I start getting an exception; below is the stack trace. I have my pof-config on both the client and cluster side. Any hints about what could be causing this issue would be very helpful.
2012-09-04 13:40:04.811/1.531 Oracle Coherence GE 3.7.1.4 <Error> (thread=ExtendTcpCacheService:TcpInitiator, member=n/a): An exception occurred while encoding a OpenConnectionRequest for Service=ExtendTcpCacheService:TcpInitiator: java.lang.IllegalArgumentException: unknown user type: com.tangosol.util.UUID
at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:430)
at com.tangosol.io.pof.ConfigurablePofContext.getUserTypeIdentifier(ConfigurablePofContext.java:419)
at com.tangosol.coherence.component.net.extend.Channel.getUserTypeIdentifier(Channel.CDB:7)
at com.tangosol.io.pof.PofBufferWriter.writeUserType(PofBufferWriter.java:1671)
at com.tangosol.io.pof.PofBufferWriter.writeObject(PofBufferWriter.java:1623)
According to the message you're getting, the TCP initiator is failing because it cannot serialize a Coherence type (com.tangosol.util.UUID). Did you include the default POF config in your pof-config file?
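For reference, a minimal pof-config.xml that pulls in the default definitions would look roughly like this (the 1001 type id and com.example.MyValue are placeholders for your own classes):

<pof-config>
  <user-type-list>
    <!-- pulls in the Coherence built-in POF types, including com.tangosol.util.UUID -->
    <include>coherence-pof-config.xml</include>

    <!-- your own types; ids below 1000 are reserved for Coherence -->
    <user-type>
      <type-id>1001</type-id>
      <class-name>com.example.MyValue</class-name>
    </user-type>
  </user-type-list>
</pof-config>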
I have a project using Spring Cloud Stream with the Kafka Streams binder. For the output of a stream, I am using Avro, with the Serde provided by Confluent (io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde).
I am able to use it with the Confluent Schema Registry. Serialization and deserialization take place correctly.
However, I wanted to see if we can use the Spring Cloud Schema Registry Server instead of the Confluent one. I configured a standalone Schema Registry server and set the schema registry in my project to it (changed the schemaRegistryClient.endpoint and schema.registry.url properties).
When I tried it out, it seems Spring Cloud is able to work with the standalone server. It registers the schema available in the resources folder as a .avsc file. However, when I send a message, the Confluent serializer continues to treat it as a Confluent Schema Registry (which has different REST endpoints from the Spring Schema Registry). As a result, it gets a 405 response code.
We get the following exception (partial stack trace):
org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: <my-avro-schema>
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:230)
It seems to me that there are two possibilities:
Spring Schema Registry Server can work only with the content-type provided by Spring (specified as content-type: application/*+avro) and not with the native Serde provided by Confluent, or
There is an issue with the project configuration.
Can someone help me figure out which one it is? If it is the second one, can someone point out what is wrong?
Each schema registry provider requires its own proprietary SerDe library. For example, if you would like to integrate the AWS Glue Schema Registry with Kafka, you would need Amazon's SerDe library. Likewise, Confluent's SerDe library expects Confluent's Schema Registry at the address specified in the schema.registry.url property.
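To illustrate, here is a minimal sketch of how the Confluent Serde is typically configured (MyAvroRecord and the registry URL are placeholders); whatever URL you give it is assumed to speak the Confluent REST API:

import java.util.HashMap;
import java.util.Map;
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;

Map<String, Object> serdeConfig = new HashMap<>();
// must point at a Confluent-compatible registry, not the Spring Cloud Schema Registry server
serdeConfig.put("schema.registry.url", "http://localhost:8081");

SpecificAvroSerde<MyAvroRecord> valueSerde = new SpecificAvroSerde<>();
valueSerde.configure(serdeConfig, false); // false = configure as a value Serde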
I deployed a new Cosmos DB with the MongoDB API, and I also have an App Service in Azure. The App Service is a simple PHP application which connects to a collection in Cosmos DB. It works perfectly except for a connection problem. Cosmos DB sometimes throws this: No suitable servers found (serverSelectionTryOnce set): [connection closed calling ismaster on 'http://apidb.documents.azure.com:10250 '].
What could be the problem? Do I need to increase RUs or change the consistency settings?
PHP: 7.0.18, mongodb driver: 1.2.8, libmongoc version: 1.5.5 (from what I read, this problem should have been fixed in version 1.2.0 of the mongodb driver)
Thanks in advance!
UPDATE:
If I remove the replicaSet option from the connection string (Azure said this option is recommended), it throws this error much less often.
Can you try setting serverSelectionTryOnce = false as per http://php.net/manual/en/mongodb-driver-manager.construct.php and retrying your use case? With your current setup, a single failure of the isMaster request will cause the application to fail with the above-mentioned error.
If you are still hitting the same error, please send the exact error message (and preferably the MongoLog http://php.net/manual/en/class.mongolog.php) to askcosmosmongoapi [at] microsoft [dot] com
I have created two Liberty instances on my local machine. I deployed a war module which contains a remote EJB in server X, and deployed another war in server Y which has to perform a remote lookup of the EJB from server X.
Below is the code to look up the EJB from a RESTful web service.
Properties p = new Properties();
p.setProperty("java.naming.provider.url", "corbaname:iiop:localhost:2809");
InitialContext context = new InitialContext(p);
context.lookup("corbaname:rir:#ejb/global/caching/CachingServiceBean!com%5c.ejb%5c.CachingService");
When I try to call the web service I get the below exception:
DII operation not supported by local object
P.S.
I have enabled the ejbRemote feature on both servers, with different port numbers.
I changed my lookup string to "corbaloc:iiop:localhost:2809/NameService#ejb/global/caching/CachingServiceBean!com%5c.ejb%5c.CachingService", and then to "corbaname:iiop:localhost:2809/NameService#ejb/global/caching/CachingServiceBean!com%5c.ejb%5c.CachingService", but I still got errors.
After checking the Apache geronimo-yoko implementation on GitHub, I understood that I have to use corbaloc:iiop:localhost:2809. But I am still getting exceptions caused by:
org.omg.CORBA.OBJECT_NOT_EXIST: unable to dispatch - servant or POA not found
I used the following URLs with no luck:
corbaloc:iiop:localhost:2809/global/caching/CachingServiceBean!com.ejb.CachingService
corbaloc:iiop:localhost:2809/#ejb/global/caching/CachingServiceBean!com.ejb.CachingService
corbaloc:iiop:localhost:2809/caching/CachingServiceBean!com.ejb.CachingService
corbaloc:iiop:localhost:2809/ejb/global/caching/CachingServiceBean!com.ejb.CachingService
I think the problem is with the packaging; I packed my EJB in a war module.
I followed the steps described in the PDF mentioned on this page http://www.redbooks.ibm.com/redpieces/abstracts/sg248076.html and everything is working now.
I used the corbaname::host:port syntax to look up the remote EJB instead of corbaloc:iiop:host:port.
After packing my EJB in an ear, it started working.
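For completeness, the working lookup ended up looking roughly like this (a sketch; "cachingApp" is a placeholder for the ear's application name, and the global JNDI name follows the ejb/global/<app>/<module>/<bean>!<interface> pattern with dots escaped as %5c):

import java.util.Properties;
import javax.naming.InitialContext;

Properties p = new Properties();
// corbaname::host:port instead of corbaloc:iiop:host:port
p.setProperty("java.naming.provider.url", "corbaname::localhost:2809");
InitialContext context = new InitialContext(p);
CachingService service = (CachingService) context.lookup(
        "corbaname::localhost:2809#ejb/global/cachingApp/caching/CachingServiceBean!com%5c.ejb%5c.CachingService");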
We have migrated our Oracle database to 12c from 11g.
We have a legacy application running in Java 1.5 and using ojdbc14.jar.
Our application is not able to create a connection to the database; the error says:
java.sql.SQLException: ORA-28040: No matching authentication protocol
I referred to the answer to "ORA-28040: No matching authentication protocol exception" and tried to upgrade my ojdbc14.jar to ojdbc6.jar.
I now have a different error message saying:
error: OracleCallableStatement is not public in oracle.jdbc.driver; cannot be accessed from outside package
import oracle.jdbc.driver.OracleCallableStatement;
^
error: OracleTypes is not public in oracle.jdbc.driver; cannot be accessed from outside package
cstmt.registerOutParameter(3,oracle.jdbc.driver.OracleTypes.CURSOR);
^
Ant build file:
<javac srcdir="${src}" destdir="${classes}" source="1.5" target="1.5">
<classpath refid="cpath" />
</javac>
Not sure what exactly we should do to get the application working.
I had the same error with 2 different applications recently:
a Java 7 app on Tomcat 7 using ojdbc6.jar with an Oracle 12c database.
a legacy ASP application with an Oracle 12c database.
The second solution mentioned in the same post you referred to worked well for us:
Workaround: Set SQLNET.ALLOWED_LOGON_VERSION=8 in the oracle/network/admin/sqlnet.ora file.
We worked with our DBAs to set the above option on the sqlnet.ora on the database server. This resolved our issue. I hope it helps someone.
I faced the same error.
Got it resolved without removing ojdbc14.jar.
Step 1: set SQLNET.ALLOWED_LOGON_VERSION=8
Step 2: change
Connection conn = (Connection) DriverManager.getConnection("jdbc:oracle:thin:@server:port:sid", "username", "password");
to
java.sql.Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@server:port:sid", "username", "password");
It will work!
After migrating from Oracle 11 to Oracle 12, in my case the lib directory had both ojdbc14.jar and ojdbc8.jar.
After removing the older ojdbc14.jar it worked for me.
I had a problem connecting to the DB after migration to Oracle 12c.
The error was: java.sql.SQLException: ORA-28040: No matching authentication protocol.
After setting the parameter SQLNET.ALLOWED_LOGON_VERSION=8 it was solved.
Thank you very much.
If you are migrating your application to ojdbc6, one probable reason is that your old classes (compiled against the old ojdbc version) can no longer find the OracleTypes class. The easiest fix is to change the import statement from import oracle.jdbc.driver.OracleTypes; to import oracle.jdbc.OracleTypes; in the classes where you are getting the error.
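For example, a minimal sketch of the change (my_proc is a placeholder procedure name; the point is only the switch from the internal oracle.jdbc.driver package to the public oracle.jdbc package):

// old imports, no longer accessible with ojdbc6:
//   import oracle.jdbc.driver.OracleCallableStatement;
//   import oracle.jdbc.driver.OracleTypes;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
import oracle.jdbc.OracleTypes;

class RefCursorExample {
    static CallableStatement prepare(Connection conn) throws SQLException {
        CallableStatement cstmt = conn.prepareCall("{call my_proc(?, ?, ?)}");
        cstmt.registerOutParameter(3, OracleTypes.CURSOR); // was oracle.jdbc.driver.OracleTypes.CURSOR
        return cstmt;
    }
}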
I was getting ORA-28040 in SQL Developer (ver. 1.5.5) for all connections. I set SQLNET.ALLOWED_LOGON_VERSION=8 but the error didn't go away. Then I enabled "Use OCI/Thick driver" under Tools->Preferences->Database->Advanced Parameters. That did the trick.
The main problem is that the 10g JDBC thin client uses the SHA-1 authentication protocol; this protocol is not allowed in 12c, so it gives the error.
In my Oracle installation I could not find the sqlnet.ora file, so I had to create it with the command:
vi $ORACLE_HOME/network/admin/sqlnet.ora
And I added the following:
SQLNET.ALLOWED_LOGON_VERSION=10
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=10
SQLNET.ALLOWED_LOGON_VERSION_SERVER=10
SQLNET.ALLOWED_LOGON_VERSION=8
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
SQLNET.AUTHENTICATION_SERVICES = (NONE)
So I had to stop and start the listener:
lsnrctl stop
lsnrctl start
Finally, I restarted the database.
This should only be a workaround.
We had this error when we moved from 11g to 12c. I was able to get a correct response from tnsping "servername" (this is the first thing to check from cmd).
After this we realized that the database server was enabled to handle only 64-bit requests (in my case I was using WinSQL, which always checks for 32-bit).
To correct this, ask your admin to enable the server for 32-bit requests as well, or you can move to SQL Developer, which worked for me.
When I try to make a remote EJB JNDI lookup, IBM Message Broker throws a ClassCastException for the factory object.
But the same code works fine in a normal local Java application and in JUnit. Why does this problem occur only when called from IBM WMB?
Context context = new InitialContext(ejbJndiProperties);
Object factoryObj = context.lookup("SampleBeanTAFJ/remote");
return (SampleBeanRemote) factoryObj;
This is often caused by loading parts of the interface in a different classloader than the implementation classes.
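A quick way to check this from the code itself (a sketch reusing the context and interface names from the question) is to print which classloader each side of the cast comes from:

Object factoryObj = context.lookup("SampleBeanTAFJ/remote");
// if these two classloaders differ, the ClassCastException is a classloading problem
System.out.println("looked-up object " + factoryObj.getClass().getName()
        + " loaded by " + factoryObj.getClass().getClassLoader());
System.out.println("local interface " + SampleBeanRemote.class.getName()
        + " loaded by " + SampleBeanRemote.class.getClassLoader());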
To dig further on the broker side, I would use the env var:
IBM_JAVA_OPTIONS=-Dibm.cl.verbose=*
Then restart the broker; this will dump a classloading trace to stdout / console.txt, which might give you some clues.
What are the exact classes involved in the error and what jars are they stored in? Are they deployed to the EG or referenced via SHARED-CLASSES? The exact details determine which classloaders should be in use here.