When I try to add a user in WSO2 ESB I get the message below: "Error while loading roles. Error is: Error while fetching roles from JDBC user store according to filter: % & max item limit: -1"
What could be the problem? I am using an Informix database (JDBC 3.7).
This error can occur due to a version incompatibility in the connector jar. Try updating the connector jar to a newer version.
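If you go that route, the update itself is just a matter of swapping the jar and restarting. A rough sketch, assuming the standard Carbon layout and the usual Informix driver file name ifxjdbc.jar (adjust the paths to your installation):
# Remove the old Informix driver and drop in the newer one
rm <ESB_HOME>/repository/components/lib/ifxjdbc.jar
cp /path/to/new/ifxjdbc.jar <ESB_HOME>/repository/components/lib/
# Restart the ESB so the new jar is picked up and converted to an OSGi bundle
<ESB_HOME>/bin/wso2server.sh restart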
Overview
I tried creating a VPC network with a subnet and adding a Serverless VPC Access connector with Terraform in GCP. I was following the official guide (https://cloud.google.com/vpc/docs/configure-serverless-vpc-access#terraform) and initially everything was working well. Then I accidentally committed my JSON key to GitHub, someone stole it and used it for crypto mining; the project was disabled but shortly after that it was reinstated.
After that my Terraform VPC connector creations started to fail. I tried a lot of different things but nothing seems to work (running destroy, changing service accounts, changing names, deleting all of the Terraform subfolders, deleting EVERY resource and restarting the process).
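For reference, the connector resource followed the pattern from that guide; the names and IP range below are illustrative, not my exact values:
resource "google_vpc_access_connector" "connector" {
  name          = "sun-serverless-connector"
  region        = "us-central1"
  # Assumes a google_compute_network resource named vpc_network elsewhere in the config
  network       = google_compute_network.vpc_network.name
  ip_cidr_range = "10.8.0.0/28"
}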
The errors I am getting are:
│ Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 13, message: An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
│
or
│ Error: Error creating Connector: googleapi: Error 409: Requested entity already exists
Today I tried to create the VPC connector from the command line (gcloud) and from the UI tool. The errors persisted:
Unknown error. Original error message: Operation failed: Insufficient CPU quota in region.
Max throughput of the connector per day over last seven days.
or
An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
Errors while deleting:
│ Error: Error waiting for Deleting Network: The network resource 'projects/static-emblem-327016/global/networks/sun-serverless-network' is already being used by 'projects/static-emblem-327016/global/routes/default-route-5cbc9de02e21bb35'
│
I was looking at this issue: https://issuetracker.google.com/issues/164378672. It mentions problems with us-central1, but I tried a couple of different regions and I still have the same issue.
Questions:
I am running out of ideas. I was wondering if this is an infrastructure issue; maybe I should dump the project and create a new one? Where can I check if there are infrastructure issues? How can I resolve my issue?
I recently got this error: Error: Error creating Connector: googleapi: Error 409: Requested entity already exists. So I can explain the root cause and its fix.
What I was doing was trying to create a GCP resource (a Pub/Sub topic) using Terraform (plan and then apply).
But before executing terraform apply, I had created the resource manually, a long time back, with the same name. I expected that terraform plan or terraform apply would not try to create it again since the resource name is the same. But instead of refreshing the state, I found it was trying to create the resource. The reason is that Terraform does not know about your resource's history. Either you need to import the resource into Terraform state using the terraform import command, or delete the manually created resource and then run terraform apply.
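A minimal sketch of the import route (the resource address google_pubsub_topic.my_topic and the project my-project are placeholders for your own names):
# Bring the manually created topic under Terraform management instead of
# letting "terraform apply" try to create it again
terraform import google_pubsub_topic.my_topic projects/my-project/topics/my-topic
After the import, terraform plan should no longer show a create for that topic; apply will manage the existing resource instead of trying to recreate it.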
The message “An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually” can indicate that you don't have enough resources in your project to create the connector. Please make sure you have enough Resource Quota available in your GCP project.
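One quick way to check the regional quotas (CPUs included) from the command line, assuming the gcloud CLI is set up for your project (region and project ID below are placeholders):
gcloud compute regions describe us-central1 --project=my-project
The output contains a quotas: list with metric, usage and limit for each quota in that region.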
The message “googleapi: Error 409: Requested entity already exists” indicates that the resource the client tried to create already exists.
If you want to know what the root cause is, you can check the logs of the VPC Connector creation in the System Event Audit Logs.
System Event audit logs contain log entries for Google Cloud actions that modify the configuration of resources. System Event audit logs are generated by Google systems; they aren't driven by direct user action. System Event audit logs are always written; you can't configure, exclude, or disable them. The instructions to access them are here.
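For example, a rough way to pull recent System Event audit log entries with gcloud (the project ID below is a placeholder):
gcloud logging read 'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fsystem_event"' --project=my-project --limit=20
The entries around the failed connector creation may show the underlying operation and error detail.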
On the other hand, generating and distributing service account keys poses severe security risks to your organization. They are long-lived credentials that are not automatically rotated. These keys can be leaked accidentally or maliciously, allowing attackers to gain access to your sensitive GCP resources. If you accidentally compromised your JSON key, please read the recommendations in this link.
If you want to know more about the risks of and alternatives to downloading service account keys, please follow this link. Please note that this is not official GCP documentation, so I cannot vouch for its accuracy.
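If the leaked key is still active, one way to revoke it from the command line (the service account email and key ID below are placeholders):
# List the keys on the affected service account, then delete the compromised one
gcloud iam service-accounts keys list --iam-account=my-sa@my-project.iam.gserviceaccount.com
gcloud iam service-accounts keys delete KEY_ID --iam-account=my-sa@my-project.iam.gserviceaccount.com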
I was able to resolve my issue. It turns out that I had deleted my default Compute Engine service account in a panic. I was able to recover it and everything worked from there. For more info go here: https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
You have to identify the default Compute Engine service account and undelete it:
gcloud beta iam service-accounts undelete ACCOUNT_ID
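Note that undelete expects the account's numeric unique ID, not its email address. If you no longer have it, one possible way to find it is the audit log entry for the deletion (this filter is only a suggestion and assumes the Admin Activity audit logs still contain that entry):
gcloud logging read 'resource.type="service_account" "DeleteServiceAccount"' --limit=5
The matching entry contains the unique_id of the deleted account.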
I am running MySQL in Docker (mysql:8.0.20). The Gradle task deployNodes fails with the error:
Could not create the DataSource: Validation Failed:
1 changes have validation failures
columnDataType is required for renameColumn on mysql, migration/node-info.changelog-v3.xml::column_host_name::R3.Corda.
Can somebody help me with this error?
As you can see, columnDataType is a required attribute for MySQL. Corda does not support MySQL, and hence it is not tested against MySQL, so you can get these kinds of MySQL-specific errors.
If you are using Corda Enterprise, you could set the runMigration flag to false (so the schema is not run automatically against the database), use the database management tool to generate the schema manually, make changes (add columnDataType to the schema), and then run it manually against the MySQL database.
This requirement is documented at https://www.liquibase.org/documentation/changes/rename_column.html, which lists columnDataType as required for renameColumn on MySQL.
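For illustration, a renameColumn change that satisfies the MySQL requirement might look like the sketch below; the table and column names are made up and are not Corda's actual changelog (only the changeSet id and author come from the error message):
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <changeSet id="column_host_name" author="R3.Corda">
        <!-- columnDataType is mandatory for renameColumn when the target DB is MySQL -->
        <renameColumn tableName="node_info"
                      oldColumnName="host_name"
                      newColumnName="hostname"
                      columnDataType="varchar(255)"/>
    </changeSet>
</databaseChangeLog>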
I have been trying to connect to an Oracle DB (11g) in SSIS (VS2015) for the past few days and have tried all possible solutions, but I am still getting errors. I am passing the query through an SSIS variable; no other parameter is passed. It is just a test query which needs to retrieve two rows.
Case 1: Tried using the Oracle Provider for OLE DB. The test connection is successful, but I get the error below while previewing data:
The system cannot find message text for message number 0x80040e51 in the message file for OraOLEDB. (OraOLEDB)
Case 2: Tried using the Microsoft OLE DB Provider for Oracle. The test connection is successful, but I get the error below while previewing data:
Provider cannot derive parameter information and SetParameterInfo has not been called.
I have been struggling to solve this problem; any help would be appreciated. Thanks in advance.
Edit: After setting Run64bitRuntime to false, I can extract data when using the Oracle Provider for OLE DB, but Preview still gives the same error.
Regards,
Jazz
Set Run64bitRuntime to false for the package and then it should work.
Right-click the project and click Properties.
In Configuration Properties, on the left side, click Debugging.
Set the Run64BitRuntime option to False.
Some of the advice on this is focused only on "Preview". Don't throw the baby out with the bath water: with VS 2017 Enterprise I got the "cannot find message text ..." message in Preview, but was able to load Oracle data into MS SQL as a job (the GUI has a problem, but not the runtime job). I did set AlwaysUseDefaultCodePage to true under Component Properties in the Advanced Editor on the OLE DB Oracle task.
We have migrated our Oracle database to 12c from 11g.
We have a legacy application running in Java 1.5 and using ojdbc14.jar.
Our application is not able to create a connection to the database; the error says:
java.sql.SQLException: ORA-28040: No matching authentication protocol
I referred to the answer ORA-28040: No matching authentication protocol exception and tried to upgrade my ojdbc14.jar to ojdbc6.jar.
I now have a different error message saying:
error: OracleCallableStatement is not public in oracle.jdbc.driver; cannot be accessed from outside package
import oracle.jdbc.driver.OracleCallableStatement;
^
error: OracleTypes is not public in oracle.jdbc.driver; cannot be accessed from outside package
cstmt.registerOutParameter(3,oracle.jdbc.driver.OracleTypes.CURSOR);
^
Ant build file :
<javac srcdir="${src}" destdir="${classes}" source="1.5" target="1.5">
<classpath refid="cpath" />
</javac>
Not sure what exactly we should do to get the application working.
I had the same error with two different applications recently:
a Java 7 app on Tomcat 7 using ojdbc6.jar with an Oracle 12c database.
a legacy ASP application with an Oracle 12c database.
The second solution mentioned in the same post you referred to worked well for us.
Workaround: Set SQLNET.ALLOWED_LOGON_VERSION=8 in the oracle/network/admin/sqlnet.ora file.
We worked with our DBAs to set the above option in the sqlnet.ora file on the database server. This resolved our issue. I hope it helps someone.
I faced the same error.
I got it resolved without removing ojdbc14.jar.
Step 1: Set SQLNET.ALLOWED_LOGON_VERSION=8
Step 2: Change
Connection conn = (Connection) DriverManager.getConnection("jdbc:oracle:thin:@server:port:sid", "username", "password");
to
java.sql.Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@server:port:sid", "username", "password");
It will work!
After migrating from Oracle 11 to Oracle 12, in my case the lib directory had both OJDBC14.jar and OJDBC8.jar.
After removing the older OJDBC14.jar it worked for me.
I had a problem connecting to the DB after migration to Oracle 12c.
The error was: java.sql.SQLException: ORA-28040: No matching authentication protocol.
After setting the parameter SQLNET.ALLOWED_LOGON_VERSION=8 it was solved.
Thank you very much.
If you are migrating your application to ojdbc6, one probable reason is that your old classes (compiled against the old ojdbc version) can no longer access the class OracleTypes. The easiest way is to change the import statement from import oracle.jdbc.driver.OracleTypes; to import oracle.jdbc.OracleTypes; in the classes where you are getting the error.
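A compact sketch of the change (the stored-procedure name and connection details are placeholders; the OracleTypes import is the only point being illustrated):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
// Public API location in ojdbc6; oracle.jdbc.driver.OracleTypes is no longer accessible
import oracle.jdbc.OracleTypes;

public class RefCursorExample {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@server:1521:sid", "username", "password");
        CallableStatement cstmt = conn.prepareCall("{call my_proc(?, ?, ?)}");
        // Same constant value as before; only the import changes
        cstmt.registerOutParameter(3, OracleTypes.CURSOR);
        cstmt.execute();
        cstmt.close();
        conn.close();
    }
}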
I was getting ORA-28040 in SQL Developer (ver. 1.5.5) for all connections. I set SQLNET.ALLOWED_LOGON_VERSION=8 but the error didn't go away. Then I enabled "Use OCI/Thick driver" under Tools -> Preferences -> Database -> Advanced Parameters. That did the trick.
The main problem is that the 10g JDBC thin client uses the SHA-1 authentication protocol, and this protocol is not allowed in 12c, so it gives the error.
In my Oracle installation I could not find the sqlnet.ora file, so I had to create it with the command:
vi $ORACLE_HOME/network/admin/sqlnet.ora
And I added the following:
SQLNET.ALLOWED_LOGON_VERSION=10
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=10
SQLNET.ALLOWED_LOGON_VERSION_SERVER=10
SQLNET.ALLOWED_LOGON_VERSION=8
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
SQLNET.AUTHENTICATION_SERVICES = (NONE)
Then I had to stop and start the listener:
lsnrctl stop
lsnrctl start
Finally, I restarted the database.
This should only be a workaround.
We had this error when we moved from 11g to 12c. I was able to get a correct response through tnsping "servername" (this is the first step to check from cmd).
After this we realized that the database server was enabled to handle only 64-bit requests (in my case I was using WinSQL, and it always checks for 32-bit).
To correct this, ask your admin to enable the server for 32-bit requests as well, or you can move to SQL Developer, which worked for me.
I'm receiving an HL7 2.3 ORU message. I've configured the appropriate party to use a schema namespace of "http://mycompany.ca/application/HL7/2X/2.3/1".
I've built my custom HL7 schema, set the targetNamespace to "http://mycompany.ca/application/HL7/2X/2.3/1", and ensured it has a root element of "ORU_R01_23_GLO_DEF".
I've deployed the schema to BizTalk by importing it, then running the MSI.
I can see that my BizTalk application has the schema in it, and I can see that the MSI installed the schema on the drive.
When I send an HL7 message to my receive location, I get an error in the event log:
Error happened in body during parsing
Error # 1
Alternate Error Number: 301
Alternate Error Description: Schema http://mycompany.ca/application/HL7/2X/2.3/1#ORU_R01_23_GLO_DEF not found
Alternate Encoding System: HL7-BTA
From this, I can tell that the party resolution worked correctly, but I can't figure out why it cannot find the schema.