I have a simple search query in a CORB job. I am following https://github.com/marklogic-community/corb2/wiki/ModuleExecutor-Tool for the setup, which points at a single .xqy file (PROCESS-MODULE). I don't have any issue with a regular CORB job, but with this setup I am getting
com.marklogic.xcc.exceptions.XccConfigException: Unrecognized connection scheme: null
Can somebody please help me figure out why?
The regular CORB job is working and fully functional, but when I use the https://github.com/marklogic-community/corb2/wiki/ModuleExecutor-Tool approach I get this XCC exception and I can't figure out why.
It sounds as if the XCC connection string is malformed, or you haven't set the XCC connection URI at all. The XCC connection string should start with the "xcc://" scheme.
The XCC-CONNECTION-URI can be passed on the command line as the first argument to the ModuleExecutor main method:
java -cp marklogic-corb-2.4.1.jar:marklogic-xcc-9.0.8.jar -DOPTIONS-FILE=job.options \
com.marklogic.developer.corb.ModuleExecutor xcc://user:password@localhost:8123
Or the property can be set in the options file:
XCC-CONNECTION-URI=xcc://user:password@localhost:8123
Or it can be set as a system property:
-DXCC-CONNECTION-URI=xcc://user:password@localhost:8123
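For reference, a minimal job.options file for a ModuleExecutor run might look like the following. This is only a sketch: the module path is a placeholder, and the |ADHOC suffix assumes you want CORB to evaluate the module from the local filesystem rather than from a modules database.
XCC-CONNECTION-URI=xcc://user:password@localhost:8123
PROCESS-MODULE=/path/to/my-search.xqy|ADHOC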
I am using Airflow 2.0.2 to connect with Databricks using the airflow-databricks-operator. The SQL operator doesn't let me specify the database where the query should be executed, so I have to prefix the table_name with the database_name. I also tried reading through the databricks-sql-connector docs here -- https://docs.databricks.com/dev-tools/python-sql-connector.html -- and still couldn't figure out whether I could give the database name as a parameter in the connection string itself.
I tried setting database/schema/namespace in the **kwargs, but no luck. The query executor keeps saying that the table is not found, because the query keeps getting executed in the default database.
Right now it's not supported. The primary reason is that if you have multiple statements, the connector could reconnect between their execution, and the effect of USE would be lost. databricks-sql-connector also doesn't allow setting a default database.
Right now you can work around that by adding an explicit use <database> statement to the list of SQL statements to execute (the sql parameter can be a list of strings, not only a single string).
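For illustration, here is a rough sketch of that workaround with the provider's DatabricksSqlOperator. The connection id, database, and query are placeholders, I'm assuming the SQL endpoint details are configured on the Airflow connection, and the exact import path can vary by provider version.
from airflow.providers.databricks.operators.databricks_sql import DatabricksSqlOperator

# Run an explicit USE first so the SELECT executes in the intended database.
# "databricks_default", "my_database", and "my_table" are placeholders.
select_with_use = DatabricksSqlOperator(
    task_id="select_with_use",
    databricks_conn_id="databricks_default",
    sql=[
        "USE my_database",
        "SELECT * FROM my_table LIMIT 10",
    ],
)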
P.S. I'll take a look; maybe I'll add setting of the default catalog/database in a future version.
I'm using Gremlin Server.
I save the contents of the database in an XML file (GraphML) with this line:
g.io(path).write().iterate()
To load the file I use this line:
g.io(path).read().iterate();
And then I get this error:
connection.js:282
new ResponseError(util.format('Server error: %s (%d)', response.status.message, response.status.code), response.status));
^
ResponseError: Server error: For input string: "-2555865115" (500)
This error is coming from gremlin server.
If I search for this value in the XML file (-2555865115) and remove the last character (-255586511), the problem is solved.
Why does this happen? How can I fix this issue? The database always saves a file that I have to fix manually.
If I have to change something in the Gremlin Server configuration files, can you please tell me which file to modify and how? I have never done that before.
I'm using Gremlin Server on my local computer just for testing, without any changes to the configuration.
EDIT:
I changed conf/tinkergraph-empty.properties to this:
gremlin.tinkergraph.vertexIdManager=ANY
gremlin.tinkergraph.edgeIdManager=ANY
gremlin.tinkergraph.vertexPropertyIdManager=ANY
I restarted, but I get the same error when loading the XML file.
Given that removing the last digit from your numeric value solved the problem, I'd speculate you're hitting a limit; specifically, the lowest value an integer can have.
In Java, that value is -2147483648, and Java happens to be the language that the default implementation of Gremlin Server is written in. As such, it's likely that the deserialization process is failing while trying to interpret that value as an integer. Since the value is below the minimum value of an integer, and since the error message talks about it being an input string, Integer.parseInt("-2555865115") is probably the call that's failing behind the scenes.
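You can sanity-check the range yourself (a throwaway Python check, not part of any fix):
# Java's 32-bit int range is [-2**31, 2**31 - 1].
INT_MIN, INT_MAX = -2**31, 2**31 - 1
print(INT_MIN, INT_MAX)         # -2147483648 2147483647
print(-2555865115 < INT_MIN)    # True: the id in the file does not fit in a Java int
print(-255586511 >= INT_MIN)    # True: dropping the last digit brings it back into range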
If Gremlin is both serializing and deserializing the data, it might be a bug in that implementation, and you might want to file an issue. In the meantime, consider implementing and registering a custom serializer to give yourself more control over how the IO process works.
I have set up a Docker container for tinkerpop/gremlin-server on my dev machine.
I have a .NET Core application that uses Gremlin.Net version 3.4.1.
I connect to the local Docker container using IGremlinClient, and when I pass the following query to add a vertex:
g.addV("Root").property(id,"56b7ddc6-7629-42d4-b748-bfbce0992f13")
I then get the error:
ScriptEvaluationError: For input string: "56b7ddc6-7629-42d4-b748-bfbce0992f13"
When I run the query using the Gremlin Console, the vertex is added:
gremlin> g.addV("Root").property(id,"56b7ddc6-7629-42d4-b748-bfbce0992f13")
==>v[56b7ddc6-7629-42d4-b748-bfbce0992f13]
How can I create a new vertex with a string as an Id when running through the IGremlinClient in my application?
Since you are using the default Gremlin Server Docker container, you are getting this configuration for your hosted TinkerGraph:
gremlin.graph=org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph
gremlin.tinkergraph.vertexIdManager=LONG
The IdManager is set to LONG, so it will only accept id values that can be coerced to a long. You should change that setting, perhaps to UUID given the string you are sending, or perhaps to ANY. You can read more about those options here.
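For example, the TinkerGraph properties could be changed to something like this (a sketch assuming you want UUID ids; in the stock container this is most likely the conf/tinkergraph-empty.properties file referenced by the server's graphs setting):
gremlin.graph=org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph
gremlin.tinkergraph.vertexIdManager=UUID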
As a side note, I'd agree that the error message isn't helpful. Note that it has been improved for the upcoming 3.4.3/3.3.8 releases.
In R, I have an object of type connection and would prefer to use the connection directly, instead of the initial path, to check whether the file exists:
file.exists("path")
Is there any way to get the path from the connection? Apparently it is not in the attributes, but I can still see the path when printing the connection in the console...
The solution from @Stefan F's comment works perfectly:
path <- summary(my_connection)$description
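So, to check for the file's existence directly from the connection (a one-line combination of the above; my_connection is just the placeholder name from the answer):
file.exists(summary(my_connection)$description)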
*** Test cases ***
TestDB
Connect To Database Using Custom Params None database='TestDB', user='system', password='system', host='10.91.41.101', port=1521
Please help - the error is:
ImportError: No module named None
The error most probably comes from the way you call Connect To Database Using Custom Params - the first argument you're passing, which should be the value for dbapiModuleName, is passed as a string object with the value "None".
If you wanted to pass the None object itself (as it's written in the library's help), that should have been ${None} in Robot Framework syntax.
I doubt that's going to work though - the DatabaseLibrary probably needs some DB type identifier. So if you are using Postgres, for example, you'd call it with "psycopg2":
Connect To Database Using Custom Params psycopg2 database='TestDB', user='username', password='mypass', host='10.1.1.2', port=1521
Keep in mind you'd need the DB driver already installed through pip - psycopg2 in the case of the example here.
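For instance (assuming Postgres, as in the example above):
pip install psycopg2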
P.S. please don't paste actual credentials in SO.
I assume your question should have been posted something like this...
Issue
When attempting to execute the following test case in Robot Framework, I receive the following error: ImportError: No module named None
Here is the test case in question:
*** Test Cases ***
TestDB
Connect To Database Using Custom Params None database='TestDB', user='system', password='system', host='10.91.41.101', port=1521
If so, your issue may be as simple as spacing. Robot Framework can accept pipes as delimiters, but if you choose to use spaces, you must use 2 or more.
Based on your copy/paste, it looks like you have only one space between Connect To Database Using Custom Params and None (which I'm assuming you're passing as the DB API Python module to mean the system default - not sure if that's recommended or supported). Make sure you have at least two spaces (I generally go for 4 unless I have a lot of parameters) between keywords and their parameters.
So:
Make sure you have at least two spaces between the keyword and its parameters. Here is the reference for the keyword in question.
Verify whether you need to specify the Python database driver for the database you are using. Based on the port you've specified, I'm guessing it's an Oracle database. Here's a list of Python Oracle drivers.
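For illustration, a corrected version of the test case might look like this. The driver (cx_Oracle) and the connection arguments are assumptions based on the Oracle guess above - substitute your own driver, credentials, and service name - and note the multiple spaces between the keyword and each argument:
*** Test Cases ***
TestDB
    Connect To Database Using Custom Params    cx_Oracle    user='myuser', password='mypass', dsn='10.91.41.101:1521/TestDB'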
I had a similar error and read about it for hours, not knowing I hadn't created a .env file yet. Credit to a friend who brought me to this page, which gave me the hint on what I was missing.
I created a .env file in my root folder, where the manage.py file is, configured my database settings, and voila. Thanks Suraj