ImportError: No module named None - robotframework

*** Test cases ***
TestDB
Connect To Database Using Custom Params None database='TestDB', user='system', password='system', host='10.91.41.101', port=1521
Please help - the error is:
ImportError: No module named None

The error most probably comes from the way you call Connect To Database Using Custom Params: the first argument you're passing, which should be the value for dbapiModuleName, is being passed as a string with the value "None".
If you wanted to pass the actual None object (as written in the library's help), it should have been ${None} in Robot Framework syntax.
I doubt that's going to work though - the DatabaseLibrary probably needs some DB type identifier. So if you are using Postgres, for example, you'd call it with "psycopg2":
Connect To Database Using Custom Params    psycopg2    database='TestDB', user='username', password='mypass', host='10.1.1.2', port=1521
Keep in mind the DB driver must already be installed through pip (psycopg2 in the case of this example).
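For illustration, here is roughly what happens inside the library when it receives the string "None" as the module name (a minimal sketch, assuming DatabaseLibrary imports the DB API driver by name, which is what the traceback suggests):

import importlib

# What Robot Framework actually passed: the string "None", not the None object.
dbapi_module_name = "None"

# DatabaseLibrary loads the DB API 2.0 driver by name, roughly like this:
driver = importlib.import_module(dbapi_module_name)
# Raises: ImportError: No module named None (exact wording varies by Python version)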
P.S. please don't paste actual credentials in SO.

I assume your question should have been posted something like this...
Issue
When attempting to execute the following test case in Robot Framework, I receive this error: ImportError: No module named None
Here is the test case in question:
*** Test Cases ***
TestDB
Connect To Database Using Custom Params None database='TestDB', user='system', password='system', host='10.91.41.101', port=1521
If so, your issue may be as simple as spacing. Robot Framework can accept pipes as delimiters, but if you choose to use spaces, you must use two or more.
Based on your copy/paste, it looks like you have only one space between Connect To Database Using Custom Params and None (which I'm assuming you're specifying as the DB API Python module, i.e. the system default - I'm not sure that's recommended or supported). Make sure you have at least two spaces (I generally aim for four unless I have a lot of parameters) between keywords and their parameters.
So:
Make sure you have at least two spaces between the keyword and its parameters. Here is the reference for the keyword in question.
Verify whether you need to specify the Python database driver for the database you are using. Based on the port you've specified, I'm guessing it's an Oracle database. Here's a list of Python Oracle drivers; a standalone sanity check with one of them is sketched below.
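Since port 1521 suggests Oracle, a quick way to rule out driver or credential problems is to connect outside Robot Framework first. A minimal sketch, assuming the cx_Oracle driver (one of several Python Oracle drivers) and that 'TestDB' is a service name rather than a SID:

import cx_Oracle  # assumed driver; install with: pip install cx_Oracle

# Connection details mirror the question's parameters; adjust as needed.
dsn = cx_Oracle.makedsn("10.91.41.101", 1521, service_name="TestDB")
conn = cx_Oracle.connect(user="system", password="system", dsn=dsn)
print(conn.version)  # prints the Oracle server version if the connection works
conn.close()

If this works, the same module name (cx_Oracle) is what you'd pass as the first argument to Connect To Database Using Custom Params.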

I had a similar error and read about it for hours, not knowing I hadn't created a .env file yet. Credit to a friend who brought me to this page (which gave me the hint on what I was missing).
I created a .env file in my root folder, where the manage.py file is, configured my database settings, and voila. Thanks Suraj
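For anyone hitting this in the same Django setup, a hedged sketch of what that can look like (this assumes the django-environ package, one common way of reading a .env file; all names and values here are hypothetical):

# settings.py - pull database settings from a .env file next to manage.py
import environ

env = environ.Env()
environ.Env.read_env()  # loads the .env file into the environment

DATABASES = {
    # with a line like this in .env:
    # DATABASE_URL=postgres://user:password@localhost:5432/mydb
    "default": env.db("DATABASE_URL"),
}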

Related

specify a database name in databricks sql connection parameters

I am using Airflow 2.0.2 to connect to Databricks using the airflow-databricks-operator. The SQL operator doesn't let me specify the database where the query should be executed, so I have to prefix table_name with database_name. I tried reading through the docs of databricks-sql-connector here -- https://docs.databricks.com/dev-tools/python-sql-connector.html -- and still couldn't figure out if I could pass the database name as a parameter in the connection string itself.
I tried setting database/schema/namespace in the **kwargs, but no luck. The query executor keeps saying that the table is not found, because the query keeps getting executed in the default database.
Right now it's not supported. The primary reason is that if you have multiple statements, the connector could reconnect between their execution, and the effect of USE would be lost. databricks-sql-connector also doesn't allow setting a default database.
Right now you can work around that by adding an explicit USE <database> statement to the list of SQLs to execute (the sql parameter can be a list of strings, not only a single string).
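A sketch of that workaround in a DAG, assuming the DatabricksSqlOperator from the databricks provider package (the connection id, warehouse path, and table names are hypothetical):

from airflow.providers.databricks.operators.databricks_sql import DatabricksSqlOperator

# The first statement switches the database, so the second one runs
# against my_db instead of the default database.
select_task = DatabricksSqlOperator(
    task_id="select_from_my_db",
    databricks_conn_id="databricks_default",  # assumed connection id
    http_path="/sql/1.0/warehouses/abc123",   # hypothetical warehouse path
    sql=[
        "USE my_db",                        # explicit switch, per the workaround
        "SELECT * FROM my_table LIMIT 10",  # no database_name prefix needed
    ],
)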
P.S. I'll take a look - maybe I'll add setting of the default catalog/database in a future version.

Cannot load the GraphML file I just saved

I'm using Gremlin Server.
I save the contents of the database in an XML file (GraphML) with this line:
g.io(path).write().iterate()
To load the file I use this line:
g.io(path).read().iterate();
And then I get this error:
connection.js:282
new ResponseError(util.format('Server error: %s (%d)', response.status.message, response.status.code), response.status));
^
ResponseError: Server error: For input string: "-2555865115" (500)
This error is coming from gremlin server.
If I search for this value in the XML file (-2555865115) and remove the last digit (-255586511), the problem is solved.
Why does this happen? How can I fix it? The database keeps saving a file that I then have to fix manually.
If I have to change something in the configuration files of Gremlin Server, can you please tell me which file to modify and how? I've never done that before.
I'm running Gremlin Server on my local computer just for testing, without any changes.
EDIT:
I changed conf/tinkergraph-empty.properties to this:
gremlin.tinkergraph.vertexIdManager=ANY
gremlin.tinkergraph.edgeIdManager=ANY
gremlin.tinkergraph.vertexPropertyIdManager=ANY
I restarted, but I get the same error when loading the XML file.
Given that removing the last digit from your numerical value solved the problem, I'd speculate you're hitting a limit; specifically, the lowest value an integer can have.
In Java, that value is -2147483648 (Integer.MIN_VALUE), and Java happens to be the language that the reference implementation of Gremlin Server is written in. As such, it's likely that the deserialization process is failing while trying to interpret that value as an integer. Since the value is below the minimum value of an integer, and since the error message talks about an input string, Integer.parseInt("-2555865115") is probably the call that's failing behind the scenes.
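To make the range argument concrete, a quick check (plain Python arithmetic; the Java constants are standard):

INT_MIN = -2**31      # Java Integer.MIN_VALUE == -2147483648
INT_MAX = 2**31 - 1   # Java Integer.MAX_VALUE == 2147483647

value = -2555865115   # the id from the GraphML file
trimmed = -255586511  # the same value with the last digit removed

print(value < INT_MIN)                # True: Integer.parseInt("-2555865115") throws
print(INT_MIN <= trimmed <= INT_MAX)  # True: parses fine after trimming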
If Gremlin is both serializing and deserializing the data, it might be a bug in that implementation, and you might want to file an issue. In the meantime, consider implementing and registering a custom serializer to give yourself more control over how the IO process works.

Airflow 1.10 Config Core hostname_callable - How To Set?

In regards to my previous Stack Overflow post here, I've finally upgraded from Airflow version 1.9 to 1.10 since it's now released on PyPI. Using their release guide here I got Airflow 1.10 working. I then inspected the updates in 1.10 to see how they addressed the bug discovered in Airflow 1.9 when run on an AWS EC2 instance, and I found that they replaced all functions that got the server's IP address with a call to the new function get_hostname in https://github.com/apache/incubator-airflow/blob/master/airflow/utils/net.py. Inside that small function you see the comment that says,
Fetch the hostname using the callable from the config or using
socket.getfqdn as a fallback.
So then after that comment you see the code,
callable_path = conf.get('core', 'hostname_callable')
This tells us that in airflow.cfg, under the [core] section, there is a new key called hostname_callable which now lets us set how the server's IP address is fetched. So their fix for the bug seen in Airflow 1.9 is to let us choose how to fetch the address if we need to change it. The default value for this new configuration field is seen here https://github.com/apache/incubator-airflow/blob/master/airflow/config_templates/default_airflow.cfg under the [core] section. You can see they have it set as,
[core]
# Hostname by providing a path to a callable, which will resolve the hostname
hostname_callable = socket:getfqdn
So by default they're using socket:getfqdn, the call that causes the bug to occur when run on an AWS EC2 instance. I need it to use socket.gethostbyname(socket.gethostname()) (again, this is covered in more detail in my previous post).
So my question is: what syntax do I need in order to express socket.gethostbyname(socket.gethostname()) in the configuration's colon (:) style? For example, the function call socket.getfqdn() is written in the configuration file as socket:getfqdn, so I don't see how I would write a nested call. Would I write something like socket:gethostbyname(socket.gethostname())?
If it were me, I would create an airflow_local_settings module with a hostname_callable function which returns the necessary value. From the code it looks like you could set the value to airflow_local_settings:hostname_callable.
Basically, in airflow_local_settings:
import socket

def hostname_callable():
    return socket.gethostbyname(socket.gethostname())
Then install your airflow_local_settings module to the computer as you would any other module.
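In airflow.cfg the setting would then presumably follow the same module:callable pattern as the default (a hedged sketch; the module and function names match the answer above):

[core]
hostname_callable = airflow_local_settings:hostname_callable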
This is not a direct answer but might help someone facing this or similar issues.
I ran into a similar issue when using Airflow locally on macOS Catalina.
Sometimes (seemingly at random) local_task_job.py crashed, saying
The recorded hostname mycustomhostname.local does not match this instance's hostname 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa
I could solve the issue by editing the airflow.cfg and replacing
hostname_callable = socket:getfqdn
with
hostname_callable = socket:gethostname

MarkLogic - I don't know how to get all the results

Hello, I am trying to read a module with this code:
(: Entry point - must be a read-only query. :)
xdmp:invoke(
  '/path/mydocument.xqy',
  (xs:QName('var1'), 'test',
   xs:QName('var2'), "response"))
I am new to MarkLogic; I am using Groovy and the API to connect to it. I also saw I can invoke the module this way, and indeed I did, but it returns:
your query returned an empty sequence
I want to know if I can query xs:QName('var1'), 'test', replacing test with a wildcard, or how I can get all the information from the file called /path/mydocument.xqy.
I tried to use this:
xdmp:document-get("/path/mydocument.xqy")
but it says the file is not found. However, if I use invoke I can execute it; I just don't know what values I have to pass. I was wondering if there is something like SQL's % wildcard to give me all the data.
To answer the first question: "I am trying to read a module"
IF the module is in the database, then you must query the Modules database in which the module resides.
If the module is in the filesystem, then you cannot directly access its source as a document, but you can read it by executing xdmp:filesystem-file().
Simplification:
With the default configuration of the server and REST client, user-placed modules are in the "Modules" database and user-placed documents are in the "Documents" database. This means that if you do a GET (read a "document") with no additional parameters, it will return documents from the "Documents" database. Assuming you are using the default configuration for client and server, this would result in the behavior you are seeing: your module code is in the Modules database, so doing a GET for it by name will search the Documents database and correctly not find it.
You don't mention, and I don't know, the Groovy library being used, but the REST API itself, and all implementations of general-purpose ML REST client libraries I am familiar with, have options for overriding the default database with another. If the Groovy library supports that, then specify the "Modules" database for your query and it should return the module document. Note: the content type will be application/text, not text/xml.
You can simplify things for testing by bypassing the libraries and simply use a browser and try a URL like this http://yourserver.com:8000/v1/documents?uri=/your/module.xqy&database=Modules
Ref: https://docs.marklogic.com/REST/GET/v1/documents
Making the appropriate changes to the path and server for your use.
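The same request as a minimal Python sketch (hypothetical host and credentials; MarkLogic's REST API defaults to digest authentication):

import requests
from requests.auth import HTTPDigestAuth

# Hypothetical server and credentials; adjust uri to your module's path.
resp = requests.get(
    "http://yourserver.com:8000/v1/documents",
    params={"uri": "/your/module.xqy", "database": "Modules"},
    auth=HTTPDigestAuth("admin", "admin"),
)
print(resp.status_code)
print(resp.text)  # the module source, returned as application/text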
If you are still confused, then you should start with the basic MarkLogic tutorials and work through them one by one. You will most likely succeed faster by doing this than by jumping straight into coding you don't understand yet.
DETAIL:
Note: the default behaviour is to EXECUTE documents when doing a GET call, using the Modules database. Thus doing a GET of http://yourserver:8000/your/module.xqy will EXECUTE it, not return its source.
You will notice the REST API has a uri query parameter. This is EXECUTING the REST API code on /v1/documents which in turn will read the document specified by the uri and database parameters and return it.
I guess I can use:
xdmp:invoke("/pview/get-pview-browse-profiles.xqy",
  cts:and-query((
    cts:element-value-query(
      xs:QName("letter"), "*", "wildcarded"),
    cts:element-value-query(
      xs:QName("collection"), "*", "wildcarded"))))
although it doesn't return anything

Parameter count does not match Parameter Value count

We're getting a server error saying "Parameter count does not match Parameter Value count." Anyone have any idea what this could mean?
Our site's on ASP.NET Webforms running DotNetNuke as a CMS.
I've tried uploading an older version of the web.config file, but it doesn't seem to have changed since the error came up. It also wasn't caused by any of our recent module file uploads, because I re-uploaded the old versions of the files we changed this morning.
Could any changes in the database cause this or would it have to originate from an error in the code?
Thanks.
Some SQL query or stored procedure has more parameters specified than parameter values received.
Something like this:
command.CommandText = "EXEC test @a";
command.Parameters.Add("@a", "a");
command.Parameters.Add("@b", "b");
i.e. look at the database schema. Was it changed? Were the stored procedures changed?
I've found that if you are using parameters that have default values, certain libraries can't handle them.
For instance, we have an application that uses an older version of the Microsoft Enterprise Library Data Access block, with a method that allows you to pass parameters as an array.
It fails if the number of items in the array doesn't exactly match the number of parameters on the stored procedure, regardless of whether some are 'optional'.
In cases like this you have to use straight ADO.NET and the cmd.Parameters.AddWithValue("@parameterName", value) syntax for the required stored procedure parameters. When using this method, you will not have to add command parameters for the 'optional' stored procedure parameters.
I had the same problem and found that it was picking up the fields from a temporary cache maintained by MySQLHelper.
