How to authenticate a MongoDB connection in Robot Framework

I want to connect to MongoDB and query from MongoDB collection.
I have installed MongoDB support by installing the following libraries:
pip install pymongo
pip install robotframework-MongoDBLibrary
They installed properly.
After that, I wrote the following statement in RIDE to connect to MongoDB:
Connect to MongoDB dbHost=${host} dbPort=${port}
I ran just this statement, and the test script passed.
Then, to query, I added one more statement, as below:
${fields} = Retrieve Mongodb Records With Desired Fields ${MongoDBName} ${MongoDBCollection} {} profileDetails.customerCategory.masterCode return__id=False
After execution, I got the following error:
OperationFailure: database error: not authorized for query on clmpreprod.Profile
Normally, in Java, we connect to MongoDB using the following steps:
MongoClient mongoClient = new MongoClient(Arrays.asList(
new ServerAddress(MONGO_DBURL, 27017),
new ServerAddress(MONGO_DBURL, 27018),
new ServerAddress(MONGO_DBURL, 27019)));
DB database = mongoClient.getDB(MONGO_DBNAME);
boolean auth = database.authenticate(MONGO_USERNAME, MONGO_PASSWORD.toCharArray());
DBCollection collection = database.getCollection(MONGO_CUSTOMER_COLLECTION);
List<DBObject> obj = collection.find(queryDBParams, returnDBParams).sort(sortDBParams).limit(1).toArray();
Can anyone tell me what keyword, or what series of steps, I need to use in Robot Framework for database authentication and then querying?
Thanks
Sarada

I found the answer in the MongoDBLibrary documentation; here is the link: RobotFramework-MongoDBLibrary
Syntax is:
Connect To MongoDB | mongodb://admin:admin@192.20.33.226 | 27017 | 10 | None | <type 'dict'> | False |
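In other words, the username and password are embedded in the MongoDB URI passed as dbHost. For reference, here is a minimal pymongo sketch of the same authenticated connection and query (MongoDBLibrary wraps pymongo; the host, credentials, and names below are placeholders taken from the question):
from pymongo import MongoClient

# Placeholder credentials and host; replace with your own.
client = MongoClient("mongodb://admin:admin@192.20.33.226:27017")
db = client["clmpreprod"]  # database name taken from the error message

# Fetch one document, returning only the desired field and suppressing _id.
doc = db["Profile"].find_one(
    {}, {"profileDetails.customerCategory.masterCode": 1, "_id": 0}
)
print(doc)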
Thanks
Sarada

Related

How to connect to MySQL database in Julia

I am new to Julia, having used R for the last few years, and I am struggling with my first task, which is connecting to my AWS MySQL database.
I have followed many online tutorials but I get the same message no matter what I do.
Everything was installed yesterday so it should all be the current version.
julia-version = Version 1.5.2
Here is the code:
Pkg.add(PackageSpec(url="https://github.com/JuliaComputing/MySQL.jl"))
Pkg.add(PackageSpec(url="https://github.com/JuliaDB/DBI.jl"))
using MySQL
con = MySQL.connect("ec2blah.eu-west-2.compute.amazonaws.com", "name", "password", db = "database")
When I run this I get the following error:
UndefVarError: connect not defined
getproperty(::Module, ::Symbol) at Base.jl:26
top-level scope at data_prep.jl:18
Thanks
As per the documentation, you should probably do something like this:
using Pkg
Pkg.add("MySQL") # No need for a full PackageSpec here
Pkg.add("DBInterface")
using MySQL
using DBInterface
conn = DBInterface.connect(MySQL.Connection, "ec2blah.eu-west-2.compute.amazonaws.com",
"name", "password", db = "database")

RobotFramework-Oracle DB Connection issue-2

I am trying to connect to an Oracle DB in Robot Framework, and I am facing an issue. Find the details below.
Error info:
15:26:34.601 INFO Connecting using : cx_Oracle.connect(database=KIDS, user=St1_User, password=M...., host=st-kids..oss.abb.com, port=1581)
15:26:34.692 FAIL TypeError: 'database' is an invalid keyword argument for this function
If I remove the database argument from the keyword, I get the error below:
"NoSectionError: No section: 'default'"
Below is the command used in the Test Case
Connect To Database cx_Oracle kIDS St1_User M.... st-kids.oss.abb.com 1581
Please help.
Use the keyword Connect To Database Using Custom Params:
Connect To Database Using Custom Params cx_Oracle ${connection_string}
and as ${connection_string}:
user='${user}',password='${password}',dsn='${host}:${port}/${service_name}'
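Under the hood, this keyword essentially hands that string to cx_Oracle.connect. A minimal sketch of the equivalent direct Python call (credentials and DSN values are placeholders based on the question):
import cx_Oracle

# Placeholder credentials; the dsn takes the form host:port/service_name.
conn = cx_Oracle.connect(
    user="St1_User",
    password="secret",
    dsn="st-kids.oss.abb.com:1581/KIDS",
)
cursor = conn.cursor()
cursor.execute("SELECT 1 FROM dual")
print(cursor.fetchone())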

Can you use Athena ODBC/JDBC to return the S3 location of results?

I've been using the metis package to run Athena queries via R. While this is great for small queries, there still does not seem to be a viable solution for queries with very large return datasets (tens of thousands of rows, for example). However, when running these same queries in the AWS console, it is fast and straightforward to use the download link to obtain the CSV file of the query result.
This got me thinking: is there a mechanism for sending the query via R but returning/obtaining the S3:// bucket location where the query results live instead of the normal results object?
As mentioned in my comment above, you could investigate the RAthena and noctua packages.
These packages connect to AWS Athena using AWS SDKs as their drivers. This means they also download the data from S3, in a similar method to the one mentioned by @Zerodf. They both use data.table to load the data into R, so they are pretty quick. You can also get the Query Execution ID if required for some reason.
Here is an example of how to use the packages:
RAthena
Create a connection to AWS Athena; for more information on how to connect, please look at dbConnect:
library(DBI)
con <- dbConnect(RAthena::athena())
Example of how to query Athena:
dbGetQuery(con, "select * from sampledb.elb_logs")
How to access the Query ID:
res <- dbSendQuery(con, "select * from sampledb.elb_logs")
sprintf("%s%s.csv", res@connection@info$s3_staging, res@info$QueryExecutionId)
noctua
Create a connection to AWS Athena; for more information on how to connect, please look at dbConnect:
library(DBI)
con <- dbConnect(noctua::athena())
Example of how to query Athena:
dbGetQuery(con, "select * from sampledb.elb_logs")
How to access the Query ID:
res <- dbSendQuery(con, "select * from sampledb.elb_logs")
sprintf("%s%s.csv", res@connection@info$s3_staging, res@info$QueryExecutionId)
Sum up
These packages should do what you are looking for; however, as they download the data from the query output in S3, I don't believe you will need to go through the Query Execution ID to do the same process.
You could look at the Cloudyr Project. They have a package that handles creating the signature requests for the AWS API. Then you can fire off a query, poll AWS until the query finishes (using the QueryExecutionID), and use aws.s3 to download the result set.
You can also use system() to run AWS CLI commands that execute a query, wait for the results, and download the result set.
For example: You could run the following commands on the command line to get the results of a query.
$ aws athena start-query-execution --query-string "select count(*) from test_null_unquoted" --query-execution-context Database=stackoverflow --result-configuration OutputLocation=s3://SOMEBUCKET/ --output text
XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
Once you have the query-execution-id, you can check on the results.
$ aws athena get-query-execution --query-execution-id=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --output text
QUERYEXECUTION select count(*) from test_null_unquoted XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
QUERYEXECUTIONCONTEXT stackoverflow
RESULTCONFIGURATION s3://SOMEBUCKET/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.csv
STATISTICS 104 1403
STATUS 1528809056.658 SUCCEEDED 1528809054.945
Once the query succeeds, you can download the data.
$ aws s3 cp s3://SOMEBUCKET/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.csv .
Edit:
You can even turn those commands into a one-liner (Bash example here), but I'm sure you could do the same thing in PowerShell.
$ eid=`aws athena start-query-execution --query-string "select count(*) from test_null_unquoted" --query-execution-context Database=SOMEDATABASE --result-configuration OutputLocation=s3://SOMEBUCKET/ --output text` && until aws athena get-query-execution --query-execution-id=$eid --output text | grep "SUCCEEDED"; do sleep 10 | echo "waiting..."; done && aws s3 cp s3://SOMEBUCKET/$eid.csv . && unset eid

Connect R to Filemaker Pro 15 on Mac

I'm trying to create a connection between R (3.3.3), using RStudio (1.0.143), and FileMaker Pro Advanced 15 (15.0.3.305). I'm trying to create the connection using RODBC (1.3-15).
So far I:
Created a toy FM Pro database for testing
User id: Admin
Password: password
Followed these instructions for creating a DSN
Created a DSN for my toy FM Pro database called test_r
Successfully tested the connection to test_r
Unsuccessfully attempted to connect to the DSN in RStudio in the following two ways:
fm_connection <- odbcConnect(dsn="test_r", uid="Admin", pwd="password")
Which returns the following error:
[RODBC] ERROR: state IM002, code 0, message [unixODBC][Driver Manager]Data source name not found, and no default driver specified
ODBC connection failed
AND
constr <- paste("driver={FileMaker ODBC}",
"server=127.0.0.1",
"database=test_r",
"uid=Admin",
"pwd=password",
sep=";")
fm_connection <- odbcDriverConnect(constr)
Which returns the following error:
[RODBC] ERROR: state 01000, code 0, message [unixODBC][Driver Manager]Can't open lib 'FileMaker ODBC' : file not found
ODBC connection failed
However, I can see that the driver is there.
Finally, I've unsuccessfully tried using these (and other) references to resolve this issue:
https://cran.r-project.org/web/packages/RODBC/vignettes/RODBC.pdf
https://community.filemaker.com/thread/165849
Nothing seems to work so far. I'm not tied to RODBC, but I do need a solution that works for Mac OS. Any help is appreciated!
Here are some important troubleshooting steps for macOS. I've had the same error in R, and therefore I think there's a good chance that this is your issue. The setup of ODBC can be rather complicated, since several software components with multiple versions are involved. You've already verified that ODBC sharing is on in this particular FileMaker database.
Verify your installation of unixodbc:
ODBC Manager is actually optional. It manages the ini files described below. But after installing unixodbc, you can also edit these ini files in a text editor without ODBC Manager.
To install Homebrew, execute this command.
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Then, install unixodbc. This provides ODBC connectivity at the system level.
brew update
brew install unixodbc
Verify the driver:
The driver should be installed here:
/Library/ODBC/FileMaker\ ODBC.bundle/Contents/MacOS
That folder will contain these two files:
SetupToolTemplate fmodbc.so
Here's the full driver path:
/Library/ODBC/FileMaker\ ODBC.bundle/Contents/MacOS/fmodbc.so
Verify the config files:
The folder
/Library/ODBC
should contain these files:
FileMaker ODBC.bundle odbc.ini odbcinst.ini
Additionally, the unixodbc Cellar folder should contain only the latest version. If you have an earlier version, delete it. Here, only 2.3.4 is present:
/usr/local/Cellar/unixodbc/2.3.4
also contains
odbc.ini odbcinst.ini
Mirror the odbc.ini and odbcinst.ini files:
As described above, there are two copies of each of these files in two different locations. Make sure that the contents of both are equal: both copies of odbcinst.ini will define the drivers, and both copies of odbc.ini will contain the connections. Maybe this isn't 100% necessary, but it's what I needed to do.
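For example, a minimal odbcinst.ini registering the driver, plus a matching odbc.ini DSN entry, might look like this (the DSN name test_r comes from the question; treat both files as a sketch to adapt):
odbcinst.ini:
[FileMaker ODBC]
Description = FileMaker Pro ODBC driver
Driver      = /Library/ODBC/FileMaker ODBC.bundle/Contents/MacOS/fmodbc.so

odbc.ini:
[test_r]
Driver = FileMaker ODBC
Server = 127.0.0.1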
Test with isql:
B:etc bobby$ isql -v DSNname admin password
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
If you still have any issues after completing these steps, please share additional details so that I can update the answer.
I got this to work using odbc instead of RODBC and some new R code:
con <- DBI::dbConnect(odbc::odbc(),
driver = "/Library/ODBC/FileMaker ODBC.bundle/Contents/MacOS/FileMaker ODBC",
server = "127.0.0.1",
database = "/Users/bradcannell/Dropbox/Filemaker Pro/Notes/test_r.fmp12",
uid = "Admin",
pwd = "password")

Spark SQL ODBC Connection not connected

I have built the Spark source using the following command:
mvn -Pyarn -Phadoop-2.5 -Dhadoop.version=2.5.2 -Phive -Phive-1.1.0 -Phive-thriftserver -DskipTests clean package
I have started the Thrift server using the following command:
spark-submit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --master local[*] file:///c:/spark-1.3.1/sql/hive-thriftserver/target/spark-hive-thriftserver_2.10-1.3.1.jar
I connected to the Thrift server in Beeline using the following connection URL:
jdbc:hive2://localhost:10000
I created a table named people and loaded data into it using the following queries:
Create table people(Name String);
Load data local inpath 'C:\spark-1.3.1\examples\src\main\resources\people.txt' overwrite into table people;
How can I read this table from a C# application using an ODBC connection or the Thrift library?
I have used the following code snippet to read the table, with C# code generated by Thrift and the Thrift DLL:
Console.WriteLine("Thrift hive server for Spark SQL Connection....");
TSocket hiveSocket = new TSocket("localhost", 10000);
TBinaryProtocol protocol = new TBinaryProtocol(hiveSocket);
ThriftHive.Client client = new ThriftHive.Client(protocol);
if (!hiveSocket.IsOpen)
{
hiveSocket.Open();
}
Console.WriteLine("Thrift server connected");
client.execute("select * from people1");
But I cannot execute the query.
It is not throwing any error or exception because there was probably no error and the processing worked. You just need to actually retrieve the results using client.fetchAll().
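For example, continuing the question's snippet (a sketch, assuming the Thrift-generated ThriftHive client, whose fetchAll returns the rows as a list of strings):
client.execute("select * from people1");

// Retrieve and print every row returned by the query.
List<string> rows = client.fetchAll();
foreach (string row in rows)
{
    Console.WriteLine(row);
}
hiveSocket.Close();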
