I want to use the R package SPARQL to run a set of INSERT queries against a Virtuoso endpoint.
How do I pass the username and password to the function, and which URL should I use?
When I tried
tmpRes=SPARQL('myserver:8890/sparql',update=updateQry)
I got the error: Error: SPARQL Request Failed
I think you want:
tmpRes = SPARQL('http://dba:dba@myserver:8890/sparql', query = updateQry)
Note both changes: the credentials are embedded in the URL as user:password@host, and the statement is passed as query= rather than update=.
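For reference, a minimal sketch of the corrected call (the endpoint, credentials, and INSERT statement are placeholders for your own):
library(SPARQL)
# hypothetical Virtuoso endpoint, with basic-auth credentials embedded in the URL
endpoint <- "http://dba:dba@myserver:8890/sparql"
# hypothetical update inserting a single triple
updateQry <- "INSERT INTO GRAPH <http://example.org/g> { <http://example.org/s> <http://example.org/p> 'o' }"
tmpRes <- SPARQL(endpoint, query = updateQry)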
HTH
The following document on Analyzing Linked Open Data with R shows how to query a Virtuoso SPARQL endpoint from R.
I have NebulaGraph database version 3.1.2 running in my AWS environment and I am testing basic openCypher.
If I run MATCH (n:Person{lastName:"Brown"})-[e:LIKES_COMMENT]-(m) RETURN m.locationIP, it fails to retrieve the user's IP. I'm not sure where it went wrong; it should be a valid openCypher statement, and NebulaGraph supports openCypher.
Simply returning m works.
As the error message suggests, properties must be referenced as var.TagName.PropName, like
m.Comment.locationIP
This is one of the places where NebulaGraph differs from vanilla openCypher.
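So the original query, assuming m carries the Comment tag (as the question's data suggests), becomes:
MATCH (n:Person{lastName:"Brown"})-[e:LIKES_COMMENT]-(m) RETURN m.Comment.locationIP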
I'm trying to connect to a database in Azure Data Explorer from R using the AzureKusto library. Following this documentation https://github.com/Azure/AzureKusto, after calling the kusto_database_endpoint(...) function I need to open a browser page and enter the printed code manually. Is there a way to skip this manual step and do it automatically? Or are there alternatives for connecting to the ADX database?
Thanks for the help!
Co-creator of the package here. Thank you for the question. Yes, you can use the get_kusto_token function to obtain a token and then pass it to kusto_database_endpoint as the .query_token argument. get_kusto_token supports the following authentication flows:
"authorization_code"
"device_code"
"client_credentials"
"resource_owner"
For example, if you have an AAD application service principal that has access to the Azure Data Explorer cluster, you can use its ID and secret to authenticate:
# authenticate using the client_credentials method: see ?AzureAuth::get_azure_token
token <- get_kusto_token("https://mycluster.kusto.windows.net",
                         tenant = "mytenant",
                         authtype = "client_credentials",
                         app = "myappid",
                         password = "myclientsecret")
kusto_database_endpoint(server = "mycluster.kusto.windows.net",
                        database = "mydb",
                        .query_token = token)
The help page ?AzureKusto::get_kusto_token provides more detailed information on this. Also, please note that the get_kusto_token function is a wrapper around AzureAuth::get_azure_token. The readme for the AzureAuth R package has more detailed examples of other methods of obtaining an Azure access token: https://github.com/Azure/AzureAuth
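As a minimal usage sketch (the table name MyTable is a placeholder), the endpoint can then be queried without any browser prompt:
library(AzureKusto)
# assumes `token` from the snippet above
endpoint <- kusto_database_endpoint(server = "mycluster.kusto.windows.net",
                                    database = "mydb",
                                    .query_token = token)
res <- run_query(endpoint, "MyTable | take 10")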
I'm trying to use R's DBI library to create a view on an Athena database, connected via JDBC. The dbSendStatement command, which is supposed to submit and execute arbitrary SQL without returning a result, throws an error when no result set is returned:
DBI::dbSendStatement(athena_con, my_query)
Error in .verify.JDBC.result(r, "Unable to retrieve JDBC result set", :
Unable to retrieve JDBC result set
JDBC ERROR: [Simba][JDBC](11300) A ResultSet was expected but not generated from query <query repeated here>
In addition, the view is not created.
I've tried other DBI commands that seemed promising (dbExecute, dbGetQuery, dbSendQuery), but they all throw the same error. (Actually, I expected them all to; dbSendStatement is the one that, according to the manual, should work.)
Is there some other way to create a view using DBI, dbplyr, etc.? Or am I doing this right and it's a limitation of RJDBC or the driver?
RJDBC pre-dates the more recent DBI specification and uses a different function for this purpose: RJDBC::dbSendUpdate(con, query).
DBI's dbSendStatement() doesn't work here yet. For best compatibility, RJDBC could implement this method and forward it to its dbSendUpdate().
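For example, a minimal sketch using the connection from the question (the view and table names are placeholders):
# dbSendUpdate executes DDL/DML that returns no result set
RJDBC::dbSendUpdate(athena_con,
                    "CREATE VIEW my_view AS SELECT col1, col2 FROM my_table")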
Without more details of your query I cannot promise this helps, but in my case:
nrow <- dbExecute(con, paste0("CREATE VIEW ExampleView AS ",
                              "Random statements"))
creates a view on my backend (note the trailing space after AS, so the pasted SQL stays valid). One difference: I'm using SQLite.
We are looking for the simplest way to send Alfresco's audit log to Elasticsearch.
I think using Alfresco's supplied query to fetch the audit log would be the simplest way, since the audit log data is hard to inspect in the database.
The query returns its results as JSON, so I'd like to fetch them directly with fluentd and send them to Elasticsearch.
I roughly understand how to get the output into Elasticsearch, but I wonder whether fluentd can run the curl command to fetch the query results directly.
Otherwise, if you have another simple idea for getting Alfresco's audit log, kindly let me know.
I am not sure whether I understood it fully, but based on your last statement I am giving this answer.
To retrieve audit entries from the Alfresco repository, you can directly use Alfresco's REST APIs, which allow you to access them.
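For example, a minimal sketch (the host, credentials, and audit application id alfresco-access are placeholders; check your server's audit configuration):
# list audit entries as JSON via the v1 REST API
curl -u admin:admin \
  "http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/audit-applications/alfresco-access/audit-entries?maxItems=100"
The JSON response can then be picked up by fluentd and forwarded to Elasticsearch.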
I'm doing a SPARQL query against an authenticated endpoint in R using the SPARQL library.
The same query/endpoint/user works using the rrdf package. Unfortunately, once I get the query working, I need to process the data in R and update the graph with the results, which rrdf can't do.
Setting up a few variables first, the below query works using rrdf:
sparql.remote(myEndpoint,myQuery,'rowvar',myUsername,myUserpwd)
Using SPARQL, this does not work:
SPARQL(myEndpoint,myQuery,curl_args=c('username'=myUsername,'userpwd'=myUserpwd))
The error is Error: XML content does not seem to be XML: '', which I think means no document is coming back.
So, any tips on how to debug the curl call underneath all this?
And the solution in this case was that curl has no separate username parameter; the credentials must be passed together in userpwd.
The correct call is:
SPARQL(myEndpoint,myQuery,curl_args=c('userpwd'=paste(myUsername,':',myUserpwd,sep='')))
Actually, debugging was done via calls to getURL from RCurl against the bare endpoint until I got something that worked.
getURL(url=endpoint,userpwd="testusername:testpassword",verbose=TRUE)
Hope this helps someone.
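For completeness, a minimal end-to-end sketch (the endpoint, credentials, and query are placeholders):
library(SPARQL)
myEndpoint <- "http://myserver:8890/sparql"
myUsername <- "testusername"
myUserpwd  <- "testpassword"
myQuery    <- "SELECT * WHERE { ?s ?p ?o } LIMIT 10"
# curl expects the credentials as a single user:password string in userpwd
res <- SPARQL(myEndpoint, myQuery,
              curl_args = c(userpwd = paste(myUsername, myUserpwd, sep = ":")))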