virtuoso-opensource: trouble querying (with Jena) data loaded using the vload script?

I followed this article on "Installing and Managing Virtuoso SPARQL Endpoint" (http://logd.tw.rpi.edu/tutorial/installing_using_virtuoso_sparql_endpoint).
After loading data from an N-Triples file with the following command:
sudo ./vload nt /path/to/data/file/data.nt http://www.soctrace.org/ontologies/st.owl
I successfully queried that data from the SPARQL endpoint web interface at http://localhost:8890/sparql:
SELECT ?s ?p ?o WHERE { ?s ?p ?o }
However, I'm interested in querying that data from Jena, so I ran the following Java code:
public void queryVirtuoso() {
    Model model = VirtModel.openDatabaseModel("http://www.soctrace.org/ontologies/st.owl", "jdbc:virtuoso://localhost:1111", "dba", "dba");
    // Query string.
    String queryString = "SELECT ?s ?p ?o WHERE {?s ?p ?o}";
    System.out.println("Execute query=\n" + queryString);
    System.out.println();
    QueryExecution qexec = VirtuosoQueryExecutionFactory.create(queryString, model);
    try {
        ResultSet rs = qexec.execSelect();
        System.out.println("Number of results found " + rs.getRowNumber());
    } finally {
        qexec.close();
    }
}
But unfortunately the code returns no result.
It seems that the first parameter of openDatabaseModel in my code is not correct, but I don't know what the correct value is.
Does someone have any indication about how to query a Virtuoso graph from Jena, given that the data was imported using the vload script?
Best regards,

If you are not sure about the graph name, you can look it up in the Linked Data tab of your Virtuoso Conductor. It should also be possible to call VirtModel.openDatabaseModel without a graph name (connection URL, username, password only)...

Related

How to exec MariaDB Store Proc from Visual Foxpro?

I have a function/stored procedure in MariaDB:
CREATE DEFINER=`root`@`localhost` PROCEDURE `test1`(var1 varchar(100))
BEGIN
    select * from ttype where kode=var1;
END
I need to get a cursor from the stored procedure. How do I get a cursor in a VFP application, when the database is a MariaDB/MySQL stored procedure?
I tried this in my Visual FoxPro:
Sqlexec(kon,"call test1 ('ABC')","test") && not running
But when I use a plain select like this:
sqlexec(kon,"select * from ttype where kode='ABC'","test") && it's running well
"Can you show me how to use aerror() in my case?"
You would use the AError() function whenever any remote ODBC action fails, i.e. any of VFP's SQL*() functions, like SqlStringConnect() for example, or your proposed
sqlexec(kon,"select * from ttype where kode='ABC'","test") && it's running well
&& Actually you cannot know whether "it's running well" unless you are evaluating its return value like this:
Local lnResult, laSqlErrors[1], lcErrorMessage
lnResult = SqlExec(kon,"select * from ttype where kode='ABC'","test")
If m.lnResult = -1 && as documented in the F1 Help
    AERROR(laSqlErrors)
    lcErrorMessage = ;
        TRANSFORM(laSqlErrors[1]) + ", " + ;
        TRANSFORM(laSqlErrors[2])
    && now write a log and/or inform the user
ENDIF
&& to be continued

Application Insights and Azure Stream Analytics Query export the whole custom dimensions as string

I have set up a continuous export from Application Insights into Blob storage. With a data stream I'm able to get the JSON files out into a SQL DB. So far so good.
Also with help from Phani Rahul Sivalenka I'm able to query the individual properties of custom dimensions as described here: Application Insights and Azure Stream Analytics Query a custom JSON property
My custom dimensions look like this when exported manually into a CSV file:
"{""OperatingSystemVersion"":""10.0.18362.418"",""OperatingSystem"":""WINDOWS"",""RuntimePlatform"":""UWP"",""Manufacturer"":""LENOVO"",""ScreenHeight"":""696"",""IsSimulator"":""False"",""ScreenWidth"":""1366"",""Language"":""it"",""IsTablet"":""False"",""Model"":""LENOVO_BI_IDEAPAD4Q_BU_idea_FM_""}"
In addition to the single columns, I'd like to have the whole custom dimensions as a string in a SQL table column (varchar(max)).
In the "Test results" of my Data Stream Output Query I see the column formatted as above, but when actually exporting / writing into the SQL DB, all my tests ended with only the value "Array" or "Record" in my SQL table column.
What do I have to do in the Data Stream Query to get the whole custom dimensions value as a string, so that I'm able to write it into the SQL table as a whole string?
You could use a UDF to merge all the key/value pairs of a single row into one JSON-formatted string.
UDF:
function main(raw) {
    let str = "{";
    for (let key in raw) {
        str = str + "\"" + key + "\":\"" + raw[key] + "\",";
    }
    str += "}";
    return str;
}
SQL:
SELECT udf.jsonstring(INPUT1) FROM INPUT1
The answer put me on the right track.
The above script didn't include the values as expected, so I modified it to work as needed:
function main(dimensions) {
    let str = "{";
    for (let i in dimensions) {
        let dim = dimensions[i];
        for (let key in dim) {
            str = str + "\"" + key + "\":\"" + dim[key] + "\",";
        }
    }
    str += "}";
    return str;
}
Selecting:
WITH pageViews as (
    SELECT
        V.ArrayValue.name as pageName
        , *
        , customDimensions = UDF.flattenCustomDimensions(A.context.custom.dimensions)
        , customDimensionsString = UDF.createCustomDimesionsString(A.context.custom.dimensions)
    FROM [AIInput] as A
    CROSS APPLY GetElements(A.[view]) as V
)
With this I'm getting the custom dimensions string as follows in my SQL table:
{"Language":"tr","IsSimulator":"False","ScreenWidth":"1366","Manufacturer":"Hewlett-Packard","OperatingSystem":"WINDOWS","IsTablet":"False","Model":"K8K51ES#AB8","OperatingSystemVersion":"10.0.17763.805","ScreenHeight":"696","RuntimePlatform":"UWP",}
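As a side note, the exported string above ends with a trailing comma before the closing brace, which makes it technically invalid JSON. A small Python sketch of the same flattening logic (the sample input is made up for illustration) shows how collecting the pairs and joining them avoids that:

```python
import json

# Hypothetical sample shaped like A.context.custom.dimensions:
# an array of single-key records.
dimensions = [
    {"Language": "tr"},
    {"IsSimulator": "False"},
]

# Same flattening idea as the UDF above, but collecting the pairs
# and joining with commas, so no trailing comma is produced.
parts = []
for dim in dimensions:
    for key, value in dim.items():
        parts.append('"%s":"%s"' % (key, value))

result = "{" + ",".join(parts) + "}"
print(result)  # {"Language":"tr","IsSimulator":"False"}
```

A real implementation would also need to escape quotes inside keys and values; in Python, json.dumps handles that for free.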

Store add_graph appears to do nothing with rdflib and Fuseki

I have a Fuseki endpoint and want to use the python rdflib library to add graphs to it.
I connect to the Fuseki server as follows:
import rdflib
from rdflib import Graph, Literal, URIRef
from rdflib.plugins.stores import sparqlstore
query_endpoint = 'http://localhost:3030/dsone/query'
update_endpoint = 'http://localhost:3030/dsone/update'
store = sparqlstore.SPARQLUpdateStore()
store.open((query_endpoint, update_endpoint))
Then I set up an rdflib ConjunctiveGraph as follows:
g = rdflib.ConjunctiveGraph(identifier=URIRef("http://localhost/dsone/g3"))
g.parse(data="""
@base <http://example.com/base> .
@prefix : <#> .
<> a :document1 .
""", format='turtle', publicID = "http://example.com/g3")
g now contains one triple:
for r in g:
print(r)
Which results in:
(rdflib.term.URIRef('http://example.com/base'), rdflib.term.URIRef('http://www.w3.org/1999/02/22-rdf-syntax-ns#type'), rdflib.term.URIRef('http://example.com/base#document1'))
Now I add the graph to the store with:
store.add_graph(g)
The server shows a 200 return code. When I do the insert directly over REST, it returns 204 - which could be a clue.
Now there is nothing in the store.
for r in store.query("""
select ?g (count(*) as ?count) {graph ?g {?s ?p ?o .}} group by ?g
"""):
print(r)
Which has no output.
As far as I can tell store.add_graph(g) is doing nothing.
Any ideas?
store.add_graph(g)
just creates the graph itself; it won't take your graph's content and put it in there. So add_graph() really only uses the graph's identifier, not its data.
Try calling add_graph() first and then adding the data to the graph.

System.data.Sqlite: Return number of affected rows no matter if Query or NonQuery is used

I'm using System.Data.SQLite ADO.NET provider for SQLite and the following Powershell code to execute queries (and nonqueries) against a Sqlite3 DB:
Function Invoke-SQLite ($DBFile,$Query) {
    try {
        Add-Type -Path ".\System.Data.SQLite.dll"
    }
    catch {
        write-warning "Unable to load System.Data.SQLite.dll"
        return
    }
    if (!$DBFile) {
        throw "DB Not Found"
        Sleep 5
        Exit
    }
    $conn = New-Object System.Data.SQLite.SQLiteConnection
    $conn.ConnectionString = "Data Source={0}" -f $DBFile
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = $Query
    #$cmd.CommandTimeout = 10
    $ds = New-Object System.Data.DataSet
    $da = New-Object System.Data.SQLite.SQLiteDataAdapter($cmd)
    [void]$da.fill($ds)
    $cmd.Dispose()
    $conn.Close()
    write-host ("{0} Row(s) returned " -f ($ds.Tables[0].Rows|Measure-Object|Select -ExpandProperty Count))
    return $ds.Tables[0]
}
The problem is: while it is trivial to know how many rows have been SELECTed by a query operation, the same is not true if the operation is an INSERT, DELETE or UPDATE (a non-query).
I know I could use the ExecuteNonQuery method, but I need a generic wrapper which returns the number of affected rows while being agnostic about the query it executed (as Invoke-SQLCmd would do, for example).
Is that possible?
Thanks!
A few comments before the answer:
System.Data.SQLite supports executing multiple SQL statements in one command, as long as the CommandText has each valid statement delimited by a semicolon (;). This means that there could be a mixture of queries and DML statements (i.e. INSERT, UPDATE, DELETE). The fact that you do not want to distinguish between the types of statement in $Query tells me that you are likely just passing statements blindly, so it could contain any combination of statements. Simply getting only one value (whether from a query or DML) seems too limiting.
Using a DataAdapter to fill a dataset just to get counts is inefficient. Instead, it may be better to just get a DataReader object and count the returned rows. This also allows a separate count for each query statement to be retrieved, something that gets obscured by using the DataAdapter object. (Perhaps enumerating all tables in the resultant dataset could get the same number, but I'm not certain that would always be equivalent.)
One good thing is that if you insist on using a DataAdapter, it will still execute DML statements (even though the expected result is a query that returns rows). The dataset will not be changed (filled), but all statements in the command text will still apply their changes to the database, so the following solution will still be useful.
Even if the code had worked, I assume that the line which prints "{0} Row(s) returned" is meant to get a simple count, but $ds.Tables[0].Rows needs to be $ds.Tables[0].Rows.Count.
Notes about this particular solution:
The key is to call either of the sqlite SQL functions changes() or total_changes(). These can be retrieved using SQL: SELECT total_changes();. I recommend getting total_changes() before and after a command, then subtracting the difference. That will get changes for multiple statements executed by one command.
I'm not a PowerShell guru, so I tested everything in C#. Treat the code below more as pseudo code since it may need tweaking.
The code:
$conn = New-Object System.Data.SQLite.SQLiteConnection
try {
    $conn.ConnectionString = "Data Source={0}" -f $DBFile
    $conn.Open()
    $cmdCount = $conn.CreateCommand()
    $cmd = $conn.CreateCommand()
    try {
        $cmdCount.CommandText = "SELECT total_changes();"
        $beforeChanges = $cmdCount.ExecuteScalar()
        $cmd.CommandText = $Query
        $ds = New-Object System.Data.DataSet
        $da = New-Object System.Data.SQLite.SQLiteDataAdapter($cmd)
        $rows = 0
        try {
            [void]$da.fill($ds)
            foreach ($tbl in $ds.Tables) {
                $rows += $tbl.Rows.Count
            }
        } catch {}
        $afterChanges = $cmdCount.ExecuteScalar()
        $DMLchanges = $afterChanges - $beforeChanges
        $totalRowAndChanges = $rows + $DMLchanges
        # $ds.Tables[0] may or may not be valid here.
        # If the query returned no data, no tables will exist.
    } finally {
        $cmdCount.Dispose()
        $cmd.Dispose()
    }
} finally {
    $conn.Dispose()
}
Alternatively, you could eliminate the DataAdapter:
$cmd.CommandText = $Query
$rdr = $cmd.ExecuteReader()
$rows = 0
do {
    while ($rdr.Read()) {
        $rows++
    }
} while ($rdr.NextResult())
$rdr.Close()
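The total_changes() bookkeeping itself is easy to verify outside PowerShell. Here is a small sketch using Python's built-in sqlite3 module (the table and statements are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")  # DDL does not count as a change

# Snapshot total_changes() before running the batch...
before = conn.execute("SELECT total_changes()").fetchone()[0]

conn.execute("INSERT INTO t VALUES (1)")  # 1 change
conn.execute("INSERT INTO t VALUES (2)")  # 1 change
conn.execute("UPDATE t SET id = id + 1")  # 2 changes (both rows)

# ...and subtract afterwards to get the rows affected by DML,
# regardless of how many statements the batch contained.
after = conn.execute("SELECT total_changes()").fetchone()[0]
dml_changes = after - before
print(dml_changes)  # 4
```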

Querying FactForge Sparql Endpoint with RDF4J returning error

I'm trying to query the FactForge SPARQL endpoint with RDF4J, but I receive an error.
I'm using RDF4J 2.3.2 with Maven.
I set my proxy settings on Java with:
System.setProperty("https.proxyHost", PROXY_HOST);
System.setProperty("https.proxyPort", PROXY_PORT);
System.setProperty("http.proxyHost", PROXY_HOST);
System.setProperty("http.proxyPort", PROXY_PORT);
Here is my code:
SPARQLRepository repo = new SPARQLRepository("http://factforge.net/sparql");
repo.initialize();
try (RepositoryConnection conn = repo.getConnection()) {
    String queryStringA = "SELECT ?s ?p WHERE { ?s ?p ?o } ";
    TupleQuery query = conn.prepareTupleQuery(QueryLanguage.SPARQL, queryStringA);
    try (TupleQueryResult result = query.evaluate()) {
        while (result.hasNext()) { // iterate over the result
            BindingSet bindingSet = result.next();
            Value valueOfX = bindingSet.getValue("s");
            Value valueOfY = bindingSet.getValue("p");
            System.out.println(valueOfX);
            System.out.println(valueOfY);
        }
    }
}
I get the following error:
Exception in thread "main" org.eclipse.rdf4j.query.QueryEvaluationException: Failed to get server protocol; no such resource on this server: http://factforge.net/sparql?query=SELECT+%3Fs+%3Fp+WHERE+%7B+%3Fs+%3Fp+%3Fo+%7D+
I would appreciate any help. Thank you.
You're using the wrong endpoint URL. According to the Factforge overview, the SPARQL endpoint for Factforge is at http://factforge.net/repositories/ff-news.
Apart from that, please note that the query you're doing is "give me ALL your triples". Factforge has a lot of triples, so the result of this is going to be massive, and executing it will consume a lot of resources on the Factforge server. Your query might time out or Factforge might refuse to execute it.
If your aim is simply to test that SPARQL querying works, it would be better to put something like a LIMIT constraint in there:
String queryStringA = "SELECT ?s ?p WHERE { ?s ?p ?o } LIMIT 10";
