I have an entry-level question which my book could not help with and which also confused a colleague. Can I use RDF without specifying prefixes, and how do I specify the predicate in the SPARQL query?
I'm doing this in R's rrdf package, but if I set up a store as
A=new.rdf(ontology=F)
add.triple(A,"Ian","sonOf","Daddy")
add.triple(A,"Ian","sonOf","Mummy")
add.triple(A,"Ian","likes","Chocolate")
I get the following message
## Error: com.hp.hpl.jena.query.QueryParseException: Lexical error
at line 1, column 40. Encountered: " " (32), after : "sonOf"
in response to the query
sparql.rdf(A, "select ?son ?parent where {?son sonOf ?parent}")
Can I use sonOf in this way? Do I have to set up my own Schema first? Am I doing something fundamentally wrong? Do I have to use prefixes if all my data resides on the same data source?
An RDF predicate is always a URI (technically an IRI, but let's set that aside).
It can be specified as a prefixed name:
namespace:sonOf
or as an IRIRef, e.g.:
<http://my.namespace.com#sonOf>
but I don't think it can be given as a plain label like you are attempting.
If you use the prefixed name style, then the prefix must be defined and bound to a namespace, so that a valid URI can be constructed from your input.
(The formal grammar is given in the SPARQL Specification.)
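For example (a sketch; the namespace http://example.org/family# is just a made-up stand-in for whatever vocabulary you actually use), the same triple pattern can be written with a declared prefix:
PREFIX fam: <http://example.org/family#>
select ?son ?parent where { ?son fam:sonOf ?parent }
or with the full IRI spelled out:
select ?son ?parent where { ?son <http://example.org/family#sonOf> ?parent }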
In the rrdf R package, the following syntax seemed to work:
library(rrdf)
D = new.rdf(ontology = F)
add.triple(D, "ian", "w:likes", "food")
add.triple(D, "ian", "w:hates", "homework")
sparql.rdf(D, "select ?child ?object where {?child <w:likes> ?object.}")
sparql.rdf(D, "select ?child ?object where {?child <w:hates> ?object.}")
You thus need to use the <> in the SPARQL calls, but not in calls to add.triple(). This seems obvious now, but it confused a few of us at the time.
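Translated back to the original example (a sketch, keeping the same ad-hoc w: prefix used above):
add.triple(A, "Ian", "w:sonOf", "Daddy")
add.triple(A, "Ian", "w:sonOf", "Mummy")
sparql.rdf(A, "select ?son ?parent where {?son <w:sonOf> ?parent.}")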
I think you're asking about "blank" prefixes... if so:
when you define your prefixes, you can just say something like:
PREFIX : <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
which would then be used like:
?sub :sonOf ?obj
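Put together (a sketch; the rdf-syntax namespace above is only a placeholder for whatever vocabulary your data actually uses), a full query then looks like:
PREFIX : <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
select ?sub ?obj where { ?sub :sonOf ?obj }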
How does one define constraints via Vapor? Here's my latest attempt:
database.schema(Thingmabob.schema)
.id()
.field("parentID", .uuid, .required, .references(Stuff.schema, .id))
.field("name", .string, .required)
.field("isDocumened", .bool, .required)
.field("value1", .uint, .required)
.field("value2", .uint)
.constraint(.custom("value2 > value1 + 1")) // << "syntax error"
.create()
The solution is twofold:
1) It seems the documentation (ha!) forgot to mention that any field named in .constraint(.custom( ... )) is set to lowercase. So if you have a field/column in something like .constraint(.custom("isDocumented ...")), the value in the constraint becomes isdocumented, which fails to match any column. Just beautiful!
2) Also note that, while I'm using the .constraint() API, the framework has no idea what to do with it... I thus have to write the full statement CHECK notcamelcase > alsonotcamelcase (not just notcamelcase > alsonotcamelcase). Not sure why .constraint() isn't named .arbitrarySQLSegment(), as that would have been more accurate and helpful than all the documentation I've read.
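Putting the two points together, the failing line from the question should become something like this (a sketch based purely on the observations above; the column names here are already lowercase, and the CHECK keyword is written out in full):
.constraint(.custom("CHECK (value2 > value1 + 1)"))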
I have a JanusGraph database with a graph structure as follows:
(Paper)<-[AuthorOf]-(Author)
I want to use Gremlin's match clause to query the data and assign the results to a subgraph. This is what I have so far:
g.V().match(
__.as('a').has('Paper','paperTitle', 'The name of my paper'),
__.as('a').in('AuthorOf').outV().as('b')).
select('b').values()
This query returns what I want: the authors of the paper for which I'm searching. However, I want to assign the results to a subgraph so I can export it using:
sg.io(IoCore.graphml()).writeGraph("/home/ubuntu/myresults.graphml")
Previously, I've achieved this with a different query structure like this:
sg = g.V().has('paperTitle', 'The name of my paper').
inE('AuthorOf').subgraph('sg1').
outV().
cap('sg1').
next()
Is there a way to achieve the same results using the match() statement?
After a little trial and error I was able to create a working solution:
sg = g.V().match(
__.as('a').has('Paper','paperTitle', 'ladle pouring guide'),
__.as('a').inE('AuthorOf').subgraph('sg').outV().as('b')).
cap('sg').next()
At first, I was trying to use the 'select' statement to isolate the subgraph. After reviewing the documentation on 'subgraph' and learning more about side effects in Gremlin, I realized it wasn't necessary.
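With sg bound to the captured subgraph this way, the export step from the question should then work unchanged:
sg.io(IoCore.graphml()).writeGraph("/home/ubuntu/myresults.graphml")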
I'm creating a Xamarin.Forms application, and we use sqlite-net-pcl by Frank A. Krueger. It is supposed to support full-text search, which I am trying to implement.
Now, full-text search seems to work. I created a query like:
SELECT * FROM Document d JOIN(
SELECT document_id
FROM SearchDocument
WHERE SearchDocument MATCH 'test*'
) AS ranktable USING(document_id)
which seems to work fine. However, I'd like to return the results in order of their rank, otherwise the result is useless. According to the documentation (https://www.sqlite.org/fts3.html), the syntax should be:
SELECT * FROM Document d JOIN(
SELECT document_id, rank(matchinfo(SearchDocument)) AS rank
FROM SearchDocument
WHERE SearchDocument MATCH 'test*'
) AS ranktable USING(document_id)
ORDER BY ranktable.rank
However, the engine doesn't seem to know the "rank" function:
[ERROR] FATAL UNHANDLED EXCEPTION: SQLite.SQLiteException: no such function: rank
It does know the "matchinfo" function though.
Can anyone tell me what I'm doing wrong?
Edit: After some more searching, it seems that the rank function is simply not implemented in the library. I'm confused. How can people use full-text search without caring about the order of the results? Is there some other way of ordering the results so that the most relevant ones are at the top?
The library depends on SQLitePCLRaw.bundle_green for its native SQLite build; it's worth looking into that package.
I'm parsing a Swedish library catalogue using R and the XML package. Using the library's API, I'm getting XML back from a URL containing my query.
I'd like to use XPath queries to parse each record, but everything I do with XPath in the XML package returns empty lists, everything except "//*". I'm no expert in either XML parsing or XPath, but I suspect it has to do with the XML that the API returns to me.
This is a simple example of one single post in the catalogue:
library(XML)
example.url <- "http://libris.kb.se/sru/swepub?version=1.1&operation=searchRetrieve&query=mat:dok&maximumRecords=1&recordSchema=mods"
doc = xmlParse(example.url)
# Title
works <- xmlRoot(doc)[[4]][["record"]][["recordData"]][["mods"]][["titleInfo"]][["title"]][[1]]
doesntwork <- getNodeSet(doc, "//title")
# The only xPath that returns anything
onlythisworks <- getNodeSet(doc, "//*")
If this has something to do with namespaces (as these answers suggest), all I understand about it is that the API returns data that seems to have namespaces defined in the initial tag, and that I could use that, but this doesn't help me:
# Namespaces are confusing:
title <- getNodeSet(xmlRoot(doc), "//xsi:title", namespaces = c(xsi = "http://www.w3.org/2001/XMLSchema-instance"))
Here's (again) the example return data that I'm trying to parse.
You have to use the right namespace.
Try the following:
doesntwork <- getNodeSet(doc, "//mods:title")
#[[1]]
#<title>Horizontal Slot Waveguides for Silicon Photonics Back-End Integration [Elektronisk resurs]</title>
#
#[[2]]
#<title>TRITA-ICT/MAP AVH, 2014:17 \
# </title>
#
#attr(,"class")
#[1] "XMLNodeSet"
BTW: I usually get the namespaces via
nsDefs=xmlNamespaceDefinitions(doc,simplify = TRUE,recursive=TRUE)
But this throws an error in your case. It complains that there are different URIs for the same namespace prefix. According to this site, this does not seem to be good coding style.
Update as per OP's comment
I am not an XML expert myself, but here is my take: you can define default namespaces via <tag xmlns=URI>. Non-default namespaces are of the form <tag xmlns:a=URI>, with a being the respective namespace name.
The problem with your document is that there are two different default namespaces. The first is in <searchRetrieveResponse xmlns="http://www.loc.gov/zing/srw/" ... >. The second is in <mods xmlns="http://www.loc.gov/mods/v3" ... >. Also, you will find the second default namespace URI in the first tag as xmlns:mods="http://www.loc.gov/mods/v3" (where it is non-default). This seems rather messy. Now, the <title> tag is within the <mods> tag. I think that the default namespace defined in <mods> gets overridden by the non-default mods namespace declared on <searchRetrieveResponse> (because they have the same URI). So although <mods> and all nested tags (like <title>) seem to have the default namespace, they actually have the xmlns:mods namespace. But this does not apply to the tag <numberOfRecords> (because it's outside of <mods>). You can access that node via
getNodeSet(doc, "//ns:numberOfRecords",
namespaces = c(ns="http://www.loc.gov/zing/srw/"))
Here you extract the default namespace defined in <searchRetrieveResponse> and give it a name (ns in our case). Then you can explicitly use that namespace name in your XPath query.
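If you prefer to bind both namespaces yourself instead of reusing prefixes from the document (a sketch; the prefix names srw and m are arbitrary), you can pass several at once:
getNodeSet(doc, "//m:title",
           namespaces = c(srw = "http://www.loc.gov/zing/srw/",
                          m = "http://www.loc.gov/mods/v3"))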
As far as I know, transaction AL11 returns a mapping of "directory parameters" to file paths on the application server.
The trouble with transaction AL11 is that its program only calls C modules; there's almost no trace of SELECT statements or function calls to analyze there.
I want to be able to do this dynamically in my code, for instance with a function module that takes "DATA_DIR" as input and returns "E:\usr\sap\IDS\DVEBMGS00\data" as output.
This thread is about a similar topic, but it doesn't help.
Some other guy has the same problem, and he explains it quite well here.
I strongly suspect that the only way to get these values is through the kernel directly. Some of them can vary depending on the application server, so you probably won't be able to find them in the database. You could try this:
TYPE-POOLS abap.
TYPES: BEGIN OF t_directory,
log_name TYPE dirprofilenames,
phys_path TYPE dirname_al11,
END OF t_directory.
DATA: lt_int_list TYPE TABLE OF abaplist,
lt_string_list TYPE list_string_table,
lt_directories TYPE TABLE OF t_directory,
ls_directory TYPE t_directory.
FIELD-SYMBOLS: <l_line> TYPE string.
START-OF-SELECTION-OR-FORM-OR-METHOD-OR-WHATEVER.
* get the output of the program as string table
SUBMIT rswatch0 EXPORTING LIST TO MEMORY AND RETURN.
CALL FUNCTION 'LIST_FROM_MEMORY'
TABLES
listobject = lt_int_list.
CALL FUNCTION 'LIST_TO_ASCI'
EXPORTING
with_line_break = abap_true
IMPORTING
list_string_ascii = lt_string_list
TABLES
listobject = lt_int_list.
* remove the separators and the two header lines
DELETE lt_string_list WHERE table_line CO '-'.
DELETE lt_string_list INDEX 1.
DELETE lt_string_list INDEX 1.
* parse the individual lines
LOOP AT lt_string_list ASSIGNING <l_line>.
* If you're on a newer system, you can do this in a more elegant way using regular expressions
CONDENSE <l_line>.
SHIFT <l_line> LEFT DELETING LEADING '|'.
SHIFT <l_line> RIGHT DELETING TRAILING '|'.
SPLIT <l_line>+1 AT '|' INTO ls_directory-log_name ls_directory-phys_path.
APPEND ls_directory TO lt_directories.
ENDLOOP.
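Once lt_directories is filled, looking up a single parameter (e.g. the DATA_DIR from the question) is just a table read; a sketch:
READ TABLE lt_directories INTO ls_directory WITH KEY log_name = 'DATA_DIR'.
IF sy-subrc = 0.
  WRITE: / ls_directory-phys_path.
ENDIF.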
Try the following:
data : dirname type DIRNAME_AL11.
CALL 'C_SAPGPARAM' ID 'NAME' FIELD 'DIR_DATA'
ID 'VALUE' FIELD dirname.
Alternatively, if you want to use your own parameters (AL11 -> Configure), read these out of table USER_DIR, as sketched below.
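A minimal sketch for that case, assuming the standard table USER_DIR is directly readable on your system:
DATA lt_user_dir TYPE TABLE OF user_dir.
SELECT * FROM user_dir INTO TABLE lt_user_dir.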