How to dereference an uploaded URI node in Fuseki2 linked data server? - fuseki

I installed Fuseki2 and ran it as a standalone server with the default settings (http://localhost:3030/).
I created an in-memory dataset ('geography') by uploading a Turtle file with information like this, through the 'Manage datasets' console:
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix schema: <http://schema.org/> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix pd: <http://localhost:3030/geography/ontology/> .
pd:City
rdf:type owl:Class ;
rdfs:comment "Represents a City in the ontology" .
<http://localhost:3030/geography/City/>
a dcat:Dataset;
schema:name "City information";
schema:description "Dataset with all the country's cities' information from 2018." .
<http://localhost:3030/geography/City/1100015>
a pd:City;
rdfs:label "Rome" ;
pd:isCapital "1"^^xsd:int .
Although I can get the values via a SPARQL query (e.g. PREFIX dcat: <http://www.w3.org/ns/dcat#> SELECT ?s WHERE { ?s a dcat:Dataset . }), I cannot access the node's information by dereferencing its URI (http://localhost:3030/geography/City/1100015) the way DBpedia does, for example: http://dbpedia.org/page/Rome.
Is there a way that I can configure Fuseki server to dereference the URI that I uploaded and return the information of the node?

Apparently the only way to make the imported URIs resolvable is by using a Linked Data Interface, such as Pubby (http://wifo5-03.informatik.uni-mannheim.de/pubby/) or LOD View (https://github.com/LodLive/LodView).
Once installed, you can point them at Fuseki's SPARQL endpoint.
Example with LodView config file (config.ttl):
conf:IRInamespace <http://localhost:3030/geography/> ; # base namespace for URIs
conf:endpoint <http://localhost:3030/sparql>; # where to query
Then you can access http://localhost:8080/lodview/geography/City/1100015 and get HTML or RDF through content negotiation.
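For example (assuming LodView is deployed under /lodview on a local Tomcat listening on port 8080, as in the config above), you could request either representation with curl:

```shell
# HTML for browsers:
curl http://localhost:8080/lodview/geography/City/1100015
# Turtle via content negotiation:
curl -H 'Accept: text/turtle' http://localhost:8080/lodview/geography/City/1100015
```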

Related

Show last image from directory on website

I have a video camera that uploads via FTP only to preset folders. It uploads to a tree, like this /www/wp-content/uploads/camer/10.121.0.202/2021-01-02/D3. I need to show the last uploaded image on the website. I need to take this folder /www/wp-content/uploads/camer/10.121.0.202/ and find the newest file and show it on the website. My website is running on WordPress.
I don't know whether there is a dedicated plugin for this, but the following lets you do basically everything.
Install the 'Insert PHP Code Snippet' plugin.
Add the following code snippet and save it as e.g. 'getlatestfile'. Adjust $base_path, and the relative ../, according to the page on which you want to display the image, or make the path absolute.
<?php
// Path to the camera uploads, relative to the WordPress root
$base_path = 'wp-content/uploads/camera';
// Date folders sort descending, so the newest date comes first
$latest_folder = scandir($base_path, SCANDIR_SORT_DESCENDING);
// Newest file inside that newest date folder
$latest_file = scandir($base_path . "/" . $latest_folder[0], SCANDIR_SORT_DESCENDING);
echo "<img src='../" . $base_path . "/" . $latest_folder[0] . "/" . $latest_file[0] . "' />";
?>
Add [xyz-ips snippet="getlatestfile"] to the page where you want your image.
I assumed that your camera folder contains only date folders and that they all share the same date format. The file names inside a date folder, like 'D3', should also ascend over time (hexadecimal), so the next image would be 'D4', etc.
EDIT:
If D3, D4, ... are folders which contain the images, add another scandir:
<?php
$base_path = 'wp-content/uploads/camera';
$latest_date_folder = scandir($base_path, SCANDIR_SORT_DESCENDING);
$latest_folder = scandir($base_path . "/" . $latest_date_folder[0], SCANDIR_SORT_DESCENDING);
$latest_file = scandir($base_path . "/" . $latest_date_folder[0] . "/" . $latest_folder[0] , SCANDIR_SORT_DESCENDING);
echo "<img src='../" . $base_path . "/" . $latest_date_folder[0] . "/" . $latest_folder[0] . "/" . $latest_file[0] . "' />";
?>
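If you would rather not nest one scandir call per directory level, a recursive lookup can be sketched in the shell (an alternative, not the answer's approach; it assumes GNU find and that file modification times reflect upload order). The demo tree below stands in for the camera uploads directory:

```shell
# Build a small demo tree standing in for wp-content/uploads/camera
base=$(mktemp -d)/camera
mkdir -p "$base/2021-01-01/D3" "$base/2021-01-02/D3"
touch -d '2021-01-01 12:00' "$base/2021-01-01/D3/old.jpg"
touch -d '2021-01-02 12:00' "$base/2021-01-02/D3/new.jpg"

# Print every file with its mtime, sort newest first, keep the first path
newest=$(find "$base" -type f -printf '%T@ %p\n' | sort -nr | head -n1 | cut -d' ' -f2-)
echo "$newest"
```

This finds the newest file at any depth, so it keeps working if the camera adds another folder level.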

SPARQL querying not possible

I installed Fuseki on my local Ubuntu machine; it is accessible via http://localhost:3030. I added a dataset "fuwstats" with a named graph "fuwstats" and then imported some triples (see the code below).
When I want to run the standard SPARQL query, nothing is shown (0 results). Any idea what I'm doing wrong?
SPARQL Endpoint is: http://localhost:3030/fuwstats/query
Query is: SELECT * WHERE { ?s ?p ?o }
This is the RDF that I added to the dataset:
@prefix fuwarticle: <http://localhost/article/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
fuwstats:1452406 dc:source fuwarticle:lonza-stellt-sich-neu-auf.
fuwstats:1452406 fuwstats:visit _:b.
_:b fuwstats:visitor "bbbbbbbbbb"^^xsd:int.
_:b dcterms:date "2019-02-25 10:37:10"^^dcterms:W3CDTF .
fuwstats:1452406 fuwstats:visit _:c.
_:c fuwstats:visitor "aaaaaaaaa"^^xsd:int.
_:c dcterms:date "2019-01-26 10:37:10"^^dcterms:W3CDTF .
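No answer is recorded here, so two likely causes (assumptions, not from the original thread): the snippet never declares the fuwstats: prefix, so the file would fail to parse on upload; and even if the triples did load, they went into the named graph "fuwstats", while SELECT * WHERE { ?s ?p ?o } only queries the default graph. A query that scans all named graphs would reveal them:

```sparql
# List triples in every named graph (no graph URI assumed)
SELECT ?g ?s ?p ?o
WHERE { GRAPH ?g { ?s ?p ?o } }
```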

copy a line with specific pattern and paste it below in unix without opening a file

I have a specific requirement.
I have file:
some text
. . . . .
. . . . .
**todo: owner comments . . . .**
... .
sometext
Now I want the output like:
some text
. . . . .
. . . . .
**todo: owner comments . . . .**
**owner: todo comments . . . .**
... .
.
sometext
I want to grep for 'todo', copy that line, and paste a modified copy below it as shown above.
Is this possible without opening the file, e.g. with a sed or awk command?
Thanks and Regards,
Dharak
I guess what you mean by "without opening a file" is without opening it in an editor. Here is an awk script you can tailor to your needs:
$ awk '/\*\*todo:/{print; print "**owner: todo ... ";next}1' file
some text
. . . . .
. . . . .
**todo: owner comments . . . .**
**owner: todo ...
... .
sometext
You can save the output to a temp file and move it over your original file.
sed 's/\(\*\*todo: owner comments\)\(.*\)/\1\2\
**owner: todo comments\2/' filename
The pattern is matched, and the matched line is kept with the modified copy inserted below it.
The new line in the replacement is entered manually by pressing Enter after the '\' at the end of the first line.
The '>' continuation prompt then appears automatically, waiting for the rest of the command.
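A one-line GNU sed variant of the same idea (a sketch; '&' stands for the whole matched line and \n in the replacement starts the inserted line), shown here on a small demo file:

```shell
# Demo input standing in for the question's file
printf '%s\n' 'some text' '**todo: owner comments . . . .**' 'sometext' > demo.txt
# '&' reprints the matched line; \1 carries the rest of it into the new line
out=$(sed -E 's/^\*\*todo: owner( comments.*)$/&\n**owner: todo\1/' demo.txt)
printf '%s\n' "$out"
```

Like the awk version, this leaves every non-matching line untouched and writes the modified copy directly below each match.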

Virtuoso Dump graph

Hello, I have a probably simple problem, but I am not able to find the answer anywhere in the docs.
I run this code in Virtuoso Interactive SQL:
SPARQL clear graph <http://product-open-data.org/temp>;
SPARQL clear graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>;
DB.DBA.TTLP ('
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix gr: <http://purl.org/goodrelations/v1#> .
@prefix s: <http://schema.org/> .
@prefix pod: <http://linked.opendata.cz/ontology/product-open-data.org#> .
<#TriplesMapBrand>
a rr:TriplesMap;
rr:logicalTable [
rr:tableSchema "POD";
rr:tableOwner "DBA";
rr:tableName "BRAND"
];
rr:subjectMap
[
rr:template "http://linked.opendata.cz/resource/brand/{BSIN}";
rr:class gr:Brand;
rr:graph <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
];
rr:predicateObjectMap [
rr:predicateMap [rr:constant pod:bsin];
rr:objectMap [rr:termType rr:Literal; rr:column "BSIN" ];
];
rr:predicateObjectMap [
rr:predicateMap [rr:constant gr:name];
rr:objectMap [rr:termType rr:Literal; rr:column "BRAND_NM" ];
];
rr:predicateObjectMap [
rr:predicateMap [rr:constant s:url];
rr:objectMap [rr:termType rr:IRI; rr:template "{BRAND_LINK}";];
] .
', 'http://product-open-data.org/temp', 'http://product-open-data.org/temp', 0);
exec ('sparql ' || DB.DBA.R2RML_MAKE_QM_FROM_G ('http://product-open-data.org/temp','http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01'));
SPARQL Select * from <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01>
where {?s ?o ?p.} limit 1000000;
My problem is the following: I want to get a TTL file with the dump_one_graph procedure. But when I run the procedure like this in iSQL:
SQL> DB.DBA.dump_one_graph('http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01','../R2RML/pod_',1000000000);
the only thing I get is:
Dump of graph http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01, as of 2014-11-11 23:46:48.000004
So my question is: where are my triples stored and why is SPARQL SELECT returning a result set while dump_one_graph doesn't?
R2RML mappings are exposed in Virtuoso as RDF Views, which are not persisted to the Quad Store by default.
There is an option to materialize them, i.e. persist the generated triples into the Quad Store.
Have a look at the R2RML section of the Virtuoso documentation: there should be an option 'Enable Data Syncs with Physical Quad Store' which should do the trick. Also have a look at the 'Generate RDB2RDF triggers' option. I don't know exactly how this will look in Turtle syntax, but you can inspect the resulting commands by using the 'Prepare to Execute' button.
Hope this helps...
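As a sketch of a manual alternative (not from the original answer; the target graph URI below is illustrative): copy the virtual triples into a physical graph with a SPARUL INSERT, then dump that graph:

```sql
-- In iSQL. <urn:pod:materialized> is an arbitrary target graph (assumption).
SPARQL INSERT { GRAPH <urn:pod:materialized> { ?s ?p ?o } }
WHERE { GRAPH <http://linked.opendata.cz/resource/dataset/product-open-data.org/2014-01-01> { ?s ?p ?o } };

DB.DBA.dump_one_graph ('urn:pod:materialized', '../R2RML/pod_', 1000000000);
```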

adding custom attributes to openldap

This is my first time using LDAP. I have set up OpenLDAP on an Ubuntu machine and an LDAP browser (phpldapadmin) on a remote system. I'm trying to add two custom attributes to cn=config; I get a success message, but neither the attributes nor the schema are visible anywhere in the LDAP browser. Please let me know where I'm going wrong. Below are the steps I have taken.
1) Create a custom.schema file
#file to add custom schemas to the ldap
attributetype ( 1.7.11.1.1
NAME 'studentid'
DESC 'unique id given to each student of the college'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
SINGLE-VALUE )
attributetype ( 1.7.11.1.2
NAME 'pexpiry'
DESC 'indicated the date of password expiry'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
SINGLE-VALUE )
objectClass ( 1.7.11.1.1.100
NAME 'Studentinfo'
DESC 'Studentinfo object classes '
SUP top
AUXILIARY
MUST ( studentid $ pexpiry $
)
)
2) Create an LDIF file
#ldif file containing the custom schema
dn: cn=custom,cn=schema,cn=config
objectClass: olcSchemaConfig
cn: custom
olcAttributeTypes: ( 1.7.11.1.1
NAME 'studentid'
DESC 'unique id given to each student of the college'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
SINGLE-VALUE )
olcAttributeTypes: ( 1.7.11.1.2
NAME 'pexpiry'
DESC 'indicated the date of password expiry'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
SINGLE-VALUE )
olcObjectClasses: ( 1.7.11.1.1.100
NAME 'Studentinfo'
DESC 'Studentinfo object class '
SUP top
AUXILIARY
MUST ( studentid $ pexpiry $
)
)
3) Add the LDIF file to cn=config using the command below:
ldapadd -x -h 192.168.2.3 -D "cn=admin,cn=config" -W -f ./custom.ldif
It first asks for a password; after I enter it, I get the message:
Adding entry "cn=custom,cn=schema,cn=config"
But when I go to the browser I don't see the schema or the attributes. When I tried to add a user, it said invalid attributes.
1] Add the custom schema to slapd.conf and restart the LDAP service. If everything is OK the service will start properly; otherwise it will give an error.
2] After this, if possible, use Apache Directory Studio for browsing; I was also not able to see custom objects in other browsers.
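To check whether the schema entry actually landed under cn=config, an ldapsearch against the same host and bind DN as in the question may help (assuming cn=config is readable by that admin DN):

```shell
ldapsearch -x -h 192.168.2.3 -D "cn=admin,cn=config" -W \
  -b "cn=schema,cn=config" "(cn=*custom*)" dn olcAttributeTypes
```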
