How to restrict units in property shapes? - linked-data

I'm looking to make some SHACL shapes for certain properties in my ontology that have a specified unit. Example:
My instance data would look something like this:
otlc:surgeprotector_1
    rdf:type otlc:Surgeprotector ;
    otlc:Nominal_voltage [
        rdf:type otlc:QuantityValue ;
        rdf:value "225"^^xsd:float ;
        otlc:hasUnit unit:KiloV ;
    ] ;
.
I then have some shapes to validate this data:
otlc:Nominal_voltageShape
    rdf:type sh:PropertyShape ;
    sh:path otlc:Nominal_voltage ;
    sh:class otlc:QuantityValue ;
    **otlc:hasUnit unit:KiloV ;**
    sh:maxCount 1 ;
    sh:minCount 1 ;
.
otlc:SurgeprotectorShape
    rdf:type sh:NodeShape ;
    sh:property otlc:Nominal_voltageShape ;
    sh:targetClass otlc:Surgeprotector ;
.
My question: how can I specify, for each such property in my ontology, the unit that must be given via the predicate "otlc:hasUnit" in the instance data? My desired end result is a node shape for the surge protector and a property shape for "Nominal_voltage" that restricts the unit of its value to unit:KiloV. I'm hoping there is some SHACL keyword I hadn't heard of or realized I could use here and can add to my property shape (the line marked with ** above is what I imagine such a keyword would look like). For example, sh:pattern can be used to constrain the value itself with a regex, but here I want to constrain a piece of metadata attached to my value, if that makes sense...
Thanks in advance!
Robin

I believe you can replace your highlighted line with
sh:property [
    sh:path otlc:hasUnit ;
    sh:hasValue unit:KiloV ;
] ;
This nested property shape applies to all values of the surrounding property shape, i.e. to every value of otlc:Nominal_voltage.
Alternatively, you could use a path expression in a property shape one level higher up, e.g.
otlc:SurgeprotectorShape
    ...
    sh:property [
        sh:path ( otlc:Nominal_voltage otlc:hasUnit ) ;
        sh:hasValue unit:KiloV ;
    ] ;
Note that sh:hasValue also implies sh:minCount 1, i.e. the value must be present. You may want to add sh:maxCount 1 as extra protection.
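Putting it together, a complete version of the second variant might look like the sketch below (prefixes as in the question; the extra sh:maxCount 1 is the optional protection mentioned above):
otlc:SurgeprotectorShape
    rdf:type sh:NodeShape ;
    sh:targetClass otlc:Surgeprotector ;
    sh:property otlc:Nominal_voltageShape ;
    sh:property [
        sh:path ( otlc:Nominal_voltage otlc:hasUnit ) ;
        sh:hasValue unit:KiloV ;
        sh:maxCount 1 ;
    ] ;
.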

Related

Filemaker Pro - Using Script to populate report layout

I have a problem where I have a list of fields from a table (not static, it can be modified by the user), and I need to generate a report using these user-selected fields. The report can show all the rows; there is no need for aggregation or filtering.
I thought I could create a report layout and then use a FileMaker script to populate it, but I can't seem to find the right commands. Can someone let me know how I could achieve this?
I'm using FileMaker Pro 18 Advanced.
Thanks in advance!
EDIT: Since you want a dynamic report, I recommend you look up a technique called "Virtual List" for rendering the data.
Here's an example script that iterates over a found set of records and builds the virtual list data in a variable (it doesn't show how to render it though):
# Field names and delimiter
Set Variable [ $delim ; Value: Char(9) // tab character ]
# Set these dynamically with a script parameter
Set Variable [ $fields ; Value: List ( "Contacts::nameFirst" ; "Contacts::nameCompany" ; "Contacts::nameLast" ) ]
Set Variable [ $fieldCount ; Value: ValueCount ( $fields ) ]
Go to Layout [ “Contacts” (Contacts) ; Animation: None ]
Show All Records
Go to Record/Request/Page [ First ]
# Loop over all the records and append a row in the $data variable for each
Set Variable [ $data ; Value: "" ]
Loop
# Get the delimited field values
Set Variable [ $i ; Value: 0 ]
Set Variable [ $row ; Value: "" ]
Loop
Exit Loop If [ Let ( $i = $i + 1 ; $i > $fieldCount ) ]
Set Variable [ $value ; Value: GetField ( GetValue ( $fields ; $i ) ) ]
Insert Calculated Result [ Target: $row ; If ( $i > 1 ; $delim ) & $value ]
End Loop
# Append the new row of data to the list variable
Insert Calculated Result [ Target: $data ; If ( Get ( RecordNumber ) > 1 ; ¶ ) & $row ]
Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
# Save to a global variable to show in a virtual list layout
Set Variable [ $$DATA ; Value: $data ]
Exit Script [ Text Result: ]
Please note this code is just one of many possible formats the virtual list can take. A lot of people, myself included, prefer to use JSON objects or arrays for each row of the list, since JSON automatically handles field values containing carriage returns; the tab-delimited approach above is sort of the old-fashioned way. Kevin Frank at FileMaker Hacks has some good recent articles about virtual list techniques if you're interested.
PS, another great technique for rendering table data dynamically is to collect the data in a JSON array and render it in a web viewer with https://datatables.net/
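As an illustration only (the JSON key names are hypothetical and this is an untested sketch), the inner loop above could build a JSON row instead of a tab-delimited one, using FileMaker's built-in JSON functions:
Set Variable [ $row ; Value: JSONSetElement ( "{}" ;
    [ "nameFirst" ; Contacts::nameFirst ; JSONString ] ;
    [ "nameCompany" ; Contacts::nameCompany ; JSONString ] ;
    [ "nameLast" ; Contacts::nameLast ; JSONString ]
) ]
# Append $row as the next element of a JSON array held in $data (initialize $data to "[]")
Set Variable [ $data ; Value: JSONSetElement ( $data ; Get ( RecordNumber ) - 1 ; $row ; JSONObject ) ]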
I did something like this for the oncology department of UM in 1980 or so using 4th Dimension and a new plug-in that used one line of code to create a web browser with all the functions a doctor might want. The data was placed inside a variable as it was sent/returned, and 4D could use a variable in the report to display the data.
FileMaker does not have this ability built in as 4D did, so you will have to do it yourself. JSON is the most likely tool that I am familiar with; YouTube has many videos on JSON.
You have two classes of variables for your report: column headers and column data to display. Fortunately, FileMaker layouts are quite easy to design. Just make a typical report and replace the header text and field names with variables driven by JSON, e.g. $ColumnName = a JSON variable.
Create a JSON calculated field in the database. In that calculated field, set the JSON variable; this can be used for all of the columns.
This is the essence of the idea, with the final result to be determined by you. What you are asking for is not easy and would require serious work by a skilled JSON scripter.

Write into tcl dictionary

I am relatively new to Tcl dictionaries and can't find good documentation on how to initialize an empty dictionary, loop over a log, and save data into it. Finally, I want to print a table that looks like this:
- Table:
HEAD1
Step 1    Start Time    End Time
Step 2    Start Time    End Time
- Log:
HEAD1
Step1
Start Time : 10am
.
.
.
End Time: 11am
Step2
Start Time : 11am
.
.
End time : 12pm
HEAD2
Step3
Start Time : 12pm
.
.
.
End Time: 1pm
Step4
Start Time : 1pm
.
.
End time : 2pm
You really don't have to initialise an empty dictionary in Tcl; you can simply start using it and it will get populated as you go along. As mentioned already, the dict man page is the best place to start.
Additionally, I would suggest you check the regexp man page as you can use it nicely to parse your text file.
Not having anything better to do at the moment, I cobbled together a short code sample that should get you started. Use it as a starting point, adjust it to your particular log layout, and add some defensive measures to prevent errors from unexpected input.
# The following line is not strictly necessary as Tcl does not
# require you to first create an empty dictionary.
# You can simply start using 'dict set' commands below and the first
# one will create a dictionary for you.
# However, declaring something as a dict does add to code clarity.
set tableDict [dict create]
# Depending on your log sanity, you may want to declare some defaults
# so as to avoid errors in case the log file misses one of the expected
# lines (e.g. 'HEADx' is missing).
set headNumber {UNKNOWN}
set stepNumber {UNKNOWN}
set start {UNKNOWN}
set stop {UNKNOWN}
# Now read the file line by line and extract the interesting info.
# If the file indeed contains all of the required lines and exactly
# formatted as in your example, this should work.
# If there are discrepancies, adjust regex accordingly.
set log [open log.txt]
while {[gets $log line] != -1} {
    if {[regexp {HEAD([0-9]+)} $line all value]} {
        set headNumber $value
    }
    if {[regexp {Step([0-9]+)} $line all value]} {
        set stepNumber $value
    }
    if {[regexp {Start Time : ([0-9]+(?:am|pm))} $line all value]} {
        set start $value
    }
    # NOTE: I am assuming that your example was typed by hand and all
    # inconsistencies stem from there. Otherwise, you have to adjust
    # the regular expressions, as 'End Time' is written with varying
    # capitalization and with inconsistent white space around ':'.
    if {[regexp {End Time : ([0-9]+(?:am|pm))} $line all value]} {
        set stop $value
        # NOTE: This short example relies heavily on the log file
        # being formatted exactly as described. Therefore, as soon
        # as we find an 'End Time' line, we assume that we already have
        # everything necessary for the next dictionary entry.
        dict set tableDict HEAD$headNumber Step$stepNumber StartTime $start
        dict set tableDict HEAD$headNumber Step$stepNumber EndTime $stop
    }
}
close $log
# You can now get your data from the dictionary and output your table
foreach head [dict keys $tableDict] {
    puts $head
    foreach step [dict keys [dict get $tableDict $head]] {
        set start [dict get $tableDict $head $step StartTime]
        set stop [dict get $tableDict $head $step EndTime]
        puts "$step $start $stop"
    }
}
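If you want the columns lined up like in your table sketch, you could format each row with Tcl's format command instead of the plain puts above (column widths here are picked arbitrarily):
puts [format "  %-8s  %-12s  %-12s" $step $start $stop]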

st_within function of GeoSPARQL on Virtuoso

I installed Virtuoso Open Source Edition 07.20.3217.
But GeoSPARQL does not work as I expected.
I inserted 10 triples --
prefix owl: <http://www.w3.org/2002/07/owl#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix geo: <http://www.opengis.net/ont/geosparql#>
prefix ex: <http://www.example.org/POI#>
prefix sf: <http://www.opengis.net/ont/sf#>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
ex:WashingtonMonument
    rdf:type ex:Monument ;
    rdfs:label "Washington Monument" ;
    geo:hasGeometry ex:WMPoint .
ex:WMPoint
    rdf:type sf:Point ;
    geo:asWKT "POINT(-77.03524 38.889468)"^^geo:wktLiteral .
ex:NationalMall
    a ex:Park ;
    rdfs:label "National Mall" ;
    geo:hasGeometry ex:NMPoly .
ex:NMPoly
    a sf:Polygon ;
    geo:asWKT "POLYGON((-77.050125 38.892086, -77.039482 38.892036, -77.039482 38.895393, -77.033669 38.895508, -77.033585 38.892052, -77.031906 38.892086, -77.031883 38.887474, -77.050232 38.887142, -77.050125 38.892086 ))"^^geo:wktLiteral .
Then I tried this GeoSPARQL query --
PREFIX geo: <http://www.opengis.net/ont/geosparql#>
SELECT *
WHERE {
    ?m geo:hasGeometry ?mgeo .
    ?p geo:hasGeometry ?pgeo .
    FILTER (bif:st_within(?mgeo, ?pgeo))
}
But there is no result.
What did I do wrong?
Thank you for any response.
I think you might want to specify that you're looking for Monuments within Parks. Also, I don't think you want a wildcard result, but only the list of such Monuments and Parks.
PREFIX ex:  <http://www.example.org/POI#>
PREFIX geo: <http://www.opengis.net/ont/geosparql#>
SELECT ?monument ?park
WHERE {
    ?monument a ex:Monument ;
              geo:hasGeometry ?mgeo .
    ?park     a ex:Park ;
              geo:hasGeometry ?pgeo .
    FILTER (bif:st_within(?mgeo, ?pgeo))
}
Live examples are often more useful than hypotheticals, so here's an adjusted query that does produce results, though there seems to be an issue with Monument vs Park designation, and both have POINT geometry (i.e., no POLYGON data) --
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX lgdo: <http://linkedgeodata.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wgs:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?monument ?mlabel ?mgeo
       ?park ?plabel ?pgeo
WHERE {
    ?monument a lgdo:Monument ;
              rdfs:label ?mlabel ;
              wgs:geometry ?mgeo .
    ?park     a lgdo:Park ;
              rdfs:label ?plabel ;
              wgs:geometry ?pgeo .
    FILTER (bif:st_within(?mgeo, ?pgeo))
}
I do not have time to find a live dataset that includes POLYGON geometry for Parks, and accurately places Monument POINTs within such Park POLYGONs, so I cannot dig much further ... but if you can make your instance public, or point to a live public instance that holds such data, we can go further.

finding first and last occurrence of a string using awk or sed

I couldn't find what I am looking for online, so I hope someone can help me here. I have a file with the following lines:
CON/Type/abc.sql
CON/Type/bcd.sql
CON/Table/last.sql
CON/Table/first.sql
CON/Function/abc.sql
CON/Package/foo.sql
What I want to do is find the first occurrence of Table, print a new string before it, and then find the last occurrence and print another string after it. For example, the output should look like this:
CON/Type/abc.sql
CON/Type/bcd.sql
set define on
CON/Table/last.sql
CON/Table/first.sql
set define off
CON/Function/abc.sql
CON/Package/foo.sql
As you can see, I printed "set define on" before the first occurrence of Table, and "set define off" after the last match of Table. Can someone help me write an awk script? Using sed would be okay too.
Note: the lines with Table can appear at the beginning of the file, in the middle, or at the end. In this case they appear in the middle of the rest of the lines.
$ awk -F/ '$2=="Table"{if (!f)print "set define on";f=1} f && $2!="Table"{print "set define off";f=0} 1' file
CON/Type/abc.sql
CON/Type/bcd.sql
set define on
CON/Table/last.sql
CON/Table/first.sql
set define off
CON/Function/abc.sql
CON/Package/foo.sql
How it works
-F/
Set the field separator to /
$2=="Table"{if (!f)print "set define on";f=1}
If the second field is Table, then do the following: (a) if flag f is zero, then print set define on; (b) set flag f to one (true).
f && $2!="Table"{print "set define off";f=0}
If flag f is true and the second field is not Table, then do the following: (a) print set define off; (b) set flag f to zero (false).
1
Print the current line.
Alternate Version
As suggested by Etan Reisner, the following does the same thing with the logic slightly reorganized, eliminating the need for the if statement:
awk -F/ '$2=="Table" && !f {print "set define on";f=1} $2!="Table" && f {print "set define off";f=0} 1' file
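One thing to watch out for: both versions print "set define off" only when a non-Table line follows the block, so neither prints it when the Table lines are the very last lines of the file. If you need to cover that case too, a variant with an END rule might look like this (same logic as above, just with a final check):
awk -F/ '$2=="Table" && !f {print "set define on"; f=1}
         $2!="Table" && f  {print "set define off"; f=0}
         1
         END {if (f) print "set define off"}' file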

Format numeric values in Filemaker 12 List() result

I'm using List() to retrieve a numeric field which I subsequently display on a report view via a merge variable inside a text field. The data being displayed is a list of employees who worked on a particular job on a particular day, and the number of hours they worked under various classifications (normal, overtime, non-billable, non-billable overtime, et al). The hours are all calculated fields pulled from another table, but they need to be stored numerically.
Each column has its own text field:
| <<$$Name>> | <<$$normalHours>> | <<$$otHours>> | ...
Giving output such as:
Jim Jones 8 2
Ralph Ryder 4.25 0
Foo McBar 10 2.5
The field height needs to be dynamic because there could be anywhere from 1 to 10 or so employees displayed.
The issue is that I would like to always display the hours field with two decimal places:
Jim Jones 8.00 2.00
Ralph Ryder 4.25 0.00
Foo McBar 10.00 2.50
This is normally trivial via Inspector -> Data for a single-value field, and perhaps it still is trivial -- but I'm just not seeing it.
I've tried using SetPrecision(hours ; 2) when populating the field, and also (though I didn't think it would actually work) when creating my list variable:
$$normalHours = SetPrecision( List( laborTable::normalHours ) ; 2 )
In both cases I still see plain integer output for whole numbers and no trailing zeroes in any case.
Please let me know if I can provide any further information that might help.
A few things you can try:
Auto-enter calculation replacing existing value
You could change your normalHours field to be an auto-enter calculation, uncheck 'do not replace existing value', and set the calculation to the following:
Let ( [
    whole = Int ( Self ) ;
    remainder = Abs ( Self ) - Abs ( whole )
] ;
    Case ( remainder = 0 ;
        whole & ".00" ;
        Self
    )
)
This will append a ".00" to any whole numbers in your field. This should then come through your List() function later.
New calculation field
Alternately, if you don't want to automatically modify the existing number, you could make a new calculation field with a very similar calculation:
Let ( [
    whole = Int ( normalHours ) ;
    remainder = Abs ( normalHours ) - Abs ( whole )
] ;
    Case ( remainder = 0 ;
        whole & ".00" ;
        normalHours
    )
)
And then you would use that calculation field in the List function, instead of your normalHours field.
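Note that both calculations only touch whole numbers, so a value like 4.5 would still show as 4.5 rather than 4.50. If you also need one-decimal values padded, a rough, untested variation on the same idea could pad the text explicitly (using normalHours as the source field):
Let ( [
    t = GetAsText ( Round ( normalHours ; 2 ) ) ;
    pos = Position ( t ; "." ; 1 ; 1 ) ;
    decimals = If ( pos = 0 ; 0 ; Length ( t ) - pos )
] ;
    Case (
        pos = 0 ; t & ".00" ;
        decimals = 1 ; t & "0" ;
        t
    )
)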
For more complicated field formatting, you could also use a custom function like this: http://www.briandunning.com/cf/945
Can you replace this with a portal, perhaps?
If not, then try to set data formatting on the merge text field itself. It can have formatting too; there is only one variant for each data type, but in your case it should be enough.