JSON-LD, is there any way to specify the default URI for the @id of a @type or the values of a property?

(I've already asked this on the W3/JSON mailing list, I'll try here too.)
I'm fairly new to JSON-LD, although I have significant experience with Semantic Web technologies.
I've read the specification document (https://www.w3.org/TR/json-ld/) and I haven't been able to work out whether the feature in question is supported:
Suppose you have JSON objects of @type Person and @type Address, both having the @id property. Typical API-supplied data will have values like integers or some internal, context-dependent IDs. It's pretty common to RDF-translate those values to prefix-based URIs like http://www.example.com/Person/123 or http://www.example.com/Address/xh324m44.
What I would like to do is to specify those prefixes and keep the data saying @id = '123', with the prefix/value joining happening at the RDF serialisation stage (the same specification would make it possible to do the opposite conversion too). Clearly, in such a use case, the prefixes depend on the @type of the objects, and the @base mechanism is not enough. Moreover, it would be useful to have this mechanism available for properties too, e.g., to associate the address URI prefix with the values of the "address" JSON property.
It doesn't seem that this is currently available in JSON-LD, or am I missing something? Any plan for future extensions?

You can use @base in the context to create a URI base for values of @id, but this will not incorporate anything from @type. This sounds like something you might get by defining a URI template and using variables to expand type and id into a URI. You can do that in a templating language and generate the JSON-LD, but not directly in JSON-LD itself. It is not likely to be a feature added to the language in the future, either, as its application is pretty narrow.
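For illustration, here is a minimal sketch of the @base mechanism, written as a Python script using the pyld JSON-LD processor (pyld, the context values and the property names are my own assumptions, not part of the question):

# pip install pyld  -- any conformant JSON-LD processor would do
from pyld import jsonld

doc = {
    "@context": {
        "@base": "http://www.example.com/",            # base for relative @id values
        "@vocab": "http://www.example.com/vocab#",
        "address": {"@type": "@id"}                    # treat the value as an IRI
    },
    "@id": "123",            # expands to http://www.example.com/123
    "@type": "Person",
    "address": "xh324m44"    # expands to http://www.example.com/xh324m44
}

print(jsonld.expand(doc))
# Note: there is no built-in way to make "123" expand to
# http://www.example.com/Person/123 based on the node's @type;
# that join has to happen in a pre-processing or templating step.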

Related

OData v4 - ordering outer entity on property in related one-to-many entities

I have an OData model with a couple of one-to-many relationships, say person->addresses and person->driving-licences. I would like to be able to sort the result set based on properties in the address entity and driving licence entity. As there could be more than one address, I would initially select a single item from the addresses set, based on a property called IsPrimary. As there could be more than one driving licence, I would select the 'UK' driving licence. Is this possible?
I was hoping I could do something like:
/people?$expand=addresses($filter=isPrimary eq true),drivinglicences($filter=country eq 'UK')&$orderby=addresses/postcode,drivinglicences/active
Unfortunately I get the following error:
"The query specified in the URI is not valid. The parent value for a property access of a property 'isPrimary' is not a single value. Property access can only be applied to a single value."
Does anyone know if what I'm trying to do is supported by the spec? Or whether it is an issue with my query? Or whether it is an issue with the .NET library.
I'm using:
Microsoft.AspNet.OData - 7.2.3
Many Thanks.
What you see here is by design, or rather it is not supported by the specification; the error message even highlights the only type of expression supported:
The query specified in the URI is not valid. The parent value for a property access of a property 'isPrimary' is not a single value. Property access can only be applied to a single value.
So the simplest solution is to modify the API, either to include a Function bound to the people collection that applies the $filter or $orderby directly, or a Function that returns the data in a new shape, one that perhaps has only a singleton PrimaryAddress property. How you include the driving licence in this result is up to you; it could even be a parameter to the function. Perhaps your people controller has a queryable function with this signature:
[EnableQuery]
public IHttpActionResult WithLicences(string countryCode)
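Assuming conventional OData routing and the default "Default" namespace (both assumptions here, not taken from the OP's setup), such a bound function could then be invoked with further query options applied to its reshaped result, along the lines of:

GET /people/Default.WithLicences(countryCode='UK')?$orderby=PrimaryAddress/Postcode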
However, that is out of the scope of the OP's question about specific syntax support.
Although it seems like an important feature, we must remember that $select (projection) and $filter are evaluated at different points in time. OData queries follow an execution sequence similar to SQL: the filter criteria and $orderby are evaluated separately, and the projection of the result set is the last evaluation to be applied.
Because $filter and $orderby are applied independently, neither is even aware of the other, and as such neither can reference the other or assume it has already been applied.
You can prove this by specifying a field in the $orderby and/or $filter that is not included in the $select; you can even reference singleton navigation fields that are not included in an $expand, and the query will evaluate correctly, as in the examples below.
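For example (the property and navigation names below are hypothetical, not from the OP's model), both of the following are accepted even though the ordering fields are neither selected nor expanded:

/people?$select=name&$orderby=dateOfBirth desc
/people?$select=name&$orderby=employer/companyName

where employer is a single-valued (0..1) navigation property.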
The OData spec is similar to a legal document: to properly understand and apply it, we need to understand the original intent of the authors. We can get an initial sense of that from the early section on Addressing Entities:
Addressing Entities describes functions that can be bound to collections or entities that return either a single entity or a collection of entities
By making special provision for custom functions, the authors are encouraging API designers to provide natural extensions to their resource endpoints that can facilitate the execution of pre-determined queries that might otherwise be complex or problematic to express in pure OData query syntax.
In other words, we are encouraged to customise our APIs to make them easier for the end process to consume, and to guide the consuming developer to make the best use of the API; they shouldn't have to discover everything from first principles.
To achieve the OP's type of query in pure SQL would still require a nested lookup, CTE or self join... advanced syntax. In OData v4, the specification does not provide a syntax for targeting specific items within a collection in path expressions (which is what $orderby builds on):
5.1.1.15 Path Expressions
Properties and navigation properties of the entity type of the set of resources that are addressed by the request URL can be used as operands or function parameters, as shown in the preceding examples.
Properties of complex properties can be used via the same syntax as in resource paths, i.e. by specifying the name of a complex property, followed by a forward slash (/) and the name of a property of the complex property, and so on,
Properties and navigation properties of entities related with a target cardinality 0..1 or 1 can be used by specifying the navigation property, followed by a forward slash (/) and the name of a property of the related entity, and so on.
If a complex property is null, or no entity is related (in case of target cardinality 0..1), its value, and the values of its components, are treated as null.
RE: I couldn't find anything explicit in the spec. :)
That is the very thing about the OData specification: it does not list what is not supported, only what should be supported. So, by omission, if you cannot find a reference explaining how to do something, then that something is not required to be supported.
From the Introduction (http://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html#sec_Introduction): "This specification defines a set of recommended (but not required) rules for constructing URLs to identify the data and metadata exposed by an OData service as well as a set of reserved URL query string operators, which if accepted by an OData service, MUST be implemented as required by this document."
This has been an ongoing discussion held in many threads, a recent one being https://stackoverflow.com/a/55324393/1690217.
Many people complain that this is surely a fundamental feature of a data access platform; however, it is important to respect the original intent of the OData platform and keep our APIs simple by providing customised endpoints to suit our business domain.

What's the use of the Name parameter in RouteAttribute?

Looking at this:
[Route("", Name = "GetChanges")]
What's the use of the Name parameter? The only useful usage of it that I have found is being able to refer to the action when calling CreatedAtRoute, like so:
return CreatedAtRoute("GetChanges", new { id = model.ChangeId }, model);
So why and what's the use case for the "Name" in RouteAttribute?
I think the use case is simply ambiguity resolution. If there are two or more actions on a controller that might otherwise qualify, the name makes the reference unambiguous. I would prefer not to use a name unless needed, but I could see organizations adopting a "thou shalt use names to resolve routes unambiguously" approach as well. Count me as not a proponent, but the mechanism is there should you need it.
Web API by itself doesn't allow method overloading. So if there are two methods with the same name but different parameters (overloading within the implementing class), the Name property allows each of those methods to be referred to by a specific route name.
In addition, class and method naming may be governed by internal coding standards, whereas route names are what is exposed to clients, and hence may have to follow different guidelines.

QMimeData encoding types

In my app I'm doing internal drag and drops with a QTreeView. Using the tutorial I can happily drag and drop a single leaf by encoding it into a string list using the mime type "application/vnd.text.list".
I then wanted to drag and drop a tree node that had some children and thought the best route to doing this would be to encode the pointer to the node and iterate through all the children in the dropMimeData method.
I declared a mime type in the mimeTypes() method:
QStringList toResultModel::mimeTypes() const {
    QStringList types;
    types << "text/plain";
    types << "application/vnd.mypointerlist.list";
    return types;
}
And tried to pass the same string list across, but the application crashes in the dropMimeData() method.
It seems the mime type "application/vnd.text.list" has some hidden meaning which I am unable to find.
I have found this source code: http://fossies.org/linux/tora/src/toresultmodel.cpp where the author sets up a custom coding type "application/vnd.tomodel.list" and also uses "application/vnd.int.list".
What are the rules in using encoding types?
Where are the built-in types strings defined?
Which type should I use for passing a pointer to a tree node?
Four years later...
From the information you give, if your method crashes, it's not related to Drag and Drop in particular; there is some error you need to find in the code. That said, let me clarify D&D in Qt and answer your question about MIME types. While you have no doubt solved the problem you had four years ago, this may be useful for other users today.
You may define your own type for the purpose of your application or reuse an existing one. How to choose?
Can you use an existing MIME type, like text/plain?
Think about your application being the target of a D&D operation initiated from another application. Can you accept an existing MIME type and retrieve your data from it?
Think about another application being the target of a D&D operation initiated from within your application. Could this application handle an existing MIME type?
If the answer is no to any of these questions, then you might have to use your own specific MIME type.
The format name itself is not important
The constraint is that it must be unique, so that you can't receive incorrectly formatted MIME data from another application, and other applications can identify the MIME type as one they cannot handle, and ignore it.
The exact MIME type name doesn't matter, as you'll provide the encoder and the decoder in your data model (e.g. see this introduction to view/model programming for Qt), as well as other information about the MIME type(s) used.
In QAbstractItemModel::mimeTypes, list only the MIME types you are able to deal with. If you don't plan to accept or send MIME data from/to other applications, there is no need to allow more than your specific MIME type.
When your application is the source of a D&D operation, encode (serialize) the MIME data in QAbstractItemModel::mimeData(indexes). The result of the serialization must be a byte array, even when there are multiple indexes to be dragged. The internal format is yours; include any information required to decode (de-serialize) the MIME data. Note that you must provide encoded data in each of the MIME types you've listed in QAbstractItemModel::mimeTypes (see previous point). A minimal sketch of this and of dropMimeData follows after these steps.
When your D&D data are dragged over your application UI, QAbstractItemModel::canDropMimeData(mime_data, action, row, column, parent) is called to determine if this location is valid for a drop. You may decide here whether the drop should be allowed at this location. In particular, you may test the content of the MIME data provided, and use mime_data.hasFormat(mime_type) to check if the format you expect is found in the data about to be dropped. Returning false will prevent a drop at this location and a "not allowed here" indication will be shown to the user (this won't cancel the D&D operation itself; the user can continue to move the mouse elsewhere).
When the data is actually dropped, QAbstractItemModel::dropMimeData(mime_data, action, row, column, parent) is called. Check which MIME format(s) are present using QMimeData::hasFormat(mime_type). If you don't find the MIME type you expect, ignore the drop operation, as you cannot decode the data provided (the D&D was initiated from another application). This shouldn't happen, since before dropping data the application has called QAbstractItemModel::canDropMimeData as seen in the previous point. If everything is OK, decode the MIME data and update your model with the data received.
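As an illustration only, here is a minimal PyQt5 sketch of the mimeTypes/mimeData/canDropMimeData/dropMimeData set described above (the MIME type name, the model class and the text payload are assumptions for the example; a real model would serialize whatever identifies its tree nodes, not a raw C++ pointer):

from PyQt5.QtCore import (QAbstractItemModel, QByteArray, QDataStream,
                          QIODevice, QMimeData)

MIME_TYPE = "application/vnd.mypointerlist.list"   # hypothetical, as in the question

class TreeModel(QAbstractItemModel):               # only the D&D hooks are shown
    def mimeTypes(self):
        return [MIME_TYPE]

    def mimeData(self, indexes):
        # Serialize the dragged items into one byte array.
        encoded = QByteArray()
        stream = QDataStream(encoded, QIODevice.WriteOnly)
        for index in indexes:
            if index.isValid():
                # Store something stable (here the display text); a raw
                # pointer would not survive the round trip.
                stream.writeQString(str(index.data()))
        mime = QMimeData()
        mime.setData(MIME_TYPE, encoded)
        return mime

    def canDropMimeData(self, mime_data, action, row, column, parent):
        # Only accept drops carrying our own format.
        return mime_data.hasFormat(MIME_TYPE)

    def dropMimeData(self, mime_data, action, row, column, parent):
        if not mime_data.hasFormat(MIME_TYPE):
            return False                           # foreign data we cannot decode
        stream = QDataStream(mime_data.data(MIME_TYPE), QIODevice.ReadOnly)
        items = []
        while not stream.atEnd():
            items.append(stream.readQString())
        # ...insert `items` into the model at (row, column, parent)...
        return True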
On the other hand, your tree leaf data may fit as path+name encoded in text/plain MIME data, so maybe you can just use this type too. However, as other applications can generate text/plain data that doesn't contain a tree leaf description, you need, in this case, to have a means of identifying irrelevant data and ignoring it. Obviously such an approach needs more code to verify the validity of the drop action than using a specific MIME type does. However, it allows you to interact with other applications, and this is indeed relevant for drags from well-known applications like Excel (e.g. cell content) or Firefox (e.g. rich text or an image); otherwise we couldn't re-use information from these applications using D&D.
Do you need to use the vnd prefix?
vnd in a MIME type means "vendor specific". This prefix is used to distinguish vendor-created MIME types from those registered directly with the IANA authority. From RFC 6838:
Vendor-tree registrations will be distinguished by the leading facet
"vnd.". That may be followed, at the discretion of the registrant, by
either a media subtype name from a well-known producer (e.g.,
"vnd.mudpie") or by an IANA-approved designation of the producer's
name that is followed by a media type or product designation (e.g.,
vnd.bigcompany.funnypictures).
So in your Drag & Drop tutorial, application/vnd.text.list is a specific type supposedly created by some vendor for their own purpose. The same goes for application/vnd.mypointerlist.list.
In contrast, text/plain is a standard MIME type defined by IANA in RFC 2046. It defines human-readable text:
Plain text does not provide for or allow formatting commands, font
attribute specifications, processing instructions, interpretation
directives, or content markup. Plain text is seen simply as a
linear sequence of characters, possibly interrupted by line breaks
or page breaks. Plain text may allow the stacking of several
characters in the same position in the text. Plain text in scripts
like Arabic and Hebrew may also include facilitites that allow the
arbitrary mixing of text segments with opposite writing directions.
For your type, you may want to use vnd followed by a subtype which is specific to your application, for consistency considerations. But as seen, the actual name is not important, as long as you know which one you use and you are not interacting with other applications in the D&D chain.

Servlet Initialization parameters using annotation

I am trying to learn Servlet annotations and came across this snippet
@WebServlet(urlPatterns="/MyPattern", initParams={@WebInitParam(name="ccc", value="333")})
This makes sense to me. However, I don't understand why it is not like this
@WebServlet(urlPatterns="/MyPattern", initParams={(name="ccc", value="333"), (name="abc", value="1")})
So, the question is why we need to use the @WebInitParam annotation when we have already declared the attribute as initParams. It seems redundant to me, or am I missing something?
The alternative you suggest would not even compile.
When you look at the JLS, it states this:
It is a compile-time error if the return type of a method declared in
an annotation type is not one of the following: a primitive type,
String, Class, any parameterized invocation of Class, an enum type
(§8.9), an annotation type, or an array type (§10) whose element type
is one of the preceding types.
So, in order to group name and value together, which together represent the initialization parameter, the only option is to use an annotation (@WebInitParam in this case) with the corresponding values set as its parameters.
As with most questions about language design choices we can only speculate here. I think some reasons for this are:
Keeping the language simple.
It is kind of redundant, but the syntax for annotations can be reused and does not require new language constructs. This makes it easier to parse and to read. Sure, it's longer, but it's also more explicit to write the annotation's name.
Don't restrict possible future language enhancements.
The proposed syntax would not work if annotations supported inheritance. I don't know if that's even a planned feature, but it would not be straightforward to implement if the type could be omitted.
In many cases an array of annotations seems like a workaround anyway. It can be avoided in Java 8, where you can add multiple annotations of the same type:
@WebServlet(urlPatterns="/MyPattern")
@WebInitParam(name="ccc", value="333")
@WebInitParam(name="abc", value="1")
(I don't know if the servlet api actually supports this yet though)

SQLAlchemy, reflection, different backends and table/column case insensitivity

Intro: I'm writing a web interface with SQLAlchemy reflection that supports multiple databases. It turns out that the authors of the application defined PostgreSQL with lowercase tables/columns, e.g. job.jobstatus, while SQLite has mixed case, e.g. Job.JobStatus. I'm using DeclarativeReflectedBase from the examples to combine reflection and the declarative style.
The issue: configure SQLAlchemy to work with tables/columns case-insensitively when reflecting.
What I have done so far:
I have changed the DeclarativeReflectedBase.prepare() method to pass quote=False into Table.__init__
What is left to be solved:
relationship definitions still have to obey case when configuring joins, like primaryjoin="Job.JobStatus==Status.JobStatus".
configure __tablename__ based on engine type
The question: Are my assumptions correct or is there more straightforward way? Maybe I could tell reflection to reflect everything lowercase and all problems are gone.
You'd probably want to look into defining a .key on each Column that's in lower case; that way you can refer to columns in lower case within application code. You should use the column_reflect event (see http://docs.sqlalchemy.org/en/latest/core/events.html#schema-events) to define this key as a lower-case version of the .name.
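A minimal sketch of such a listener, registered for all Table objects (only the lower-casing rule is assumed here):

from sqlalchemy import Table, event

# Give every reflected column a lower-case .key, so application code can
# always write table.c.jobstatus regardless of how the backend stores the name.
@event.listens_for(Table, "column_reflect")
def lowercase_column_keys(inspector, table, column_info):
    column_info['key'] = column_info['name'].lower()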
Then, when you reflect the table, I'd just do something like this:
from sqlalchemy import MetaData, Table

metadata = MetaData()

def reflect_table(name, engine):
    if engine.dialect.name == 'postgresql':
        name = name.lower()
    return Table(name, metadata, autoload=True, autoload_with=engine)
my_table = reflect_table("MyTable", engine)
I think that might cover it.
