I've been trying to wrap my head around authoring profiles in FHIR. The trouble I'm having is around the use of extensions.
The documentation talks about extensions as if they are simply there to extend existing elements of the resource a profile belongs to. This is kind of confirmed for me when using Forge, because I can add new elements which don't have extensions.
It feels very foreign to me: in our proprietary storage system we have the equivalent of profiles, and they have properties (which I think are similar to elements in FHIR), but a property is only designed to store one type of thing; e.g. you might have a patient profile that has the properties DOB, ethnicity, identifier, etc. I don't really understand what profiles are for in the context of FHIR. Are they similar to my properties? Can I use them to limit the datatype that a profile instance can have for a particular element?
Is there any better documentation than the spec? I'm finding it really hard to get to grips with.
FHIR extensions are used to record extra data elements when there is no field for them in the standard definition. Mother's maiden name is an example of that for the Patient resource.
The use of an extension is a standard FHIR mechanism and will always look like this:
<extension>
  <url value="http://hl7.org/fhir/StructureDefinition/patient-mothersMaidenName"/>
  <valueString value="Williams"/>
</extension>
The url is the canonical URL for the definition of the extension, which is a StructureDefinition resource defining the extension and the datatype(s) of the value.
You can have extensions on every level of a resource/datatype.
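For what it's worth, the same extension in FHIR's JSON representation is just a url/value pair inside the resource's extension array; a minimal Python sketch:

import json

# A Patient resource carrying the mother's maiden name extension:
# each extension pairs the canonical definition URL with a typed value.
patient = {
    "resourceType": "Patient",
    "extension": [
        {
            "url": "http://hl7.org/fhir/StructureDefinition/patient-mothersMaidenName",
            "valueString": "Williams",
        }
    ],
}

print(json.dumps(patient, indent=2))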
Since profiling is a very overloaded term, it is hard for me to understand what you're saying about profiles and properties in your proprietary system, or how that relates to your question. But in general, FHIR profiling is needed and used to
add data when there's no data field for it in the specification (i.e. an extension of the specs)
constrain the specification in places where you need to be more strict, for example to make an optional field mandatory (i.e. a constraint on the specs, also called a profile); a sketch of such a constraint follows below
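To give a rough idea of what such a constraint looks like: a profile is itself a StructureDefinition whose differential lists only the elements being tightened. A minimal sketch in Python dict form (the canonical URL and name are hypothetical, and required metadata such as status is omitted):

# A profile is a StructureDefinition that constrains a base resource.
# This differential makes Patient.birthDate mandatory (min 0 -> min 1);
# anything not mentioned keeps its definition from the base resource.
profile = {
    "resourceType": "StructureDefinition",
    "url": "http://example.org/fhir/StructureDefinition/my-patient",  # hypothetical
    "name": "MyPatient",
    "kind": "resource",
    "type": "Patient",
    "baseDefinition": "http://hl7.org/fhir/StructureDefinition/Patient",
    "derivation": "constraint",
    "differential": {"element": [{"path": "Patient.birthDate", "min": 1}]},
}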
I recommend browsing through some of the profiles and their descriptions on the Simplifier repository to get an idea of why people are creating profiles on FHIR.
I have a bunch of XSD files which I did not write myself. The files sometimes import each other:
<xs:import namespace="http://www.mysite.com/xmlns/xXX-YYYY/V" schemaLocation="http://www.mysite.com/xmlns/xXX-YYYY/V/schema_A.xsd"/>
and I would like to get an overview of the dependencies without having to read through all of them.
The URI specified by schemaLocation does not exist; instead, a catalog.xml file is used to resolve the schema locations.
http://de.wikipedia.org/wiki/XML_Catalogs
Can anybody recommend a tool that can visualize the dependencies of my schemas by also processing the information given in the catalog.xml file?
Thanks
Mischa
To follow up on my comment...
I am not aware of any tool that takes into account OASIS catalog files. Have a look at this response, see if it supports what you need (and your platform).
Strictly speaking, there are a number of issues with dependency diagrams, which is why such a question should be qualified with why you want one.
Some think that such a diagram truly shows the dependencies between XSD files; that is not the case: it may show what the author thinks the dependencies are, but not what the processor actually agrees to. schemaLocation is just a hint that processors may or may not use: "may not" if they're instructed otherwise (well-known XSDs could be cached internally, through catalog entries or any other proprietary "catalogs"), or because the processor decides there is no need to load an external reference when there's no use for it anyway (which can happen in some corner cases).
A diagram built from explicit schema locations is definitely easier to do. It only shows what the author intended; it doesn't mean that it is the "real one" (as in: content may be pulled in indirectly, which makes the whole XSD set valid, while individual XSDs, opened independently of the set, would be invalid).
Trying to build a diagram where dangling or non-existent schemaLocation values are overridden through a catalog is way harder, due to the multitude of ways to structure the content and to the resolution mechanism. It would have the same shortcoming as the one above (except that now the author is that of the catalog file, rather than whoever authored the XSDs).
The "true" dependency graph can be built by traversing a schema set already loaded and compiled. Even then, you would still need to define criteria for dependencies due to substitutable components (elements in substitution groups, or derived types used through the xsi:type attribute). That is even harder.
Take a look at this tool: DocFlex/XML XSDDoc.
It is an XML schema documentation generator.
It doesn't visualize xsd dependencies, but it does work with XML catalogs.
The overview of each XSD file lists all other XSD files referenced from it
(i.e. imported, included or redefined).
There is also an opposite list of those schemas that reference the given one.
So, you can use it to figure out which XSD files depend on which.
At least, that will be easier than reading raw XSD files.
As an example, here is a documentation generated with that tool:
XML Schemas for DITA 1.1. It has been generated basically by two files:
http://docs.oasis-open.org/dita/v1.1/OS/schema/ditaarch.xsd
http://docs.oasis-open.org/dita/v1.1/OS/schema/catalog.xml
ditaarch.xsd is the schema driver that pulls all other schemas (25 in total); catalog.xml is the XML catalog, via which all file references are resolved.
The schemaLocation attributes in those schemas themselves specify just opaque URIs.
We have a requirement to allow customising our core product and adding additional fields on a per-client basis, e.g. on the People entity one client wants to record their favourite colour, etc. As far as I know we can't add properties to EF at runtime, as it needs classes defined at startup. Each customer has their own database, but we are deploying the same solution to all customers with all additional code. We then detect which customer they are and run customer-specific services, etc.
Now the last thing I want is to be forking my project, or alternatively adding all fields for all clients; that seems likely to become a nightmare. Also, more often than not, the extra fields would only be required in a very limited number of places: maybe some reports, a couple of screens, etc.
I found this article from Jeremy Miller http://codebetter.com/jeremymiller/2010/02/16/our-extension-properties-story/ describing how they are adding extension properties and having them go from the domain to the web front end.
Has anyone else implemented anything similar using EF? How did it work out? Are there any blogs/samples that anyone has seen? I am not sure if I am searching for the right thing even if someone could tell me the generic name for what we want to do that would help. I'm guessing it is a problem that comes up for other people.
The linked question still requires some forking, or implementing all possible extensions in a single solution, because you are still creating strongly typed extensions upfront (= you know upfront what extensions the customer wants). It is not a generally extensible solution. If you want a generic, extensible solution you must leave the strongly typed world and describe extensions as data.
You will need to use some metamodel. Your entity classes will contain only the properties used by all customers, plus a navigation property to a special extension entity (an additional table for every extensible entity) where you will be able to put additional properties as name/value pairs (you can add other columns like type, validation, etc. if needed).
This in general moves part of your model from a hardcoded scenario to a configuration-based scenario, and your customers will even be allowed to define extensions at runtime (if you implement such a feature).
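To make the shape of that metamodel concrete, here is a bare-bones sketch (plain Python rather than EF, with made-up names):

from dataclasses import dataclass, field

@dataclass
class ExtensionValue:
    # One row in the per-entity extension table: a name/value pair,
    # optionally tagged with a type for validation and conversion.
    name: str
    value: str
    type: str = "string"

@dataclass
class Person:
    # Only the properties shared by all customers are real columns;
    # customer-specific fields live in the extension rows.
    id: int
    full_name: str
    extensions: list = field(default_factory=list)

p = Person(id=1, full_name="Jane Doe")
p.extensions.append(ExtensionValue(name="FavouriteColour", value="green"))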
Plone Dexterity supports the definition of the content-type schema either through an interface (using zope.schema for the definition) or through an XML file. What is the preferred/recommended way?
In addition: is there documentation of the XML dialect used for defining a schema (models/mytype.xml) ?
This presentation appears close but not complete.
I personally much prefer the zope.schema route; I can, if I really want to, vary the interface attributes dynamically with Python, while the XML definition is of course static.
Also, note that to register adapters and views against an XML-defined schema, you need to pull it into Python code anyway:
from plone.directives import form

class IMyXMLDefinedType(form.Schema):
    form.model('my_xml_defined_type.xml')  # load the supermodel XML schema
The XML dialect is part of the plone.supermodel package; I was not able to locate any documentation beyond the source code.
I prefer an interface over an XML model. Partly that is because I prefer Python over XML. Partly it is because you cannot do some things with the XML. For example, if you want to register a field as searchable with collective.dexteritytextindexer, you (currently) cannot set this in the XML model, so you will have to use Python code and therefore an interface. But Martijn shows in his answer that you can use form.model in an interface to refer to an XML file, so maybe that would be a way around it if you really want to.
I'm going to contribute to the mess by saying there is no hard and fast answer.
With simpler content types, or early in the development of more complex ones, I'm often oriented towards the supermodel XML because of how closely it works with the dexterity TTW editor. It allows me to work with a client with very rapid feedback on what they want from their content type.
Sometimes I'll even move into file system development of some features while still having the fields defined in the FTI via supermodel.
However, with more complex content types, you're nearly certainly going to hit something you can't do via supermodel alone. At that point, I usually translate to schema — and that's typically pretty easy to do.
Ideally, if you're doing a lot of dexterity development, you should probably be able to shift pretty easily back and forth. They're just different ways of representing the same objects and attributes.
My application uses urn:uuid URIs for entities. Of course, when I get e.g. RDF information about a resource, the referred entities (subjects or objects) will contain URIs in the urn:uuid scheme. To fetch the representation of a new entity, possibly in a REST way, I need a "resolver", similar in some way to dx.doi.org for DOIs. Another case could be the resolution of an isbn: URI, so as to obtain a sensible representation of it.
My question is about what's out there, in terms of proposed standards, for URI-to-representation-URL resolution.
The concluded URN Working Group of the IETF has also done some work on resolving URNs and published quite a few RFCs on this topic. A list of references is contained in the group charter. Maybe some of them help you.
A UUID is a universally unique identifier, so I don't see how you would be able to resolve a UUID I just generated (e.g. 3136aa1a-fec8-11de-a55f-00003925d394) to something useful.
Only if you manage a database of UUIDs somewhere can you retrieve more from it. Otherwise you would have to ask everyone/everything: "Do you know this UUID?"
The urn:uuid definition defines a clear space of unique identifiers you can use for defining something truly unique. But as nobody else can guess its value, you can't derive information from it.
There is no standard (proposed or otherwise) for resolving a URN. It's just a name (Uniform Resource NAME) and may have arbitrary meaning.
XML/RDF creates some confusion by using URNs which do resolve, because they happen to also be URLs (Uniform Resource Locators) that point to objects describing their meaning; but this is merely a convention. URNs merely have to be unique and always mean the same thing.
If you are developing an application, you might want to consider using URNs which are also resolvable URLs for items with fixed meaning, and randomly generated URNs in the urn:uuid namespace to identify instances of objects.
That sounded about as confusing as the RDF spec :-)
Quick example:
Tiger: http://www.example.com/animals/tiger
Instance of a Tiger: urn:uuid:9a652678-4616-475d-af12-aca21cfbe06d
There might be an HTML page at http://www.example.com/animals/tiger, but there doesn't have to be. It's merely a convention.
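Minting such an instance name is trivial; a one-line Python sketch (the uuid module even hands you the URN form directly):

import uuid

# A fresh, random instance identifier in the urn:uuid namespace.
instance_name = uuid.uuid4().urn
print(instance_name)  # e.g. urn:uuid:9a652678-4616-475d-af12-aca21cfbe06d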
[Additional Clarification Added]
The distinction here is between URNs (Names) and URLs (Locations).
A URN just names something. It's not a location of anything.
URLs are valid URNs, so you can use a URL for a URN if you want to.
In the above example, I could use e.g. http://www.example.com/tigers/9a652678-4616-475d-af12-aca21cfbe06d as the name of my tiger. I could put something at that address. But what would I put there? You can't download an instance of a tiger using http!
The convention in RDF is that if a URN is also a URL, it will point at some documentation defining what the name means.
What RDF is trying to give you is a convention for naming things which ensures that when two people use the same name, they mean the same thing. The UUID specification allows you to generate a unique name for something which is not likely to be used by anything else. But it's just a name, and there's no way of turning it into a thing.
Hope this helps.
One reason URNs exist is to give people the opportunity to create identifiers without the (implicit) responsibility of maintaining a service that describes the underlying resources. For RDF you could say this is an advantage, but not a necessity; still, you'd be less inclined to use a particular vocabulary, for example, if you discovered that its HTTP URLs are no longer dereferenceable.
That being said, some URNs can be traced back to their representation. Here are some examples:
The ietf namespace defines several identifier schemes, so URIs like urn:ietf:rfc:2648 can be resolved if you implement the specific patterns (see the sketch after this list).
Some namespaces are defined in other IANA registries, for example urn:ietf:params:xml: with the corresponding files for the resources.
Other namespaces point to already-established identifier spaces, like urn:isbn: (some metadata can be retrieved, but I don't think there is anything that will allow you to download the book from its ISBN) and urn:oid:. There is also urn:publicid:, some of whose identifiers may be found somewhere deep inside ISO.
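To illustrate the first item, a tiny Python sketch of one such pattern, mapping urn:ietf:rfc:* onto the RFC Editor's archive (the target URL layout is my assumption, not something the namespace registration mandates):

def resolve_ietf_rfc(urn):
    # Handles only the urn:ietf:rfc:NNNN pattern; a real resolver
    # would need a branch per identifier scheme in the namespace.
    prefix = "urn:ietf:rfc:"
    if not urn.startswith(prefix):
        raise ValueError("not an urn:ietf:rfc URN: " + urn)
    number = int(urn[len(prefix):])
    return "https://www.rfc-editor.org/rfc/rfc%d.txt" % number

print(resolve_ietf_rfc("urn:ietf:rfc:2648"))
# -> https://www.rfc-editor.org/rfc/rfc2648.txt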
There is no general mechanism for URN resolution, and indeed there cannot be (that is also true for other URI schemes, like tag:).
Talking specifically about UUIDs, in my opinion, the best way out of this is not to use a URN at all. If you want to use a web server for the resolution, a "standard" way is to use the genid well-known service, thus your primary URI would be something like this: http://example.org/.well-known/genid/b47df9f0-a9c5-4e8a-9762-844a33ba7a3e. If you host RDF at that location, there is nothing wrong with adding owl:sameAs <urn:uuid:b47df9f0-a9c5-4e8a-9762-844a33ba7a3e> there if you have to.
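Sketched concretely in Python (example.org stands in for your own host):

import uuid

uid = uuid.uuid4()

# Skolem-style primary URI under the .well-known/genid path, with the
# urn:uuid form linked back via owl:sameAs, printed as a Turtle triple.
primary = "http://example.org/.well-known/genid/%s" % uid
print("<%s> <http://www.w3.org/2002/07/owl#sameAs> <%s> ." % (primary, uid.urn))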
To my knowledge, there is only one method that is in use today to create a link that conveys the question "Do you know this URN?", well, kind of: a magnet: link. There is nothing in principle that would require you to use a hash there like you usually find, so something like magnet:?xt=urn:uuid:b47df9f0-a9c5-4e8a-9762-844a33ba7a3e could work, provided you have your own client that can handle that.
I have seen questions here asking about xsd->actionscript objects, but these seem to require xsd->java->actionscript and are all done in source code. Our requirements are a bit different:
Receive an XSD at runtime that we have never seen before
Create an instance object based on the XSD
Fill in the values of the instance (either from an XML document or user input, whatever)
Anyone know of an ActionScript library or tool that would help us accomplish this at runtime? It would be nice if something like this already existed, but we would certainly settle for a library that gave us a programmatic interface to extract information from an XSD schema. Additionally, we would take suggestions on alternate methods to accomplish the same ends.
Have you looked at the SchemaLoader...? Not EXACTLY what you're looking for... but a great start.
First, you should check this blog entry and this blog entry, which walk you through Dominic De Lorenzo's experiences with utilising functionality within the Flex SDK that provides automatic mapping of custom ActionScript classes to element definitions within an XML Schema (XSD).
The steps to get moving here include (from Dominic's blog):
1) Create an instance of SchemaLoader and asynchronously load an XML schema from a given URL
2) Once the schema is loaded, add it to the SchemaManager and register any ActionScript classes to their corresponding schema types
---- At this stage you can do several operations based on the schema
3) Load an XML file based on that schema
4) Once the XML is loaded, decode the contents using XMLDecoder. Any classes registered in the SchemaTypeRegistry will be used when decoding the XML
5) Encode a custom ActionScript class back into XML using XMLEncoder. XMLEncoder.encode() supports various ways to define the corresponding element in the schema (a top-level element, a specific type or even a custom XSD definition) that will be used to encode the ActionScript object.
The blog entry has links to code samples, etc...
Hope this helps.