How to define a FHIR CodeScheme (DSTU2) - dstu2-fhir

TL;DR: does FHIR DSTU2 contain a mechanism for formally defining CodeSystems?
I'm trying to port a bunch of resources over to a FHIR server from a proprietary system with very similar design goals. The old system had a built-in method for defining the equivalent of CodeSchemes and ValueSets.
I currently have an instance of HAPI running locally, which is on DSTU2 (not sure if that's the right way to say it, but it's the default).
I've been looking at this part of the documentation, which shows how to create a ValueSet, and when I browse the root of the HAPI server it shows there is a resource type called 'ValueSet', so I guess the XML ValueSets I define are of that resource type: https://www.hl7.org/FHIR/valueset.html
What I can't seem to get my head around is whether there is actually a way of defining CodeSchemes within FHIR. Lots of the documentation mentions them, but it's ambiguous whether they are expected to be defined externally and just referenced by URI, or whether there is actually a resource type to explicitly hold them, where I can give definitions to my codes and such.
I've found this piece of documentation, however it states that it is a pre-release for DSTU3. The format seems very similar to the inline code systems that can be defined in ValueSets, but the resource type 'CodeSystem' doesn't seem to exist in my local instance of HAPI: https://hl7.org/fhir/2016Sep/codesystem-example.json.html

In DSTU2, ValueSet serves two purposes: defining true value sets, and also defining code systems. The latter uses the ValueSet.codeSystem element, where you can list your codes and give each one a display and a definition. (In DSTU3, this functionality is split out into the dedicated CodeSystem resource.)
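For example (a minimal sketch; the canonical URL, system URI, and code below are made up), a DSTU2 ValueSet with an inline code system looks roughly like this:

<ValueSet xmlns="http://hl7.org/fhir">
  <!-- hypothetical canonical URL for this ValueSet -->
  <url value="http://example.org/fhir/ValueSet/my-codes"/>
  <name value="MyCodes"/>
  <status value="draft"/>
  <!-- DSTU2-only: the code system defined inline -->
  <codeSystem>
    <system value="http://example.org/fhir/my-codes"/>
    <caseSensitive value="true"/>
    <concept>
      <code value="example"/>
      <display value="Example"/>
      <definition value="A definition for the example code."/>
    </concept>
  </codeSystem>
</ValueSet>

POSTing something like that to your HAPI server's ValueSet endpoint should both define the codes (with their definitions) and make them available for use in compose/expansion.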

Related

How do I save a dynamically generated Lisp system in external files?

Basically, I want to be able to generate class definitions, compile the system, and save it for reuse. Would that involve a code walker, or is there a simpler option?
(save-lisp-and-die "isn't going to work for me")
To expand: I'm generating systems based on OpenAPI definitions, so a system roughly corresponds to an API client.
There will be dozens, if not hundreds of these.
The idea is to NOT keep them all in the image, but load at run time as required.
I see two possible routes here, and to some extent, I suspect they mainly differ in "the last mile" (as it were).
The route you seem to have settled on: run-time definition of classes and functions.
A route whereby you generate your function/class forms but, instead of going all the way to getting them live in the image, emit the form(s) to a file.
I suspect that most of the generating code could be shared between the two: the first route would have a wrapping macro that effectively returns a PROGN, while the second would call a function that pretty-prints what the macro would have returned to a stream.
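A minimal sketch of that shared-generator idea (all names here are made up, and the generated forms are stand-ins for the real OpenAPI-driven output):

(defun generate-client-forms (api-name)
  ;; Stand-in for the real generator: returns a list of definition forms.
  `((defclass ,(intern (format nil "~:@(~a~)-CLIENT" api-name)) () ())
    (defun ,(intern (format nil "~:@(~a~)-PING" api-name)) () :ok)))

;; Route 1: get the definitions live in the running image.
(defun load-client (api-name)
  (mapc #'eval (generate-client-forms api-name)))

;; Route 2: emit the same forms to a file for later LOAD or COMPILE-FILE.
(defun emit-client (api-name pathname)
  (with-open-file (out pathname :direction :output :if-exists :supersede)
    (with-standard-io-syntax
      (let ((*print-case* :downcase))
        (dolist (form (generate-client-forms api-name))
          (pprint form out)
          (terpri out))))))

Either entry point can then be invoked at run time, the moment a particular API client is first needed.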
That said, building a tailored environment and saving it to a "core" file is a pretty good way of getting excellent startup times.

Generating files to multiple paths with Swagger Codegen?

I'm creating a template for our server-side codegen implementation, but I ran into an issue with a feature request...
The developers who are going to use the generated base want the following pattern (the generator is based on the dotnetcore one):
Controllers
  v{apiVersion}
    {endpoint}ApiController : Controller, I{endpoint}Api
Interfaces
  v{apiVersion}
    I{endpoint}Api
    I{endpoint}DataProvider
DataProviders
  v{apiVersion}
    {endpoint}DataProvider : I{endpoint}DataProvider
Both interfaces are the same, describing the endpoints. The DataProvider implementation will allow us to use DI to hot-swap the actual data provider/business logic layer during runtime.
The generated ApiControllers will refer to the IDataProviders, and use the actual implementation (the currently active one, that is). For that we're going to use dotnetcore's built-in dependency injection system.
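For illustration, a rough sketch of the wiring we're after (all names are hypothetical, following the {endpoint} pattern above):

using Microsoft.AspNetCore.Mvc;

// Both interfaces describe the same endpoint surface.
public interface IPetApi { IActionResult GetPet(long id); }
public interface IPetDataProvider { IActionResult GetPet(long id); }

// Generated controller: depends only on the interface, never the implementation.
public class PetApiController : Controller, IPetApi
{
    private readonly IPetDataProvider _provider;
    public PetApiController(IPetDataProvider provider) { _provider = provider; }
    public IActionResult GetPet(long id) { return _provider.GetPet(id); }
}

// Hand-written provider; swapping the DI registration swaps the business logic:
//   services.AddScoped<IPetDataProvider, PetDataProvider>();
public class PetDataProvider : IPetDataProvider
{
    public IActionResult GetPet(long id) { return new OkObjectResult(id); }
}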
However, I can't seem to find a way to have the operations generator output to three different folders based on the template. Everything ends up jumbled in a single folder, and I have to move the files manually.
Is there a way to meet these requirements, or will I have to keep sorting it out manually?

Can someone explain FHIR extensions?

I've been trying to wrap my head around authoring profiles in FHIR. The trouble I'm having is around the use of extensions.
The documentation talks about extensions as if they are simply there to extend existing elements of the resource a profile belongs to; this is sort of confirmed when I use Forge, because I can add new elements which don't have extensions.
It feels very foreign to me: in our proprietary storage system we have the equivalent of profiles, and they have properties (which I think are similar to elements in FHIR), but a property is only designed to store one type of thing; e.g. you might have a patient profile that has the properties DOB, ethnicity, identifier, etc. I don't really understand what profiles are for in the context of FHIR. Are they similar to my properties? Can I use them to limit the datatype that a profile instance can have for a particular element?
Is there any better documentation than the spec? I'm finding it really hard to get to grips with.
FHIR extensions are used to record extra data elements when there's no field for them in the standard definition. Mother's maiden name is an example of that for the Patient resource.
The use of an extension is a standard FHIR mechanism and will always look like this:
<extension>
  <url value="http://hl7.org/fhir/StructureDefinition/patient-mothersMaidenName"/>
  <valueString value="Williams"/>
</extension>
The url is the canonical url for the definition of the extension, which is a StructureDefinition resource defining the extension and the datatype(s) of the value.
You can have extensions on every level of a resource/datatype.
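For example (the mother's-maiden-name URL is the real HL7 one from above; the name-level extension URL is made up for illustration), an extension can sit on the resource itself or on a nested element:

<Patient xmlns="http://hl7.org/fhir">
  <!-- extension at resource level -->
  <extension>
    <url value="http://hl7.org/fhir/StructureDefinition/patient-mothersMaidenName"/>
    <valueString value="Williams"/>
  </extension>
  <name>
    <!-- extension on a nested element (hypothetical definition) -->
    <extension>
      <url value="http://example.org/fhir/StructureDefinition/name-pronunciation"/>
      <valueString value="smith"/>
    </extension>
    <family value="Smith"/>
  </name>
</Patient>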
Since profiling is a very overloaded term, it is hard for me to understand what you're saying about profiles and properties in your proprietary system, or how that relates to your question. But in general, FHIR profiling is needed and used to
be able to add data when there's no data field for it in the specification (i.e. an extension of the specs)
constrain the specification in places where you need to be more strict, for example to make an optional field mandatory (i.e. a constraint on the specs, also called a profile; see the sketch below)
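As a minimal DSTU2-flavoured sketch of that second case (the url and name are made up), a profile that makes Patient.birthDate mandatory might look like:

<StructureDefinition xmlns="http://hl7.org/fhir">
  <url value="http://example.org/fhir/StructureDefinition/my-patient"/>
  <name value="MyPatient"/>
  <status value="draft"/>
  <kind value="resource"/>
  <constrainedType value="Patient"/>
  <abstract value="false"/>
  <base value="http://hl7.org/fhir/StructureDefinition/Patient"/>
  <differential>
    <element>
      <path value="Patient.birthDate"/>
      <!-- optional in the base resource, mandatory in this profile -->
      <min value="1"/>
    </element>
  </differential>
</StructureDefinition>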
I recommend browsing through some of the profiles and their descriptions on the Simplifier repository to get an idea of why people are creating profiles on FHIR.

VS2010 Local Resource Suffix Issue

The resource file generated from Tools --> Generate Local Resources creates keys with the suffix "Resource1".
Is there a way to get rid of the suffix "Resource1" and make it use the exact control name for the resource key?
It's described in this issue: the "Resource1" suffix helps prevent name clashes between controls, and without it generation would break in some circumstances.
Is it purely the code generation you want to customize? You could always use a custom resource manager to remap the resource keys to your own convention (without the suffix). It does mean creating your own implementation to pull the resources out of the RESX files, but I've done this in the past with some help (copy/paste) from Reflector.
It would allow you to use shortcuts (no suffix) in your syntax when referring to resources, but it wouldn't affect the code-gen side of things. A find-and-replace fixes that, or a custom tool.
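A rough sketch of the remapping idea (the class name is made up, and this assumes the standard System.Web.Compilation.IResourceProvider extension point):

using System.Globalization;
using System.Resources;
using System.Web.Compilation;

// Wraps a ResourceManager so markup can use the bare control name;
// falls back to the generated "Resource1"-suffixed key.
public class SuffixMappingResourceProvider : IResourceProvider
{
    private readonly ResourceManager _manager;

    public SuffixMappingResourceProvider(ResourceManager manager)
    {
        _manager = manager;
    }

    public object GetObject(string resourceKey, CultureInfo culture)
    {
        return _manager.GetObject(resourceKey, culture)
            ?? _manager.GetObject(resourceKey + "Resource1", culture);
    }

    public IResourceReader ResourceReader
    {
        // Only needed for implicit localization expressions; omitted in this sketch.
        get { return null; }
    }
}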

GetGlobalResourceObject or Resources.Resource - what's better?

I have an application that is multilingual. I'm using the out-of-the-box .NET features for this. Each language has its own file in App_GlobalResources.
In the code-behind, which is better?
GetGlobalResourceObject("LocalizedText", "ErrorOccured")
Resources.LocalizedText.ErrorOccured
The second one uses less code and is type-safe: it will produce an error at compile time rather than run time.
These are the advantages of each approach:
Advantages of GetGlobalResourceObject (and GetLocalResourceObject):
You can specify a particular culture instead of using the CurrentCulture.
You can use a late-bound expression (i.e. a string) to decide which resource to load. This is useful if you can't know ahead of time which resource you will need to load.
It works with any resource provider type. For example, it works not only with the built-in default RESX-based provider but it'll work the same against a database-based provider.
Advantages of strongly-typed RESX types:
You get compile-time errors if you access a resource that doesn't exist.
You get Intellisense while working on the project.
So, as with many "which is best" questions, the answer is: It depends! Choose the one that has advantages that will benefit your particular scenarios the most.
So use the second one if you know up-front what the resource file and key will be.
The GetGlobalResourceObject() method is useful if you don't know what the resource file or (more likely) the key will be at compile time.
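A short code-behind sketch of the variants (the page class is made up; the resource class and key are the ones from the question):

using System;
using System.Globalization;
using System.Web;

public partial class ExamplePage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Strongly typed: compile-time checked, follows the current UI culture.
        string typed = Resources.LocalizedText.ErrorOccured;

        // Late bound: the key can be computed at run time.
        string key = "ErrorOccured";
        string lateBound = (string)GetGlobalResourceObject("LocalizedText", key);

        // Static HttpContext overload: lets you force a specific culture.
        string german = (string)HttpContext.GetGlobalResourceObject(
            "LocalizedText", key, new CultureInfo("de-DE"));
    }
}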
